Sample records for quantization codebook design

  1. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to variations in input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures that attempt joint optimization of the short term filter, the adaptive codebook, and the excitation are applied to code-excited linear prediction. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
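
    For readers unfamiliar with the multistage structure described above, the following Python sketch shows the basic encoding loop: each stage quantizes the residual left by the previous stage, and the per-stage indices together form the transmitted code. This is a minimal greedy (single-path) version; the abstract's tree search and joint codebook design are not shown, and all names are illustrative.

      import numpy as np

      def msvq_encode(x, stage_codebooks):
          """Greedy multistage VQ: each stage quantizes the residual left by
          the previous stage; the per-stage indices form the channel code."""
          indices, residual = [], x.astype(float)
          for codebook in stage_codebooks:          # codebook: (N_i, K) array
              d = np.sum((codebook - residual) ** 2, axis=1)
              i = int(np.argmin(d))                 # nearest codevector, this stage
              indices.append(i)
              residual = residual - codebook[i]     # passed to the next stage
          return indices, residual                  # residual = final coding error

      def msvq_decode(indices, stage_codebooks):
          return sum(cb[i] for cb, i in zip(stage_codebooks, indices))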

  2. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
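
    As background for the algorithms being accelerated, a minimal Python sketch of one fuzzy K-means (fuzzy C-means) iteration is given below: memberships are updated from the distances, then codevectors from the memberships. The paper's accelerations (fewer iterations, efficient nearest-neighbor search) are not reproduced here, and the fuzziness exponent m = 2 is a common default rather than a value taken from the paper.

      import numpy as np

      def fuzzy_kmeans_step(X, C, m=2.0, eps=1e-12):
          """One fuzzy K-means iteration: update memberships U from distances,
          then codevectors C from memberships. X: (n, k) training vectors,
          C: (N, k) codebook, m: fuzziness exponent."""
          d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + eps   # (n, N)
          inv = d2 ** (-1.0 / (m - 1.0))
          U = inv / inv.sum(axis=1, keepdims=True)    # fuzzy memberships
          W = U ** m
          C_new = (W.T @ X) / W.T.sum(axis=1, keepdims=True)
          return U, C_new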

  3. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.

  4. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs the codebook as the data is being encoded or quantized. This is done by computing each centroid as a recursive moving average, so the centroids move after every vector is encoded; applied to a fixed set of vectors, this recursion yields the same centroid as the usual batch calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
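
    The recursive moving-average centroid described above can be stated compactly: after encoding vector x with codevector c_i, update c_i <- c_i + (x - c_i)/n_i, where n_i counts the vectors assigned to cell i so far. A hedged Python sketch follows (illustrative names, squared-error distortion assumed); as the abstract notes, the decoder must mirror the same update from side information.

      import numpy as np

      def adaptive_vq_encode(x, codebook, counts):
          """Encode one vector, then move the winning centroid by a recursive
          moving average; after n assignments the centroid equals the batch
          centroid of those n vectors. codebook: (N, k) float, counts: (N,) int."""
          d = np.sum((codebook - x) ** 2, axis=1)
          i = int(np.argmin(d))                         # index sent to the decoder
          counts[i] += 1
          codebook[i] += (x - codebook[i]) / counts[i]  # c_i <- c_i + (x - c_i)/n_i
          return i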

  5. Structured codebook design in CELP

    NASA Technical Reports Server (NTRS)

    Leblanc, W. P.; Mahmoud, S. A.

    1990-01-01

    Codebook Excited Linear Prediction (CELP) is a popular analysis-by-synthesis technique for quantizing speech at bit rates from 4 to 6 kbps. Codebook design techniques to date have been largely based either on random (often Gaussian) codebooks or on known binary or ternary codes which efficiently map the space of (assumed white) excitation codevectors. It has been shown that by introducing symmetries into the codebook, a good complexity reduction can be realized with only a marginal decrease in performance. Codebook design algorithms are considered for a wide range of structured codebooks.

  6. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
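
    A sketch of the kind of leaf-pair pruning step such algorithms perform may help: among internal nodes whose children are all leaves, collapse the one whose removal costs the least distortion increase per bit of rate saved (the smallest dD/dR slope). The Python below is a simplified reconstruction under assumed node fields, not the authors' implementation.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Node:
          distortion: float            # training distortion if coding stops here
          rate: float                  # bits spent if coding stops here
          children: List["Node"] = field(default_factory=list)

      def prune_once(root):
          """Collapse the internal node (children all leaves) whose removal
          costs the least distortion increase per bit saved: min dD/dR."""
          best, best_slope = None, float("inf")
          stack = [root]
          while stack:
              n = stack.pop()
              stack.extend(n.children)
              if n.children and all(not c.children for c in n.children):
                  dD = n.distortion - sum(c.distortion for c in n.children)
                  dR = sum(c.rate for c in n.children) - n.rate
                  if dR > 0 and dD / dR < best_slope:
                      best, best_slope = n, dD / dR
          if best is not None:
              best.children = []       # the node becomes a leaf codeword
          return best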

  7. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix for selecting the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in the conclusion.
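
    The generalized Lloyd (LBG) iteration mentioned above alternates two steps: partition the training set by nearest codeword, then replace each codeword by the centroid of its cell. A minimal Python sketch, assuming squared-error distortion and illustrative parameter names:

      import numpy as np

      def lbg(X, N, iters=20, seed=0):
          """Generalized Lloyd (LBG) codebook design: alternate a nearest-
          neighbor partition of the training set X (n, k) with a centroid
          update of the N codewords, under squared-error distortion."""
          rng = np.random.default_rng(seed)
          C = X[rng.choice(len(X), size=N, replace=False)].astype(float)
          for _ in range(iters):
              d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
              idx = d.argmin(axis=1)                  # partition step
              for i in range(N):                      # centroid step
                  cell = X[idx == i]
                  if len(cell):
                      C[i] = cell.mean(axis=0)
          return C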

  8. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
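
    The entropy-constrained selection rule underlying ECVQ-style designs picks the index minimizing a Lagrangian of distortion and code length, J_i = d(x, c_i) + lambda * l_i, rather than distortion alone; in an EC-RVQ a rule of this kind is applied stage by stage. A one-function Python sketch with illustrative names:

      import numpy as np

      def ec_select(x, codebook, lengths, lam):
          """Entropy-constrained codeword selection: minimize the Lagrangian
          J_i = ||x - c_i||^2 + lam * l_i, where l_i is the codeword length
          in bits (e.g., -log2 of its empirical probability)."""
          d = np.sum((codebook - x) ** 2, axis=1)
          return int(np.argmin(d + lam * np.asarray(lengths)))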

  9. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are its high processing speed and low demand on resources (system memory); results on image compression and quality are presented.

  10. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video codec can be built on a single board that permits real-time experimentation with very large codebooks.

  11. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted-file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a codeword, the generated BSIFT naturally adapts to the classic inverted-file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, codeword expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
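
    As a rough illustration of codebook-free quantization in this spirit (the details differ from the paper's actual BSIFT construction), one can threshold each descriptor element against the descriptor's own median and take the leading bits as an inverted-file key:

      import numpy as np

      def binary_descriptor(desc, key_bits=32):
          """Threshold each element of a 128-D descriptor against the
          descriptor's own median: a codebook-free bit-vector. The leading
          key_bits serve as an inverted-file key (illustrative scheme only;
          desc length is assumed to be a multiple of 8)."""
          bits = (np.asarray(desc) > np.median(desc)).astype(np.uint8)
          word = int.from_bytes(np.packbits(bits).tobytes(), "big")
          return bits, word >> (bits.size - key_bits)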

  12. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.

  13. Musical sound analysis/synthesis using vector-quantized time-varying spectra

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Beauchamp, James W.

    2002-11-01

    A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.

  14. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and with the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  15. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from the vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the codebook vector selected by the transmitted index.

  16. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.

  17. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and the minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and IP-HMM performs the recognition. The novelty lies chiefly in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  18. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper explains the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports on results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
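
    The forward-adaptive variant described above can be sketched in a few lines: estimate a gain, normalize the input vector, code its shape with a gain-normalized codebook, and denormalize at the receiver. The RMS gain estimator below is only illustrative; the paper compares several estimators and optimizes their design.

      import numpy as np

      def gain_adaptive_encode(x, shape_codebook, eps=1e-8):
          """Forward gain adaptation: estimate a gain, normalize the input,
          and VQ-code the normalized 'shape' with a gain-normalized codebook."""
          g = max(float(np.sqrt(np.mean(x ** 2))), eps)   # illustrative RMS gain
          d = np.sum((shape_codebook - x / g) ** 2, axis=1)
          return int(np.argmin(d)), g         # index (+ coded gain) transmitted

      def gain_adaptive_decode(i, g, shape_codebook):
          return g * shape_codebook[i]        # receiver multiplies by the gain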

  20. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean lambda, computed from a sample of the input image. A Laplacian distribution with mean lambda is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
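
    A hedged Python sketch of the codebook generation just described: seeded uniform random numbers are mapped to Laplacian samples with parameter lambda, grouped into vectors, and shaped by weighting their DCT coefficients. The 1-D DCT and the placeholder HVS weights are assumptions for illustration (the paper's optimal weight matrix and 2-D details are not reproduced); because the generator is seeded, a decoder can regenerate the same codebook from lambda alone.

      import numpy as np

      def mvq_codebook(lam, n_codes=256, k=16, seed=1234):
          """Model-based VQ codebook: seeded uniform randoms -> Laplacian
          samples with parameter lam -> vectors shaped by weighting their
          DCT coefficients. The HVS weights below are placeholders."""
          rng = np.random.default_rng(seed)    # decoder reuses the same seed
          u = rng.uniform(-0.5, 0.5, size=(n_codes, k))
          lap = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))  # Laplacian
          m = np.arange(k)                     # orthonormal 1-D DCT-II matrix
          D = np.sqrt(2.0 / k) * np.cos(np.pi * (2 * m[None, :] + 1) * m[:, None] / (2 * k))
          D[0] /= np.sqrt(2.0)
          w = 1.0 / (1.0 + m)                  # placeholder low-pass HVS weights
          return (D.T @ (w[:, None] * (D @ lap.T))).T   # weight in DCT, invert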

  1. Information preserving coding for multispectral data

    NASA Technical Reports Server (NTRS)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and for an adaptive transform coding technique followed by DPCM are compared using ERTS-1 data.

  2. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  3. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period, a study was conducted on bridging network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm; these are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years, and numerous compression techniques which incorporate VQ have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of LBG quantizers, including search complexity, memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  4. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.

  5. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  6. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very-low-bit-rate coding system is proposed. A coding system is developed using a simple FSVQ in which the current state is determined by the previous channel symbol only. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are done from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used to evaluate this coding method.

  7. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.

  8. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  9. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  10. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  11. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  12. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  13. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality.

  14. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on ongoing work toward the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long- and short-term model filters are presented.

  15. Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.

    PubMed

    Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin

    2017-08-29

    This paper presents an effective image retrieval method that combines high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and its maximum and minimum quantizers. In contrast, the high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep-learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve performance superior to the state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, it can be a strong candidate for various image retrieval applications.

  16. Vector excitation speech or audio coder for transmission or storage

    NASA Technical Reports Server (NTRS)

    Davidson, Grant (Inventor); Gersho, Allen (Inventor)

    1989-01-01

    A vector excitation coder compresses vectors by using an optimum codebook designed off-line, using an initial arbitrary codebook and a set of speech training vectors, and exploiting codevector sparsity (i.e., by making zero all but a selected number of samples of lowest amplitude in each of N codebook vectors). A fast-search method selects a number N_c of good excitation vectors from the codebook, where N_c is much smaller than N.

  17. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector from each group. The LBG algorithm then finds a codebook starting from the initial codevectors supplied by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes the test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results than existing methods reported in the literature.
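
    A minimal reconstruction of the PCA grouping step in Python: project the training vectors on the first principal component, split them into N contiguous groups along that axis, and take the median, centroid, or a random member of each group as the initial codebook handed to LBG. The exact grouping rule is an assumption based on the abstract, not the authors' code.

      import numpy as np

      def pca_initial_codebook(X, N, mode="median", seed=0):
          """Group training vectors X (n, k) by their projection on the first
          principal component; return one representative per group as the
          initial codebook for LBG (median / centroid / random variants)."""
          Xc = X - X.mean(axis=0)
          _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
          order = np.argsort(Xc @ Vt[0])           # sort along the first PC
          groups = np.array_split(order, N)        # N contiguous groups
          if mode == "median":
              return np.array([X[g[len(g) // 2]] for g in groups])
          if mode == "centroid":
              return np.array([X[g].mean(axis=0) for g in groups])
          rng = np.random.default_rng(seed)
          return np.array([X[rng.choice(g)] for g in groups])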

  18. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. A comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  19. Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback

    NASA Astrophysics Data System (ADS)

    Schwarz, Stefan; Heath, Robert W.; Rupp, Markus

    2013-12-01

    This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, enabled by a low-latency, high-bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such a distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. A DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term Evolution (LTE) standard compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO over SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in CSI quantization accuracy and in the throughput of especially MU-MIMO systems can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.

  20. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.

  1. A Computer Compatible System for the Categorization, Enumeration, and Retrieval of Nineteenth and Early Twentieth Century Archaeological Material Culture. Part 1. Codebook.

    DTIC Science & Technology

    1983-12-01

    ...according to material of manufacture and form, organized to segregate material, style, and manufacturing techniques of functional and chronological... a system for classifying artifacts and artifact fragments according to material of manufacture as well as form, organized to segregate material, style, and manufacturing techniques of functional and chronological significance. The codebook manual contains instructions for making critical...

  2. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with an increasing number of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order and the chosen number of bits for the codebook.

  3. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of the covariance matrices in the Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting the covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure outperforms image restoration-based techniques and closely matches the performance of HMGMM on clean images, in terms of both visual segmentation results and error rate.

  4. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. In the multispectral imagery context, a vector is defined as an array of pixels taken from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.

  5. Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems

    NASA Astrophysics Data System (ADS)

    Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.

    2018-03-01

    In Space Shift Keying (SSK) systems, transmit precoding based codebook approaches have been proposed to improve performance over limited-feedback channels. The receiver performs an exhaustive search in a predefined Full-Combination (FC) codebook to select the optimal codeword that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size in order to minimize the selection time and the number of feedback bits. Therefore, we propose to construct the codebooks using Orthogonal Array Testing (OAT) methods, owing to their powerful inherent properties. These methods yield a short codebook whose codewords are sufficient to cover almost all of the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of system performance and complexity.

  6. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
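
    The threshold-of-visibility rule above translates directly into a quantization matrix: a uniform rounding quantizer with step Q has worst-case error Q/2, so setting Q[u,v] = 2*T[u,v] keeps each coefficient's maximum quantization error at its visibility threshold T[u,v]. A short Python sketch, with T taken from any such display-dependent model:

      import numpy as np

      def quantization_matrix(T):
          """A rounding quantizer with step Q has worst-case error Q/2, so
          Q = 2*T holds each DCT coefficient's error at its visibility
          threshold T (e.g., an 8x8 matrix from the detection model)."""
          return 2.0 * np.asarray(T, dtype=float)

      def quantize_block(dct_block, Q):
          return np.round(dct_block / Q)        # JPEG-style quantization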

  7. Performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches in VQ codebook generation for image compression

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Chou, Jyh-Horng

    2015-11-01

    The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, the Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include the genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and the firefly algorithm (FF). The study provides performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches comprise PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches comprise PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of the test images are grouped according to PCA. The PCA-EA-LBG approaches use the vectors grouped by PCA as initial individuals, and the best solution found by the EAs is given to LBG to discover a codebook. The PCA-LBG approach uses PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA approaches use the final result of PCA-LBG as an initial individual for the EAs to find a codebook. The search schemes in PCA-EA-LBG apply a global search first and then a local search, while those in PCA-LBG-EA apply a local search first and then a global search. The results verify that PCA-EA-LBG indeed gains superior results compared to PCA-LBG-EA, because PCA-EA-LBG explores a global area to find a solution and then exploits a better one in the local area of that solution. Furthermore, the proposed PCA-EA-LBG approaches to designing VQ codebooks outperform existing approaches reported in the literature.
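
    The LBG core shared by all nine approaches is the generalized Lloyd iteration sketched below; the PCA grouping and the EA global search, which are the paper's contributions, would supply the initial codebook and perturb it between runs, whereas this illustrative sketch simply initializes from random training vectors.

      import numpy as np

      def lbg(vectors, n_codewords, n_iters=50, seed=0):
          """Plain Linde-Buzo-Gray (generalized Lloyd) codebook design."""
          rng = np.random.default_rng(seed)
          codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)]
          for _ in range(n_iters):
              d2 = ((vectors[:, None] - codebook[None]) ** 2).sum(-1)
              nearest = d2.argmin(1)               # nearest-neighbour partition
              for k in range(n_codewords):         # centroid update per cell
                  members = vectors[nearest == k]
                  if len(members):
                      codebook[k] = members.mean(0)
          return codebook

      rng = np.random.default_rng(0)
      training = rng.normal(size=(1000, 16))       # e.g. flattened 4x4 image blocks
      cb = lbg(training, n_codewords=32)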

  8. Development of a researcher codebook for use in evaluating social networking site profiles.

    PubMed

    Moreno, Megan A; Egan, Katie G; Brockman, Libby

    2011-07-01

    Social networking sites (SNSs) are immensely popular and allow for the display of personal information, including references to health behaviors. Evaluating displayed content on an SNS for research purposes requires a systematic approach and a precise data collection instrument. The purpose of this article is to describe one approach to the development of a research codebook so that others may develop and test their own codebooks for use in SNS research. Our SNS research codebook began on the basis of health behavior theory and clinical criteria. Key elements in the codebook developmental process included an iterative team approach and an emphasis on confidentiality. Codebook successes include consistently high inter-rater reliability. Challenges include time investment in coder training and SNS server changes. We hope that this article will provide detailed information about one systematic approach to codebook development so that other researchers may use this structure to develop and test their own codebooks for use in SNS research. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  9. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  10. Office of Workers’ Compensation Programs (OWCP). Data Codebook. Version 1.0

    DTIC Science & Technology

    1993-12-01

    [OCR fragments from the scanned codebook; recoverable content: Section 4 contents (4.1 Codebook Description, 4.2 Codebook Column Heading Definitions, 4.3 Data ...), an EARLY-REF case-type variable whose first character marks test versus control group cases (T = test group, C = control group) from a study done between 1987 and 1990, and a list of occupational codes including nondestructive testing, fuel distribution system mechanic, metalizing, metal process working, and miscellaneous pliable materials work.]

  11. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
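
    One family of soft-assignment schemes, kernel-weighted assignment of each feature to all visual words, can be contrasted with hard assignment in a few lines; the Gaussian kernel width sigma is an assumed free parameter, and the function is an illustration rather than the paper's exact estimator.

      import numpy as np

      def bow_histograms(features, codebook, sigma=1.0):
          """Hard vs. Gaussian-kernel soft visual-word histograms."""
          d2 = ((features[:, None] - codebook[None]) ** 2).sum(-1)
          K = len(codebook)
          hard = np.bincount(d2.argmin(1), minlength=K).astype(float)
          w = np.exp(-d2 / (2.0 * sigma ** 2))     # kernel weight per (feature, word)
          soft = (w / w.sum(1, keepdims=True)).sum(0)
          return hard / hard.sum(), soft / soft.sum()

      rng = np.random.default_rng(6)
      feats = rng.normal(size=(300, 8))            # e.g. descriptors from one image
      words = rng.normal(size=(50, 8))             # visual-word codebook
      h_hard, h_soft = bow_histograms(feats, words)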

  12. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
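
    The nonlinear pooling step can be sketched as a Minkowski sum of threshold-normalized errors; the exponent beta and the stand-in thresholds below are assumptions in place of the paper's fitted masking-adjusted model.

      import numpy as np

      def pooled_perceptual_error(dct_errors, adjusted_thresholds, beta=4.0):
          """Scale DCT quantization errors by masking-adjusted visibility
          thresholds, then pool nonlinearly over all blocks and frequencies."""
          jnds = np.abs(dct_errors) / adjusted_thresholds   # errors in threshold units
          return float((jnds ** beta).sum() ** (1.0 / beta))

      rng = np.random.default_rng(7)
      errs = rng.normal(scale=2.0, size=(100, 8, 8))   # per-block DCT quantization errors
      thr = np.full((100, 8, 8), 4.0)                  # stand-in adjusted thresholds
      E = pooled_perceptual_error(errs, thr)

    Rate/perceptual-distortion optimization then amounts to searching over quantization matrices for the smallest pooled error at a given bit rate, or vice versa.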

  13. Developing Codebooks as a New Tool to Analyze Students' ePortfolios

    ERIC Educational Resources Information Center

    Impedovo, Maria Antonietta; Ritella, Giuseppe; Ligorio, Maria Beatrice

    2013-01-01

    This paper describes a three-step method for the construction of codebooks meant for analyzing ePortfolio content. The first step produces a prototype based on qualitative analysis of very different ePortfolios from the same course. During the second step, the initial version of the codebook is tested on a larger sample and subsequently revised.…

  14. Electronic Codebooks for Windows 95/98: 22 ECBs. Electronic Codebook (ECB) Updates for Previously Released Data Files. [CD-ROM].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    This CD-ROM contains a separate electronic codebook for each of the following National Center for Education Statistics data sets: (1) B94, Baccalaureate and Beyond 1993-94 (restricted); (2) B97, Baccalaureate and Beyond 1993-97 (restricted); (3) BP4, Beginning Postsecondary Students 1990-94 (restricted); (4) FAC, 1992-93 National Student of…

  15. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, the Hermite transform, that is based on some of the main perceptual characteristics of the human visual system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized as follows. The first section briefly highlights the importance of newer and better compression algorithms, and also explains the most relevant characteristics of the HVS, including the advantages and disadvantages associated with the behavior of human vision under ocular stimuli. The second section reviews vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in the fifth section. The third section concentrates the most important data gathered on brightness models, addressing the construction of so-called brightness maps (a quantification of human perception of the reflectance of visible objects) in a two-dimensional model. The fourth section treats the Hermite transform, a special case of polynomial transforms, in an applicable discrete form. As previous work has shown, the Hermite transform is a useful and practical tool for efficiently coding the energy within an image block, for deciding which kind of quantization (scalar or vector) to apply, and for structurally classifying an image block within a given lattice; this classification is intended as one of the main contributions of this work. The fifth section fuses the proposals derived from the three main topics addressed in the preceding sections into an image compression model that exploits vector quantizers in the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section presents results obtained while testing the coding-decoding model; the guidelines for evaluating image compression performance were the compression ratio, SNR, and psycho-visual quality. Conclusions derived from the research and possible unexplored paths are also given in this section.

  16. The Communications Toolbox for MATLAB and E0 3513 Laboratory Design

    DTIC Science & Technology

    1994-03-24

    [The abstract consists of OCR-garbled MATLAB laboratory code; recoverable content: measuring quantization noise by computing the signal-to-noise ratio of a quantized signal, μ-law expansion of a companded quantized signal, plotting the compressed quantized signal, and, in a second part (Laboratory 3B), observing the differences in the spectra of two types of pulse-modulated signals.]

  17. Ice Shape Characterization Using Self-Organizing Maps

    NASA Technical Reports Server (NTRS)

    McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.

    2011-01-01

    A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional and possibly nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations will be discussed.
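
    A single SOM update of the kind described, finding the winner and pulling it and its grid neighbours toward the datum, looks like the sketch below (1-D grid, Gaussian neighbourhood; the learning-rate and radius schedules of a full SOM run are omitted).

      import numpy as np

      def som_step(codebook, x, lr=0.1, radius=2.0):
          """One self-organizing-map update on a 1-D grid of codebook vectors."""
          winner = np.linalg.norm(codebook - x, axis=1).argmin()
          grid = np.arange(len(codebook))
          h = np.exp(-((grid - winner) ** 2) / (2.0 * radius ** 2))  # neighbourhood
          codebook += lr * h[:, None] * (x - codebook)
          return codebook

      cb = np.zeros((16, 2))                       # toy 16-vector codebook in 2-D
      for x in np.random.default_rng(5).normal(size=(200, 2)):
          cb = som_step(cb, x)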

  18. Survey of digital filtering

    NASA Technical Reports Server (NTRS)

    Nagle, H. T., Jr.

    1972-01-01

    A three part survey is made of the state-of-the-art in digital filtering. Part one presents background material including sampled data transformations and the discrete Fourier transform. Part two, digital filter theory, gives an in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures and computer aided design. Part three presents hardware mechanization techniques. Implementations by general purpose, mini-, and special-purpose computers are presented.

  19. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first present an analysis showing that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the quantized posterior distribution, show that the algorithm converges to a global optimum due to the convexity of the cost function, and show that it generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance over typical designs and novel design techniques previously published.

  20. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.

  1. High-performance ultra-low power VLSI analog processor for data compression

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1996-01-01

    An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.

  2. Reflective Course Construction: An Analysis of Student Feedback and Its Role in Curricular Design

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2013-01-01

    This study uses formal and informal student feedback as a source for understanding the impact of experimental course elements. Responses were used to develop a codebook, which was then applied to the entire dataset. The results inform our understanding of students' conceptions of professional identity, learning styles, and curriculum design.…

  3. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition leads to stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  5. The Introductory Sociology Survey

    ERIC Educational Resources Information Center

    Best, Joel

    1977-01-01

    The Introductory Sociology Survey (ISS) is designed to teach introductory students basic skills in developing causal arguments and in using a computerized statistical package to analyze survey data. Students are given codebooks for survey data and asked to write a brief paper predicting the relationship between at least two variables. (Author)

  6. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  7. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.

  8. Low-Bit Rate Feedback Strategies for Iterative IA-Precoded MIMO-OFDM-Based Systems

    PubMed Central

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge. PMID:24678274

  9. Low-bit rate feedback strategies for iterative IA-precoded MIMO-OFDM-based systems.

    PubMed

    Teodoro, Sara; Silva, Adão; Dinis, Rui; Gameiro, Atílio

    2014-01-01

    Interference alignment (IA) is a promising technique that allows high-capacity gains in interference channels, but which requires the knowledge of the channel state information (CSI) for all the system links. We design low-complexity and low-bit rate feedback strategies where a quantized version of some CSI parameters is fed back from the user terminal (UT) to the base station (BS), which shares it with the other BSs through a limited-capacity backhaul network. This information is then used by BSs to perform the overall IA design. With the proposed strategies, we only need to send part of the CSI information, and this can even be sent only once for a set of data blocks transmitted over time-varying channels. These strategies are applied to iterative MMSE-based IA techniques for the downlink of broadband wireless OFDM systems with limited feedback. A new robust iterative IA technique, where channel quantization errors are taken into account in IA design, is also proposed and evaluated. With our proposed strategies, we need a small number of quantization bits to transmit and share the CSI, when comparing with the techniques used in previous works, while allowing performance close to the one obtained with perfect channel knowledge.
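
    The feedback quantization itself reduces to mapping each selected CSI parameter to a small integer index. A minimal uniform-quantizer sketch follows; the bit budget and parameter range are illustrative, and the robust design in the paper additionally models the resulting quantization error.

      import numpy as np

      def quantize_csi(values, bits, lo, hi):
          """Uniform quantization of real CSI parameters to `bits`-bit indices."""
          levels = 2 ** bits
          step = (hi - lo) / levels
          idx = np.clip(np.floor((values - lo) / step).astype(int), 0, levels - 1)
          return idx, lo + (idx + 0.5) * step      # indices to feed back, dequantized values

      phases = np.random.default_rng(8).uniform(-np.pi, np.pi, size=16)
      idx, deq = quantize_csi(phases, bits=4, lo=-np.pi, hi=np.pi)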

  10. Key Themes in Mobile Learning: Prospects for Learner-Generated Learning through AR and VR

    ERIC Educational Resources Information Center

    Aguayo, Claudio; Cochrane, Thomas; Narayan, Vickel

    2017-01-01

    This paper summarises the findings from a literature review in mobile learning, developed as part of a 2-year six-institution project in New Zealand. Through the development of a key themes codebook, we address selected key themes with respect to their relevance to learner-generated learning through emerging technologies, with attention to mobile…

  11. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  12. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed before the output is passed to the fault detector and before the control input is transmitted to the switched system, respectively. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image, the LCP is 1.89, although it increases to 3.48 when only a cloud-free section of the image is considered. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  14. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.

  15. A novel unsupervised spike sorting algorithm for intracranial EEG.

    PubMed

    Yadav, R; Shah, A K; Loeb, J A; Swamy, M N S; Agarwal, R

    2011-01-01

    This paper presents a novel, unsupervised spike classification algorithm for intracranial EEG. The method combines template matching and principal component analysis (PCA) for building a dynamic patient-specific codebook without a priori knowledge of the spike waveforms. The problem of misclassification due to overlapping classes is resolved by identifying similar classes in the codebook using hierarchical clustering. Cluster quality is visually assessed by projecting inter- and intra-clusters onto a 3D plot. Intracranial EEG from 5 patients was utilized to optimize the algorithm. The resulting codebook retains 82.1% of the detected spikes in non-overlapping and disjoint clusters. Initial results suggest a definite role of this method for both rapid review and quantitation of interictal spikes that could enhance both clinical treatment and research studies on epileptic patients.
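
    The class-merging step, identifying similar codebook classes by hierarchical clustering in PCA space, might be sketched as below; the three retained components, the Ward linkage, and the distance cutoff are assumptions, not the paper's exact settings.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def merge_similar_classes(templates, cutoff):
          """Group overlapping spike-template classes of the codebook by
          hierarchical clustering on their leading PCA scores."""
          X = templates - templates.mean(0)        # mean-centred waveforms
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          scores = X @ Vt[:3].T                    # needs at least 3 templates
          Z = linkage(scores, method='ward')
          return fcluster(Z, t=cutoff, criterion='distance')  # merged class labels

      rng = np.random.default_rng(4)
      templates = rng.normal(size=(12, 64))        # 12 template waveforms, 64 samples
      labels = merge_similar_classes(templates, cutoff=5.0)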

  16. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
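
    The two entropy objectives can be computed from quantized station records as in the sketch below; fixed-width histogram binning is only one of the quantization cases the paper compares, and all names and sizes are illustrative.

      import numpy as np

      def entropy(counts):
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def network_objectives(records, n_bins=10):
          """Joint entropy (maximize) and total correlation (minimize) of a
          candidate station set, after histogram quantization of each record."""
          q = np.stack([np.digitize(r, np.histogram_bin_edges(r, n_bins))
                        for r in records])
          marginal = sum(entropy(np.bincount(row)) for row in q)
          _, joint_counts = np.unique(q.T, axis=0, return_counts=True)
          joint = entropy(joint_counts)
          return joint, marginal - joint           # C = sum(H_i) - H_joint

      rng = np.random.default_rng(3)
      flows = [rng.gamma(2.0, size=500) for _ in range(3)]  # toy candidate stations
      joint_H, total_corr = network_objectives(flows)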

  17. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.

  18. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  19. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with C_T:C_G = 1/3 (where C_T and C_G are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  20. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  1. Unsupervised categorization method of graphemes on handwritten manuscripts: application to style recognition

    NASA Astrophysics Data System (ADS)

    Daher, H.; Gaceb, D.; Eglin, V.; Bres, S.; Vincent, N.

    2012-01-01

    We present in this paper a feature selection and weighting method for medieval handwriting images that relies on codebooks of shapes of small strokes of characters (graphemes resulting from the decomposition of manuscripts). These codebooks are important for simplifying the automation of analysis, manuscript transcription, and the recognition of styles or writers. Our approach provides precise feature weighting by genetic algorithms and a high-performance methodology for categorizing grapheme shapes into codebooks by graph coloring; the codebooks are in turn applied to CBIR (Content Based Image Retrieval) in a mixed handwriting database containing pages from different writers, historical periods, and quality levels. We show how the coupling of these two mechanisms, feature weighting and grapheme classification, can offer a better separation of the forms to be categorized by exploiting their grapho-morphological and density particularities and their significant orientations.

  2. Exploratory research session on the quantization of the gravitational field. At the Institute for Theoretical Physics, Copenhagen, Denmark, June-July 1957

    NASA Astrophysics Data System (ADS)

    DeWitt, Bryce S.

    2017-06-01

    During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.

  3. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. A computer simulation of the actions of the quantizer was tested with synthesized and real speech signals, and the results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.

  4. Applying the WHO conceptual framework for the International Classification for Patient Safety to a surgical population

    PubMed Central

    McElroy, L. M.; Woods, D. M.; Yanes, A. F.; Skaro, A. I.; Daud, A.; Curtis, T.; Wymore, E.; Holl, J. L.; Abecassis, M. M.; Ladner, D. P.

    2016-01-01

    Objective Efforts to improve patient safety are challenged by the lack of universally agreed upon terms. The International Classification for Patient Safety (ICPS) was developed by the World Health Organization for this purpose. This study aimed to test the applicability of the ICPS to a surgical population. Design A web-based safety debriefing was sent to clinicians involved in surgical care of abdominal organ transplant patients. A multidisciplinary team of patient safety experts, surgeons and researchers used the data to develop a system of classification based on the ICPS. Disagreements were reconciled via consensus, and a codebook was developed for future use by researchers. Results A total of 320 debriefing responses were used for the initial review and codebook development. In total, the 320 debriefing responses contained 227 patient safety incidents (range: 0–7 per debriefing) and 156 contributing factors/hazards (0–5 per response). The most common severity classification was ‘reportable circumstance,’ followed by ‘near miss.’ The most common incident types were ‘resources/organizational management,’ followed by ‘medical device/equipment.’ Several aspects of surgical care were encompassed by more than one classification, including operating room scheduling, delays in care, trainee-related incidents, interruptions and handoffs. Conclusions This study demonstrates that a framework for patient safety can be applied to facilitate the organization and analysis of surgical safety data. Several unique aspects of surgical care require consideration, and by using a standardized framework for describing concepts, research findings can be compared and disseminated across surgical specialties. The codebook is intended for use as a framework for other specialties and institutions. PMID:26803539

  5. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.

  6. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has an xp structure, where p is the conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole, where T and S are the temperature and entropy of the horizon. Imposing the hermiticity condition, we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this, we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of that earlier result, as the calculation is based on direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  7. The 1986 ARI Survey of U.S. Army Recruits: Codebook for Army Reserve/ National Guard Survey Respondents

    DTIC Science & Technology

    1987-04-01

    [OCR fragments from the scanned document; recoverable content: the codebook title ('The 1986 ARI Survey of Army Recruits: Codebook for Summer 86 USAR & ARNG Survey Respondents'), a social security number survey item, and approval/distribution text from the U.S. Army Research Institute for the Behavioral and Social Sciences ('Approved for public release; distribution unlimited'; the Research Product is not to be construed as an official Department of the Army position).]

  8. Electronic structure robustness and design rules for 2D colloidal heterostructures

    NASA Astrophysics Data System (ADS)

    Chu, Audrey; Livache, Clément; Ithurria, Sandrine; Lhuillier, Emmanuel

    2018-01-01

    Among colloidal quantum dots, 2D nanoplatelets present exceptionally narrow optical features. Rationalizing the design of heterostructures of these objects is of utmost interest; however, very little work has focused on the investigation of their electronic properties. This work is organized into two main parts. In the first part, we solve the Schrödinger equation in 1D to extract the effective masses for nanoplatelets (NPLs) of CdSe, CdS, and CdTe and the valence band offset for CdSe/CdS core/shell NPLs. In the second part, using the determined parameters, we quantify how the spectra of the CdSe/CdS heterostructure are affected by (i) the application of an electric field and (ii) the presence of a dull interface. We also propose design strategies to make the heterostructure even more robust.
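
    The kind of calculation used in the first part, discretizing the 1-D Schrödinger equation for a square well and reading off the confined levels, can be sketched with finite differences; the well width, effective mass, and barrier height below are placeholder values, not the fitted NPL parameters.

      import numpy as np

      HBAR2_OVER_M0 = 7.62  # hbar^2 / m0 in eV * Angstrom^2 (approximate)

      def quantum_well_levels(width, m_eff, barrier, n=800, box=300.0):
          """Lowest eigenvalues of a finite 1-D square well (lengths in Angstrom,
          energies in eV), via a finite-difference Hamiltonian."""
          z = np.linspace(-box / 2, box / 2, n)
          dz = z[1] - z[0]
          V = np.where(np.abs(z) <= width / 2, 0.0, barrier)
          t = HBAR2_OVER_M0 / (2.0 * m_eff * dz ** 2)        # kinetic hopping term
          H = (np.diag(V + 2.0 * t)
               - np.diag(np.full(n - 1, t), 1)
               - np.diag(np.full(n - 1, t), -1))
          return np.linalg.eigvalsh(H)[:3]                   # three lowest levels

      levels = quantum_well_levels(width=40.0, m_eff=0.2, barrier=1.0)  # toy inputs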

  9. Investigating Students' Mental Models about the Quantization of Light, Energy, and Angular Momentum

    ERIC Educational Resources Information Center

    Didis, Nilüfer; Eryilmaz, Ali; Erkoç, Sakir

    2014-01-01

    This paper is the first part of a multiphase study examining students' mental models about the quantization of physical observables--light, energy, and angular momentum. Thirty-one second-year physics and physics education college students who were taking a modern physics course participated in the study. The qualitative analysis of data revealed…

  10. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.

  11. Armed Forces 1996 Equal Opportunity Survey: Administration, Datasets, and Codebook

    DTIC Science & Technology

    1997-12-01

    [OCR fragments from the scanned report; recoverable content: care was taken in the preparation of analysis files, which balance public access to data with sufficient information for accurate estimates; race/ethnicity categories include Native American/Alaskan Native and Other; the duty location variable has two levels, US (a duty station in any of the 50 states or the District of Columbia) and non-US; the new DMDC procedures most closely follow CASRO's Sample Type U design for computing the overall response rate.]

  12. Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K): Combined User's Manual for the ECLS-K Eighth-Grade and K-8 Full Sample Data Files and Electronic Codebooks. NCES 2009-004

    ERIC Educational Resources Information Center

    Tourangeau, Karen; Nord, Christine; Le, Thanh; Sorongon, Alberto G.; Najarian, Michelle

    2009-01-01

    This manual provides guidance and documentation for users of the eighth-grade data of the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K). It begins with an overview of the ECLS-K study. Subsequent chapters provide details on the instruments and measures used, the sample design, weighting procedures, response rates, data…

  13. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.

  14. Digital television system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1976-01-01

    The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and to the quantization and coding schemes employed.

  15. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding mode stability analysis; the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By means of H∞ control analysis, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in the simulation.

  16. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  17. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  18. From black holes to white holes: a quantum gravitational, symmetric bounce

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier; Saini, Sahil; Singh, Parampreet

    2017-11-01

    Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.

  19. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are also presented to illustrate the established results.
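
    A minimal sketch of the kind of scaled uniform encoder/decoder pair on which such limited-bandwidth schemes rely; the geometric decay of the scaling function and all names here are illustrative assumptions, not the paper's exact construction:

    ```python
    import numpy as np

    def encode(x, x_hat, scale, levels=16):
        """Quantize the estimation error (x - x_hat), normalized by the
        current scaling factor, onto a bounded uniform grid."""
        e = np.clip((x - x_hat) / scale, -1.0, 1.0)   # bounded quantizer input
        return int(np.round(e * (levels // 2)))       # integer code, |q| <= levels/2

    def decode(q, x_hat, scale, levels=16):
        return x_hat + scale * q / (levels // 2)

    # Encoder and decoder shrink the scaling function at the same rate, so a
    # fixed number of levels keeps covering the (shrinking) estimation error.
    def scale_at(k, s0=1.0, rho=0.95):
        return s0 * rho**k
    ```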

  20. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Because the quantization codes obtained from different pixel exposures of the same object vary only slightly, the pixel array is divided into two groups: one performs a coarse quantization of the high bits only, and the other a fine quantization of the low bits. The complete quantization codes are then composed from the coarse and fine results. This equivalent operation reduces the total number of bits required for the quantization. In a 0.18 µm CMOS process, two versions of a 16-stage digital-domain CMOS TDI image sensor chain based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. Simulation results show that the average energy per line of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively, while their linearities are 99.74% and 99.99%, respectively.
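
    The code composition itself reduces to bit slicing and recombination, as in the following sketch; the even 5/5 split of the 10-bit codes is a hypothetical choice for illustration, since the paper does not state the exact split:

    ```python
    def compose_code(coarse_code, fine_code, low_bits=5):
        """Compose a complete quantization code from a coarse conversion of
        the high bits and a fine conversion of the low bits."""
        return (coarse_code << low_bits) | fine_code

    # A 10-bit code split as 5 coarse + 5 fine bits:
    full = compose_code(0b10110, 0b01101)   # -> 0b1011001101 (717)
    ```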

  1. Finite-time H∞ control for a class of discrete-time switched time-delay systems with quantized feedback

    NASA Astrophysics Data System (ADS)

    Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An

    2012-12-01

    This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.

  2. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and perhaps other, not yet fully understood, factors were not taken into account at this stage of the research; a future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer: a 30-bit-per-frame pole quantizer yielded speech quality similar to that obtained with a standard 41-bit-per-frame reflection-coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.

  3. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and quantization errors, and a range of parameters for the quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the controller guarantees a uniformly ultimately bounded output tracking error and that all signals of the closed-loop system remain bounded, even in the presence of up to m−q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. A new apparatus for studies of quantized vortex dynamics in dilute-gas Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Newman, Zachary L.

    The presence of quantized vortices and a high level of control over trap geometries and other system parameters make dilute-gas Bose-Einstein condensates (BECs) a natural environment for studies of vortex dynamics and quantum turbulence in superfluids, primary interests of the BEC group at the University of Arizona. Such research may lead to a deeper understanding of the nature of quantum fluid dynamics and far-from-equilibrium phenomena. Despite the importance of quantized vortex dynamics in the fields of superfluidity, superconductivity and quantum turbulence, direct imaging of vortices in trapped BECs remains a significant technical challenge, primarily due to the small size of the vortex core in a trapped gas, typically a few hundred nanometers in diameter. In this dissertation I present the design and construction of a new ⁸⁷Rb BEC apparatus with the goal of studying vortex dynamics in trapped BECs. The heart of the apparatus is a compact vacuum chamber with a custom, all-glass science cell designed to accommodate commercial high-numerical-aperture microscope objectives for in situ imaging of vortices. The designs for the new system are, in part, based on prior work in our group on in situ imaging of vortices. Here I review aspects of our prior work and discuss some of the successes and limitations that are relevant to the new apparatus. The bulk of the thesis describes the major subsystems of the new apparatus, which include the vacuum chamber, the laser systems, the magnetic transfer system and the final magnetic trap for the atoms. Finally, I demonstrate the creation of a BEC of ~2 × 10⁶ ⁸⁷Rb atoms in the new system and show that the BEC can be transferred into a weak, spherical magnetic trap with a well-defined magnetic field axis that may be useful for future vortex imaging studies.

  5. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite-precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical imaging field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criterion, also leads to a high-speed implementation with small chip area. In addition, the choice of wavelet is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For each, we obtain the maximum quantization errors produced in the calculation of the WT components and thus deduce the minimum word length required for the reconstructed image to be numerically identical to the original. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we analyze the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  6. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
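
    The entropy constraint enters through a Lagrangian nearest-neighbor rule: each vector is assigned to the codeword minimizing distortion plus λ times its code length. The following is a minimal single-stage sketch of such a descent; the paper's direct-sum multistage structure is omitted for brevity:

    ```python
    import numpy as np

    def ecvq_assign(x, codebook, rates, lam):
        """Entropy-constrained assignment: choose the codeword minimizing
        distortion + lambda * rate, where rate_i = -log2(p_i) approximates
        the entropy-coded length of codeword i."""
        dist = np.sum((codebook - x)**2, axis=1)
        return int(np.argmin(dist + lam * rates))

    def ecvq_design(data, codebook, lam, iters=20):
        """Lloyd-style descent on the Lagrangian J = D + lambda * H."""
        M = len(codebook)
        rates = np.full(M, np.log2(M))                 # start from a uniform code
        for _ in range(iters):
            idx = np.array([ecvq_assign(x, codebook, rates, lam) for x in data])
            p = np.bincount(idx, minlength=M) / len(data)
            rates = -np.log2(np.maximum(p, 1e-12))     # update code lengths
            for i in range(M):                         # centroid update
                if np.any(idx == i):
                    codebook[i] = data[idx == i].mean(axis=0)
        return codebook, rates

    rng = np.random.default_rng(1)
    cb, r = ecvq_design(rng.normal(size=(500, 2)), rng.normal(size=(16, 2)), lam=0.05)
    ```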

  7. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    PubMed

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2018-06-01

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  8. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep neural networks to hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm, called supervised iterative quantization, to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: applying k-means based adaptive quantization to the learned network weights, and retraining the network based on the quantized weights. These two steps are alternated until a convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The convolutional neural network with reduced weight and input/response precision is demonstrated on two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network achieves the performance of the full-bit-resolution network, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
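
    A minimal sketch of the server-side alternation, using scikit-learn's KMeans for the adaptive quantization step; `retrain` is a placeholder for the framework-specific training routine and is not part of the paper:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def quantize_weights(w, n_bits=4):
        """K-means based adaptive quantization: cluster the weights and
        replace each weight by its cluster centroid (2**n_bits values)."""
        km = KMeans(n_clusters=2**n_bits, n_init=10).fit(w.reshape(-1, 1))
        return km.cluster_centers_[km.labels_].reshape(w.shape)

    def supervised_iterative_quantization(weights, retrain, n_bits=4, rounds=5):
        """Alternate (1) k-means quantization of the learned weights and
        (2) retraining from the quantized weights. `retrain` is a
        placeholder for the framework's training routine."""
        for _ in range(rounds):
            weights = quantize_weights(weights, n_bits)
            weights = retrain(weights)
        return weights
    ```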

  9. The Analysis of Design of Robust Nonlinear Estimators and Robust Signal Coding Schemes.

    DTIC Science & Technology

    1982-09-16

    [Scanned-report fragment: the surviving text concerns mean-square error comparisons between uniform and nonuniform quantizers and a median-filtering argument.]

  10. Texton-based analysis of paintings

    NASA Astrophysics Data System (ADS)

    van der Maaten, Laurens J. P.; Postma, Eric O.

    2010-08-01

    The visual examination of paintings is traditionally performed by skilled art historians using their eyes. Recent advances in intelligent systems may support art historians in determining the authenticity or date of creation of paintings. In this paper, we propose a technique for the examination of brushstroke structure that views the wildly overlapping brushstrokes as texture. The analysis of the painting texture is performed with the help of a texton codebook, i.e., a codebook of small prototypical textural patches. The texton codebook can be learned from a collection of paintings. Our textural analysis technique represents paintings in terms of histograms that measure the frequency by which the textons in the codebook occur in the painting (so-called texton histograms). We present experiments that show the validity and effectiveness of our technique for textural analysis on a collection of digitized high-resolution reproductions of paintings by Van Gogh and his contemporaries. As texton histograms cannot easily be interpreted by art experts, the paper proposes two approaches to visualize the results of the textural analysis. The first approach visualizes the similarities between the histogram representations of paintings by employing a recently proposed dimensionality reduction technique called t-SNE. We show that t-SNE reveals a clear separation of paintings created by Van Gogh and those created by other painters. In addition, the period of creation is faithfully reflected in the t-SNE visualizations. The second approach visualizes the similarities and differences between paintings by highlighting regions in a painting in which the textural structure of the painting is unusual. We illustrate the validity of this approach by means of an experiment in which we highlight regions in a painting by Monet that are not very "Van Gogh-like". Taken together, we believe the tools developed in this study are well suited to assist art historians in their study of paintings.
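
    A minimal sketch of the texton-histogram representation described above, assuming patches have already been extracted and flattened to vectors; the codebook size is illustrative:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def learn_texton_codebook(patches, n_textons=128):
        """Cluster small textural patches (flattened to vectors) into a
        codebook of prototypical textons."""
        return KMeans(n_clusters=n_textons, n_init=4).fit(patches)

    def texton_histogram(patches, codebook):
        """Represent one painting by the normalized frequency of each texton
        among its patches."""
        labels = codebook.predict(patches)
        hist = np.bincount(labels, minlength=codebook.n_clusters)
        return hist / hist.sum()
    ```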

  11. Particle on a torus knot: Constrained dynamics and semi-classical quantization in a magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Praloy, E-mail: praloydasdurgapur@gmail.com; Pramanik, Souvik, E-mail: souvick.in@gmail.com; Ghosh, Subir, E-mail: subirghosh20@gmail.com

    2016-11-15

    Kinematics and dynamics of a particle moving on a torus knot poses an interesting problem as a constrained system. In the first part of the paper we have derived the modified symplectic structure or Dirac brackets of the above model in Dirac’s Hamiltonian framework, both in toroidal and Cartesian coordinate systems. This algebra has been used to study the dynamics, in particular small fluctuations in motion around a specific torus. The spatial symmetries of the system have also been studied. In the second part of the paper we have considered the quantum theory of a charge moving in a torus knot in the presence of a uniform magnetic field along the axis of the torus in a semiclassical quantization framework. We exploit the Einstein–Brillouin–Keller (EBK) scheme of quantization that is appropriate for multidimensional systems. Embedding of the knot on a specific torus is inherently two dimensional that gives rise to two quantization conditions. This shows that although the system, after imposing the knot condition reduces to a one dimensional system, even then it has manifested non-planar features which shows up again in the study of fractional angular momentum. Finally we compare the results obtained from EBK (multi-dimensional) and Bohr–Sommerfeld (single dimensional) schemes. The energy levels and fractional spin depend on the torus knot parameters that specifies its non-planar features. Interestingly, we show that there can be non-planar corrections to the planar anyon-like fractional spin.

  12. Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF

    PubMed Central

    Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto

    2015-01-01

    One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance on the far-infrared (FIR) spectrum than on the visible one for pedestrian classification, even in daytime conditions. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions of interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as light regions, so ROI are detected with a high recall rate using the hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification using an SVM, which enhances precision at low computational cost. In the third module, we combine the mean-shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors trained on the FIR images. PMID:25871724

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 µm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
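
    For reference, the bit-level rule of a Golomb-Rice code such as the one the joint quantizer/coder implements: a unary-coded quotient followed by k raw remainder bits, with signed residuals first interleaved to non-negative integers. This is a software sketch only; the chip's mixed-signal realization shares circuitry with the single-slope ADC and is quite different:

    ```python
    def zigzag(x):
        """Interleave signed residuals to non-negative integers:
        0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
        return (x << 1) if x >= 0 else -(x << 1) - 1

    def golomb_rice_encode(residual, k):
        """Golomb-Rice code with parameter 2**k: unary-coded quotient,
        then k raw remainder bits."""
        n = zigzag(residual)
        q, r = n >> k, n & ((1 << k) - 1)
        return '1' * q + '0' + format(r, '0{}b'.format(k))

    # Small residuals (the common case after good prediction) get short codes:
    print(golomb_rice_encode(0, 2))    # '000'
    print(golomb_rice_encode(-3, 2))   # zigzag -> 5 -> '1001'
    ```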

  14. Large-scale classification of traffic signs under real-world conditions

    NASA Astrophysics Data System (ADS)

    Hazelhoff, Lykele; Creusen, Ivo; van de Wouw, Dennis; de With, Peter H. N.

    2012-02-01

    Traffic sign inventories are important to governmental agencies as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically based on street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter: sign types are very similar, with only minor differences within the sign; a high number of different signs is involved; and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs based on the bag-of-words approach for generic object classification. We extend the approach with a flexible, modular codebook to model the specific features of each sign type independently, in order to emphasize the inter-sign differences instead of the parts common to all sign types. Additionally, this allows us to model and label the false detections that are present. Furthermore, analysis of the classification output identifies the unreliable results. This classification system has been extensively tested on three different sign classes, covering 60 different sign types in total. These three data sets contain the sign detection results on street-level panoramic images extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.

  15. Codebook-based electrooculography data analysis towards cognitive activity recognition.

    PubMed

    Lagodzinski, P; Shirahama, K; Grzegorzek, M

    2018-04-01

    With the advancement of mobile/wearable technology, people have started to use a variety of sensing devices to track their daily activities as well as their health and fitness, in order to improve quality of life. This work addresses eye movement analysis, which, owing to its strong correlation with cognitive tasks, can be successfully utilized in activity recognition. Eye movements are recorded using an electrooculographic (EOG) system built into the frames of glasses, which can be worn more unobtrusively and comfortably than other devices. Since the obtained information is low-level sensor data expressed as a sequence of values sampled at constant intervals (100 Hz), the cognitive activity recognition problem is formulated as sequence classification. However, it is unclear what kind of features are useful for accurate cognitive activity recognition. Thus, a machine learning method known as the codebook approach is applied: instead of relying on feature engineering, it describes each sequence of recorded EOG data by the distribution of characteristic subsequences (codewords), where the codewords are obtained by clustering a large number of subsequences. Further, statistical analysis of the codeword distribution discovers features that are characteristic of a certain activity class. Experimental results demonstrate good accuracy of codebook-based cognitive activity recognition, reflecting the effective usage of the codewords. Copyright © 2017 Elsevier Ltd. All rights reserved.
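
    A minimal sketch of the codebook representation for one recording; the window length (1 s at the 100 Hz sampling rate) and stride are illustrative choices, and `codebook` is assumed to be a clustering model fitted beforehand on subsequences pooled across many recordings:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def codebook_features(seq, codebook, w=100, stride=50):
        """Describe one EOG recording by the distribution of codewords
        among its sliding-window subsequences."""
        subs = np.array([seq[i:i + w] for i in range(0, len(seq) - w + 1, stride)])
        labels = codebook.predict(subs)
        hist = np.bincount(labels, minlength=codebook.n_clusters)
        return hist / hist.sum()

    # The codebook comes from clustering a large pool of subsequences, e.g.:
    # codebook = KMeans(n_clusters=64, n_init=4).fit(pooled_subsequences)
    ```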

  16. The 1984 ARI Survey of Army Recruits: Codebook for October 84/February 85 USAR (U.S. Army Reserve) and ARNG (Army National Guard) Survey Respondents

    DTIC Science & Technology

    1986-05-01

    [Scanned codebook fragment: frequency tables of survey response codes (e.g., NO RESPONSE, NOT CHECKED, CHECKED) for items such as newspaper readership and the Army College Fund booklet.]

  17. Quantized circular photogalvanic effect in Weyl semimetals

    NASA Astrophysics Data System (ADS)

    de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.

    The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. We find that in a class of Weyl semimetals (e.g., SrSi2) and three-dimensional Rashba materials (e.g., doped Te) without inversion and mirror symmetries, the CPGE trace is effectively quantized in terms of the combination of fundamental constants e³/(h²cε₀), with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points near the Fermi surface, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.

  18. Quantized circular photogalvanic effect in Weyl semimetals

    NASA Astrophysics Data System (ADS)

    de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.

    2017-07-01

    The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. Here we find that in a class of Weyl semimetals (for example, SrSi2) and three-dimensional Rashba materials (for example, doped Te) without inversion and mirror symmetries, the injection contribution to the CPGE trace is effectively quantized in terms of the fundamental constants e, h, c and ε₀, with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.

  19. On-chip integratable all-optical quantizer using strong cross-phase modulation in a silicon-organic hybrid slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Sang, Xinzhu; Wang, Kuiru; Wu, Qiang; Yan, Binbin; Li, Feng; Zhou, Xian; Zhong, Kangping; Zhou, Guiyao; Yu, Chongxiu; Farrell, Gerald; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.

    2016-01-01

    High-performance all-optical quantizers based on silicon waveguides are believed to have significant applications in photonically integratable optical communication links, optical interconnection networks, and real-time signal processing systems. In this paper, we propose an integratable all-optical quantizer for on-chip, low-power-consumption all-optical analog-to-digital converters. The quantization is realized by strong cross-phase modulation and interference in a silicon-organic hybrid (SOH) slot-waveguide-based Mach-Zehnder interferometer. By carefully designing the dimensions of the SOH waveguide, large nonlinear coefficients up to 16,000 and 18,069 W⁻¹/m can be obtained for the pump and probe signals, respectively, along with a low pulse walk-off parameter of 66.7 fs/mm and all-normal dispersion in the wavelength regime considered. Simulation results show that the phase shift of the probe signal can reach 8π at a low pump pulse peak power of 206 mW and a propagation length of 5 mm, such that a 4-bit all-optical quantizer can be realized. The corresponding signal-to-noise ratio is 23.42 dB and the effective number of bits is 3.89. PMID:26777054

  20. Swings and roundabouts: optical Poincaré spheres for polarization and Gaussian beams

    NASA Astrophysics Data System (ADS)

    Dennis, M. R.; Alonso, M. A.

    2017-02-01

    The connection between Poincaré spheres for polarization and Gaussian beams is explored, focusing on the interpretation of elliptic polarization in terms of the isotropic two-dimensional harmonic oscillator in Hamiltonian mechanics, its canonical quantization and its semiclassical interpretation. This leads to the interpretation of structured Gaussian modes (the Hermite-Gaussian, Laguerre-Gaussian and generalized Hermite-Laguerre-Gaussian modes) as eigenfunctions of operators corresponding to the classical constants of motion of the two-dimensional oscillator, which acquire an extra significance as families of classical ellipses upon semiclassical quantization. This article is part of the themed issue 'Optical orbital angular momentum'.

  1. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
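
    A minimal software sketch of the BAQ idea: estimate a gain per block of raw signal samples, normalize, and quantize with a fixed small quantizer, transmitting the gain once per block. The 4-bit width and the 3-sigma loading factor are illustrative; operational BAQ uses quantizer levels optimized for Gaussian-distributed SAR samples:

    ```python
    import numpy as np

    def baq_encode(block, n_bits=4):
        """Estimate a per-block gain, normalize, and quantize with a fixed
        uniform quantizer; the gain is transmitted once per block."""
        sigma = block.std() + 1e-12          # per-block gain estimate
        half = 1 << (n_bits - 1)
        q = np.clip(np.round(block / sigma * half / 3.0), -half, half - 1)
        return q.astype(int), sigma

    def baq_decode(q, sigma, n_bits=4):
        half = 1 << (n_bits - 1)
        return q * 3.0 * sigma / half
    ```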

  2. Annual Progress Report for July 1, 1978 through June 30, 1979,

    DTIC Science & Technology

    1979-08-01

    [Scanned-report fragment. Recoverable citations: H. V. Poor and D. Alexandrou, "A General Relationship Between Two Quantizer Design Criteria," IEEE Trans. on Information Theory; D. Alexandrou and H. V. Poor, "Data Quantization in Stochastic-Signal Detection" (meeting paper).]

  3. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel; the method centers on the design of the quantizers used for encoding the transform coefficients. The algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method is used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, with comparisons against a reference system designed assuming no channel errors.
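
    The bit-assignment step can be illustrated with the simpler high-rate model D_i(b) = σ_i² 2^(−2b), under which a greedy marginal-return allocation captures the spirit of the steepest-descent search (the paper's version additionally accounts for the performance of the channel-optimized quantizers):

    ```python
    import numpy as np

    def greedy_bit_allocation(variances, total_bits):
        """Hand out bits one at a time, always to the coefficient whose
        distortion drops the most, assuming D_i(b) = var_i * 2**(-2b)."""
        bits = np.zeros(len(variances), dtype=int)
        dist = np.asarray(variances, dtype=float)   # distortion at zero bits
        for _ in range(total_bits):
            i = int(np.argmax(dist))                # largest marginal reduction
            bits[i] += 1
            dist[i] /= 4.0                          # one extra bit: factor 2**(-2)
        return bits

    print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], total_bits=8))  # [4 3 1 0]
    ```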

  4. Analysis of the possibility of using G.729 codec for steganographic transmission

    NASA Astrophysics Data System (ADS)

    Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław

    2017-04-01

    Network steganography is dedicated in particular to those communication services in which no bridges or nodes carry out unintentional attacks on the steganographic sequence. To set up a hidden communication channel, a data encoding and decoding method was implemented using the codebooks of the G.729 codec. The G.729 codec is built around the CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction) linear-prediction vocoder, and modifying the binary content of the codebook directly changes the binary output stream. The article describes the results of research on selecting those bits of the G.729 codebook whose negation has the least influence on the quality and fidelity of the output signal. The study was performed using subjective and objective listening tests.

  5. Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach

    NASA Technical Reports Server (NTRS)

    Mcclain, Stephen T.; Kreeger, Richard E.

    2013-01-01

    Self-organizing maps (SOMs) are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional, nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winning codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high-resolution three-dimensional scanner system capable of resolving ice-shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing-map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° AOA just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of the early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
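
    For orientation, the generic SOM update underlying the mean-shape representation: the codebook vector closest to a data sample (the winner) and its lattice neighbors are pulled toward the sample. The learning rate and neighborhood radius, normally annealed over the iterations, are fixed here for brevity:

    ```python
    import numpy as np

    def som_step(codebook, grid, x, lr=0.1, radius=1.0):
        """One self-organizing-map update: find the winner (codebook vector
        closest to x), then pull the winner and its lattice neighbors
        toward x.

        codebook: (M, d) codebook vectors; grid: (M, 2) lattice coordinates.
        """
        win = np.argmin(((codebook - x)**2).sum(axis=1))
        g = np.exp(-((grid - grid[win])**2).sum(axis=1) / (2 * radius**2))
        codebook += lr * g[:, None] * (x - codebook)
        return codebook
    ```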

  6. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  7. Manipulating and probing angular momentum and quantized circulation in optical fields and matter waves

    NASA Astrophysics Data System (ADS)

    Lowney, Joseph Daniel

    Methods to generate, manipulate, and measure optical and atomic fields with global or local angular momentum have a wide range of applications in both fundamental physics research and technology development. In optics, the engineering of angular momentum states of light can aid studies of orbital angular momentum (OAM) exchange between light and matter. The engineering of optical angular momentum states can also be used to increase the bandwidth of optical communications or serve as a means to distribute quantum keys, for example. Similar capabilities in Bose-Einstein condensates are being investigated to improve our understanding of superfluid dynamics, superconductivity, and turbulence, the last of which is widely considered to be one of the most ubiquitous yet poorly understood subjects in physics. The first part of this two-part dissertation presents an analysis of techniques for measuring and manipulating quantized vortices in BECs. The second part presents theoretical and numerical analyses of new methods to engineer the OAM spectra of optical beams. The superfluid dynamics of a BEC are often well described by a nonlinear Schrodinger equation. The nonlinearity arises from interatomic scattering and enables BECs to support quantized vortices, which have quantized circulation and are fundamental structural elements of quantum turbulence. With the experimental tools to dynamically manipulate and measure quantized vortices, BECs are proving to be a useful medium for testing the theoretical predictions of quantum turbulence. In this dissertation we analyze a method for making minimally destructive in situ observations of quantized vortices in a BEC. Secondly, we numerically study a mechanism to imprint vortex dipoles in a BEC. With these advancements, more robust experiments on vortex dynamics and quantum turbulence will be within reach. A more complete understanding of quantum turbulence will enable principles of microscopic fluid flow to be related to the statistical properties of turbulence in a superfluid. In the second part of this dissertation we explore frequency mixing, a subset of nonlinear optical processes in which one or more input optical beams are converted into one or more output beams with different optical frequencies. The ability of parametric nonlinear processes such as second harmonic generation or parametric amplification to manipulate the OAM spectra of optical beams is an active area of research. In a theoretical and numerical investigation, two complementary methods for sculpting the OAM spectra are developed. The first method employs second harmonic generation with two non-collinear input beams to develop a broad spectrum of OAM states in an optical field. The second method utilizes parametric amplification with collinear input beams to develop an OAM-dependent gain or attenuation, termed dichroism for OAM, to effectively narrow the OAM spectrum of an optical beam. The theoretical principles developed in this dissertation enhance our understanding of how nonlinear processes can be used to engineer the OAM spectra of optical beams and could serve as methods to increase the bandwidth of an optical signal by multiplexing over a range of OAM states.

  8. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.

  9. Visually Lossless JPEG 2000 for Remote Image Browsing

    PubMed Central

    Oh, Han; Bilgin, Ali; Marcellin, Michael

    2017-01-01

    Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112

  10. Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction

    NASA Astrophysics Data System (ADS)

    Rosaler, Joshua

    2018-03-01

    This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum-classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit ħ → 0; on the other, a certain generalization of Ehrenfest's theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act, specifically via restriction to narrow wave packet states. Here, we describe a certain geometrical re-formulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.
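
    For orientation, the textbook Ehrenfest relations that the generalized version discussed here extends (a standard statement, not the paper's generalization):

    ```latex
    \frac{d}{dt}\langle \hat{x}\rangle = \frac{\langle \hat{p}\rangle}{m},
    \qquad
    \frac{d}{dt}\langle \hat{p}\rangle = -\langle V'(\hat{x})\rangle
    \approx -V'(\langle \hat{x}\rangle)
    \quad \text{for narrow wave packets.}
    ```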

  11. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and suitable encoding, can be used to compress images effectively. In order to use them in image library systems, a compact storage scheme for quantized wavelet coefficient data must be developed with support for fast subregion retrieval. We have designed such a scheme, and in this paper we provide experimental studies demonstrating that it achieves good image compression ratios while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  12. Predictive Multiple Model Switching Control with the Self-Organizing Map

    NASA Technical Reports Server (NTRS)

    Motter, Mark A.

    2000-01-01

    A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.

  13. Modular Multi-Sensor Display System Design Study. Volume 1. Requirements Analysis and Design Studies

    DTIC Science & Technology

    1974-08-01

    [Scanned-report fragment: analysis-of-variance tables for the main effect of gray-shade quantization.]

  14. Mysterious Stoichiometry

    ERIC Educational Resources Information Center

    Bowman, L. H.; Shull, C. M.

    1975-01-01

    Describes an experiment designed to augment textual materials conducive to the inquiry approach to learning. The experiment is the culminating experience in a series designed to illustrate the fundamental nature of the atom: its quantized energy nature, its electrical nature, and its combining ability. (Author/GS)

  15. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability the true angles do not lie on the quantization grid, which results in power leakage and severe degradation of the CE performance. A new model is first proposed to formulate the off-grid problem. It divides each continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.
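
    The model's decomposition is simple to state: a continuous angle is split into its nearest quantization-grid point plus a fractional offset, as in the following sketch (names and grid spacing are illustrative):

    ```python
    import numpy as np

    def split_angle(theta, grid_step):
        """Divide a continuous angle into an integral grid part and a
        fractional off-grid offset."""
        g = int(np.round(theta / grid_step))   # integral grid index
        offset = theta - g * grid_step         # fractional off-grid part
        return g, offset

    # A physical path angle rarely falls exactly on a grid point; the offset
    # is what causes power leakage if it is ignored.
    g, off = split_angle(0.317, grid_step=np.pi / 64)
    ```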

  16. Trucks involved in fatal accidents codebook 2008.

    DOT National Transportation Integrated Search

    2011-01-01

    This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2008, including distributions of the code values for each variable in the file. The 2008 TIFA file is a census of all medium and heavy trucks invo...

  17. Buses involved in fatal accidents codebook 2008.

    DOT National Transportation Integrated Search

    2011-03-01

    This report provides documentation for UMTRI's file of Buses Involved in Fatal Accidents (BIFA), 2008, including distributions of the code values for each variable in the file. The 2008 BIFA file is a census of all buses involved in a fatal acc...

  18. Buses involved in fatal accidents codebook 2007.

    DOT National Transportation Integrated Search

    2009-12-01

    This report provides documentation for UMTRI's file of Buses Involved in Fatal Accidents (BIFA), 2007, including distributions of the code values for each variable in the file. The 2007 BIFA file is a census of all buses involved in a fatal acc...

  19. High resolution angular sensor. [reducing ring laser gyro output quantization using phase locked loops]

    NASA Technical Reports Server (NTRS)

    Gneses, M. I.; Berg, D. S.

    1981-01-01

    Specifications for the pointing stabilization system of the large space telescope were used in an investigation of the feasibility of reducing ring laser gyro output quantization to the sub-arc-second level by the use of phase-locked loops and associated electronics. Systems analysis procedures are discussed, and a multioscillator laser gyro model is presented along with data on the oscillator noise. It is shown that a second-order closed loop can meet the measurement noise requirements when the loop gain and the time constant of the loop filter are appropriately chosen. The preliminary electrical design is discussed from the standpoint of circuit tradeoff considerations. Analog, digital, and hybrid designs are given and their applicability to the high resolution sensor is examined. The electrical design choice of a system configuration is detailed. The design and operation of the various modules are considered, and system block diagrams are included. Phase 1 and 2 test results using the multioscillator laser gyro are included.

  20. Precise quantization of the anomalous Hall effect near zero magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestwick, A. J.; Fox, E. J.; Kou, Xufeng

    In this study, we report a nearly ideal quantum anomalous Hall effect in a three-dimensional topological insulator thin film with ferromagnetic doping. Near zero applied magnetic field we measure exact quantization in the Hall resistance to within a part per 10,000 and a longitudinal resistivity under 1 Ω per square, with chiral edge transport explicitly confirmed by nonlocal measurements. Deviations from this behavior are found to be caused by thermally activated carriers, as indicated by an Arrhenius-law temperature dependence. Using the deviations as a thermometer, we demonstrate an unexpected magnetocaloric effect and use it to reach near-perfect quantization by cooling the sample below the dilution refrigerator base temperature in a process approximating adiabatic demagnetization refrigeration.

  1. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  2. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2002-08-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  3. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2005-11-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  4. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is studied statistically, with experimental results obtained on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. An LDPC code may therefore serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  5. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  6. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for the quality losses; on the other hand, it allows the bit rate to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional part after quantization can be adjusted. To select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for intra coding.
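
    Since the scheme above hinges on minimizing a Lagrangian cost, a minimal sketch may help. The following fragment is not the paper's hardware algorithm: encode_block is a toy stand-in for the transform/quantize path, and the step-size formula is only H.264-flavored. It tries every candidate QP and rounding offset and keeps the pair minimizing J = D + lambda * R.

```python
def encode_block(residuals, qp, offset):
    """Toy model: scalar-quantize residuals; return (distortion, bits)."""
    step = 2 ** (qp / 6.0)          # H.264-style: step doubles every 6 QP
    levels = [int(abs(r) / step + offset) * (1 if r >= 0 else -1)
              for r in residuals]
    recon = [l * step for l in levels]
    sse = sum((r - x) ** 2 for r, x in zip(residuals, recon))
    bits = sum(1 + 2 * abs(l) for l in levels if l != 0)  # crude rate proxy
    return sse, bits

def select_qp_offset(residuals, qps, offsets, lam):
    """Exhaustive search for the (QP, offset) pair minimizing J = D + lam*R."""
    best, best_j = None, float("inf")
    for q in qps:
        for o in offsets:
            d, r = encode_block(residuals, q, o)
            j = d + lam * r
            if j < best_j:
                best, best_j = (q, o), j
    return best

print(select_qp_offset([7.5, -3.2, 0.4, 12.0],
                       qps=range(20, 32, 2), offsets=(0.17, 0.5), lam=4.0))
```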

  7. Successive approximation-like 4-bit full-optical analog-to-digital converter based on Kerr-like nonlinear photonic crystal ring resonators

    NASA Astrophysics Data System (ADS)

    Tavousi, Alireza; Mansouri-Birjandi, Mohammad Ali; Saffari, Mehdi

    2016-09-01

    Implementing photonic sampling and quantizing analog-to-digital converters (ADCs) enables us to extract a binary word from optical signals without the need for extra electronic assisting parts. This would enormously increase the sampling and quantization speed while decreasing the consumed power. To this end, based on the concept of the successive approximation method, a 4-bit full-optical ADC that operates using the intensity-dependent Kerr-like nonlinearity in a two-dimensional photonic crystal (2DPhC) platform is proposed. Silicon (Si) nanocrystal is chosen for its suitable nonlinear material characteristics. An optical limiter is used for the clamping and quantization of each successive level that represents an ADC bit. In the proposal, an energy-efficient optical ADC circuit is implemented by controlling system parameters such as the ring-to-waveguide coupling coefficients, the ring's nonlinear refractive index, and the ring's length. The performance of the ADC structure is verified by simulation using the finite-difference time-domain (FDTD) method.
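
    For readers unfamiliar with successive approximation, a software analogue may clarify the level-by-level decision the optical limiter performs. This is generic SAR logic for illustration only, not the paper's photonic implementation.

```python
def sar_quantize(x, full_scale=1.0, bits=4):
    """Successive approximation: test each bit from MSB to LSB,
    keeping it when the input exceeds the trial-code threshold."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if x >= trial * full_scale / (1 << bits):   # compare with mid-level
            code = trial
    return code

# identity staircase over one full scale
print([sar_quantize(v / 16) for v in range(16)])
```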

  8. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural image content. They present strong anisotropic features, especially in the text and graphics parts, and these anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme built on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization: an image block is represented by several representative colors, referred to as base colors, plus an index map, and both are then compressed. Every block selects its coding mode from the two new modes and the previous intra modes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
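
    The BCIM idea is easy to state in code: pick a few base colors, then store per-pixel indices. The sketch below uses a plain k-means-style loop on gray values as a stand-in for the adaptive color quantization; the entropy coding and RDO mode selection of the actual scheme are omitted.

```python
def bcim_encode(block, k=4, iters=8):
    """block: list of gray values; returns (base_colors, index_map)."""
    base = sorted(set(block))[:k] or [0]
    while len(base) < k:                 # pad degenerate blocks
        base.append(base[-1])
    for _ in range(iters):
        # assign each pixel to its nearest base color
        idx = [min(range(k), key=lambda j: abs(p - base[j])) for p in block]
        # move each base color to the mean of its assigned pixels
        for j in range(k):
            members = [p for p, i in zip(block, idx) if i == j]
            if members:
                base[j] = sum(members) / len(members)
    idx = [min(range(k), key=lambda j: abs(p - base[j])) for p in block]
    return base, idx

print(bcim_encode([10, 12, 200, 205, 11, 198], k=2))
```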

  9. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    By Michael S. Kappes and Mikko E. Waltari, IQ-Analog Corporation, San Diego, California. The paper presents a ... temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete ... pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a ...

  10. Image size invariant visual cryptography for general access structures subject to display quality constraints.

    PubMed

    Lee, Kai-Hui; Chiu, Pei-Ling

    2013-10-01

    Conventional visual cryptography (VC) suffers from a pixel-expansion problem or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model of the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous approaches.

  11. A study of the effectiveness of machine learning methods for classification of clinical interview fragments into a large number of categories.

    PubMed

    Hasan, Mehedi; Kotov, Alexander; Carcone, April; Dong, Ming; Naar, Sylvie; Hartlieb, Kathryn Brogan

    2016-08-01

    This study examines the effectiveness of state-of-the-art supervised machine learning methods, in conjunction with different feature types, for the task of automatically annotating fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN), in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to that of SVM. When only lexical features were used, SVM annotated interview transcripts with the highest classification accuracy among all classifiers: 70.8%, 61% and 53.7% for codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones improved the accuracy of SVM on the 17-class codebook to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of machine learning methods, in conjunction with lexical, semantic and contextual features, for automatic annotation of clinical interview transcripts with near-human accuracy.
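
    A minimal reproduction of the lexical baseline with a contextual feature, assuming scikit-learn is available. The utterances, labels, and the trick of appending the previous utterance's label as a pseudo-token are illustrative choices, not the paper's exact pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# placeholder training data (real codebooks have 17-41 codes)
utterances = ["i want to eat better", "tell me more", "not ready yet"]
prev_labels = ["<start>", "change_talk", "neutral"]
codes = ["change_talk", "neutral", "sustain_talk"]

# encode the contextual feature as an extra pseudo-token per utterance
docs = [f"{u} __prev_{p}" for u, p in zip(utterances, prev_labels)]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, codes)
print(model.predict(["maybe i could try __prev_neutral"]))
```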

  12. Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II

    NASA Astrophysics Data System (ADS)

    Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.

    2014-03-01

    The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the planned Spektr-UF international observatory [1]. A new modification of a rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described; it is based on minimizing the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the feedback parameters for the mode of precise inertial stabilization of the optical telescope axis are synthesized, taking into account the structural nonrigidity, the quantization of signals over time and level, the errors of the orientation sensors, and the errors and limitations of the control moments of the flywheel actuators. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.

  13. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is considered first. The conditions under which the uniform white-sequence model for the quantizer error is valid are established, independent of the sampling rate. An equivalent spectral density is defined for the quantizer error, resulting in an effective SNR value. This effective SNR may be used to predict quantized performance from infinitely finely quantized results. Attention is then given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely finely quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented, and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
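
    Under the uniform white-sequence model referenced above, the quantizer error has variance step²/12, from which an effective SNR follows directly. A short sketch; the full-scale-sine example is an assumption for illustration.

```python
import math

def effective_snr_db(signal_power, step):
    """Effective SNR under the uniform white-noise model (var = step^2/12)."""
    noise_power = step ** 2 / 12.0
    return 10.0 * math.log10(signal_power / noise_power)

# A b-bit quantizer spanning [-1, 1) has step = 2 / 2**b; for a
# full-scale sine (power 0.5) this reproduces the familiar ~6.02b + 1.76 dB.
for b in (8, 12, 16):
    print(b, round(effective_snr_db(0.5, 2.0 / 2 ** b), 2))
```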

  14. Trucks involved in fatal accidents codebook 2004 (Version March 23, 2007).

    DOT National Transportation Integrated Search

    2007-03-01

    "This report provides documentation for UMTRIs file of Trucks Involved in Fatal Accidents (TIFA), : 2004, including distributions of the code values for each variable in the file. The 2004 TIFA file is : a census of all medium and heavy trucks inv...

  15. December 2005 Status of Forces Survey of Active Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2006-06-01

    [Extraction fragment of the survey instrument: a response scale (Moderate extent / Small extent / Not a problem) for items p-t concerning a recent move: settling damage claims; non-reimbursed transportation costs incurred during the move; timeliness of reimbursements; accuracy of reimbursements; change in cost of living.]

  16. Trucks involved in fatal accidents codebook 2010 (Version October 22, 2012).

    DOT National Transportation Integrated Search

    2012-11-01

    This report provides documentation for UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 2010, including distributions of the code values for each variable in the file. The 2010 TIFA file is a census of all medium and heavy trucks invo...

  17. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information, and, contrary to existing scenarios where side information is given explicitly, the side information is created from a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions in which side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.

  18. Comparisons between Common and Dedicated Reference Signals for MIMO Multiplexing Using Precoding in Evolved UTRA Downlink

    NASA Astrophysics Data System (ADS)

    Taoka, Hidekazu; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru

    This paper presents comparisons between common and dedicated reference signals (RSs) for channel estimation in MIMO multiplexing using codebook-based precoding for orthogonal frequency division multiplexing (OFDM) radio access in the Evolved UTRA downlink with frequency division duplexing (FDD). We identify the best RS structure for precoding-based MIMO multiplexing by comparing the structures in terms of achievable throughput, taking into account the overhead of the common and dedicated RSs and of the precoding matrix indication (PMI) signal. Based on extensive throughput simulations of 2-by-2 and 4-by-4 MIMO multiplexing with precoding, we show that channel estimation based on common RSs multiplied by the precoding matrix indicated by the PMI signal achieves higher throughput than that using dedicated RSs, irrespective of the number of spatial multiplexing streams, when the number of available precoding matrices, i.e., the codebook size, is less than approximately 16 for 2-by-2 and 32 for 4-by-4 MIMO multiplexing.

  19. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. In contrast to existing reinforcement-learning-based results, tracking error constraints are considered and new critic functions are constructed to further improve performance. To ensure that the tracking errors stay within predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of NN reconstruction errors, input quantization, and disturbances. Based on Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums illustrate the effectiveness of the proposed method.

  20. Quantization of wave equations and hermitian structures in partial differential varieties

    PubMed Central

    Paneitz, S. M.; Segal, I. E.

    1980-01-01

    Sufficiently close to 0, the solution variety of a nonlinear relativistic wave equation—e.g., of the form □ϕ + m²ϕ + gϕᵖ = 0—admits a canonical Lorentz-invariant hermitian structure, uniquely determined by the consideration that the action of the differential scattering transformation in each tangent space be unitary. Similar results apply to linear time-dependent equations or to equations in a curved asymptotically flat space-time. A close relation of the Riemannian structure to the determination of vacuum expectation values is developed and illustrated by an explicit determination of a perturbative 2-point function for the case of interaction arising from curvature. The theory underlying these developments is in part a generalization of that of M. G. Krein and collaborators concerning stability of differential equations in Hilbert space and in part a precise relation between the unitarization of given symplectic linear actions and their full probabilistic quantization. The unique causal structure in the infinite symplectic group is instrumental in these developments. PMID:16592923

  1. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly High Efficiency Video Coding (HEVC) implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it is difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the transform block sizes of order 4, 8, 16, and 32, and recursive computations are needed to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed, based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the WHT-based integer DCT with the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization from the number of non-zero quantized coefficients, and the distortion can be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation using a pseudo-entropy code is proposed to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme is effective for hardware-friendly implementation of HEVC encoders, with a 9.8% loss relative to HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
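
    The core estimation idea can be sketched compactly: transform residuals with a fast Walsh-Hadamard butterfly (additions only), quantize, proxy the texture rate by the count of non-zero levels, and measure distortion in the transform domain. The step size, scaling, and rate proxy below are illustrative, not the HEVC-conformant values.

```python
def wht(x):
    """In-place fast Walsh-Hadamard transform; len(x) must be a power of two."""
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly: additions only
        h *= 2
    return x

def estimate_rate_distortion(residuals, step):
    coeffs = wht(list(residuals))
    levels = [int(c / step) for c in coeffs]        # quantize
    rate_proxy = sum(1 for l in levels if l != 0)   # non-zero coefficient count
    recon = [l * step for l in levels]
    # Parseval-style distortion estimate, entirely in the transform domain
    dist = sum((c - r) ** 2 for c, r in zip(coeffs, recon)) / len(coeffs)
    return rate_proxy, dist

print(estimate_rate_distortion([3, -1, 4, 1, -5, 9, -2, 6], step=8))
```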

  2. A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback

    NASA Astrophysics Data System (ADS)

    Su, Fei; Ying, Rendong

    2016-01-01

    A low-complexity, high-efficiency, low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. This circuit is a key element in a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite state machine (FSM) based numerically controlled oscillator (NCO). By optimizing the control algorithm, an FSM-based NCO with 3 quantization stages achieves 65 dB SFDR within the range of the seventh harmonic. Compared with a traditional lookup-table-based NCO design of the same Spurious-Free Dynamic Range (SFDR) performance, the logic resources required to implement the NCO are reduced to 1/3. The proposed design method can be extended to IF conversion systems requiring good SFDR over higher harmonic components by increasing the number of quantization stages.
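
    A phase-accumulator NCO with a coarse output quantizer illustrates the kind of circuit being analyzed; the accumulator width, level count, and rounding below are assumptions for the sketch, not the paper's exact finite state machine.

```python
import math

def nco_samples(freq_word, n, acc_bits=16, out_levels=8):
    """Yield n coarsely quantized sine samples from a phase accumulator."""
    acc, mask = 0, (1 << acc_bits) - 1
    half = out_levels // 2
    for _ in range(n):
        acc = (acc + freq_word) & mask            # phase accumulator wraps
        phase = 2 * math.pi * acc / (1 << acc_bits)
        yield round(math.sin(phase) * half) / half  # coarse output quantizer

print(list(nco_samples(freq_word=1024, n=8)))
```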

  3. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds, the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product), are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star products of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization star products are treated. It turns out that all three are equivalent, though not identical.

  4. From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''

    NASA Astrophysics Data System (ADS)

    Bergeron, H.

    2001-09-01

    Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L²(Rⁿ); the Heisenberg rule [pᵢ, qⱼ] = -iℏδᵢⱼ with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate the physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related to the quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all the operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle). Moreover, spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].

  5. The influence of instructional interactions on students’ mental models about the quantization of physical observables: a modern physics course case

    NASA Astrophysics Data System (ADS)

    Didiş Körhasan, Nilüfer; Eryılmaz, Ali; Erkoç, Şakir

    2016-01-01

    Mental models are coherently organized knowledge structures used to explain phenomena. They interact with social environments and evolve through that interaction. When daily experience with the phenomena is lacking, social interaction gains much more importance. In this part of our multiphase study, we investigate how instructional interactions influenced students' mental models of the quantization of physical observables. Class observations and interviews were analysed to study the mental models students constructed in a modern physics course during an academic semester. The research revealed that students' mental models were influenced by (1) the manner of teaching, including the instructional methodologies and content-specific techniques used by the instructor, (2) the order of the topics and familiarity with the concepts, and (3) peers.

  6. Optical memory based on quantized atomic center-of-mass motion.

    PubMed

    Lopez, J P; de Almeida, A J F; Felinto, D; Tabosa, J W R

    2017-11-01

    We report a new type of optical memory using a pure two-level system of cesium atoms cooled by the magnetically assisted Sisyphus effect. The optical information of a probe field is stored in the coherence between quantized vibrational levels of the atoms in the potential wells of a 1-D optical lattice. The retrieved pulse shows Rabi oscillations with a frequency determined by the reading beam intensity, which are qualitatively understood in terms of a simple theoretical model. The exploration of the external degrees of freedom of an atom may add another capability to the design of quantum-information protocols using light.

  7. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies the Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current, which serves as an efficient tool for treating current quantization effects.

  8. Action recognition via cumulative histogram of multiple features

    NASA Astrophysics Data System (ADS)

    Yan, Xunshi; Luo, Yupin

    2011-01-01

    Spatial-temporal interest points (STIPs) are popular in human action recognition. However, they suffer from difficulties in determining the size of the codebook and lose much information when histograms are formed. In this paper, spatial-temporal interest regions (STIRs) are proposed, which are based on STIPs and are capable of marking the locations of the most "shining" human body parts. To represent human actions, the proposed approach takes advantage of multiple features, including STIRs, pyramid histograms of oriented gradients and pyramid histograms of oriented optical flows. To achieve this, a cumulative histogram is used to integrate dynamic information in sequences and to form feature vectors, as sketched below. Furthermore, the widely used nearest neighbor and AdaBoost methods are employed as classification algorithms. Experiments on the public datasets KTH, Weizmann and UCF Sports show that the proposed approach achieves effective and robust results.
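
    A cumulative histogram in this sense is a running sum of per-frame histograms, concatenated into one sequence-level descriptor; a toy sketch (the bin layout is made up):

```python
def cumulative_histogram(frame_histograms):
    """Running sum of per-frame histograms; the concatenation of the
    partial sums is the sequence-level feature vector."""
    acc = [0.0] * len(frame_histograms[0])
    feature = []
    for h in frame_histograms:
        acc = [a + v for a, v in zip(acc, h)]
        feature.extend(acc)
    return feature

print(cumulative_histogram([[1, 0, 2], [0, 1, 1]]))
# -> [1.0, 0.0, 2.0, 1.0, 1.0, 3.0]
```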

  9. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. To reduce network congestion, it is important to shorten, by quantization, the length of the data transmitted from local sensors to the fusion center. Although quantization reduces bandwidth cost, it also degrades tracking performance because of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme, and that increasing the data length affects our scheme only slightly: tracking performance improves by only 4.4% from 2-bit to 3-bit measurements, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, so it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
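
    The bandwidth/accuracy trade-off can be sketched with a plain m-bit uniform quantizer, whose cell-width-dependent noise (variance step²/12 under the usual uniform model) is what the fusion center must absorb as added measurement covariance. The paper's contribution is choosing the quantizer to minimize that addition; the uniform quantizer below is only a baseline stand-in.

```python
def quantize_measurement(z, z_min, z_max, bits):
    """Return (index, reconstruction, added_variance) for m-bit coding."""
    n = 1 << bits
    step = (z_max - z_min) / n
    idx = min(n - 1, max(0, int((z - z_min) / step)))   # clamp to range
    recon = z_min + (idx + 0.5) * step                   # cell midpoint
    return idx, recon, step ** 2 / 12.0                  # uniform-model noise

for bits in (1, 2, 3):
    print(bits, quantize_measurement(4.7, 0.0, 10.0, bits))
```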

  10. What is quantum in quantum randomness?

    PubMed

    Grangier, P; Auffèves, A

    2018-07-13

    It is often said that quantum and classical randomness are of different nature, the former being ontological and the latter epistemological. However, so far the question of 'What is quantum in quantum randomness?', i.e. what is the impact of quantization and discreteness on the nature of randomness, remains to be answered. In the first part, we make explicit the differences between quantum and classical randomness within a recently proposed ontology for quantum mechanics based on contextual objectivity. In this view, quantum randomness is the result of contextuality and quantization. We show that this approach strongly impacts the purposes of quantum theory as well as its areas of application. In particular, it challenges current programmes inspired by classical reductionism, aiming at the emergence of the classical world from a large number of quantum systems. In the second part, we analyse quantum physics and thermodynamics as theories of randomness, unveiling their mutual influences. We finally consider new technological applications of quantum randomness that have opened up in the emerging field of quantum thermodynamics. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'.

  11. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye, in which case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image, by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
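
    For context, this is how a DCT quantization matrix is applied in JPEG-style coders: each coefficient is divided by its matrix entry and rounded. Deriving the matrix from viewing distance and display parameters is the contribution described above and is not reproduced here; the 4×4 matrix below is made up purely for illustration.

```python
# illustrative 4x4 quantization matrix (real coders use 8x8 matrices
# derived from psychophysics or rate control)
Q = [[16, 11, 10, 16],
     [12, 12, 14, 19],
     [14, 13, 16, 24],
     [14, 17, 22, 29]]

def quantize_dct(coeffs, q=Q):
    """Divide each DCT coefficient by its matrix entry and round."""
    return [[round(c / q[i][j]) for j, c in enumerate(row)]
            for i, row in enumerate(coeffs)]

def dequantize_dct(levels, q=Q):
    """Invert quantization up to the rounding loss."""
    return [[l * q[i][j] for j, l in enumerate(row)]
            for i, row in enumerate(levels)]

levels = quantize_dct([[520, -30, 12, 4], [40, -24, 8, 2],
                       [14, 9, -6, 1], [5, 3, 2, 0]])
print(levels, dequantize_dct(levels))
```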

  12. Trucks involved in fatal accidents codebook 1999 (version September 19, 2001)

    DOT National Transportation Integrated Search

    2001-09-01

    This report provides one-way frequencies for all the vehicles in UMTRI's file of Trucks Involved in Fatal Accidents (TIFA), 1999. The 1999 TIFA file is a census of all medium and heavy trucks involved in a fatal accident in the United States. The TIF...

  13. 2000 SURVEY OF RESERVE COMPONENT PERSONNEL: ADMINISTRATION, DATASETS, AND CODEBOOK

    DTIC Science & Technology

    2002-07-01

    [Extraction fragment of the codebook's variable listing, e.g.: DPOC (DoD Primary Occupation Code), DR (Physician), DSVCOCC and DTYOCC (Duty Occupation), DUPRET..., CPAYGRP2 and CPAYGRP3 (Constructed Pay Grade Groups 2 and 3), CRACECAT (Race/Ethnic Category), CSERVICE (Service of Member).]

  14. Probing topology by "heating": Quantized circular dichroism in ultracold atoms.

    PubMed

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G; Zoller, Peter; Goldman, Nathan

    2017-08-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system's chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This "differential integrated rate" is directly related to the strength of the driving field through the quantized coefficient η₀ = ν/ℏ², where h = 2πℏ is Planck's constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter.

  15. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
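
    "Quadrant information" means keeping only the signs of the real and imaginary parts of each measurement, i.e. two bits per sample. The sketch below shows that measurement model under Gaussian random projections (an assumption consistent with the compressed-sensing setup); the atomic-norm recovery algorithm itself is not reproduced.

```python
import cmath
import random

def quadrant(z):
    """Heavily quantized measurement: (sign(Re z), sign(Im z))."""
    return (1 if z.real >= 0 else -1, 1 if z.imag >= 0 else -1)

random.seed(0)
# a single spectral tone as the spectrally sparse signal (K = 1, n = 32)
x = [cmath.exp(2j * cmath.pi * 0.13 * t) for t in range(32)]
# complex Gaussian random measurement matrix (8 measurements)
A = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(32)]
     for _ in range(8)]
y = [quadrant(sum(a * v for a, v in zip(row, x))) for row in A]
print(y)   # two bits per measurement reach the receiver
```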

  16. Quantum memristors

    DOE PAGES

    Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; ...

    2016-07-06

    Technology based on memristors, resistors with memory whose resistance depends on the history of the crossing charges, has lately enhanced the classical paradigm of computation with neuromorphic architectures. However, in contrast to the known quantized models of passive circuit elements, such as inductors, capacitors or resistors, the design and realization of a quantum memristor is still missing. Here, we introduce the concept of a quantum memristor as a quantum dissipative device, whose decoherence mechanism is controlled by a continuous-measurement feedback scheme, which accounts for the memory. Indeed, we provide numerical simulations showing that memory effects actually persist in the quantum regime. Our quantization method, specifically designed for superconducting circuits, may be extended to other quantum platforms, allowing for memristor-type constructions in different quantum technologies. As a result, the proposed quantum memristor is then a building block for neuromorphic quantum computation and quantum simulations of non-Markovian systems.

  17. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.

  18. A Variant of the Mukai Pairing via Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Ramadoss, Ajay C.

    2012-06-01

    Let X be a smooth projective complex variety. The Hochschild homology HH•(X) of X is an important invariant of X, which is isomorphic to the Hodge cohomology of X via the Hochschild-Kostant-Rosenberg isomorphism. On HH•(X), one has the Mukai pairing constructed by Caldararu. An explicit formula for the Mukai pairing at the level of Hodge cohomology was proven by the author in an earlier work (following ideas of Markarian). This formula implies a similar explicit formula for a closely related variant of the Mukai pairing on HH•(X). The latter pairing on HH•(X) is intimately linked to the study of Fourier-Mukai transforms of complex projective varieties. We give a new method to prove a formula computing the aforementioned variant of Caldararu's Mukai pairing. Our method is based on some important results in the area of deformation quantization. In particular, we use part of the work of Kashiwara and Schapira on Deformation Quantization modules together with an algebraic index theorem of Bressler, Nest and Tsygan. Our new method explicitly shows that the "Noncommutative Riemann-Roch" implies the classical Riemann-Roch. Further, it is hoped that our method will be useful for generalization to settings involving certain singular varieties.

  19. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
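
    The addition-and-subtraction-only property suggests a nearest-codeword search under the L1 (sum of absolute differences) metric, sketched below; the adaptive codebook update of the algorithm itself is not reproduced.

```python
def nearest_codeword(vector, codebook):
    """Index of the codeword minimizing the sum of absolute differences."""
    best_i, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = 0
        for v, c in zip(vector, cw):
            d += v - c if v >= c else c - v   # |v - c| without multiplication
        if d < best_d:
            best_i, best_d = i, d
    return best_i

codebook = [[0, 0], [10, 10], [0, 10]]
print(nearest_codeword([2, 9], codebook))   # -> 2
```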

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNAmore » missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. - Graphical abstract: Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.« less

  1. Dimensional quantization effects in the thermodynamics of conductive filaments

    NASA Astrophysics Data System (ADS)

    Niraula, D.; Grice, C. R.; Karpov, V. G.

    2018-06-01

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  2. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  3. Dimensional quantization effects in the thermodynamics of conductive filaments.

    PubMed

    Niraula, D; Grice, C R; Karpov, V G

    2018-06-29

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  4. 2008 Post-Election Voting Survey of Federal Civilians Overseas: Administration, Datasets and Codebook

    DTIC Science & Technology

    2009-09-01

    [Extraction fragment of the codebook's variable listing, e.g.: LEGALRESR (recode: state of legal voting residence), LITHO (litho code), NOFVAPA (item 42a, did not use FVAP telephone: did not know), INRECNO (master SCS ID number), QCOMPF (binary variable indicating whether the case is complete), QCOMPN (questions...).]

  5. Agency Online: Trends in a University Learning Course

    ERIC Educational Resources Information Center

    Ligorio, Maria Beatrice; Impedovo, Maria Antonietta; Arcidiacono, Francesco

    2017-01-01

    This article aims to investigate how university students perform agency in an online course and whether the collaborative nature of the course affects such expression. A total of 11 online web forums involving 18 students (N = 745 posts in total) were qualitatively analysed through the use of a codebook composed of five categories (individual,…

  6. Topological quantization in units of the fine structure constant.

    PubMed

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng

    2010-10-15

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  7. Bayer image parallel decoding based on GPU

    NASA Astrophysics Data System (ADS)

    Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua

    2012-11-01

    In photoelectrical tracking systems, Bayer images are traditionally decoded on the CPU. However, this is too slow when the images become large, for example 2K×2K×16 bit. To accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: the first is serial, the second is task-parallel, and the last is data-parallel, comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce execution time, the task-parallel part is optimized with OpenMP techniques, while the data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared memory access optimization, coalesced memory access optimization and texture memory optimization. In particular, the IDWT is significantly sped up by rewriting the two-dimensional serial IDWT as one-dimensional parallel IDWTs, as sketched below. In experiments with a 1K×1K×16 bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
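
    A separable inverse wavelet transform is what makes the rewrite possible: 1-D inverse transforms run independently over rows and then columns, so each row or column can map to one GPU thread. The sketch uses a Haar filter purely to stay short; the paper's filter bank and CUDA kernels are not reproduced.

```python
def ihaar1d(avgs, diffs):
    """Inverse 1-D Haar step: each (average, detail) pair -> (a+d, a-d)."""
    out = []
    for a, d in zip(avgs, diffs):
        out += [a + d, a - d]
    return out

def ihaar2d(coeffs):
    """Rows first, then columns. Each loop iteration is independent of the
    others, which is exactly what maps onto parallel GPU threads."""
    rows = [ihaar1d(r[: len(r) // 2], r[len(r) // 2:]) for r in coeffs]
    n = len(rows)
    cols = [ihaar1d(c[: n // 2], c[n // 2:]) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

print(ihaar2d([[5, 1], [3, 0]]))   # -> [[9, 7], [3, 1]]
```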

  8. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M × M~, where M~ is a copy of M with the opposite Poisson structure. We call it the dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  9. Probing topology by “heating”: Quantized circular dichroism in ultracold atoms

    PubMed Central

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G.; Zoller, Peter; Goldman, Nathan

    2017-01-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system’s chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This “differential integrated rate” is directly related to the strength of the driving field through the quantized coefficient η₀ = ν/ℏ², where h = 2πℏ is Planck’s constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter. PMID:28835930

  10. Construction of fuzzy spaces and their applications to matrix models

    NASA Astrophysics Data System (ADS)

    Abe, Yasuhiro

    Quantization of spacetime by means of finite-dimensional matrices is the basic idea of fuzzy spaces. Although the issue of quantizing time remains, the idea is simple, and it provides an interesting interplay of various ideas in mathematics and physics. Shedding some light on such an interplay is the main theme of this dissertation. The dissertation roughly separates into two parts. In the first part, we consider rather mathematical aspects of fuzzy spaces, namely their construction. We begin with a review of the construction of fuzzy complex projective spaces CPk (k = 1, 2, ...) in relation to geometric quantization. This construction facilitates defining symbols and star products on fuzzy CPk. Algebraic construction of fuzzy CPk is also discussed. We then present the construction of fuzzy S4, utilizing the fact that CP3 is an S2 bundle over S4. Fuzzy S4 is obtained by imposing an additional algebraic constraint on fuzzy CP3. Consequently, it is proposed that coordinates on fuzzy S4 are described by certain block-diagonal matrices. It is also found that fuzzy S8 can be constructed analogously. In the second part of this dissertation, we consider applications of fuzzy spaces to physics. We first consider theories of gravity on fuzzy spaces, anticipating that they may offer a novel way of regularizing spacetime dynamics. We obtain actions for gravity on fuzzy S2 and on fuzzy CP3 in terms of finite-dimensional matrices. Application to M(atrix) theory is also discussed. With the introduction of extra potentials to the theory, we show that it also has new brane solutions whose transverse directions are described by fuzzy S4 and fuzzy CP3. The extra potentials can be considered fuzzy versions of differential forms or fluxes, which enables us to discuss compactification models of M(atrix) theory. In particular, compactification down to fuzzy S4 is discussed, and a realistic matrix model of M-theory in four dimensions is proposed.

  11. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and is designed as an SoC in 180 nm CMOS. The time-consuming nearest-Euclidean-distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the number of neurons in the competition layer, the weight values, and the number of inputs and outputs are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of the stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time-critical. The high flexibility is verified by an application to human detection with different dimensionalities of the FVs.
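
    Both halves of the HW/SW split are easy to sketch: the cycle-count model quoted above for the hardware pipeline, and a standard LVQ1 update of the kind the embedded CPU would run. The learning rate and update rule below are generic LVQ1, not necessarily the chip's exact procedure.

```python
from math import ceil

def classify_cycles(n, d, p, R):
    """Clock cycles per classification in the reported pipeline model."""
    return n * ceil(d / p) + R

def lvq1_step(x, label, refs, ref_labels, lr=0.05):
    """One LVQ1 update: move the winning reference toward x if the labels
    agree, away from x otherwise."""
    win = min(range(len(refs)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(x, refs[i])))
    sign = 1.0 if ref_labels[win] == label else -1.0
    refs[win] = [r + sign * lr * (a - r) for a, r in zip(x, refs[win])]
    return win

print(classify_cycles(n=256, d=64, p=8, R=10))   # -> 2058
```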

  12. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
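
    The claim that the optimal linear decoder is an optimal linear estimator can be illustrated by fitting the inverse transform from data. The following is a minimal sketch, assuming training blocks and their quantized coefficients are available as arrays (names and shapes are illustrative, not the paper's algorithm):

```python
import numpy as np

def optimal_linear_decoder(X, C):
    """Fit the inverse transform as an optimal linear estimator.

    X: (m, d) original signal blocks; C: (m, d) their quantized transform
    coefficients. Regardless of the (possibly nonorthogonal) analysis
    transform, the best linear decoder W minimizes E||x - W c||^2, which
    the least-squares fit below estimates from training data.
    """
    W, *_ = np.linalg.lstsq(C, X, rcond=None)
    return W.T  # reconstruct a block as W @ c
```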

  13. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files and designs a compression codebook using MA-based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than other state-of-the-art methods. Notably, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747
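
    The substitutional step can be pictured as vector quantization of fixed-length blocks of quality scores against a codebook. The sketch below shows only that replacement step for a given codebook; it does not reproduce the memetic-algorithm codebook design, nor the extra bookkeeping MMQSC needs to stay lossless (names and shapes are assumptions):

```python
import numpy as np

def quantize_quality(qualities, codebook):
    """Substitution step: map each fixed-length block of quality scores
    to the index of its nearest codeword (indices are then entropy-coded).

    codebook: (K, L) array of K codewords of length L; in MMQSC the
    codebook itself comes from MA-based multimodal optimization."""
    L = codebook.shape[1]
    n = len(qualities) // L
    blocks = np.asarray(qualities[: n * L], dtype=float).reshape(n, L)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```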

  14. Quantum Computing and Second Quantization

    DOE PAGES

    Makaruk, Hanna Ewa

    2017-02-10

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization and briefly discuss some of the most important formulations of second quantization.

  15. Quantum Computing and Second Quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makaruk, Hanna Ewa

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization and briefly discuss some of the most important formulations of second quantization.

  16. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
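
    One plausible reading of the single-shrinkage-threshold DQS, in the spirit of online vector quantization, is sketched below: a sample becomes a new representative only if it is farther than the threshold from every representative kept so far, so dense regions contribute few points and the subset adapts to input density. This is an assumed reconstruction for illustration, not the paper's exact procedure:

```python
import numpy as np

def density_quantize(X, threshold):
    """Reduce a data set to a density-adaptive subset of representatives.

    X: (m, d) array of samples. A sample is kept only if its distance to
    every kept representative exceeds `threshold`; the result can serve
    as the landmark set for Nystrom feature approximation in LS-SVM."""
    reps = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - r) for r in reps) > threshold:
            reps.append(x)
    return np.array(reps)
```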

  17. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
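
    The underlying operation being adapted, quantizing DCT coefficients by a matrix, is standard and can be sketched as follows; the visual-masking optimization of the matrix itself is the patented part and is not shown here (scipy is used for the 2D DCT):

```python
import numpy as np
from scipy.fftpack import dct, idct

def quantize_block(block, Q):
    """Quantize one 8x8 image block with quantization matrix Q.

    Larger entries of Q discard more of the corresponding frequency,
    trading perceived quality against bit rate."""
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(coeffs / Q).astype(int)

def dequantize_block(q, Q):
    """Invert the quantization and return to the pixel domain."""
    return idct(idct(q * Q, axis=0, norm='ortho'), axis=1, norm='ortho')
```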

  18. A point particle model of lightly bound skyrmions

    NASA Astrophysics Data System (ADS)

    Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin

    2017-04-01

    A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.

  19. Scale relativity and quantization of planet obliquities.

    NASA Astrophysics Data System (ADS)

    Nottale, L.

    1998-07-01

    The author applies the theory of scale relativity to the equations of rotational motion of solid bodies. He predicts in the new framework that the obliquities and inclinations of planets and satellites in the solar system must be quantized. Namely, one expects their distribution to be no longer uniform between 0 and π, but instead to display well-defined peaks of probability density at angles θ_k = kπ/n. The author shows in the present paper that the observational data agree very well with the prediction for n = 7, including the retrograde bodies and those which are heeled over the ecliptic plane. In particular, the value 23°27' of the obliquity of the Earth, which partly determines its climate, is not a random one, but lies in one of the main probability peaks at θ = π/7.

  20. Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes

    NASA Astrophysics Data System (ADS)

    Saini, Sahil; Singh, Parampreet

    2018-03-01

    We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.

  1. Research on conceptual/innovative design for the life cycle

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    The goal of this research is to develop and integrate qualitative and quantitative methods for life cycle design. The problem is defined by three observations: formal computer-based methods are limited to the final detailing stages of design; CAD databases do not capture design intent or design history; and life cycle issues are ignored during the early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; a multistart vector quantization optimization algorithm; intelligent manufacturing: IDES - Influence Diagram Architecture; and 1st PRINCE (FIRST PRINciple Computational Evaluator).

  2. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the watermark images are designed to be image-adaptive. The meaningful and complementary watermark images were embedded into the original image (host image) by odd-even quantization of selected coefficients, taken from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against the best-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
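
    Odd-even quantization is a parity-based form of quantization index modulation. A minimal sketch of embedding and extracting one bit in a single coefficient follows; the step size delta is a design parameter, and the JND gating described above is omitted:

```python
import numpy as np

def embed_bit(coeff, bit, delta):
    """Force the quantization index parity of a coefficient to encode `bit`."""
    idx = int(np.round(coeff / delta))
    if idx % 2 != bit:
        idx += 1 if coeff >= idx * delta else -1  # move to the nearer valid index
    return idx * delta

def extract_bit(coeff, delta):
    """Recover the bit from the parity of the quantization index."""
    return int(np.round(coeff / delta)) % 2
```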

  3. Performance of noncoherent MFSK channels with coding

    NASA Technical Reports Server (NTRS)

    Butman, S. A.; Lyon, R. F.

    1974-01-01

    Computer simulation of data transmission over a noncoherent channel with predetection signal-to-noise ratio of 1 shows that convolutional coding can reduce the energy requirement by 4.5 dB at a bit error rate of 0.001. The effects of receiver quantization and choice of number of tones are analyzed; nearly optimum performance is attained with eight quantization levels and sixteen tones at predetection S/N ratio of 1. The effects of changing predetection S/N ratio are also analyzed; for lower predetection S/N ratio, accurate extrapolations can be made from the data, but for higher values, the results are more complicated. These analyses will be useful in designing telemetry systems when coherence is limited by turbulence in the signal propagation medium or oscillator instability.

  4. Pseudo-Kähler Quantization on Flag Manifolds

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.

  5. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantization at appropriate places.

  6. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
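
    The mechanism is easy to reproduce numerically. Below is a sketch of a quantized-feedback map in the spirit of the micro-chaotic map, with illustrative parameters (not those of the paper): without quantization the closed loop x_{n+1} = (a − b)x_n would be stable, but the floor-quantized feedback vanishes near the origin, making the fixed point repelling, while large excursions are still pulled back, so the state settles into bounded oscillations whose amplitude scales with the quantization step h.

```python
import math

def microchaos_step(x, a=1.5, b=1.4, h=0.1):
    """One step of an unstable system with floor-quantized feedback.

    For |x| < h the feedback term is zero and x grows by the factor a;
    for large |x| the effective gain is close to a - b = 0.1 < 1."""
    return a * x - b * h * math.floor(x / h)

x, traj = 0.01, []
for _ in range(200):
    x = microchaos_step(x)
    traj.append(x)
print(min(traj), max(traj))  # bounded, within a few quantization steps of 0
```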

  7. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) value of the number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  8. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  9. Quantization of geometric phase with integer and fractional topological characterization in a quantum Ising chain with long-range interaction.

    PubMed

    Sarkar, Sujit

    2018-04-12

    An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results on duality and its relation to the topological quantization are presented, as is a symmetry study for this model Hamiltonian. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long-range interaction. We also present quite a few exact solutions with physical explanation. Finally we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.

  10. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized; it depends on the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.

  11. 2007 Workplace and Equal Opportunity Survey of Reserve Component Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2008-11-01

    Survey topics include component, gender, paygrade, race/ethnicity, ethnic ancestry, education, active-duty service, and military installation proximity. Ancestry refers to ethnic origin or descent, “roots,” or heritage, and may refer to parents’ or ancestors’ country of birth. Further topics include pay and benefits, fair performance evaluations, and education and training opportunities.

  12. 2008 Post-Election Survey of Department of State Voting Assistance Officers: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2009-08-01


  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballesteros, Ángel, E-mail: angelb@ubu.es; Enciso, Alberto, E-mail: aenciso@icmat.es; Herranz, Francisco J., E-mail: fjherranz@ubu.es

    In this paper we quantize the N-dimensional classical Hamiltonian system H = (|q|/(2(η+|q|))) p² − k/(η+|q|), which can be regarded as a deformation of the Coulomb problem with coupling constant k that is smoothly recovered in the limit η→0. Moreover, the kinetic energy term in H is just the one corresponding to an N-dimensional Taub–NUT space, a fact that makes this system relevant from a geometric viewpoint. Since the Hamiltonian H is known to be maximally superintegrable, we propose a quantization prescription that preserves such superintegrability in the quantum mechanical setting. We show that, to this end, one must choose as the kinetic part of the Hamiltonian the conformal Laplacian of the underlying Riemannian manifold, which combines the usual Laplace–Beltrami operator on the Taub–NUT manifold and a multiple of its scalar curvature. As a consequence, we obtain a novel exactly solvable deformation of the quantum Coulomb problem, whose spectrum is computed in closed form for positive values of η and k, and we show that the well-known maximal degeneracy of the flat system is preserved in the deformed case. Several interesting algebraic and physical features of this new exactly solvable quantum system are analyzed, and the quantization problem for negative values of η and/or k is also sketched.

  14. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
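
    A minimal sketch of the noise-shaping idea, assuming a first-order ΣΔ loop around a coarsely quantized gradient (the parameters and the toy objective are illustrative, not the paper's circuits): the quantization residual of each step is fed back into the next gradient, so quantization noise is pushed toward high frequencies that the slowly varying parameter trajectory filters out.

```python
import numpy as np

def sigma_delta_gd(grad, theta0, lr=0.05, delta=0.5, steps=200):
    """Noise-shaped (sigma-delta) quantized gradient descent sketch.

    Each gradient is quantized to the DAC-like resolution `delta`, but
    the quantization residual `err` is carried into the next step."""
    theta, err = np.asarray(theta0, dtype=float), 0.0
    for _ in range(steps):
        v = grad(theta) + err             # add accumulated quantization error
        q = delta * np.round(v / delta)   # coarse quantization of the update
        err = v - q                       # residual fed back next iteration
        theta = theta - lr * q
    return theta

# Usage: minimize (theta - 3)^2 despite 0.5-resolution gradient updates
print(sigma_delta_gd(lambda t: 2 * (t - 3.0), theta0=0.0))
```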

  15. Photoinduced half-integer quantized conductance plateaus in topological-insulator/superconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Yap, Han Hoe; Zhou, Longwen; Lee, Ching Hua; Gong, Jiangbin

    2018-04-01

    The past few years have witnessed increased attention to the quest for Majorana-like excitations in the condensed matter community. As a promising candidate in this race, the one-dimensional chiral Majorana edge mode (CMEM) in topological insulator-superconductor heterostructures has gathered renewed interest after an experimental breakthrough [Q. L. He et al., Science 357, 294 (2017), 10.1126/science.aag2792]. In this work, we study computationally the quantum transport of topological insulator-superconductor hybrid devices subject to time-periodic modulation. We report half-integer quantized conductance plateaus at (1/2)e²/h and (3/2)e²/h upon applying the so-called sum rule in the theory of quantum transport in Floquet topological matter. In particular, in a photoinduced topological superconductor sandwiched between two Floquet Chern insulators, it is found that for each Floquet sideband, the CMEM admits equal probability for normal transmission and local Andreev reflection over a wide range of parameter regimes, yielding half-integer quantized plateaus that resist static and time-periodic disorder. While it is well-established that periodic driving fields can simultaneously create and manipulate multiple pairs of Majorana bound states, their detection scheme remains elusive, in part due to their being neutral excitations. Therefore the (3/2)e²/h plateau indicates the possibility to verify the generation of multiple pairs of photoinduced CMEMs via transport measurements. The robust and half-quantized conductance plateaus due to CMEMs are both fascinating and subtle because they only emerge after a summation over contributions from all Floquet sidebands. Our work may add insights into the transport properties of Floquet topological systems and stimulate further studies on the optical control of topological superconductivity.

  16. Metamaterial bricks and quantization of meta-surfaces

    PubMed Central

    Memoli, Gianluca; Caleap, Mihai; Asakawa, Michihiro; Sahoo, Deepak R.; Drinkwater, Bruce W.; Subramanian, Sriram

    2017-01-01

    Controlling acoustic fields is crucial in diverse applications such as loudspeaker design, ultrasound imaging and therapy or acoustic particle manipulation. The current approaches use fixed lenses or expensive phased arrays. Here, using a process of analogue-to-digital conversion and wavelet decomposition, we develop the notion of quantal meta-surfaces. The quanta here are small, pre-manufactured three-dimensional units—which we call metamaterial bricks—each encoding a specific phase delay. These bricks can be assembled into meta-surfaces to generate any diffraction-limited acoustic field. We apply this methodology to show experimental examples of acoustic focusing, steering and, after stacking single meta-surfaces into layers, the more complex field of an acoustic tractor beam. We demonstrate experimentally single-sided air-borne acoustic levitation using meta-layers at various bit-rates: from a 4-bit uniform to 3-bit non-uniform quantization in phase. This powerful methodology dramatically simplifies the design of acoustic devices and provides a key-step towards realizing spatial sound modulators. PMID:28240283

  17. Metamaterial bricks and quantization of meta-surfaces

    NASA Astrophysics Data System (ADS)

    Memoli, Gianluca; Caleap, Mihai; Asakawa, Michihiro; Sahoo, Deepak R.; Drinkwater, Bruce W.; Subramanian, Sriram

    2017-02-01

    Controlling acoustic fields is crucial in diverse applications such as loudspeaker design, ultrasound imaging and therapy or acoustic particle manipulation. The current approaches use fixed lenses or expensive phased arrays. Here, using a process of analogue-to-digital conversion and wavelet decomposition, we develop the notion of quantal meta-surfaces. The quanta here are small, pre-manufactured three-dimensional units--which we call metamaterial bricks--each encoding a specific phase delay. These bricks can be assembled into meta-surfaces to generate any diffraction-limited acoustic field. We apply this methodology to show experimental examples of acoustic focusing, steering and, after stacking single meta-surfaces into layers, the more complex field of an acoustic tractor beam. We demonstrate experimentally single-sided air-borne acoustic levitation using meta-layers at various bit-rates: from a 4-bit uniform to 3-bit non-uniform quantization in phase. This powerful methodology dramatically simplifies the design of acoustic devices and provides a key-step towards realizing spatial sound modulators.

  18. Metamaterial bricks and quantization of meta-surfaces.

    PubMed

    Memoli, Gianluca; Caleap, Mihai; Asakawa, Michihiro; Sahoo, Deepak R; Drinkwater, Bruce W; Subramanian, Sriram

    2017-02-27

    Controlling acoustic fields is crucial in diverse applications such as loudspeaker design, ultrasound imaging and therapy or acoustic particle manipulation. The current approaches use fixed lenses or expensive phased arrays. Here, using a process of analogue-to-digital conversion and wavelet decomposition, we develop the notion of quantal meta-surfaces. The quanta here are small, pre-manufactured three-dimensional units-which we call metamaterial bricks-each encoding a specific phase delay. These bricks can be assembled into meta-surfaces to generate any diffraction-limited acoustic field. We apply this methodology to show experimental examples of acoustic focusing, steering and, after stacking single meta-surfaces into layers, the more complex field of an acoustic tractor beam. We demonstrate experimentally single-sided air-borne acoustic levitation using meta-layers at various bit-rates: from a 4-bit uniform to 3-bit non-uniform quantization in phase. This powerful methodology dramatically simplifies the design of acoustic devices and provides a key-step towards realizing spatial sound modulators.

  19. Optical determination of material abundances by using neural networks for the derivation of spectral filters

    NASA Astrophysics Data System (ADS)

    Krippner, Wolfgang; Wagner, Felix; Bauer, Sebastian; Puente León, Fernando

    2017-06-01

    Using appropriately designed spectral filters allows material abundances to be determined optically. While an infinite number of possibilities exist for determining spectral filters, we take advantage of neural networks to derive spectral filters that lead to precise estimations. To overcome some drawbacks that regularly affect the determination of material abundances using hyperspectral data, we incorporate the spectral variability of the raw materials into the training of the considered neural networks. As a main result, we successfully classify quantized material abundances optically. Thus, the main part of the high computational load associated with the use of neural networks is avoided. In addition, the derived material abundances become invariant against spatially varying illumination intensity, a remarkable benefit in comparison with spectral filters based on the Moore-Penrose pseudoinverse, for instance.

  20. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  1. Quantized discrete space oscillators

    NASA Technical Reports Server (NTRS)

    Uzes, C. A.; Kapuscik, Edward

    1993-01-01

    A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.

  2. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree, and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
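
    A threshold model of the kind described, a parabola in log spatial frequency, can be sketched as follows. The constants below are placeholders standing in for the fitted values reported in the paper, so treat them and the exact functional form as assumptions:

```python
import math

# Placeholder parameters (assumed, not the paper's fitted values)
A_MIN = 0.5                     # minimum threshold amplitude
K = 0.47                        # width of the parabola in log frequency
F0 = 0.4                        # frequency of minimum threshold (cycles/deg)
G_ORIENT = {"low": 1.5, "hv": 1.0, "diag": 0.53}  # orientation shifts

def dwt_threshold(level, orientation, r):
    """Detection threshold for DWT uniform quantization noise.

    Spatial frequency of a level-`level` wavelet is f = r * 2**(-level),
    with r the display resolution in pixels/degree (from the abstract);
    the threshold is modeled as a parabola in log10 frequency."""
    f = r * 2.0 ** (-level)
    g = G_ORIENT[orientation]
    log_y = math.log10(A_MIN) + K * (math.log10(f) - math.log10(g * F0)) ** 2
    return 10.0 ** log_y

print(dwt_threshold(level=3, orientation="hv", r=32))
```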

  3. Thermal field theory and generalized light front quantization

    NASA Astrophysics Data System (ADS)

    Weldon, H. Arthur

    2003-04-01

    The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ_μν is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T ḡ^00). Light-front quantization fails since ḡ^00 = 0; however, various related quantizations are possible.

  4. Generalized radiation-field quantization method and the Petermann excess-noise factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305

    2003-10-01

    We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.

  5. BFV approach to geometric quantization

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1994-12-01

    A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.

  6. Deformation quantization of fermi fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.

    2008-04-15

    Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates the procedure considerably. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally the Dirac propagator have been found with the use of these variables.

  7. Polymer-Fourier quantization of the scalar field revisited

    NASA Astrophysics Data System (ADS)

    Garcia-Chung, Angel; Vergara, J. David

    2016-10-01

    The polymer quantization of the Fourier modes of the real scalar field is studied within an algebraic scheme. We replace the positive linear functional of the standard Poincaré-invariant quantization by a singular one. This singular positive linear functional is constructed by mimicking the singular limit of the complex structure of the Poincaré-invariant Fock quantization. The resulting symmetry group of such a polymer quantization is SDiff(ℝ4), the subgroup of Diff(ℝ4) formed by spatial volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré-invariant Fock vacuum with the polymer Fourier vacuum.

  8. Quantized Rabi oscillations and circular dichroism in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Cooper, N. R.; Goldman, N.

    2018-06-01

    The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.

  9. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

    It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability with only a single quantizer in operation is treated.

  10. Direct comparison of fractional and integer quantized Hall resistance

    NASA Astrophysics Data System (ADS)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

    We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.

  11. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

    In this paper an invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism onto the non-flat case and related ambiguities of the process of quantization are discussed. Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism onto the non-flat case and related ambiguities of the quantization process are discussed.

  12. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 db cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 db cutoff of 2000 Hz.
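
    The compression/expansion pair described here is a companding scheme. μ-law companding is a standard stand-in (not necessarily the thesis's exact network) and shows how low-amplitude consonant energy is given extra weight before a coarse uniform quantizer:

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Compression amplifier: boost low-amplitude components of x in [-1, 1]
    before uniform quantization in the analog-to-digital converter."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Expansion network applied after the digital-to-analog converter."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize(y, levels=8):
    """Uniform quantizer with the given number of levels on [-1, 1]."""
    step = 2.0 / levels
    return np.clip(np.round(y / step) * step, -1.0, 1.0)

x = np.linspace(-1, 1, 9)
print(mu_law_expand(quantize(mu_law_compress(x))))
```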

  13. Optical angular momentum and atoms

    PubMed Central

    2017-01-01

    Any coherent interaction of light and atoms needs to conserve energy, linear momentum and angular momentum. What happens to an atom’s angular momentum if it encounters light that carries orbital angular momentum (OAM)? This is a particularly intriguing question as the angular momentum of atoms is quantized, incorporating the intrinsic spin angular momentum of the individual electrons as well as the OAM associated with their spatial distribution. In addition, a mechanical angular momentum can arise from the rotation of the entire atom, which for very cold atoms is also quantized. Atoms therefore allow us to probe and access the quantum properties of light’s OAM, aiding our fundamental understanding of light–matter interactions, and moreover, allowing us to construct OAM-based applications, including quantum memories, frequency converters for shaped light and OAM-based sensors. This article is part of the themed issue ‘Optical orbital angular momentum’. PMID:28069766

  14. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.

  15. Robust transport signatures of topological superconductivity in topological insulator nanowires.

    PubMed

    de Juan, Fernando; Ilan, Roni; Bardarson, Jens H

    2014-09-05

    Finding a clear signature of topological superconductivity in transport experiments remains an outstanding challenge. In this work, we propose exploiting the unique properties of three-dimensional topological insulator nanowires to generate a normal-superconductor junction in the single-mode regime where an exactly quantized 2e²/h zero-bias conductance can be observed over a wide range of realistic system parameters. This is achieved by inducing superconductivity in half of the wire, which can be tuned at will from trivial to topological with a parallel magnetic field, while a perpendicular field is used to gap out the normal part, except for two spatially separated chiral channels. The combination of chiral mode transport and perfect Andreev reflection makes the measurement robust to moderate disorder, and the quantization of conductance survives to much higher temperatures than in tunnel junction experiments. Our proposal may be understood as a variant of a Majorana interferometer which is easily realizable in experiments.

  16. Defense and Development in Sub-Saharan Africa: Codebook.

    DTIC Science & Technology

    1988-03-01

    The codebook documents the database by presenting the different data sources and explaining how they were compiled. The statistics in the database cover 41 African countries. In addition to the economic and military data, some statistics have been compiled that monitor social and political conditions.

  17. International Data Archive and Analysis Center. I. International Relations Archive. II. Voluntary International Coordination. III. Attachments.

    ERIC Educational Resources Information Center

    Miller, Warren; Tanter, Raymond

    The International Relations Archive undertakes as its primary goals the acquisition, management and dissemination of international affairs data. The first document enclosed is a copy of the final machine readable codebook prepared for the data from the Political Events Project, 1948-1965. Also included is a copy of the final machine-readable…

  18. The 1980 Survey of Certain Unrestricted Line Officers of the Navy Regarding Their Reassignment to a New Position.

    DTIC Science & Technology

    1981-04-01

    Variables resulting from the survey and their location on an SPSS system file are documented in a user-oriented codebook, together with free responses to an open-ended question.

  19. 2005 Workplace and Equal Opportunity Survey of Active-Duty Members Administration, Datasets, and Codebook

    DTIC Science & Technology

    2007-06-01

    The survey covers ten topic areas. The first is background information: Service, gender, paygrade, race/ethnicity, ethnic ancestry, and education. Further areas address family and retention issues: likelihood to stay on active duty, spouse/family support for staying on active duty, years spent in military service, and willingness to recommend military service.

  20. Hispanic Male’s Perspectives of Health Behaviors Related to Weight Management

    PubMed Central

    Garcia, David O.; Valdez, Luis A.; Hooker, Steven P.

    2015-01-01

    Hispanic males have the highest prevalence of overweight and obesity among men in the United States, yet they are significantly underrepresented in weight loss research. The purpose of the current study was to examine Hispanic males' perspectives on health behaviors related to weight management in order to refine the methodologies used to deliver a gender- and culturally sensitive weight loss intervention. From October 2014 to April 2015, semistructured interviews were conducted with 14 overweight Hispanic men aged 18 to 64 years. The interviews lasted approximately 60 minutes. Participants also completed a brief questionnaire, and body weight/height were measured. Grounded in a deductive process, a preliminary codebook was developed based on the topics included in the interview guides. A thematic analysis facilitated the identification of inductive themes and the finalization of the codebook used for transcript analysis. Four overarching themes were identified: (a) general health beliefs about how diet and physical activity behaviors affect health outcomes, (b) barriers to healthy eating and physical activity, (c) motivators for change, and (d) viable recruitment and intervention approaches. Future research should examine feasible and appropriate recruitment and intervention strategies identified by Hispanic males to improve weight management in this vulnerable group. PMID:26634854

  1. Coherent state quantization of quaternions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com

    In parallel to the quantization of the complex plane, the quaternion field of quaternionic quantum mechanics is quantized using the canonical coherent states of a right quaternionic Hilbert space. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  2. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  3. BFV quantization on hermitian symmetric spaces

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1995-02-01

    The gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge-invariant quantization on the Lobachevski plane and the sphere is carried out. Due to the presence of symmetry, master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation and Berezin quantization approaches.

  4. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only display a palette of 256 colors simultaneously. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
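
    For reference, the halftoning stage the palette is designed for can be sketched as classic Floyd-Steinberg error diffusion against a fixed palette. The SSQ palette design and the visually weighted color space are not reproduced here; plain RGB nearest-neighbor matching is an assumption made for brevity:

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion of an (H, W, 3) image to a (K, 3)
    fixed palette: each pixel snaps to its nearest palette color and the
    residual error is distributed to not-yet-visited neighbors."""
    img = img.astype(float).copy()
    out = np.zeros_like(img)
    H, W, _ = img.shape
    for y in range(H):
        for x in range(W):
            old = img[y, x]
            k = np.argmin(((palette - old) ** 2).sum(axis=1))
            out[y, x] = palette[k]
            err = old - palette[k]
            if x + 1 < W: img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:     img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W: img[y + 1, x + 1] += err * 1 / 16
    return out
```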

  5. Does the Cultural Formulation Interview (CFI) for the Fifth Revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) affect medical communication? A qualitative exploratory study from the New York site

    PubMed Central

    Aggarwal, Neil K.; DeSilva, Ravi; Nicasio, Andel V.; Boiler, Marit; Lewis-Fernández, Roberto

    2014-01-01

    Objectives: Cross-cultural mental health researchers often analyze patient explanatory models of illness to optimize service provision. The Cultural Formulation Interview (CFI) is a cross-cultural assessment tool released in May 2013 with DSM-5 to revise shortcomings of the DSM-IV Outline for Cultural Formulation (OCF). The CFI field trial took place in 6 countries, at 14 sites, and with 321 patients to explore its feasibility, acceptability, and clinical utility with patients and clinicians. We sought to analyze whether and how CFI feasibility, acceptability, and clinical utility were related to patient-clinician communication. Design: We report data from the New York site, which enrolled 7 clinicians and 32 patients in 32 patient-clinician dyads. We undertook a data analysis independent of the parent field trial by conducting content analyses of debriefing interviews with all participants (n=64) based on codebooks derived from frameworks for medical communication and implementation outcomes. Three coders created codebooks, coded independently, established inter-rater coding reliability, and analyzed whether the CFI affects medical communication with respect to feasibility, acceptability, and clinical utility. Results: Despite racial, ethnic, cultural, and professional differences within our group of patients and clinicians, we found that promoting satisfaction through the interview, eliciting data, eliciting the patient's perspective, and perceiving data at multiple levels were common codes that explained how the CFI affected medical communication. We also found that all but 2 codes fell under the implementation outcome of clinical utility, 2 fell under acceptability, and none fell under feasibility. Conclusion: Our study offers new directions for research on how a cultural interview affects patient-clinician communication. Future research can analyze how the CFI and other cultural interviews impact medical communication in clinical settings with subsequent effects on outcomes such as medication adherence, appointment retention, and health condition. PMID:25372242

  6. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular that make it difficult to apply existing schemes for the quantization of constrained systems. First, there is no background structure in the theory which could be used, e.g., to regularize constraint operators, to identify a "time", or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription for finding the physical inner product and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle for finding the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features; when this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.

  7. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  8. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
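
    A minimal sketch of the block-transform-and-quantize step the abstract describes, using an 8x8 DCT in Python; the flat quantization matrix is an illustrative stand-in for JPEG's perceptually tuned tables.

        import numpy as np
        from scipy.fftpack import dct, idct

        def dct2(b):
            return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def idct2(b):
            return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

        def quantize_block(block, q):
            # Transform an 8x8 block, quantize the coefficients, and reconstruct.
            coeffs = dct2(block.astype(float) - 128.0)   # level-shift, as in JPEG
            quantized = np.round(coeffs / q)             # the lossy, entropy-reducing step
            return idct2(quantized * q) + 128.0          # dequantize and invert

        q = np.full((8, 8), 16.0)   # flat table; JPEG uses perceptually tuned entries
        block = np.random.randint(0, 256, (8, 8))
        recon = quantize_block(block, q)
        print("max abs error:", np.abs(recon - block).max())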

  9. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan

    2015-11-15

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator into agreement with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results show that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  10. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    NASA Astrophysics Data System (ADS)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping

    2015-11-01

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator into agreement with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results show that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  11. Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS

    NASA Astrophysics Data System (ADS)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.

  12. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x⁻ is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one obtains selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.

  13. Relational symplectic groupoid quantization for constant poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focussing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  14. The Digital Data Acquisition System for the Russian VLBI Network of New Generation

    NASA Technical Reports Server (NTRS)

    Fedotov, Leonid; Nosov, Eugeny; Grenkov, Sergey; Marshalov, Dmitry

    2010-01-01

    The system consists of several identical channels of 1024 MHz bandwidth each. In each channel, the RF band is frequency-translated to the intermediate frequency range 1-2 GHz. Each channel consists of two parts: the digitizer and a Mark 5C recorder. The digitizer is placed on the antenna close to the corresponding low-noise amplifier output and consists of the analog frequency converter, the ADC, and a device for digital processing of the signals using an FPGA. The digitizer uses sub-sampling at a frequency of 2048 MHz. To produce narrow-band channels and to interface with existing data acquisition systems, polyphase filtering can be performed in the FPGA. Digital signals are re-quantized to 2 bits in the FPGA and transferred to an input of the Mark 5C over a fiber line. The breadboard model of the digitizer is being tested, and the data acquisition system is being designed.
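
    Two-bit re-quantization of this kind is standard in VLBI back ends. A minimal numerical sketch follows; the ~0.98-sigma threshold and the {-3, -1, +1, +3} reconstruction levels are conventional choices for Gaussian noise, not details taken from this record.

        import numpy as np

        def requantize_2bit(samples):
            # Re-quantize real-valued samples to 2 bits (4 levels), VLBI-style.
            # Thresholds at 0 and roughly +/- one standard deviation.
            v0 = 0.98 * samples.std()
            thresholds = np.array([-v0, 0.0, v0])
            codes = np.digitize(samples, thresholds)      # values in {0, 1, 2, 3}
            levels = np.array([-3.0, -1.0, 1.0, 3.0])     # conventional reconstruction levels
            return codes, levels[codes]

        codes, recon = requantize_2bit(np.random.randn(4096))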

  15. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
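
    The FALVQ family uses specific membership functions; the sketch below implements only a generic fuzzy competitive rule (fuzzy-c-means-style memberships, with every prototype updated for each input) to illustrate the kind of unsupervised vector quantization the abstract formulates segmentation as. The feature construction and all parameters are illustrative assumptions.

        import numpy as np

        def fuzzy_vq(features, n_prototypes=3, lr=0.05, epochs=20, m=2.0, seed=0):
            # Every prototype moves toward each input, weighted by a fuzzy membership
            # (a simplified stand-in for the FALVQ update formulas).
            rng = np.random.default_rng(seed)
            protos = features[rng.choice(len(features), n_prototypes, replace=False)].copy()
            for _ in range(epochs):
                for x in features[rng.permutation(len(features))]:
                    d = ((protos - x) ** 2).sum(axis=1) + 1e-12
                    u = (1.0 / d) ** (1.0 / (m - 1.0))
                    u /= u.sum()                          # memberships sum to one
                    protos += lr * u[:, None] * (x - protos)
            return protos

        # Feature vectors, e.g., per-voxel relaxation parameters from MR acquisitions.
        feats = np.vstack([np.random.randn(200, 2) + c for c in ([0, 0], [4, 4], [0, 4])])
        prototypes = fuzzy_vq(feats)
        labels = ((feats[:, None, :] - prototypes) ** 2).sum(-1).argmin(1)  # segmentation map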

  16. Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.

    2006-09-15

    Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.

  17. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  18. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  19. Scan-Based Implementation of JPEG 2000 Extensions

    NASA Technical Reports Server (NTRS)

    Rountree, Janet C.; Webb, Brian N.; Flohr, Thomas J.; Marcellin, Michael W.

    2001-01-01

    JPEG 2000 Part 2 (Extensions) contains a number of technologies that are of potential interest in remote sensing applications. These include arbitrary wavelet transforms, techniques to limit boundary artifacts in tiles, multiple component transforms, and trellis-coded quantization (TCQ). We are investigating the addition of these features to the low-memory (scan-based) implementation of JPEG 2000 Part 1. A scan-based implementation of TCQ has been realized and tested, with a very small performance loss as compared with the full image (frame-based) version. A proposed amendment to JPEG 2000 Part 2 will effect the syntax changes required to make scan-based TCQ compatible with the standard.

  20. Education Longitudinal Study of 2002: Base Year Data File User's Manual. NCES 2004-405

    ERIC Educational Resources Information Center

    Ingels, Steven J.; Pratt, Daniel J.; Rogers, James E.; Siegel, Peter H.; Stutts, Ellen S.

    2004-01-01

    This manual has been produced to familiarize data users with the procedures followed for data collection and processing for the base year of the Education Longitudinal Study of 2002 (ELS:2002). It also provides the necessary documentation for use of the public-use data files, as they appear on the ELS:2002 base year Electronic Codebook (ECB). Most…

  1. 2000 Military Recruiter Survey: Administration, Datasets and Codebook

    DTIC Science & Technology

    2002-08-01

    Indexed text fragments from the codebook: "…recruiters who have not learned everything necessary from their training…"; "Recruiters need constant pressure in order for them to make their…"; response items including "Distance learning", "Filling out electronic forms", and "Other"; "It is my job to teach recruiters who have not learned everything necessary from their training…".

  2. Collective Beer Brand Identity: A Semiotic Analysis of the Websites Representing Small and Medium Enterprises in the Brewing Industry of Western PA

    ERIC Educational Resources Information Center

    Cincotta, Dominic

    2014-01-01

    This research studies how brand identities, individual and collective, are created among small and medium-sized enterprise breweries in western Pennsylvania, as read through their websites. Content analysis through the frame of Kress and van Leeuwen was used as the basis for the codebook that reads each brand identity for the researcher. The…

  3. Teacher Followup Survey, 1994-95: Data File User's Manual Restricted-Use Codebook. Working Paper Series.

    ERIC Educational Resources Information Center

    Whitener, Summer D.; Gruber, Kerry J.; Rohr, Carol L.; Fondelier, Sharon E.

    The Teacher Followup Survey (TFS) is a 1-year followup of a sample of teachers who were originally selected for the Teacher Questionnaire of the Schools and Staffing Survey (SASS) of the National Center for Education Statistics. There have been three data cycles for the SASS and three TFS versions. This data file user's manual enables the user to…

  4. Getting More Value from the LibQUAL+® Survey: The Merits of Qualitative Analysis and Importance-Satisfaction Matrices in Assessing Library Patron Comments

    ERIC Educational Resources Information Center

    Detlor, Brian; Ball, Kathryn

    2015-01-01

    This paper examines the merit of conducting a qualitative analysis of LibQUAL+® survey comments as a means of leveraging quantitative LibQUAL+ results, and using importance-satisfaction matrices to present and assess qualitative findings. Comments collected from the authors' institution's LibQUAL+ survey were analyzed using a codebook based on…

  5. August 2005 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2005-09-01

    Indexed text fragments from the codebook: "14. Military/Civilian Comparisons: comparisons of military to the civilian world, including promotion opportunities, hours worked, compensation…"; "110. How do the following opportunities in the military compare to opportunities in the civilian world?", with response options ranging from "Much better as a civilian" and "Somewhat better as a civilian" to "No…".

  6. Using LEDs and Phosphorescent Materials to Teach High School Students Quantum Mechanics: A Guided-Inquiry Laboratory for Introductory High School Chemistry

    ERIC Educational Resources Information Center

    Green, William P.; Trotochaud, Alan; Sherman, Julia; Kazerounian, Kazem; Faraclas, Elias W.

    2009-01-01

    The quantization of electronic energy levels in atoms is foundational to a mechanistic explanation of the periodicity of elemental properties and behavior. This paper presents a hands-on, guided inquiry approach to teaching this concept as part of a broader treatment of quantum mechanics, and as a foundation for an understanding of chemical…

  7. Three-Level De-Multiplexed Dual-Branch Complex Delta-Sigma Transmitter.

    PubMed

    Arfi, Anis Ben; Elsayed, Fahmi; Aflaki, Pouya M; Morris, Brad; Ghannouchi, Fadhel M

    2018-02-20

    In this paper, a dual-branch topology driven by a Delta-Sigma Modulator (DSM) with a complex quantizer, also known as the Complex Delta-Sigma Modulator (CxDSM), with a 3-level quantized output signal is proposed. By de-multiplexing the 3-level Delta-Sigma-quantized signal into two bi-level streams, an efficiency enhancement over the operational frequency range is achieved. The de-multiplexed signals drive a dual-branch amplification block composed of two switch-mode back-to-back power amplifiers working at peak power. A signal processing technique known as quantization noise reduction with in-band filtering (QNRIF) is applied to each of the de-multiplexed streams to boost the overall performance, particularly the Adjacent Channel Leakage Ratio (ACLR). After amplification, the two branches are combined using a non-isolated combiner, preserving the efficiency of the transmitter. A comprehensive study of the operation of this topology and of the signal characteristics used to drive the dual-branch Switch-Mode Power Amplifiers (SMPAs) was carried out. Moreover, this work proposes a highly efficient design of the amplification block based on a back-to-back power topology performing dynamic load modulation that exploits the non-overlapping properties of the de-multiplexed Complex DSM signal. For experimental validation, the proposed de-multiplexed 3-level Delta-Sigma topology was implemented on the BEEcube™ platform, followed by the back-to-back Class-E switch-mode power amplification block. The full transceiver was assessed using a fourth-generation LTE (Long Term Evolution) 1.4 MHz signal with a peak-to-average power ratio (PAPR) of 8 dB. The dual-branch topology exhibited good linearity, and the coding efficiency of the transmitter chain was higher than 72% across the frequency band from 1.8 GHz to 2.7 GHz.
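
    To make the 3-level quantization and the non-overlapping de-multiplexing concrete, here is a minimal first-order delta-sigma loop in Python. The paper's modulator is a complex, higher-order design; this scalar sketch with illustrative parameters only demonstrates the demux property the dual-branch amplifier relies on.

        import numpy as np

        def dsm3_demux(x):
            # First-order delta-sigma modulator with a 3-level quantizer {-1, 0, +1},
            # then de-multiplexing into two non-overlapping binary streams.
            acc, out = 0.0, np.empty(len(x))
            for i, v in enumerate(x):
                acc += v - (out[i - 1] if i else 0.0)   # error feedback
                out[i] = np.round(np.clip(acc, -1, 1))  # 3-level quantization
            p = (out > 0).astype(int)                   # drives branch-1 PA
            n = (out < 0).astype(int)                   # drives branch-2 PA
            return out, p, n                            # out == p - n, p*n == 0

        t = np.arange(2048)
        out, p, n = dsm3_demux(0.6 * np.sin(2 * np.pi * t / 128))
        assert np.all(out == p - n) and not np.any(p & n)   # branches never overlap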

  8. Integrated Data and Control Level Fault Tolerance Techniques for Signal Processing Computer Design

    DTIC Science & Technology

    1990-09-01

    Indexed text fragments: "TOLERANCE TECHNIQUES FOR SIGNAL PROCESSING COMPUTER DESIGN, G. Robert Redinbo. I. INTRODUCTION: High-speed signal processing is an important application of…"; "…techniques and mathematical approaches will be expanded later to the situation where hardware errors and roundoff and quantization noise affect all…"; "…detect errors equal in number to the degree of g(X), the maximum permitted by the Singleton bound [13]. Real cyclic codes, primarily applicable to…".

  9. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  10. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  11. 4D Sommerfeld quantization of the complex extended charge

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, Igor E.

    2017-12-01

    Gravitational fields and accelerations cannot change quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can quantitatively introduce the fine structure constant and the Planck length.

  12. An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.

    PubMed

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-12-15

    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase, while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by the experimental results measured for our designed test structure, which is fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
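
    The benefit of folding gamma into the conversion step can be checked numerically. The sketch below compares applying gamma after a uniform 8-bit quantizer with quantizing a pre-companded signal, measuring the error in the gamma-corrected domain. All parameters are illustrative, and digital companding here only emulates what the paper's non-linear VCO does in analog.

        import numpy as np

        gamma, bits = 1.0 / 2.2, 8
        levels = 2 ** bits
        x = np.random.rand(100000)   # scene luminance, normalized to [0, 1]

        # (a) Uniform ADC, then digital gamma: the code steps are stretched in the shadows.
        after = (np.round(x * (levels - 1)) / (levels - 1)) ** gamma

        # (b) Gamma folded into the conversion (as with a non-linear VCO-based ADC):
        # the signal is companded first, so every code step is uniform in the gamma domain.
        during = np.round(x ** gamma * (levels - 1)) / (levels - 1)

        ref = x ** gamma
        print("MSE, gamma after quantization :", np.mean((after - ref) ** 2))
        print("MSE, gamma during quantization:", np.mean((during - ref) ** 2))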

  13. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    PubMed Central

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

    In this paper, a novel analog gamma correction scheme with a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase, while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by the experimental results measured for our designed test structure, which is fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692

  14. Confrontation Between Quantized Periods of Some Exo-planetary Systems and Observations

    NASA Astrophysics Data System (ADS)

    El Fady Morcos, Abd

    2012-07-01

    CoRoT and Kepler were designed to detect Earth-like extrasolar planets. The orbital elements and periods of these planets contain some uncertainties. Many theoretical treatments based on the idea of quantization have aimed to find the orbital elements of these exoplanets. In the present work, as an extension of previous works, the periods of some exoplanetary systems are calculated using a simple derived formula, and the orbital velocities of some of them are predicted. A comparison between the calculated and observed data is presented.

  15. The coordinate coherent states approach revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn

    2013-02-15

    We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between an annihilation and creation operator, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we argue against this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. Highlights: Suggest a canonical quantization in the coordinate coherent states approach. Prove the validity of the concept of point particles. Apply the canonical quantization to the Unruh effect and Hawking radiation. Find no deformations in the Unruh temperature and Hawking temperature. Provide the modified spectra of the Unruh effect and Hawking radiation.

  16. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space, identifying significant input segments and subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.

  17. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
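
    An illustrative projected Landweber iteration for compressed sensing: a gradient step toward the measurements followed by soft thresholding in a sparse transform domain. A 1-D DCT stands in for the paper's wavelet transform, and the fixed threshold tau replaces its entropy-aware thresholding model; all names and parameters are assumptions.

        import numpy as np
        from scipy.fftpack import dct, idct

        def landweber_cs(y, phi, n_iter=100, tau=0.05):
            x = phi.T @ y                                # initial estimate
            step = 1.0 / np.linalg.norm(phi, 2) ** 2     # keeps the gradient step convergent
            for _ in range(n_iter):
                x = x + step * phi.T @ (y - phi @ x)     # Landweber (gradient) update
                c = dct(x, norm='ortho')
                c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)   # soft thresholding
                x = idct(c, norm='ortho')                # project back to signal domain
            return x

        n, m = 256, 96
        rng = np.random.default_rng(0)
        phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
        truth = idct(np.where(rng.random(n) < 0.05, rng.standard_normal(n), 0.0), norm='ortho')
        x_hat = landweber_cs(phi @ truth, phi)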

  18. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
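
    The online vector quantization rule described above is compact enough to sketch directly: if a new input falls within the quantization size of an existing center, its update is merged into that center's coefficient instead of growing the network. A minimal Python version with illustrative parameters:

        import numpy as np

        def qklms(X, d, eta=0.2, sigma=1.0, eps_q=0.5):
            # Quantized kernel LMS with a Gaussian kernel.
            centers, alphas, y_hat = [], [], np.zeros(len(d))
            kern = lambda A, x: np.exp(-np.sum((A - x) ** 2, axis=-1) / (2 * sigma ** 2))
            for i, x in enumerate(X):
                if centers:
                    C = np.asarray(centers)
                    y_hat[i] = np.dot(alphas, kern(C, x))
                    j = int(np.argmin(np.linalg.norm(C - x, axis=1)))
                err = d[i] - y_hat[i]
                if centers and np.linalg.norm(centers[j] - x) <= eps_q:
                    alphas[j] += eta * err      # "quantize" onto the closest center
                else:
                    centers.append(x.copy())    # grow the network only when necessary
                    alphas.append(eta * err)
            return centers, alphas, y_hat

        X = np.random.randn(500, 2)
        d = np.sin(X[:, 0]) + 0.1 * np.random.randn(500)
        centers, alphas, y_hat = qklms(X, d)
        print("network size:", len(centers), "of", len(X), "inputs")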

  19. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
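
    The first stage (minimizing total quantization distortion under a bit budget) is commonly solved in closed form at high rate. The sketch below implements that classic allocation rather than the paper's exact KKT solution, with a greedy fix-up for integer, non-negative bit counts; all parameters are illustrative.

        import numpy as np

        def allocate_bits(variances, total_bits):
            # High-rate-optimal allocation: b_k = b_mean + 0.5*log2(var_k / geometric_mean).
            n = len(variances)
            geo_mean = np.exp(np.mean(np.log(variances)))
            b = total_bits / n + 0.5 * np.log2(variances / geo_mean)
            b = np.maximum(b, 0.0)                            # clamp negative allocations
            b = np.floor(b * total_bits / b.sum()).astype(int)  # integer-ize within budget
            for _ in range(total_bits - b.sum()):             # hand out leftover bits
                k = np.argmax(variances / 4.0 ** b)           # largest current distortion
                b[k] += 1
            return b

        subband_vars = np.array([50.0, 12.0, 3.0, 0.5])   # e.g., wavelet sub-band variances
        print(allocate_bits(subband_vars, total_bits=16))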

  20. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
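
    A small experiment in the spirit of the index-assignment problem: for a scalar codebook, compare the expected distortion caused by single-bit index errors under a natural (sorted) assignment versus a random one. The paper's algorithms search for assignments better than either; this sketch only shows why the assignment matters.

        import numpy as np

        def single_bit_error_distortion(codebook, assignment, bits, p_flip):
            # Expected extra distortion when each index bit flips independently with
            # probability p_flip (first-order, single-bit-error approximation).
            inv = np.argsort(assignment)           # binary word -> codeword index
            total = 0.0
            for i, word in enumerate(assignment):
                for b in range(bits):
                    j = inv[word ^ (1 << b)]       # codeword received after one bit flip
                    total += p_flip * np.sum((codebook[i] - codebook[j]) ** 2)
            return total / len(codebook)

        bits = 4
        codebook = np.sort(np.random.randn(2 ** bits))[:, None]   # sorted scalar codebook
        natural = np.arange(2 ** bits)                             # index == binary word
        random_asgn = np.random.permutation(2 ** bits)
        print("natural:", single_bit_error_distortion(codebook, natural, bits, 1e-2))
        print("random :", single_bit_error_distortion(codebook, random_asgn, bits, 1e-2))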

  1. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.

  2. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

    Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a class of conformally equivalent metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry that allows a reformulated Immirzi parameter, necessary for loop quantization, to behave like an arbitrary group parameter requiring no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.

  3. Tribology of the lubricant quantized sliding state.

    PubMed

    Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio

    2009-11-07

    In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.

  4. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
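
    A minimal sketch of the quantize-with-dither idea in Python. The published fpack/funpack tools add dither that can be regenerated from a stored seed at decompression time; the helper below is illustrative of subtractive dithering, not their implementation.

        import numpy as np

        def quantize_dither(pixels, delta, seed=0):
            # Subtractive-dither quantization: add uniform dither before rounding and
            # subtract it again on restoration. The residual error becomes uniform and
            # signal-independent, which preserves photometric/astrometric precision.
            d = np.random.default_rng(seed).random(pixels.shape) - 0.5   # regenerable dither
            q = np.round(pixels / delta + d).astype(np.int32)            # stored integers
            restored = (q - d) * delta                                   # decompression side
            return q, restored

        img = 100.0 + np.random.randn(512, 512)          # synthetic float image
        q, restored = quantize_dither(img, delta=0.25)   # coarser delta -> better compression
        print("rms quantization error:", np.sqrt(np.mean((restored - img) ** 2)))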

  5. Quantized Majorana conductance

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.

    2018-04-01

    Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  6. Quantized Majorana conductance.

    PubMed

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D S; de Moor, Michiel W A; Car, Diana; Op Het Veld, Roy L M; van Veldhoven, Petrus J; Koelling, Sebastian; Verheijen, Marcel A; Pendharkar, Mihir; Pennachio, Daniel J; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J; Bakkers, Erik P A M; Sarma, S Das; Kouwenhoven, Leo P

    2018-04-05

    Majorana zero-modes, a type of localized quasiparticle, hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  7. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  8. 1979 Reserve Force Studies Surveys: User’s Manual and Codebooks.

    DTIC Science & Technology

    1981-09-01

    Indexed text fragments: "…units, for instance, artillery, which have the same manpower demand characteristics (similar size, skills and grade structure) provided better…"; "…personnel groups. With random cluster sampling, the pattern of questionnaire returns for each group of analytic interest should match the Guard and…"; "…marking the matching bubbles. First, although the instructions ask the respondent to 'zero-fill' and 'right-justify,' some respondents entered the value…".

  9. August 2004 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2005-01-01

    Indexed text fragments: "…exercising, and self-reported weight (i.e., underweight or overweight)"; "11. Compensation: present versus alternative retirement pay systems; present…"; geographic response options including "Western Hemisphere (e.g., Cuba, Honduras, Peru)" and "Other or not sure"; section headings "BACKGROUND INFORMATION" and "TEMPO, READINESS, AND STRESS".

  10. 1986 Proteus Survey: Technical Manual and Codebook

    DTIC Science & Technology

    1992-06-01

    Indexed text fragments: "…Officer Candidate School and Direct Commission) and by gender. Female officers were oversampled (30% in the sample versus approximately 16% in the…"; "…analyze the effects of this change in policy both on the individual cadets and on the Academy and to study the process of coeducation over four years…"; "…Candidate School (OCS), and Direct Commissioning (DC). Approximately 1,000 officers were randomly selected from each commissioning year group 1980-1984 from…".

  11. SIFT Meets CNN: A Decade Survey of Instance Retrieval.

    PubMed

    Zheng, Liang; Yang, Yi; Tian, Qi

    2018-05-01

    In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.
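
    To ground the codebook-based pipeline the survey organizes by codebook size, here is a minimal bag-of-visual-words sketch in Python: train a small codebook by k-means, then represent an image as a normalized histogram of nearest-codeword assignments. The random vectors stand in for real SIFT descriptors, and the codebook size of 256 is an illustrative choice.

        import numpy as np
        from scipy.cluster.vq import kmeans2, vq

        np.random.seed(1)
        # Descriptors pooled from many training images (random stand-ins for SIFT).
        train_desc = np.random.rand(5000, 128)
        codebook, _ = kmeans2(train_desc, 256, minit='points')   # small visual codebook

        image_desc = np.random.rand(300, 128)        # descriptors from one query image
        words, _ = vq(image_desc, codebook)          # nearest-codeword assignment
        bow = np.bincount(words, minlength=len(codebook)).astype(float)
        bow /= np.linalg.norm(bow)                   # L2-normalized BoW vector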

  12. Content-based retrieval of historical Ottoman documents stored as textual images.

    PubMed

    Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis

    2004-03-01

    There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed against the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of the images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W

    Purpose: During the past decade the quantization of coupled/forced electromagnetic circuits with or without Ohmic resistance has become the subject of some fundamental studies, since even problems of quantum electrodynamics can be solved in an elegant manner, e.g. the creation of quantized electromagnetic fields. In this communication, we use these principles to describe optimization procedures in the design of klystrons, synchrotron radiation, and high-energy bremsstrahlung. Methods: The basis is the Hamiltonian of an electromagnetic circuit and its extension to coupled circuits, which allows the study of symmetries and perturbed symmetries in a very apparent way (SU2, SU3, SU4). The introduction of resistance and of forced oscillators for emission and absorption in such coupled systems provides characteristic resonance conditions, and atomic orbitals can be described thereby. The extension to virtual orbitals leads to the creation of bremsstrahlung if the incident electron (velocity v nearly c) is described by a current, which is associated with its inductance, and the virtual orbital by a charge distribution (capacitance). Coupled systems with forced oscillators can be used to drastically amplify the resonance frequencies, describing klystrons and synchrotron radiation. Results: The cross-section formula for bremsstrahlung given by the propagator method of Feynman can readily be derived. The design of klystrons and synchrotrons, including the radiation outcome, can be described and optimized by determining the mutual magnetic couplings between the oscillators induced by the currents. Conclusions: The presented methods of quantization of circuits including resistance provide a rather straightforward way to understand complex technical processes such as the creation of bremsstrahlung or the creation of radiation by klystrons and synchrotrons. They can be used both for optimization procedures and, last but not least, for pedagogical purposes with regard to a qualified understanding of radiation physics for students.

  14. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiency in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method is able to determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the selection of the quantization method should be considered carefully, because the rankings and optimal networks change accordingly.
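
    The entropy objectives above reduce to counting over quantized observations. A minimal sketch for two stations, using equal-width binning as one possible quantization choice (the study's point is precisely that results are sensitive to this choice); the synthetic records and all parameters are illustrative.

        import numpy as np

        def entropy(*series, bins=10):
            # Joint Shannon entropy (bits) after quantizing each series into
            # equal-width bins.
            digitized = [np.digitize(s, np.histogram_bin_edges(s, bins)[1:-1]) for s in series]
            _, counts = np.unique(np.vstack(digitized).T, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        rain = np.random.gamma(2.0, 2.0, 5000)                 # synthetic station records
        flow = 0.7 * rain + np.random.gamma(1.0, 1.0, 5000)
        H_rain, H_flow, H_joint = entropy(rain), entropy(flow), entropy(rain, flow)
        total_correlation = H_rain + H_flow - H_joint          # redundancy to minimize
        cond_flow_given_rain = H_joint - H_rain                # new information to maximize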

  15. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming ever more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiency in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method determines more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully, because the rankings and optimal networks change accordingly.
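
    As a concrete illustration of these objectives, the following Python sketch (entirely synthetic: the station records, the ten-bin uniform quantizer, and the three-station subset size are illustrative assumptions, not the study's configuration) scores candidate station subsets by joint entropy minus total correlation.

        import numpy as np
        from itertools import combinations

        def quantize(x, nbins=10):
            # Equal-width quantization of a series into integer bin labels.
            edges = np.linspace(x.min(), x.max(), nbins + 1)[1:-1]
            return np.digitize(x, edges)

        def entropy(*cols):
            # Joint Shannon entropy (bits) of one or more discrete series.
            _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def total_correlation(cols):
            # Redundancy: sum of marginal entropies minus the joint entropy.
            return sum(entropy(c) for c in cols) - entropy(*cols)

        # Hypothetical data: 5 candidate precipitation stations, 1000 time steps,
        # increasingly noisy copies of a common gamma-distributed signal.
        rng = np.random.default_rng(0)
        base = rng.gamma(2.0, 1.0, 1000)
        stations = [quantize(base + rng.normal(0, s, 1000)) for s in (0.2, 0.4, 0.6, 1.0, 1.5)]

        # Score every 3-station subset: prefer high joint entropy, low redundancy.
        best = max(combinations(range(5), 3),
                   key=lambda idx: entropy(*[stations[i] for i in idx])
                                 - total_correlation([stations[i] for i in idx]))
        print("selected stations:", best)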

  16. A general diagrammatic algorithm for contraction and subsequent simplification of second-quantized expressions.

    PubMed

    Bochevarov, Arteum D; Sherrill, C David

    2004-08-22

    We present a general computer algorithm to contract an arbitrary number of second-quantized expressions and simplify the obtained analytical result. The functions that perform these operations are a part of the program Nostromo which facilitates the handling and analysis of the complicated mathematical formulas which are often encountered in modern quantum-chemical models. In contrast to existing codes of this kind, Nostromo is based solely on the Goldstone-diagrammatic representation of algebraic expressions in Fock space and has capabilities to work with operators as well as scalars. Each Goldstone diagram is internally represented by a line of text which is easy to interpret and transform. The calculation of matrix elements does not exploit Wick's theorem in a direct way, but uses diagrammatic techniques to produce only nonzero terms. The identification of equivalent expressions and their subsequent factorization in the final result is performed easily by analyzing the topological structure of the diagrammatic expressions.

  17. Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme

    NASA Astrophysics Data System (ADS)

    Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen

    2016-06-01

    Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were first proposed. In existing studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been extensively explored using this quantization approach. Following a similar method, several game models of opinion formation are quantized here on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can change the properties of some classical opinion formation game models in fascinating ways, so as to generate win-win outcomes.

  18. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras, natural from the geometrical point of view, are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (taken as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  19. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  20. Topologies on quantum topoi induced by quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  1. Bulk-edge correspondence in topological transport and pumping

    NASA Astrophysics Data System (ADS)

    Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro

    2018-03-01

    The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties that is ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is theoretically old, BEC in the pump has been established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup and in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk; its quantization is guaranteed by a compensation between the bulk and edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.

  2. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
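
    For readers unfamiliar with deadzone quantization, the hedged Python sketch below shows a plain USDQ quantizer and one plausible two-step variant; the split threshold, step sizes, and reconstruction offset are our assumptions, while the paper's actual 2SDQ step-size rule and rate-distortion adjustment are as described above.

        import numpy as np

        def usdq(w, step):
            # Uniform scalar deadzone quantization: index 0 covers (-step, step).
            return np.sign(w) * np.floor(np.abs(w) / step)

        def dequantize(idx, step, delta=0.5):
            # Midpoint-style reconstruction commonly paired with deadzone quantizers.
            return np.sign(idx) * (np.abs(idx) + delta) * step * (idx != 0)

        def two_step_sdq(w, fine, coarse, split):
            # Illustrative 2-step variant (our assumption, not the paper's exact
            # rule): dense low-magnitude coefficients get a fine step, the sparse
            # high-magnitude tail a coarser one.
            small = np.abs(w) < split
            return np.where(small, usdq(w, fine), usdq(w, coarse)), small

        rng = np.random.default_rng(1)
        w = rng.laplace(scale=2.0, size=8)        # wavelet-like heavy-tailed data
        idx = usdq(w, step=1.0)
        print(np.round(w, 2))
        print(idx.astype(int), "->", np.round(dequantize(idx, 1.0), 2))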

  3. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows it to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
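
    The following Python sketch conveys the general flavor of binary feature quantization and Hamming-distance matching; the sign-of-random-projection coder is a generic stand-in (not the paper's learned, descriptiveness-preserving quantizer), and a brute-force scan stands in for the inverted-file lookup.

        import numpy as np

        def binary_quantize(feats, proj):
            # Quantize real descriptors into 64-bit codes by the sign of random
            # projections; each code is packed into a single uint64.
            bits = (feats @ proj > 0).astype(np.uint64)              # (n, 64)
            weights = np.uint64(1) << np.arange(64, dtype=np.uint64)
            return bits @ weights                                    # (n,)

        def hamming(a, b):
            return bin(int(a) ^ int(b)).count("1")

        rng = np.random.default_rng(2)
        proj = rng.normal(size=(128, 64))         # 128-D SIFT -> 64-bit code
        db = rng.normal(size=(1000, 128))
        codes = binary_quantize(db, proj)

        query = db[42] + 0.05 * rng.normal(size=128)  # noisy copy of item 42
        qcode = binary_quantize(query[None, :], proj)[0]
        print("nearest by Hamming distance:",
              min(range(len(codes)), key=lambda i: hamming(codes[i], qcode)))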

  4. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
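
    The sensitivity to the quantization method is easy to reproduce; the Python sketch below (synthetic, gamma-distributed intensities and an arbitrary choice of 16 gray levels) contrasts uniform equal-width binning with equal-frequency binning, whose very different gray-level occupancies will propagate into any co-occurrence statistics computed from them.

        import numpy as np

        def quantize_uniform(img, levels):
            # Equal-width bins over the full intensity range.
            lo, hi = img.min(), img.max()
            q = np.floor((img - lo) / (hi - lo + 1e-12) * levels)
            return np.clip(q, 0, levels - 1).astype(int)

        def quantize_equal_freq(img, levels):
            # Equal-frequency bins: each gray level gets roughly the same count.
            edges = np.quantile(img, np.linspace(0, 1, levels + 1)[1:-1])
            return np.digitize(img, edges)

        rng = np.random.default_rng(3)
        img = rng.gamma(2.0, 20.0, (64, 64))      # skewed, ADC-like intensities

        for f, name in [(quantize_uniform, "uniform"), (quantize_equal_freq, "equal-freq")]:
            counts = np.bincount(f(img, 16).ravel(), minlength=16)
            print(f"{name:10s} level occupancy: min={counts.min()}, max={counts.max()}")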

  5. From Weyl to Born-Jordan quantization: The Schrödinger representation revisited

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-03-01

    The ordering problem has been one of the long-standing and much-discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one uses short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, and this without affecting the main existing theory; for instance, quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution reducing interference effects.

  6. System design of the annular suspension and pointing system /ASPS/

    NASA Technical Reports Server (NTRS)

    Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.

    1978-01-01

    This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.

  7. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  8. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors, caused by quantization effects, that occur in fringe pattern analysis. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and add a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles apply equally well to other phase-measuring techniques, yielding a phase error distribution that is caused by the camera bit depth.
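
    A minimal numerical experiment along these lines can be written in a few lines of Python; the fringe pattern, carrier frequency, and crude band-pass window below are our own constructions, not the instrumentation of the study.

        import numpy as np

        def fringe_phase(img, f0):
            # 1-D Fourier fringe analysis per row: keep the carrier lobe, take
            # the angle of the inverse transform (carrier term still included).
            F = np.fft.fft(img, axis=1)
            G = np.zeros_like(F)
            G[:, f0 // 2: 3 * f0 // 2] = F[:, f0 // 2: 3 * f0 // 2]
            return np.angle(np.fft.ifft(G, axis=1))

        rows, cols, f0 = 64, 256, 16              # f0 = carrier fringes per row
        x = np.arange(cols) / cols
        phi = np.pi * np.sin(2 * np.pi * x)       # test phase to recover
        img = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * f0 * x + phi), (rows, 1))

        ref = fringe_phase(img, f0)               # unquantized reference
        for bits in (4, 6, 8, 10):
            q = np.round(img * (2 ** bits - 1)) / (2 ** bits - 1)   # b-bit camera
            err = np.angle(np.exp(1j * (fringe_phase(q, f0) - ref)))
            print(f"{bits:2d} bits: rms phase error {np.sqrt((err ** 2).mean()):.1e} rad")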

  9. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to Feynman's standard approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity, and quantum mechanics in the presence of friction are analyzed in detail.

  10. Proteus Survey: Technical Manual and Codebook

    DTIC Science & Technology

    1992-06-01

    Reserve Officer Training Corp, 20% from Officer Candidate School and Direct Commission) and by gender. Female officers were supposed to be oversampled...stratified by source of commission (40% USMA, 40% ROTC, 20% OCS and DC) and by gender, as in previous years. A total of approximately 7,000 surveys were...for the 1987 Proteus Survey and the population percentages from the OLRDB for each of the key strata (gender and source of commission) reportedly used

  11. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  12. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding the MSB and LSB jointly can we hope to approach this 75.8% limit; hence, non-binary codes are essential to achieve acceptable performance.
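
    The mutual-information calculation can be approximated by Monte Carlo as in the Python sketch below; the labeling (sign bit plus a magnitude bit with threshold t) and the additive-Gaussian channel at -3 dB are simplifying assumptions, so the numbers only roughly track those quoted above.

        import numpy as np

        def mutual_info(a, b, levels=4):
            # Mutual information (bits) between two discrete arrays.
            pxy = np.histogram2d(a, b, bins=(levels, levels))[0]
            pxy /= pxy.sum()
            px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

        def two_bit(x, t):
            # MSB from the sign, second bit from a magnitude threshold t.
            return 2 * (x > 0) + (np.abs(x) > t)

        rng = np.random.default_rng(4)
        n, snr_db = 500_000, -3.0
        x = rng.normal(size=n)                    # Alice's Gaussian samples
        sigma = 10 ** (-snr_db / 20)              # noise std giving -3 dB SNR
        y = x + sigma * rng.normal(size=n)        # Bob's correlated observation

        for t in (0.3, 0.5, 0.7, 1.0):
            print(f"t={t:.1f}: I(A;B) ~ {mutual_info(two_bit(x, t), two_bit(y, t)):.4f} bits")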

  13. Three-Axis Superconducting Gravity Gradiometer

    NASA Technical Reports Server (NTRS)

    Paik, Ho Jung

    1987-01-01

    Gravity gradients can be measured even on accelerating platforms. The three-axis superconducting gravity gradiometer is based on flux quantization and the Meissner effect in superconductors, and employs a superconducting quantum interference device as an amplifier. It incorporates several magnetically levitated proof masses, and its design integrates accelerometers for operation in differential mode. Its principal use is in commercial instruments for the measurement of Earth-gravity gradients in geophysical surveying and exploration for oil.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernández Cristóbal, Jose Ma, E-mail: jmariaffc@gmail.com

    Under the generic designation of unimodular theory, two theoretical models of gravity are considered: unimodular gravity and the TDiff theory. Our approach is primarily pedagogical. We aim to describe these models both from a geometric and a field-theoretical point of view. In addition, we explore connections with the cosmological-constant problem and outline some applications. We do not discuss the application of this theory to the quantization of gravity.

  15. Finite wordlength implementation of a megachannel digital spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Grimm, M. J.; Zimmerman, G. A.; Wilck, H. C.

    1986-01-01

    The results of an extensive system analysis of the megachannel spectrum analyzer currently being developed for use in various applications of the Deep Space Network are presented. The intent of this analysis is to quantify the effects of digital quantization errors on system performance. The results of this analysis provide useful guidelines for choosing various system design parameters to enhance system performance.
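
    As a toy version of such an analysis, the Python sketch below (an off-bin test tone and a simple rounding quantizer, not the actual DSN system model) measures quantization signal-to-noise ratio as a function of wordlength, illustrating the roughly 6 dB-per-bit behavior that underlies such design trade-offs.

        import numpy as np

        def quantize(x, bits):
            # Round to a signed fixed-point grid with the given wordlength.
            scale = 2 ** (bits - 1) - 1
            return np.round(x * scale) / scale

        n = 2 ** 16
        t = np.arange(n)
        x = 0.9 * np.sin(2 * np.pi * 1234.5 * t / n)   # off-bin tone, |x| < 1

        for bits in (4, 8, 12, 16):
            e = quantize(x, bits) - x
            snr = 10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2))
            print(f"{bits:2d}-bit samples: SNR ~ {snr:5.1f} dB")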

  16. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.

  17. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2014-07-01

    establishment of Glioblastoma (GBM) cell lines from GBM patient's tumor samples and quantized cell populations of each of the parental GBM cell lines, we... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these...analysis of these quantized cell sub populations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive

  18. Differential calculus on quantized simple lie groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-07-01

    Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SUq(2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups is used. An equivalence between Woronowicz's enveloping algebra, generated by the dual space to the left-invariant differential forms, and the corresponding quantized universal enveloping algebra is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.

  19. Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.

    PubMed

    Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V

    2013-01-16

    Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.

  20. Deformation quantizations with separation of variables on a Kähler manifold

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    1996-10-01

    We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.

  1. Extension of loop quantum gravity to f(R) theories.

    PubMed

    Zhang, Xiangdong; Ma, Yongge

    2011-04-29

    The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.

  2. Foundations of Quantum Mechanics: Derivation of a dissipative Schrödinger equation from first principles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, L.A.; Olavo, L.S.F., E-mail: olavolsf@gmail.com

    Dissipation in Quantum Mechanics took some time to become a robust field of investigation after the birth of the field. The main issue hindering developments in the field is that the quantization process was always tightly connected to the Hamiltonian formulation of Classical Mechanics. In this paper we present a quantization process that does not depend upon the Hamiltonian formulation of Classical Mechanics (although it still departs from Classical Mechanics) and thus overcome the problem of finding, from first principles, a completely general Schrödinger equation encompassing dissipation. This generalized process of quantization is shown to be nothing but an extension of a more restricted version that is shown to produce the Schrödinger equation for Hamiltonian systems from first principles (even for Hamiltonian velocity-dependent potentials). - Highlights: • A quantization process independent of the Hamiltonian formulation of Classical Mechanics is proposed. • This quantization method is applied to dissipative or absorptive systems. • A dissipative Schrödinger equation is derived from first principles.

  3. Can one ADM quantize relativistic bosonic strings and membranes?

    NASA Astrophysics Data System (ADS)

    Moncrief, Vincent

    2006-04-01

    The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S1)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e. Diff(S1)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.

  4. Application of heterogeneous pulse coupled neural network in image quantization

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun

    2016-11-01

    On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models and have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, an HPCNN also satisfies human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results will be optimal and more suitable for human observation. At the same time, the experimental results on natural images from the standard image library show the validity and efficiency of our proposed quantization method.

  5. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region have different responses to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.

  6. Nonperturbative light-front Hamiltonian methods

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2016-09-01

    We examine the current state-of-the-art in nonperturbative calculations done with Hamiltonians constructed in light-front quantization of various field theories. The language of light-front quantization is introduced, and important (numerical) techniques, such as Pauli-Villars regularization, discrete light-cone quantization, basis light-front quantization, the light-front coupled-cluster method, the renormalization group procedure for effective particles, sector-dependent renormalization, and the Lanczos diagonalization method, are surveyed. Specific applications are discussed for quenched scalar Yukawa theory, ϕ4 theory, ordinary Yukawa theory, supersymmetric Yang-Mills theory, quantum electrodynamics, and quantum chromodynamics. The content should serve as an introduction to these methods for anyone interested in doing such calculations and as a rallying point for those who wish to solve quantum chromodynamics in terms of wave functions rather than random samplings of Euclidean field configurations.

  7. Design and fabrication of a diffractive beam splitter for dual-wavelength and concurrent irradiation of process points.

    PubMed

    Amako, Jun; Shinozaki, Yu

    2016-07-11

    We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.
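
    A heavily simplified, one-dimensional version of such a design loop is sketched in Python below; the period length, number of phase levels, targeted orders, cost function, and cooling schedule are all our own illustrative assumptions, not the paper's design parameters.

        import numpy as np

        def order_efficiencies(phase, n):
            # Diffraction efficiencies of orders -n..n for a periodic phase grating.
            spec = np.fft.fft(np.exp(1j * phase)) / len(phase)
            eff = np.abs(spec) ** 2
            return np.concatenate([eff[-n:], eff[: n + 1]])

        def cost(phase, n):
            e = order_efficiencies(phase, n)
            return e.std() - e.sum()              # want uniform and efficient orders

        rng = np.random.default_rng(10)
        N, levels, n = 64, 8, 2                   # pixels/period, phase levels, orders -2..2
        phase = rng.integers(levels, size=N) * 2 * np.pi / levels
        cur, steps = cost(phase, n), 20000

        for it in range(steps):
            T = 0.05 * (1 - it / steps) + 1e-4    # linear cooling schedule
            i = rng.integers(N)
            old = phase[i]
            phase[i] = rng.integers(levels) * 2 * np.pi / levels
            new = cost(phase, n)
            if new < cur or rng.random() < np.exp((cur - new) / T):
                cur = new                         # accept (possibly uphill) move
            else:
                phase[i] = old                    # reject, restore pixel

        e = order_efficiencies(phase, n)
        print("efficiencies of orders -2..2:", np.round(e, 3), "total:", round(e.sum(), 3))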

  8. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  9. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
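
    The following Python sketch is a toy analogue only: a one-level orthonormal Haar transform stands in for the standard's wavelet decomposition and a single global deadzone step for its subband-adaptive quantizers, so the sparsity figure it prints is purely illustrative.

        import numpy as np

        def haar2d(a):
            # One level of the orthonormal 2-D Haar transform.
            h = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
            g = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
            ll, hl = (h[0::2] + h[1::2]) / np.sqrt(2), (h[0::2] - h[1::2]) / np.sqrt(2)
            lh, hh = (g[0::2] + g[1::2]) / np.sqrt(2), (g[0::2] - g[1::2]) / np.sqrt(2)
            return ll, hl, lh, hh

        rng = np.random.default_rng(6)
        img = rng.normal(100, 30, (128, 128))     # stand-in for a fingerprint tile

        step = 8.0                                # one global deadzone step size
        quantized = [np.sign(s) * np.floor(np.abs(s) / step) for s in haar2d(img)]
        nonzero = sum(int(np.count_nonzero(q)) for q in quantized)
        print(f"nonzero indices after quantization: {nonzero}/{img.size}")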

  10. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    NASA Astrophysics Data System (ADS)

    Fonseca, I. C.; Bakke, K.

    2016-01-01

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  11. Investigation of Coding Techniques for Memory and Delay Efficient Interleaving in Slow Rayleigh Fading

    DTIC Science & Technology

    1991-11-01

    quantization assigned two quantization values. One value was assigned for demodulation values that were larger than zero and another quantization value to demodulation values that were smaller than zero (for maximum-likelihood decisions). Logic 0 was assigned for a positive demodulation value and a logic 1 was

  13. Gold nanoparticles produced in situ mediate bioelectricity and hydrogen production in a microbial fuel cell by quantized capacitance charging.

    PubMed

    Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan

    2013-02-01

    Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density.

  14. Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011): User's Manual for the ECLS-K:2011 Kindergarten-Second Grade Data File and Electronic Codebook, Public Version. NCES 2017-285

    ERIC Educational Resources Information Center

    Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Vaden-Kiernan, Nancy; Blaker, Lisa; Najarian, Michelle

    2017-01-01

    This manual provides guidance and documentation for users of the longitudinal kindergarten-second grade (K-2) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the second-grade rounds of data collection. Users should refer to the "Early Childhood…

  15. 2013 Workplace and Equal Opportunity Survey of Active Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2016-05-01

    and Kroeger (2002) provide details on sampling and weighting. Following the summary of the survey methodology is a description of the survey analysis... description of priority, for the ADDRESS file). At any given time, the current address used corresponded to the address number with the highest priority...types of address updates provided by the postal service. They are detailed below; each includes a description of the processing steps. 1. Postal Non

  16. Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). User's Manual for the ECLS-K:2011 Kindergarten Data File and Electronic Codebook, Public Version. NCES 2015-074

    ERIC Educational Resources Information Center

    Tourangeau, Karen; Nord, Christine; Lê, Thanh; Sorongon, Alberto G.; Hagedorn, Mary C.; Daly, Peggy; Najarian, Michelle

    2015-01-01

    This manual provides guidance and documentation for users of the kindergarten (or base year) data of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It begins with an overview of the ECLS-K:2011. Subsequent chapters provide details on the study data collection instruments and methods; the direct and indirect…

  17. 2006 Workplace and Gender Relations Survey of Active Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2010-12-01

    education, time at sea, and field exercises/alerts. 10. In the past 12 months, how many nights have you been away from your permanent duty station... b. A friend? c. A family member (e.g., parent, brother/sister)? d. A chaplain...

  18. Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011): User's Manual for the ECLS-K:2011 Kindergarten-Fourth Grade Data File and Electronic Codebook, Public Version. NCES 2018-032

    ERIC Educational Resources Information Center

    Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Vaden-Kiernan, Nancy; Blaker, Lisa; Najarian, Michelle

    2018-01-01

    This manual provides guidance and documentation for users of the longitudinal kindergarten-fourth grade (K-4) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the fourth-grade round of data collection. The first chapter provides an overview of the…

  19. 1999 Survey of Active Duty Personnel: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2000-12-01

    but left key items blank. These surveys were treated as nonrespondents when at least one item in each of the Questions 39, 50, and 52 were not...be useable, a questionnaire had to have at least one item in each of the questions 39, 50 and 52 answered. 27 These two subgroups of records are...1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52, 467-491. Francisco, C

  20. Advance of the Black Flags: Symbolism, Social Identity, and Psychological Operations in Violent Conflict

    DTIC Science & Technology

    2015-12-01

    priorities. Cross points out that even the definition of music itself is somewhat subjective, as sound, rhythm, melody, and even body movement...to disrupt the conditions that allow a violent enemy to develop. The literature indicates that music is a universal social phenomenon. Music is a...Upon viewing the sample films and videos and listening to the music samples, a set of codes was developed and organized into a codebook in order

  1. Context-Aware Local Binary Feature Learning for Face Recognition.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2018-05-01

    In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.

  2. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
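
    A minimal two-level vocabulary tree, in the spirit of the VT component of HVC, can be sketched as follows (random stand-in descriptors, branch factor 8, plain k-means from scikit-learn; the locality-constrained coding stage of HVC is omitted).

        import numpy as np
        from sklearn.cluster import KMeans

        def build_tree(feats, branch=8):
            # Two-level vocabulary tree: coarse k-means, then k-means per branch.
            root = KMeans(n_clusters=branch, n_init=10, random_state=0).fit(feats)
            leaves = []
            for c in range(branch):
                sub = feats[root.labels_ == c]
                k = min(branch, len(sub))         # guard against tiny branches
                leaves.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit(sub))
            return root, leaves

        def encode(feats, root, leaves, branch=8):
            # Map each descriptor to a leaf word; return a normalized histogram.
            coarse = root.predict(feats)
            words = np.array([leaves[c].predict(f[None, :])[0] + c * branch
                              for c, f in zip(coarse, feats)])
            hist = np.bincount(words, minlength=branch * branch).astype(float)
            return hist / hist.sum()

        rng = np.random.default_rng(9)
        root, leaves = build_tree(rng.normal(size=(2000, 32)))
        print(encode(rng.normal(size=(300, 32)), root, leaves)[:8])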

  3. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
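
    The quantizer error being bounded here can be illustrated with a short Monte Carlo sketch in Python for the simplest case, the integer lattice Z^d, whose normalized second moment is exactly 1/12 in every dimension; better lattices and the disordered packings discussed above push this value lower.

        import numpy as np

        def second_moment_Zd(d, n=200_000, seed=7):
            # Monte Carlo estimate of the normalized second moment G(Z^d) of the
            # integer lattice used as a quantizer (nearest-point rule). For Z^d
            # the error is uniform on [-1/2, 1/2]^d and the cell volume is 1,
            # so G = E||x - round(x)||^2 / d.
            rng = np.random.default_rng(seed)
            u = rng.uniform(-0.5, 0.5, (n, d))
            return np.mean(np.sum(u ** 2, axis=1)) / d

        for d in (1, 2, 3, 8):
            print(f"Z^{d}: G ~ {second_moment_Zd(d):.5f}   (exact: 1/12 = 0.08333)")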

  5. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-03-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials of the curvature. Based on the notion of holonomy, this discretization procedure appears gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  7. A new design approach to achieve a minimum impulse limit cycle in the presence of significant measurement uncertainties

    NASA Technical Reports Server (NTRS)

    Martin, M. W.; Kubiak, E. T.

    1982-01-01

    A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.

  8. Studies in the Theory of Quantum Games

    NASA Astrophysics Data System (ADS)

    Iqbal, Azhar

    2005-03-01

    Theory of quantum games is a new area of investigation that has gone through rapid development during the last few years. Initial motivation for playing games, in the quantum world, comes from the possibility of re-formulating quantum communication protocols, and algorithms, in terms of games between quantum and classical players. The possibility led to the view that quantum games have a potential to provide helpful insight into working of quantum algorithms, and even in finding new ones. This thesis analyzes and compares some interesting games when played classically and quantum mechanically. A large part of the thesis concerns investigations into a refinement notion of the Nash equilibrium concept. The refinement, called an evolutionarily stable strategy (ESS), was originally introduced in 1970s by mathematical biologists to model an evolving population using techniques borrowed from game theory. Analysis is developed around a situation when quantization changes ESSs without affecting corresponding Nash equilibria. Effects of quantization on solution-concepts other than Nash equilibrium are presented and discussed. For this purpose the notions of value of coalition, backwards-induction outcome, and subgame-perfect outcome are selected. Repeated games are known to have different information structure than one-shot games. Investigation is presented into a possible way where quantization changes the outcome of a repeated game. Lastly, two new suggestions are put forward to play quantum versions of classical matrix games. The first one uses the association of De Broglie's waves, with travelling material objects, as a resource for playing a quantum game. The second suggestion concerns an EPR type setting exploiting directly the correlations in Bell's inequalities to play a bi-matrix game.

  9. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
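
    As a concrete sketch of the adapted procedure (illustrative only; the exact threshold placement is one of the tunable parameters the abstract refers to), the following Python code generalizes Floyd-Steinberg error diffusion from binarization to an arbitrary number of output levels, with thresholds implicitly at the midpoints between reconstruction levels:

      import numpy as np

      def error_diffuse_multilevel(img, levels):
          """Multi-level error diffusion with Floyd-Steinberg weights.

          img    : 2-D float array with values in [0, 1]
          levels : number of output levels (levels=2 reproduces binarization)
          """
          out = img.astype(float).copy()
          h, w = out.shape
          q = np.linspace(0.0, 1.0, levels)  # reconstruction levels
          for y in range(h):
              for x in range(w):
                  old = out[y, x]
                  new = q[np.argmin(np.abs(q - old))]  # nearest level
                  out[y, x] = new
                  err = old - new
                  # diffuse the quantization error to unprocessed neighbors
                  if x + 1 < w:
                      out[y, x + 1] += err * 7 / 16
                  if y + 1 < h:
                      if x > 0:
                          out[y + 1, x - 1] += err * 3 / 16
                      out[y + 1, x] += err * 5 / 16
                      if x + 1 < w:
                          out[y + 1, x + 1] += err * 1 / 16
          return out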

  10. Natural inflation from polymer quantization

    NASA Astrophysics Data System (ADS)

    Ali, Masooma; Seahra, Sanjeev S.

    2017-11-01

    We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.

  11. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
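
    The allocation tradeoff can be illustrated with a toy model (the assumptions below are ours, not the paper's derivation): multilooking reduces speckle noise power by the number of looks L, while quantization noise power falls by roughly 6 dB per bit, so for a fixed bit budget B = L x b the combined SNR favors coarse quantization and many looks.

      import numpy as np

      B = 24  # total bit budget per resolution cell (illustrative)
      for b in range(1, 9):
          if B % b:
              continue
          L = B // b                       # looks affordable with b bits/sample
          speckle = 1.0 / L                # speckle noise power after L looks
          quant = 10 ** (-6.02 * b / 10)   # quantization noise power (~6 dB/bit)
          snr_db = -10 * np.log10(speckle + quant)
          print(f"b={b} bits, L={L:2d} looks -> SNR ~ {snr_db:5.2f} dB")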

  12. Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers

    NASA Astrophysics Data System (ADS)

    Shan, Shaukat Ali; Haque, Q.

    2018-01-01

    The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense, degenerate plasma with a quantized magnetic field. The linear drift ion acoustic wave propagation, along with nonlinear structures such as double layers and solitary waves, is found to depend strongly on the drift speed, the magnetic field quantization parameter β, and the temperature degeneracy. The graphical illustrations show that the frequency of the linear waves and the amplitude of the solitary waves increase with increasing temperature degeneracy and Landau quantization, while the amplitude of the double layers decreases with increasing η and T. The relevance of the present study to the plasma environments of fast-ignition inertial confinement fusion, white dwarf stars, and short-pulse petawatt laser technology is pointed out.

  13. Time-Symmetric Quantization in Spacetimes with Event Horizons

    NASA Astrophysics Data System (ADS)

    Kobakhidze, Archil; Rodd, Nicholas

    2013-08-01

    The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox, which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism, thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly, since a consistent alternative exists, we argue that the choice of time-asymmetric quantization is a necessary condition for the black hole information loss paradox.

  14. On a canonical quantization of 3D Anti de Sitter pure gravity

    NASA Astrophysics Data System (ADS)

    Kim, Jihun; Porrati, Massimo

    2015-10-01

    We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary component. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology-changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed, and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly curved AdS3.

  15. Theories of Matter, Space and Time, Volume 2; Quantum theories

    NASA Astrophysics Data System (ADS)

    Evans, N.; King, S. F.

    2018-06-01

    This book and its prequel Theories of Matter Space and Time: Classical Theories grew out of courses that we have both taught as part of the undergraduate degree program in Physics at Southampton University, UK. Our goal was to guide the full MPhys undergraduate cohort through some of the trickier areas of theoretical physics that we expect our undergraduates to master. Here we teach the student to understand first quantized relativistic quantum theories. We first quickly review the basics of quantum mechanics, which should be familiar to the reader from a prior course. Then we link the Schrödinger equation to the principle of least action, introducing Feynman's path integral methods. Next, we present the relativistic wave equations of Klein, Gordon and Dirac. Finally, we convert Maxwell's equations of electromagnetism to a wave equation for photons and make contact with quantum electrodynamics (QED) at a first quantized level. Between the two volumes we hope to move a student's understanding from their prior courses to a place where they are ready to embark on graduate-level courses on quantum field theory.

  16. Landau quantization in monolayer GaAs

    NASA Astrophysics Data System (ADS)

    Chung, Hsien-Ching; Ho, Ching-Hong; Chang, Cheng-Peng; Chen, Chun-Nan; Chiu, Chih-Wei; Lin, Ming-Fa

    In the past decade, the discovery of graphene has opened up the possibilities of two-dimensional materials in both fundamental research and technological applications. However, the gapless band structure limits the applications of pristine graphene. Recently, researchers have found new challenges and opportunities in post-graphene two-dimensional nanomaterials, such as silicene (Si), germanene (Ge), and tinene (Sn), owing to their sufficiently large energy gaps (of a size comparable to the thermal energy at room temperature). Apart from the graphene analogs of group IV elements, the buckled honeycomb lattices of the binary compositions of group III-V elements have been proposed as a new class of post-graphene two-dimensional nanomaterials. In this study, the generalized tight-binding model, including spin-orbit coupling, is used to investigate the essential properties of monolayer GaAs. The Landau quantization, band structure, wave function, and density of states are discussed in detail. One of us (Hsien-Ching Chung) thanks Ming-Hui Chung and Su-Ming Chen for financial support. This work was supported in part by the Ministry of Science and Technology of Taiwan under Grant Number MOST 105-2811-M-017-003.

  17. Adjustments for the display of quantized ion channel dwell times in histograms with logarithmic bins.

    PubMed

    Stark, J A; Hladky, S B

    2000-02-01

    Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis have been demonstrated previously (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
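
    One plausible form of such an adjustment is sketched below (a minimal sketch under our assumptions, not the authors' exact procedure): dwell times are integer multiples of the sampling interval dt, and each logarithmic bin's count is divided by the number of representable durations k*dt that fall within it, compensating bins that can receive few or no quantized values.

      import numpy as np

      def adjusted_log_histogram(dwells, dt, bins_per_decade=10):
          """Log-binned dwell-time density with occupancy correction."""
          dwells = np.asarray(dwells)
          edges = 10 ** np.arange(np.log10(dt / 2),
                                  np.log10(dwells.max() * 1.5),
                                  1.0 / bins_per_decade)
          counts, _ = np.histogram(dwells, bins=edges)
          # how many quantized values k*dt land in each bin
          grid = dt * np.arange(1, int(np.ceil(edges[-1] / dt)) + 1)
          occupancy, _ = np.histogram(grid, bins=edges)
          denom = np.maximum(occupancy, 1) * dt
          return edges, np.where(occupancy > 0, counts / denom, 0.0)

      rng = np.random.default_rng(0)
      dt = 0.1
      dwells = np.maximum(np.round(rng.exponential(3.0, 5000) / dt), 1) * dt
      edges, density = adjusted_log_histogram(dwells, dt)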

  18. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive-volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  19. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, with the k-means clustering algorithm adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. An error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM for the same number of quantization bits.
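
    A minimal sketch of the quantization stage (assuming scikit-learn's KMeans for codebook training; parameters are illustrative): `dim` neighboring samples form a vector, each vector is mapped to the nearest of 2**bits centroids, and the rate is therefore bits/dim bits per sample.

      import numpy as np
      from sklearn.cluster import KMeans

      def kmeans_quantize(signal, dim=4, bits=6, seed=0):
          """Vector-quantize a waveform with a k-means codebook."""
          n = len(signal) // dim * dim
          vectors = signal[:n].reshape(-1, dim)
          km = KMeans(n_clusters=2 ** bits, n_init=10, random_state=seed).fit(vectors)
          recon = km.cluster_centers_[km.predict(vectors)].ravel()
          snr = 10 * np.log10(np.sum(signal[:n] ** 2) /
                              np.sum((signal[:n] - recon) ** 2))
          return recon, snr

      t = np.arange(8192)
      x = sum(np.sin(2 * np.pi * f * t / 8192) for f in (50, 120, 333))
      xq, snr = kmeans_quantize(x)
      print(f"reconstruction SNR: {snr:.1f} dB")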

  20. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the Intra and Inter prediction performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone mapped LDR video content (using HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show, for all the sequences and TMOs considered, average bit-rate reductions at the same PSNR of 20.3% and 27.3% for tone mapped content and 2.4% and 2.7% for HDR content.
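
    A stripped-down sketch of the selection step (our illustration of the rate-distortion idea, not the HEVC integration): for a given prediction residual, each candidate quantization step is scored by D + lambda*R, with the empirical entropy of the quantized symbols standing in for the encoder's actual rate.

      import numpy as np

      def entropy_bits(symbols):
          """Zeroth-order empirical entropy in bits/symbol."""
          _, counts = np.unique(symbols, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def best_quantizer(residual, steps, lam=0.1):
          """Return (cost, step, distortion, rate) minimizing D + lam*R."""
          best = None
          for q in steps:
              sym = np.round(residual / q).astype(int)
              dist = np.mean((residual - sym * q) ** 2)
              rate = entropy_bits(sym)
              cost = dist + lam * rate
              if best is None or cost < best[0]:
                  best = (cost, q, dist, rate)
          return best

      rng = np.random.default_rng(0)
      residual = rng.laplace(scale=2.0, size=4096)  # stand-in for one mode's residual
      cost, q, dist, rate = best_quantizer(residual, steps=[0.5, 1, 2, 4, 8])
      print(f"chosen step {q}: MSE={dist:.3f}, rate={rate:.2f} bits/sample")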

  1. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell that yields the reconstructed image best fitting a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected back into the quantization partition cells defined by the compressed image. Experimental results are shown for images compressed using scalar quantization of block DCT coefficients and vector quantization of subband wavelet transform coefficients. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
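
    A minimal sketch of the constrained estimation loop, under simplifying assumptions: a quadratic smoothness prior stands in for the paper's non-Gaussian MRF model, and a full-frame DCT stands in for the block transform. Each iteration takes a gradient step on the prior and then projects the DCT coefficients back into the quantization cells defined by the received indices.

      import numpy as np
      from scipy.fftpack import dct, idct

      def dct2(a):
          return dct(dct(a, norm='ortho', axis=0), norm='ortho', axis=1)

      def idct2(a):
          return idct(idct(a, norm='ortho', axis=0), norm='ortho', axis=1)

      def decompress_constrained(idx, qstep, iters=30, lr=0.25):
          img = idct2(idx * qstep)  # start from the centroid reconstruction
          lo, hi = (idx - 0.5) * qstep, (idx + 0.5) * qstep
          for _ in range(iters):
              grad = np.zeros_like(img)  # gradient of a quadratic smoothness prior
              grad[1:-1, :] += 2 * img[1:-1, :] - img[:-2, :] - img[2:, :]
              grad[:, 1:-1] += 2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:]
              img -= lr * grad
              img = idct2(np.clip(dct2(img), lo, hi))  # constraint projection
          return img

      orig = np.random.default_rng(0).random((64, 64))
      qstep = 0.1
      rec = decompress_constrained(np.round(dct2(orig) / qstep), qstep)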

  2. Landau quantization effects on hole-acoustic instability in semiconductor plasmas

    NASA Astrophysics Data System (ADS)

    Sumera, P.; Rasheed, A.; Jamil, M.; Siddique, M.; Areeb, F.

    2017-12-01

    The growth rate of hole acoustic waves (HAWs) excited in a magnetized semiconductor quantum plasma pumped by an electron beam has been investigated. The instability analysis includes quantum effects: the exchange and correlation potential, the Bohm potential, Fermi-degenerate pressure, and the magnetic quantization of semiconductor plasma species. The effects of various plasma parameters, including the relative concentration of plasma particles, beam electron temperature, beam speed, plasma temperature (temperature of electrons/holes), and the Landau electron orbital magnetic quantization parameter η, on the growth rate of HAWs have been discussed. The numerical study of our model of acoustic waves has been applied, as an example, to the GaAs semiconductor exposed to an electron beam in a magnetic field environment. An increase in either the concentration of the semiconductor electrons or the speed of beam electrons, in the presence of magnetic quantization of fermion orbital motion, remarkably enhances the growth rate of the HAWs. Although the growth rate of the waves decreases with a rise in the thermal temperature of the plasma species, at a particular temperature a higher instability is obtained due to the contribution of the magnetic quantization of fermions.

  3. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
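
    A minimal sketch of the scalar QIM building block underlying these schemes (the sparse wavelet-domain coefficient grouping is omitted): each selected coefficient is re-quantized to one of two interleaved lattices, offset by delta/2, according to the bit to embed, and the detector decodes whichever lattice is nearer.

      import numpy as np

      def qim_embed(coeffs, bits, delta=8.0):
          """Embed one bit per coefficient via two interleaved quantizers."""
          d = np.where(bits == 1, delta / 2.0, 0.0)
          return np.round((coeffs - d) / delta) * delta + d

      def qim_extract(coeffs, delta=8.0):
          """Decode each bit as the nearer of the two lattices."""
          r0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
          r1 = np.abs(coeffs - qim_embed(coeffs, np.ones(len(coeffs), int), delta))
          return (r1 < r0).astype(int)

      rng = np.random.default_rng(1)
      c = rng.normal(scale=20, size=16)      # stand-in for selected coefficients
      bits = rng.integers(0, 2, size=16)
      noisy = qim_embed(c, bits) + rng.normal(scale=0.5, size=16)  # mild attack
      print("bit errors:", int((qim_extract(noisy) != bits).sum()))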

  4. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction to approximately 0.48 bpp (0.16 bpp per color plane or monochrome image, CR 50:1) by using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage yield extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reduction.

  5. Quantization of Space-like States in Lorentz-Violating Theories

    NASA Astrophysics Data System (ADS)

    Colladay, Don

    2018-01-01

    Lorentz violation frequently induces modified dispersion relations that can yield space-like states which impede the standard quantization procedures. In certain cases, an extended Hamiltonian formalism can be used to define observer-covariant normalization factors for field expansions and phase space integrals. These factors extend the theory to include non-concordant frames in which there are negative-energy states. This formalism provides a rigorous way to quantize certain theories containing space-like states, allows for the consistent computation of Cherenkov radiation rates in arbitrary frames, and avoids singular expressions.

  6. Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity

    NASA Astrophysics Data System (ADS)

    Vijayakrishnan, V.; Balakrishnan, S.

    2018-05-01

    The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and the CNOT complexity, where CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we find its applicability in quantum game theory.

  7. Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models

    NASA Astrophysics Data System (ADS)

    Pandey, Sachin; Pal, Sridip; Banerjee, Narayan

    2018-06-01

    The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, so far as Brans-Dicke theory is concerned, survives the quantization of cosmological models arising as solutions of the Brans-Dicke theory. We work with the Wheeler-DeWitt quantization scheme and take up quite a few anisotropic cosmological models as examples. We effectively show that the transformation from the Jordan to the Einstein frame is a canonical one, and hence the two frames furnish equivalent descriptions of the same physical scenario.

  8. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.

  9. Covariant spinor representation of iosp(d,2/2) and quantization of the spinning relativistic particle

    NASA Astrophysics Data System (ADS)

    Jarvis, P. D.; Corney, S. P.; Tsohantjis, I.

    1999-12-01

    A covariant spinor representation of iosp(d,2/2) is constructed for the quantization of the spinning relativistic particle. It is found that, with appropriately defined wavefunctions, this representation can be identified with the state space arising from the canonical extended BFV-BRST quantization of the spinning particle with admissible gauge fixing conditions after a contraction procedure. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.

  10. Pseudoclassical Foldy-Wouthuysen transformation and canonical quantization of (D=2n)-dimensional relativistic particle with spin in an external electromagnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigoryan, G.V.; Grigoryan, R.P.

    1995-09-01

    The canonical quantization of a (D=2n)-dimensional Dirac particle with spin in an arbitrary external electromagnetic field is performed in a gauge that makes it possible to describe simultaneously particles and antiparticles (both massive and massless) already at the classical level. A pseudoclassical Foldy-Wouthuysen transformation is used to find the canonical (Newton-Wigner) coordinates. The connection between this quantization scheme and Blount's picture describing the behavior of a Dirac particle in an external electromagnetic field is discussed.

  11. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
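
    A minimal sketch of the quantization step (the matrix below is illustrative; the invention derives its entries from luminance and contrast masking, which is not reproduced here): each 8x8 block is transformed with a DCT, and every coefficient is divided by its matrix entry and rounded.

      import numpy as np
      from scipy.fftpack import dct, idct

      # illustrative matrix: coarser steps at higher spatial frequencies
      Q = 16 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

      def quantize_block(block, Q):
          c = dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)
          return np.round(c / Q)

      def dequantize_block(idx, Q):
          return idct(idct(idx * Q, norm='ortho', axis=0), norm='ortho', axis=1)

      block = np.outer(np.hanning(8), np.hanning(8)) * 255  # toy image block
      rec = dequantize_block(quantize_block(block, Q), Q)
      print("max reconstruction error:", np.abs(block - rec).max())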

  12. Fault detection of helicopter gearboxes using the multi-valued influence matrix method

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    In this paper we investigate the effectiveness of a pattern-classifying fault detection system designed to cope with the variability of fault signatures inherent in helicopter gearboxes. For detection, the measurements are monitored on-line and flagged upon the detection of abnormalities, so that they can be attributed to a faulty or normal case. As such, the detection system is composed of two components: a quantization matrix to flag the measurements, and a multi-valued influence matrix (MVIM) that represents the behavior of measurements during normal operation and at fault instances. Both the quantization matrix and the influence matrix are tuned during a training session so as to minimize the detection error. To demonstrate the effectiveness of this detection system, it was applied to vibration measurements collected from a helicopter gearbox during normal operation and at various fault instances. The results indicate that the MVIM method provides excellent results when the full range of fault effects on the measurements is included in the training set.

  13. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving-condition recognition technology, which comprises a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving-pattern recognizer from a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
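
    A minimal sketch of the LVQ1 rule at the core of such a pattern recognizer (feature extraction from driving data is omitted; parameters are illustrative): the prototype nearest to a training sample is pulled toward it when their classes agree and pushed away otherwise.

      import numpy as np

      def lvq1_train(X, y, protos_per_class=2, lr=0.05, epochs=30, seed=0):
          rng = np.random.default_rng(seed)
          P, L = [], []
          for c in np.unique(y):
              idx = rng.choice(np.flatnonzero(y == c), protos_per_class, replace=False)
              P.append(X[idx]); L += [c] * protos_per_class
          P, L = np.vstack(P).astype(float), np.array(L)
          for _ in range(epochs):
              for i in rng.permutation(len(X)):
                  w = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winning prototype
                  sign = 1.0 if L[w] == y[i] else -1.0
                  P[w] += sign * lr * (X[i] - P[w])
          return P, L

      def lvq1_predict(P, L, X):
          return L[np.argmin(((P[None] - X[:, None]) ** 2).sum(-1), axis=1)]

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
      y = np.array([0] * 50 + [1] * 50)
      P, L = lvq1_train(X, y)
      print("training accuracy:", (lvq1_predict(P, L, X) == y).mean())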

  14. Structure variation of the index of refraction of GaAs-AlAs superlattices and multiple quantum wells

    NASA Technical Reports Server (NTRS)

    Kahen, K. B.; Leburton, J. P.

    1985-01-01

    A detailed calculation of the index of refraction of various GaAs-AlAs superlattices is presented for the first time. The calculation is performed by using a hybrid approach which combines the k·p method with the pseudopotential technique. Appropriate quantization conditions account for the influence of the superstructures on the electronic properties of the systems. The results of the model are in very good agreement with the experimental data. In comparison with the index of refraction of the corresponding AlGaAs alloy, characterized by the same average mole fraction of Al, the results indicate that the superlattice index of refraction values attain maxima at the various quantized transition energies. For certain structures the difference can be as large as 2 percent. These results suggest that the waveguiding and dispersion-relation properties of optoelectronic devices can be tailored for specific optical applications by an appropriate choice of the superlattice structure parameters.

  15. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concept of internal visual noise, and tests this model in the context of image compression with an observer study.

  16. Diverse magnetic quantization in bilayer silicene

    NASA Astrophysics Data System (ADS)

    Do, Thi-Nga; Shih, Po-Hsin; Gumbs, Godfrey; Huang, Danhong; Chiu, Chih-Wei; Lin, Ming-Fa

    2018-03-01

    The generalized tight-binding model is developed to investigate the rich and unique electronic properties of AB-bt (bottom-top) bilayer silicene under uniform perpendicular electric and magnetic fields. The first pair of conduction and valence bands, with an observable energy gap, displays unusual energy dispersions. Each group of conduction/valence Landau levels (LLs) is further classified into four subgroups, i.e., the sublattice- and spin-dominated LL subgroups. The magnetic-field-dependent LL energy spectra exhibit irregular behavior corresponding to the critical points of the band structure. Moreover, the electric field can induce many LL anticrossings. The main features of the LLs are uncovered with many van Hove singularities in the density of states and nonuniform delta-function-like peaks in the magnetoabsorption spectra. The feature-rich magnetic quantization directly reflects the geometric symmetries, intralayer and interlayer atomic interactions, spin-orbit couplings, and field effects. The results of this work can be applied to novel designs of Si-based nanoelectronics and nanodevices with enhanced mobilities.

  17. Approaching the Planck scale from a generally relativistic point of view: A philosophical appraisal of loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wuthrich, Christian

    My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.

  18. 2010 Workplace and Gender Relations Survey of Active Duty Members. Administration, Datasets, and Codebook

    DTIC Science & Technology

    2011-04-01

    Lithocodes were assigned from the survey litho code list if a survey form was sent, or independently if only a letter was sent; ticket numbers provided Web survey access. The variables BATCH, SERIAL, and LITHO uniquely identify each returned survey, where LITHO is the lithocode scanned from the survey.

  19. Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). User's Manual for the ECLS-K:2011 Kindergarten-First Grade Data File and Electronic Codebook, Public Version. NCES 2015-078

    ERIC Educational Resources Information Center

    Tourangeau, Karen; Nord, Christine; Lê, Thanh; Wallner-Allen, Kathleen; Hagedorn, Mary C.; Leggitt, John; Najarian, Michelle

    2015-01-01

    This manual provides guidance and documentation for users of the longitudinal kindergarten-first grade (K-1) data file of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). It mainly provides information specific to the first-grade rounds of data collection. Data for the ECLS-K:2011 are released in both a…

  20. April 2006 Status of Forces Survey of Active-Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2006-08-01

    Flag variables indicate, for example, when respondents had another reason for a physical injury (AD080SP) or when a response was top-coded, as with COLCREDF, the top-coding flag for COLCRED (college credits since enlistment).

  1. Service Academy 2006 Gender Relations Survey: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2006-07-01

    Example items ask whether someone repeatedly showed the respondent pornographic materials, even after being asked not to; related knowledge items include SC006C (pornographic materials), SC006D (rumors about the respondent's sexual behavior), and SC006E-SC006F.

  2. August 2006 Status of Forces Survey of Active Duty Members: Administration, Datasets, and Codebook

    DTIC Science & Technology

    2006-12-01

    Example items ask about sources of information on alcoholic beverage use, such as news, posters, Web sites, and brochures (variables AL115E-AL115F and their common-denominator recodes); some items are skipped for respondents who answered "None, I have separated or retired."

  3. On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Tyutin, I. V.

    The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. In particular, it is shown that the quantum master equation, treated perturbatively, possesses a local formal solution.

  4. Fill-in binary loop pulse-torque quantizer

    NASA Technical Reports Server (NTRS)

    Lory, C. B.

    1975-01-01

    The fill-in binary (FIB) loop provides constant heating of the torque generator, an advantage of binary current switching. At the same time, it avoids the mode-related dead zone and data delay of binary operation, an advantage of ternary quantization.

  5. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
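
    A minimal sketch of the quantization idea (our illustration, using simple Euler integration rather than the DEVS machinery): the sender integrates its state continuously but emits an update only when the state crosses a quantum boundary, which is what reduces message traffic.

      import numpy as np

      def quantized_updates(f, x0, quantum, t_end, dt=1e-3):
          """Emit (time, value) events only at quantum level crossings."""
          x, t, events = x0, 0.0, [(0.0, x0)]
          last_level = np.floor(x0 / quantum)
          while t < t_end:
              x += f(t, x) * dt  # local continuous integration
              t += dt
              level = np.floor(x / quantum)
              if level != last_level:  # quantum crossing -> send update
                  events.append((t, level * quantum))
                  last_level = level
          return events

      ev = quantized_updates(lambda t, x: np.cos(t), 0.0, quantum=0.25, t_end=6.28)
      print(f"{len(ev)} state updates instead of {int(6.28 / 1e-3)} time steps")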

  6. Landau quantization of Dirac fermions in graphene and its multilayers

    NASA Astrophysics Data System (ADS)

    Yin, Long-Jing; Bai, Ke-Ke; Wang, Wen-Xiao; Li, Si-Yu; Zhang, Yu; He, Lin

    2017-08-01

    When electrons are confined in a two-dimensional (2D) system, typical quantum-mechanical phenomena such as Landau quantization can be detected. Graphene systems, including the single atomic layer and few-layer stacked crystals, are ideal 2D materials for studying a variety of quantum-mechanical problems. In this article, we review the experimental progress in the unusual Landau quantized behaviors of Dirac fermions in monolayer and multilayer graphene by using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS). Through STS measurement of the strong magnetic fields, distinct Landau-level spectra and rich level-splitting phenomena are observed in different graphene layers. These unique properties provide an effective method for identifying the number of layers, as well as the stacking orders, and investigating the fundamentally physical phenomena of graphene. Moreover, in the presence of a strain and charged defects, the Landau quantization of graphene can be significantly modified, leading to unusual spectroscopic and electronic properties.

  7. More on quantum groups from the quantization point of view

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1994-12-01

    Star products on the classical double group of a simple Lie group and on the corresponding symplectic groupoids are given, so that the quantum double and the "quantized tangent bundle" are obtained in the deformation description. "Complex" quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further, we discuss the quantization of the Poisson structure on the symmetric algebra S(g), leading to the quantized enveloping algebra U_h(g) as an example of biquantization in the sense of Turaev. The description of U_h(g) in terms of the generators of the bicovariant differential calculus on F(G_q) is very convenient for this purpose. Finally, we interpret in the deformation framework some well-known properties of compact quantum groups as simple consequences of the corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov universal character formula is given for the unitary irreducible representations in the compact case.

  8. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial, which we call cycle homology, and by graph homology. Highlights: We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices; we clarify the role of graph homology and cycle homology; and we use parametric renormalization and the new corolla polynomial.

  9. Augmenting Phase Space Quantization to Introduce Additional Physical Effects

    NASA Astrophysics Data System (ADS)

    Robbins, Matthew P. G.

    Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.

  10. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which saves limited communication resources. Second, a novel communication framework employs a logarithmic quantizer that quantizes and reduces the data transmission rate in the network, which appreciably improves the communication efficiency. Third, a stabilization criterion is derived from a sufficient condition that guarantees a prescribed H∞ performance level for the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
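
    A minimal sketch of the two communication-saving ingredients (parameter values are illustrative): a relative event-trigger decides whether the current sample is transmitted at all, and a logarithmic quantizer, the standard sector-bounded model, compresses the samples that are.

      import numpy as np

      def log_quantize(v, rho=0.8):
          """Logarithmic quantizer with levels +/- rho**k."""
          mag = np.maximum(np.abs(v), 1e-12)
          k = np.round(np.log(mag) / np.log(rho))
          return np.sign(v) * rho ** k

      def event_triggered_stream(samples, sigma=0.2):
          """Send a quantized sample only on sufficient relative deviation."""
          sent, last = [], None
          for k, x in enumerate(samples):
              if last is None or abs(x - last) > sigma * abs(x):
                  last = log_quantize(x)
                  sent.append((k, last))
          return sent

      x = np.sin(np.linspace(0, 4 * np.pi, 200))
      tx = event_triggered_stream(x)
      print(f"transmitted {len(tx)} of {len(x)} samples")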

  11. Model predictive control of non-linear systems over networks with data quantization and packet loss.

    PubMed

    Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping

    2015-11-01

    This paper studies model predictive control (MPC) of non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bounded uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication network and may suffer packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller that guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method.

  12. Thermal distributions of first, second and third quantization

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-05-01

    We treat first quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d-1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^(-a) exp((bm)^(2(d-1)/d)). This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ≤ 2, that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second quantized canonical momenta and particle number.

  13. Fine structure constant and quantized optical transparency of plasmonic nanoarrays.

    PubMed

    Kravets, V G; Schedin, F; Grigorenko, A N

    2012-01-24

    Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.

  14. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
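
    For orientation, the sketch below shows the generic binary-code pipeline that such training accelerates; this is plain random-projection (sign) hashing followed by Hamming-distance search, not the paper's sub-selective matrix manipulation.

      import numpy as np

      rng = np.random.default_rng(0)
      n, d, bits = 100_000, 128, 64
      X = rng.normal(size=(n, d)).astype(np.float32)  # image descriptors
      W = rng.normal(size=(d, bits))                  # random hyperplanes

      def encode(V):
          """Sign-quantize projections, then pack bits into bytes."""
          return np.packbits((V @ W > 0).astype(np.uint8), axis=1)

      codes = encode(X)

      def hamming_knn(q, codes, k=5):
          dist = np.unpackbits(codes ^ encode(q[None]), axis=1).sum(axis=1)
          return np.argsort(dist)[:k]

      print(hamming_knn(X[42], codes, k=3))  # index 42 should rank first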

  15. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model, rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images by quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  16. Extracontextuality and extravalence in quantum mechanics.

    PubMed

    Auffèves, Alexia; Grangier, Philippe

    2018-07-13

    We develop the point of view where quantum mechanics results from the interplay between the quantized number of 'modalities' accessible to a quantum system and the continuum of 'contexts' that are required to define these modalities. We point out the specific roles of 'extracontextuality' and 'extravalence' of modalities, and relate them to the Kochen-Specker and Gleason theorems. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'.

  17. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general-case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional vector quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that there indeed exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.
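
    A simplified sketch of ring-vector matching (nearest-pixel circle sampling; SIVQ's multi-ring vectors and statistical refinements are omitted): the rotational invariance afforded by the ring's continuous symmetry is approximated by taking the minimum distance over all circular shifts.

      import numpy as np

      def ring_vector(img, cy, cx, radius, n=32):
          """Sample image intensities on a circle centered at (cy, cx)."""
          ang = 2 * np.pi * np.arange(n) / n
          ys = np.clip(np.round(cy + radius * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
          xs = np.clip(np.round(cx + radius * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
          return img[ys, xs].astype(float)

      def ring_distance(r1, r2):
          """Best match over all circular shifts (rotation invariance)."""
          return min(np.linalg.norm(r1 - np.roll(r2, s)) for s in range(len(r2)))

      img = np.random.default_rng(3).random((64, 64))
      template = ring_vector(img, 30, 30, radius=5)
      scores = [(ring_distance(template, ring_vector(img, y, x, 5)), (y, x))
                for y in range(6, 58, 4) for x in range(6, 58, 4)]
      print(min(scores))  # the best score should be at/near (30, 30)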

  18. Interplay between topology, gauge fields and gravity

    NASA Astrophysics Data System (ADS)

    Corichi Rodriguez Gil, Alejandro

    In this thesis we consider several physical systems that illustrate an interesting interplay between quantum theory, connections and knot theory. It can be divided into two parts. In the first one, we consider the quantization of the free Maxwell field. We show that there is an important role played by knot theory, and in particular the Gauss linking number, in the quantum theory. This manifestation is twofold. The first occurs at the level of the algebra of observables given by fluxes of electric and magnetic field across surfaces. The commutator of the operators, and thus the basic uncertainty relations, are given in terms of the linking number of the loops that bound the surfaces. Next, we consider the quantization of the Maxwell field based on self-dual connections in the loop representation. We show that the measure which determines the quantum inner product can be expressed in terms of the self linking number of thickened loops. Therefore, the linking number manifests itself at two key points of the theory: the Heisenberg uncertainty principle and the inner product. In the second part, we bring gravity into play. First we consider quantum test particles on certain stationary space-times. We demonstrate that a geometric phase exists for those space-times and focus on the example of a rotating cosmic string. The geometric phase can be explicitly computed, providing a fully relativistic gravitational Aharonov-Bohm effect. Finally, we consider 3-dimensional gravity with non-vanishing cosmological constant in the connection dynamics formulation. We restrict our attention to Lorentzian gravity with positive cosmological constant and Euclidean signature with negative cosmological constant. A complex transformation is performed in phase space that makes the constraints simple. The reduced phase space is characterized as the moduli space of flat complex connections. We construct the quantization of the theory when the initial hyper-surface is a torus. Two important issues relevant to full 3 + 1 gravity are clarified, namely, the incorporation of the 'reality conditions' in the quantum theory and the role played by the signature of the classical metric in the quantum theory.

  19. Mixed Linear/Square-Root Encoded Single-Slope Ramp Provides Low-Noise ADC with High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.

    2013-01-01

    Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is onscale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise. The required linear span, maximum signal and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determines the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp value, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smooths the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
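
    As a toy illustration of the mixed ramp construction, the sketch below builds a code-to-signal table whose linear portion covers an assumed offset span and whose quadratic portion is matched in both value and slope at the junction; all parameter values are invented for the example and are not taken from the paper.

      import numpy as np

      def mixed_ramp(n_codes=1024, n_linear=256, linear_span=200.0, full_scale=60000.0):
          # Linear portion: s(k) = a*k for k <= n_linear, covering the offset span.
          a = linear_span / n_linear
          # Quadratic portion: s(k) = c*(k - k0)^2 + a*(k - k0) + linear_span,
          # which matches the linear portion in value and slope at k0 = n_linear.
          dk = (n_codes - 1) - n_linear
          c = (full_scale - linear_span - a * dk) / dk**2  # forces s(last) = full_scale
          k = np.arange(n_codes, dtype=float)
          return np.where(k <= n_linear,
                          a * k,
                          c * (k - n_linear) ** 2 + a * (k - n_linear) + linear_span)

    Inverting such a table (or a calibrated variant of it) plays the role of the return lookup table described above.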

  20. Perspectives of Light-Front Quantized Field Theory: Some New Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, Prem P.

    1999-08-13

    A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is as appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of the QCD in covariant gauges among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.

  1. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

    Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.
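
    For a concrete picture of such harmonic (ladder) operators, the snippet below builds truncated creation and annihilation matrices on a small Fock space and checks the canonical commutator; the discrepancy in the last row and column is an expected artifact of the truncation. The dimension is an arbitrary choice for the example.

      import numpy as np

      n = 8                                          # truncated Fock-space dimension
      a = np.diag(np.sqrt(np.arange(1, n)), k=1)     # annihilation operator: a|m> = sqrt(m)|m-1>
      adag = a.T                                     # creation operator (real matrix)
      N = adag @ a                                   # number operator, diag(0, 1, ..., n-1)
      comm = a @ adag - adag @ a
      # [a, a†] = I holds everywhere except the last row/column (truncation artifact).
      print(np.allclose(comm[:-1, :-1], np.eye(n)[:-1, :-1]))   # True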

  2. Generalized noise terms for the quantized fluctuational electrodynamics

    NASA Astrophysics Data System (ADS)

    Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani

    2017-03-01

    The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that commutes with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, tuning the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter exceed the emissivity of a corresponding electric emitter.

  3. Analysis of first and second order binary quantized digital phase-locked loops for ideal and white Gaussian noise inputs

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1980-01-01

    Specific configurations of first- and second-order all-digital phase-locked loops are analyzed for both ideal and additive white Gaussian noise inputs. In addition, a design for a hardware digital phase-locked loop capable of either first- or second-order operation is presented, along with appropriate experimental data obtained from testing of the hardware loop. All parameters chosen for the analysis and the design of the digital phase-locked loop are consistent with an application to an Omega navigation receiver, although neither the analysis nor the design is limited to this application.
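
    A minimal model of a first-order loop with binary (one-bit) quantization of the phase error can be simulated as follows; this is an illustration of the loop type analyzed, under assumed parameters, not the paper's hardware design.

      import numpy as np

      def first_order_binary_dpll(phase_in, step=0.01, n_iters=2000, noise_std=0.0, seed=0):
          # The loop's phase estimate moves by a fixed step toward the sign of
          # the (possibly noisy) phase error: a one-bit quantized correction.
          rng = np.random.default_rng(seed)
          est, history = 0.0, []
          for _ in range(n_iters):
              err = phase_in - est + noise_std * rng.standard_normal()
              est += step * np.sign(err)
              history.append(est)
          return np.array(history)   # converges to phase_in with a +/- step ripple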

  4. Bfv Quantization of Relativistic Spinning Particles with a Single Bosonic Constraint

    NASA Astrophysics Data System (ADS)

    Rabello, Silvio J.; Vaidya, Arvind N.

    Using the BFV approach we quantize a pseudoclassical model of the spin-1/2 relativistic particle that contains a single bosonic constraint, contrary to the usual locally supersymmetric models that display first and second class constraints.

  5. Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.

    ERIC Educational Resources Information Center

    Sindhu, R. S.

    2002-01-01

    Describes the quantized step-up model, an analogue of atomic structure, developed for evaluating internship in teaching. Assesses prospective teachers' abilities in lesson delivery. (YDS)

  6. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizio; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  7. A consistent covariant quantization of the Brink-Schwarz superparticle

    NASA Astrophysics Data System (ADS)

    Eisenberg, Yeshayahu

    1992-02-01

    We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.

  8. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    Fragments of this report describe an adaptive-dictionary scheme in the spirit of the Lempel-Ziv algorithm (the decoder would have to be aware of changes in the dictionary), a general vector compression algorithm based on frames, and general properties of matching pursuit together with its application to compressing data vectors in R^N.

  9. Covariant scalar representation of iosp(d,2/2) and quantization of the scalar relativistic particle

    NASA Astrophysics Data System (ADS)

    Jarvis, P. D.; Tsohantjis, I.

    1996-03-01

    A covariant scalar representation of iosp(d,2/2) is constructed and analysed in comparison with existing BFV-BRST methods for the quantization of the scalar relativistic particle. It is found that, with appropriately defined wavefunctions, this iosp(d,2/2) produced representation can be identified with the state space arising from the canonical BFV-BRST quantization of the modular-invariant, unoriented scalar particle (or antiparticle) with admissible gauge-fixing conditions. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.

  10. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
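
    The encode step of a residual (multistage) vector quantizer can be sketched as follows; codebook contents and dimensions are placeholders, and the paper's entropy-constrained distortion measure is omitted for brevity.

      import numpy as np

      def rvq_encode(x, codebooks):
          # Each stage quantizes the residual left by the previous stages.
          residual = x.astype(float)
          indices = []
          for cb in codebooks:                               # cb: (K, d) codewords
              d2 = ((residual[None, :] - cb) ** 2).sum(axis=1)
              j = int(np.argmin(d2))                         # nearest-neighbor search
              indices.append(j)
              residual = residual - cb[j]                    # passed to the next stage
          return indices, residual                           # residual = final error

      x = np.random.randn(4)
      codebooks = [np.random.randn(8, 4) for _ in range(3)]  # 3 stages of 8 codewords
      idx, err = rvq_encode(x, codebooks)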

  11. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS during tests on six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.

  12. Constraints on operator ordering from third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo

    2016-02-15

    In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.

  13. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  14. Method for calculating the duration of vacuum drying of a metal-concrete container for spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Karyakin, Yu. E.; Nekhozhin, M. A.; Pletnev, A. A.

    2013-07-01

    A method for calculating the quantity of moisture in a metal-concrete container in the process of its charging with spent nuclear fuel is proposed. A computing method, and results obtained with it, for a conservative estimate of the vacuum-drying time of a container charged with spent nuclear fuel are presented for loading technologies with and without quantization of the lower fuel element cluster. It is shown that the absence of quantization in loading spent fuel increases the vacuum-drying time of the metal-concrete container several-fold.

  15. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  16. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the solution of the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but rather lie in a chaotic region; however, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data in order to predict future data under limited weight quantization constraints. This helps predict future information that provides better timely estimation for an intelligent control system. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that a chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
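
    A hedged sketch of the weight quantization itself, assuming a simple fixed-point format with one sign bit and the remaining bits fractional (the paper's exact format is not specified here):

      import numpy as np

      def quantize_weights(w, bits, mode="round"):
          # Uniform fixed-point quantization to 'bits' total bits, values
          # clipped to [-1, 1); 'round' vs 'trunc' selects the technique.
          scale = 2 ** (bits - 1)
          x = np.clip(w, -1.0, 1.0 - 1.0 / scale) * scale
          q = np.round(x) if mode == "round" else np.trunc(x)
          return q / scale

      w = np.random.uniform(-1, 1, 5)
      print(quantize_weights(w, bits=4, mode="round"))
      print(quantize_weights(w, bits=4, mode="trunc"))

    Truncation always moves values toward zero, which biases the quantization error distribution; this is consistent with the asymmetric error surfaces noted above.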

  17. Group theoretical quantization of isotropic loop cosmology

    NASA Astrophysics Data System (ADS)

    Livine, Etera R.; Martín-Benito, Mercedes

    2012-06-01

    We achieve a group theoretical quantization of the flat Friedmann-Robertson-Walker model coupled to a massless scalar field adopting the improved dynamics of loop quantum cosmology. Deparametrizing the system using the scalar field as internal time, we first identify a complete set of phase space observables whose Poisson algebra is isomorphic to the su(1,1) Lie algebra. It is generated by the volume observable and the Hamiltonian. These observables describe faithfully the regularized phase space underlying the loop quantization: they account for the polymerization of the variable conjugate to the volume and for the existence of a kinematical nonvanishing minimum volume. Since the Hamiltonian is an element in the su(1,1) Lie algebra, the dynamics is now implemented as SU(1, 1) transformations. At the quantum level, the system is quantized as a timelike irreducible representation of the group SU(1, 1). These representations are labeled by a half-integer spin, which gives the minimal volume. They provide superselection sectors without quantization anomalies and no factor ordering ambiguity arises when representing the Hamiltonian. We then explicitly construct SU(1, 1) coherent states to study the quantum evolution. They provide not only semiclassical states but truly dynamical coherent states. Their use further clarifies the nature of the bounce that resolves the big bang singularity.

  18. Noncommutative Line Bundles and Gerbes

    NASA Astrophysics Data System (ADS)

    Jurčo, B.

    We introduce noncommutative line bundles and gerbes within the framework of deformation quantization. The Seiberg-Witten map is used to construct the corresponding noncommutative Čech cocycles. Morita equivalence of star products and quantization of twisted Poisson structures are discussed from this point of view.

  19. The ultraviolet behavior of quantum gravity

    NASA Astrophysics Data System (ADS)

    Anselmi, Damiano; Piva, Marco

    2018-05-01

    A theory of quantum gravity has recently been proposed by means of a novel quantization prescription, which is able to turn the poles of the free propagators that are due to the higher derivatives into fakeons. The classical Lagrangian contains the cosmological term, the Hilbert term, √(-g) R_{μν}R^{μν}, and √(-g) R². In this paper, we compute the one-loop renormalization of the theory and the absorptive part of the graviton self-energy. The results illustrate the mechanism that makes renormalizability compatible with unitarity. The fakeons disentangle the real part of the self-energy from the imaginary part. The former obeys a renormalizable power counting, while the latter obeys the nonrenormalizable power counting of the low-energy expansion and is consistent with unitarity in the limit of vanishing cosmological constant. The value of the absorptive part is related to the central charge c of the matter fields coupled to gravity.

  20. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
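
    The transformation-quantization-coding pipeline for a single 8x8 block can be sketched in a few lines; the flat quantization table below is an invented stand-in for a real JPEG luminance table, chosen only to show how quantization zeroes out small coefficients.

      import numpy as np
      from scipy.fft import dctn, idctn

      Q = np.full((8, 8), 16.0)                        # flat quantization table (illustrative)
      block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0

      coeffs = dctn(block, norm='ortho')               # transformation step (2-D DCT)
      quantized = np.round(coeffs / Q)                 # quantization: many coefficients -> 0
      restored = idctn(quantized * Q, norm='ortho') + 128.0   # decoder-side reconstruction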

  1. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable-rate channels. We implement these two variable-rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and the performance of the image coding algorithms are studied.
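
    A plain DPCM encoder with a uniform quantizer can be sketched as below; the overload test marks the samples (typically edges) where an edge-correcting scheme would transmit side information. Parameters are illustrative, not taken from the thesis.

      import numpy as np

      def dpcm_encode(signal, step=4.0, levels=7):
          # Quantize the prediction error; |error| > levels*step is the
          # quantizer overload region where edges corrupt plain DPCM.
          pred, codes, overload = 0.0, [], []
          for s in signal:
              e = s - pred
              q = int(np.clip(np.round(e / step), -levels, levels))
              overload.append(abs(e) > levels * step)
              codes.append(q)
              pred += q * step          # mirrors the decoder's reconstruction
          return codes, overload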

  2. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae make it possible to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also make it possible to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.

  3. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  4. Introduction to quantized Lie groups and algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjin, T.

    1992-10-10

    In this paper, the authors give a self-contained introduction to the theory of quantum groups according to Drinfeld, highlighting the formal aspects as well as the applications to the Yang-Baxter equation and representation theory. Introductions to Hopf algebras, Poisson structures and deformation quantization are also provided. After defining Poisson Lie groups the authors study their relation to Lie bialgebras and the classical Yang-Baxter equation. Then the authors explain in detail the concept of quantization for them. As an example the quantization of sl(2) is explicitly carried out. Next, the authors show how quantum groups are related to the Yang-Baxter equation and how they can be used to solve it. Using the quantum double construction, the authors explicitly construct the universal R matrix for the quantum sl(2) algebra. In the last section, the authors deduce all finite-dimensional irreducible representations for q a root of unity. The authors also give their tensor product decomposition (fusion rules), which is relevant to conformal field theory.

  5. Method and system employing finite state machine modeling to identify one of a plurality of different electric load types

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Liang; Yang, Yi; Harley, Ronald Gordon

    A system is for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
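
    The quantize-and-segment step can be sketched as follows; the state count and dwell threshold are assumed parameters for illustration, not values from the patent.

      import numpy as np

      def state_sequence(rms, n_states=8, min_dwell=3):
          # Bin the RMS profile into discrete state-values, run-length encode
          # the result, and keep only runs lasting at least min_dwell samples.
          lo, hi = rms.min(), rms.max()
          states = np.floor((rms - lo) / (hi - lo + 1e-12) * n_states).astype(int)
          runs = []
          for s in states:
              if runs and runs[-1][0] == s:
                  runs[-1][1] += 1
              else:
                  runs.append([s, 1])
          return [(s, d) for s, d in runs if d >= min_dwell]   # (state-value, duration)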

  6. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
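
    A toy version of rate-distortion-optimal step selection (not the patent's dynamic program) picks, per DCT frequency, the step minimizing a Lagrangian cost under a high-rate Gaussian model with D = q²/12 and R ≈ 0.5·log2(12·var/q²); all names and parameters here are illustrative.

      import numpy as np

      def pick_steps(coeff_var, candidate_steps, lam):
          # For each coefficient variance, choose the quantizer step q that
          # minimizes distortion + lam * rate under the high-rate model.
          table = []
          for var in coeff_var:
              best = min(candidate_steps,
                         key=lambda q: q * q / 12
                                       + lam * max(0.0, 0.5 * np.log2(12 * var / (q * q))))
              table.append(best)
          return table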

  7. Quantization of Poisson Manifolds from the Integrability of the Modular Function

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.

    2014-10-01

    We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.

  8. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve additional N-stage spectral compression, thus single-stage soliton self-frequency shift (SSFS) and (2N-1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  9. Quantization ambiguities and bounds on geometric scalars in anisotropic loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Singh, Parampreet; Wilson-Ewing, Edward

    2014-02-01

    We study quantization ambiguities in loop quantum cosmology that arise for space-times with non-zero spatial curvature and anisotropies. Motivated by lessons from different possible loop quantizations of the closed Friedmann-Lemaître-Robertson-Walker cosmology, we find that using open holonomies of the extrinsic curvature, which due to gauge-fixing can be treated as a connection, leads to the same quantum geometry effects that are found in spatially flat cosmologies. More specifically, in contrast to the quantization based on open holonomies of the Ashtekar-Barbero connection, the expansion and shear scalars in the effective theories of the Bianchi type II and Bianchi type IX models have upper bounds, and these are in exact agreement with the bounds found in the effective theories of the Friedmann-Lemaître-Robertson-Walker and Bianchi type I models in loop quantum cosmology. We also comment on some ambiguities present in the definition of inverse triad operators and their role.

  10. The electronic structure of Au25 clusters: between discrete and continuous

    NASA Astrophysics Data System (ADS)

    Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E.; Kumar, Challa S. S. R.; Losovyj, Yaroslav

    2016-08-01

    Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.

  11. Design and Implementation of Multi-Input Adaptive Signal Extractions.

    DTIC Science & Technology

    1982-09-01

    Fragments of this report describe a (deflected gradient) algorithm requiring only N+1 multiplications per adaptation step, with additional quantization introduced to eliminate all multiplications, in the context of adaptive noise cancellation for intermittent-signal applications.

  12. Quantization of an electromagnetic field in two-dimensional photonic structures based on the scattering matrix formalism ( S-quantization)

    NASA Astrophysics Data System (ADS)

    Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.

    2017-10-01

    Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related to each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similar to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in a block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity. In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.
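
    For the single-mode (scalar) case, the relation between a layer-by-layer transfer matrix and the scattering matrix can be sketched as follows; the sign conventions are one common choice and the function is illustrative, not the paper's multi-mode Fourier-basis implementation.

      import numpy as np

      def scattering_from_transfer(T):
          # T relates amplitudes on both sides, (c, d)^T = T (a, b)^T, where
          # a, d are incoming and b, c are outgoing waves. Solving for the
          # outgoing pair gives the scattering matrix S: (b, c)^T = S (a, d)^T.
          T11, T12, T21, T22 = T.ravel()
          r = -T21 / T22                      # reflection, left side
          tp = 1.0 / T22                      # transmission, right to left
          t = T11 - T12 * T21 / T22           # transmission, left to right (= det T / T22)
          rp = T12 / T22                      # reflection, right side
          return np.array([[r, tp], [t, rp]])

      # Layers compose by multiplying their transfer matrices:
      # T_total = T_layerN @ ... @ T_layer2 @ T_layer1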

  13. Exact quantization of Einstein-Rosen waves coupled to massless scalar matter.

    PubMed

    Barbero G, J Fernando; Garay, Iñaki; Villaseñor, Eduardo J S

    2005-07-29

    We show in this Letter that gravity coupled to a massless scalar field with full cylindrical symmetry can be exactly quantized by an extension of the techniques used in the quantization of Einstein-Rosen waves. This system provides a useful test bed to discuss a number of issues in quantum general relativity, such as the emergence of the classical metric, microcausality, and large quantum gravity effects. It may also provide an appropriate framework to study gravitational critical phenomena from a quantum point of view, issues related to black hole evaporation, and the consistent definition of test fields and particles in quantum gravity.

  14. Second quantization techniques in the scattering of nonidentical composite bodies

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Townsend, L. W.; Deutchman, P. A.

    1986-01-01

    Second quantization techniques for describing elastic and inelastic interactions between nonidentical composite bodies are presented and are applied to nucleus-nucleus collisions involving ground-state and one-particle-one-hole excitations. Evaluations of the resultant collision matrix elements are made through use of Wick's theorem.

  15. Quantization of simple parametrized systems

    NASA Astrophysics Data System (ADS)

    Ruffini, G.

    2005-11-01

    I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle displays only the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations of amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.

  16. Vortex creation during magnetic trap manipulations of spinor Bose-Einstein condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itin, A. P.; Space Research Institute, RAS, Moscow; Morishita, T.

    2006-06-15

    We investigate several mechanisms of vortex creation during splitting of a spinor Bose-Einstein condensate (BEC) in a magnetic double-well trap controlled by a pair of current carrying wires and bias magnetic fields. Our study is motivated by a recent MIT experiment on splitting BECs with a similar trap [Y. Shin et al., Phys. Rev. A 72, 021604 (2005)], where an unexpected fork-like structure appeared in the interference fringes indicating the presence of a singly quantized vortex in one of the interfering condensates. It is well known that in a spin-1 BEC in a quadrupole trap, a doubly quantized vortex is topologically produced by a 'slow' reversal of bias magnetic field B_z. Since in the experiment a doubly quantized vortex had never been seen, Shin et al. ruled out the topological mechanism and concentrated on the nonadiabatic mechanical mechanism for explanation of the vortex creation. We find, however, that in the magnetic trap considered both mechanisms are possible: singly quantized vortices can be formed in a spin-1 BEC topologically (for example, during the magnetic field switching-off process). We therefore provide a possible alternative explanation for the interference patterns observed in the experiment. We also present a numerical example of creation of singly quantized vortices due to 'fast' splitting; i.e., by a dynamical (nonadiabatic) mechanism.

  17. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
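
    The storage-side re-quantization and the one-to-many nature of the reverse mapping can be sketched as below; bin indices are treated as uniform-quantizer indices with illustrative step sizes, which simplifies the JPEG specifics away.

      import numpy as np

      def requantize(fine_idx, q_fine, q_coarse):
          # Storage step: map fine quantization bin indices to coarser bins
          # (values are taken at bin centers).
          return np.round(fine_idx * q_fine / q_coarse).astype(int)

      def candidate_fine_bins(coarse_idx, q_fine, q_coarse):
          # Reverse mapping is one-to-many: these are the fine bins consistent
          # with a coarse bin; the signal priors (sparsity, graph smoothness)
          # are what disambiguate among them at download time.
          center = coarse_idx * q_coarse
          lo = int(np.ceil((center - q_coarse / 2) / q_fine))
          hi = int(np.floor((center + q_coarse / 2) / q_fine))
          return list(range(lo, hi + 1))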

  18. Encryption techniques to the design of e-beam-generated digital pixel hologram for anti-counterfeiting

    NASA Astrophysics Data System (ADS)

    Chan, Hau P.; Bao, Nai-Keng; Kwok, Wing O.; Wong, Wing H.

    2002-04-01

    The application of the Digital Pixel Hologram (DPH) as an anti-counterfeiting technology for products such as commercial goods, credit cards, identity cards, and paper money banknotes is growing in importance nowadays. It offers many advantages over other anti-counterfeiting tools, including high diffraction effect, high resolving power, resistance to photocopying using two-dimensional Xeroxes, and the potential for mass production of patterns at a very low cost. Recently, we have succeeded in fabricating high-definition DPHs with resolution higher than 2500 dpi for the purpose of anti-counterfeiting by applying modern optical diffraction theory to computer pattern generation techniques with the assistance of electron beam lithography (EBL). In this paper, we introduce five levels of encryption techniques, which can be embedded in the design of such DPHs to further improve their anti-counterfeiting performance at negligible added cost. The techniques involved, in ascending order of decryption complexity, are Gray-level Encryption, Pattern Encryption, Character Encryption, Image Modification Encryption, and Codebook Encryption. A Hong Kong Special Administrative Region (HKSAR) DPH emblem was fabricated at a resolution of 2540 dpi using the facilities housed in our Optoelectronics Research Center. This emblem will be used as an illustration to discuss each encryption idea in detail during the conference.

  19. Dirac fields in flat FLRW cosmology: Uniqueness of the Fock quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortez, Jerónimo, E-mail: jacq@ciencias.unam.mx; Elizaga Navascués, Beatriz, E-mail: beatriz.elizaga@iem.cfmac.csic.es; Martín-Benito, Mercedes, E-mail: m.martin@hef.ru.nl

    We address the issue of the infinite ambiguity that affects the construction of a Fock quantization of a Dirac field propagating in a cosmological spacetime with flat compact sections. In particular, we discuss a physical criterion that restricts to a unique possibility (up to unitary equivalence) the infinite set of available vacua. We prove that this desired uniqueness is guaranteed, for any possible choice of spin structure on the spatial sections, if we impose two conditions. The first one is that the symmetries of the classical system must be implemented quantum mechanically, so that the vacuum is invariant under the symmetry transformations. The second and more important condition is that the constructed theory must have a quantum dynamics that is implementable as a (non-trivial) unitary operator in Fock space. Actually, this unitarity of the quantum dynamics leads us to identify as explicitly time dependent some very specific contributions of the Dirac field. In doing that, we essentially characterize the part of the dynamics governed by the Dirac equation that is unitarily implementable. The uniqueness of the Fock vacuum is attained then once a physically motivated convention for the concepts of particles and antiparticles is fixed.

  20. 1999 Survey of Active Duty Personnel: Administration, Datasets, and Codebook. Appendix G: Frequency and Percentage Distributions for Variables in the Survey Analysis Files.

    DTIC Science & Technology

    2000-12-01

    A skip flag indicating the result of checking the response on the parent (screening) item against the response(s) on the items within the skip pattern. See Table D-5, Note 2, in Appendix D.

  1. The 1985 ARI Survey of Army Recruits: Codebook for Summer 85 USAR and ARNG Survey Respondents. Volume 2

    DTIC Science & Technology

    1986-05-01

    Fragment of a response-frequency table for survey items such as "I never received a response in the mail from the card I sent in."

  2. The 1984 ARI Survey of Army Recruits. Codebook for Summer 84 Active Army Survey Respondents

    DTIC Science & Technology

    1986-05-01

    Codebook entries for survey items of the form "Do you watch any of the following programs or programming types on TV?", covering, among others, NBA basketball, college basketball, NHL hockey, and professional wrestling, together with raw-data card and column layout information.

  3. 2012 Workplace and Gender Relations Survey of Active Duty Members: Administration, Datasets and Codebook

    DTIC Science & Technology

    2013-11-04

    Frequency distributions for items on how respondents heard about the Safe Helpline, including public service announcements, print advertisements, online media (e.g., website, blog, banners), posters, brochures and/or stickers, unit, chaplain, and other, together with dataset column and format information.

  4. Configuration study for a 30 GHz monolithic receive array, volume 1

    NASA Technical Reports Server (NTRS)

    Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.

    1984-01-01

    Gregorian, Cassegrain, and single reflector systems were analyzed in configuration studies for communications satellite receive antennas. Parametric design and performance curves were generated. A preliminary design of each reflector/feed system was derived including radiating elements, beam-former network, beamsteering system, and MMIC module architecture. Performance estimates and component requirements were developed for each design. A recommended design was selected for both the scanning beam and the fixed beam case. Detailed design and performance analysis results are presented for the selected Cassegrain configurations. The final design point is characterized in detail and performance measures evaluated in terms of gain, sidelobe level, noise figure, carrier-to-interference ratio, prime power, and beamsteering. The effects of mutual coupling and excitation errors (including phase and amplitude quantization errors) are evaluated. Mechanical assembly drawings are given for the final design point. Thermal design requirements are addressed in the mechanical design.

  5. Quanta of geometry and unification

    NASA Astrophysics Data System (ADS)

    Chamseddine, Ali H.

    2016-11-01

    This is a tribute to Abdus Salam’s memory whose insight and creative thinking set for me a role model to follow. In this contribution I show that the simple requirement of volume quantization in spacetime (with Euclidean signature) uniquely determines the geometry to be that of a noncommutative space whose finite part is based on an algebra that leads to Pati-Salam grand unified models. The Standard Model corresponds to a special case where a mathematical constraint (order one condition) is satisfied. This provides evidence that Salam was a visionary who was generations ahead of his time.

  6. Quanta of Geometry and Unification

    NASA Astrophysics Data System (ADS)

    Chamseddine, Ali H.

    This is a tribute to Abdus Salam's memory whose insight and creative thinking set for me a role model to follow. In this contribution I show that the simple requirement of volume quantization in space-time (with Euclidean signature) uniquely determines the geometry to be that of a noncommutative space whose finite part is based on an algebra that leads to Pati-Salam grand unified models. The Standard Model corresponds to a special case where a mathematical constraint (order one condition) is satisfied. This provides evidence that Salam was a visionary who was generations ahead of his time.

  7. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
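
    The ECVQ encoding rule augments nearest-neighbor search with a rate term; a minimal sketch, with placeholder codebook contents and an assumed Lagrange multiplier:

      import numpy as np

      def ecvq_encode(x, codebook, code_lengths, lam):
          # Pick the codeword minimizing distortion + lam * rate, where rate
          # is the codeword's entropy code length in bits.
          d2 = ((codebook - x[None, :]) ** 2).sum(axis=1)
          cost = d2 + lam * code_lengths
          return int(np.argmin(cost))

      codebook = np.random.randn(16, 8)               # 16 codewords of dimension 8
      code_lengths = np.full(16, 4.0)                 # placeholder code lengths (bits)
      j = ecvq_encode(np.random.randn(8), codebook, code_lengths, lam=0.1)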

  8. Stochastic quantization of topological field theory: Generalized Langevin equation with memory kernel

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2006-07-01

    We use the method of stochastic quantization in a topological field theory defined in a Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.

  9. Symplectic Quantization of a Reducible Theory

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. It is a reducible theory in the sense that not all of its constraints are independent. A ghost-of-ghost procedure like that of the BFV method has to be used, but in terms of Lagrange multipliers.

  10. Quantized Algebra I Texts

    ERIC Educational Resources Information Center

    DeBuvitz, William

    2014-01-01

    I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…

  11. Techniques for decoding speech phonemes and sounds: A concept

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.; Holby, H. G.

    1975-01-01

    Techniques studied involve conversion of speech sounds into machine-compatible pulse trains. (1) Voltage-level quantizer produces number of output pulses proportional to amplitude characteristics of vowel-type phoneme waveforms. (2) Pulses produced by quantizer of first speech formants are compared with pulses produced by second formants.

  12. Hazardous sign detection for safety applications in traffic monitoring

    NASA Astrophysics Data System (ADS)

    Benesova, Wanda; Kottman, Michal; Sidla, Oliver

    2012-01-01

    The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One solution to this problem is the automatic detection and registration of vehicles which are marked with dangerous goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different detection methods: the bag of visual words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate. Different codebook sizes have been evaluated for this detection task. The best result of the first method, BoW, was 67% successfully recognized hazardous signs, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
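    To make the codebook-size dependence concrete, here is a minimal sketch of the standard BoW pipeline the paper compares against: cluster training descriptors into a visual vocabulary, then quantize each image's descriptors into a word histogram. The descriptor dimensionality, vocabulary size, and random data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

# Toy stand-ins for local descriptors (e.g., 128-D SIFT) from training images.
rng = np.random.default_rng(1)
train_descriptors = rng.normal(size=(5000, 128))

# Build the visual vocabulary; the codebook size k is the tunable that the
# paper found crucial for recognition rate (the value here is illustrative).
k = 200
vocabulary, _ = kmeans2(train_descriptors, k, minit="points", seed=1)

def bow_histogram(descriptors, vocabulary):
    """Quantize one image's descriptors and return its normalized BoW histogram."""
    words, _ = vq(descriptors, vocabulary)   # nearest-codeword assignment
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

image_descriptors = rng.normal(size=(300, 128))
print(bow_histogram(image_descriptors, vocabulary)[:10])
```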

  13. Exploring the physical layer frontiers of cellular uplink: The Vienna LTE-A Uplink Simulator.

    PubMed

    Zöchmann, Erich; Schwarz, Stefan; Pratschner, Stefan; Nagel, Lukas; Lerch, Martin; Rupp, Markus

    Communication systems in practice are subject to many technical/technological constraints and restrictions. Multiple input, multiple output (MIMO) processing in current wireless communications, as an example, mostly employs codebook-based pre-coding to save computational complexity at the transmitters and receivers. In such cases, closed form expressions for capacity or bit-error probability are often unattainable; effects of realistic signal processing algorithms on the performance of practical communication systems rather have to be studied in simulation environments. The Vienna LTE-A Uplink Simulator is a 3GPP LTE-A standard compliant MATLAB-based link level simulator that is publicly available under an academic use license, facilitating reproducible evaluations of signal processing algorithms and transceiver designs in wireless communications. This paper reviews research results that have been obtained by means of the Vienna LTE-A Uplink Simulator, highlights the effects of single-carrier frequency-division multiplexing (as the distinguishing feature to LTE-A downlink), extends known link adaptation concepts to uplink transmission, shows the implications of the uplink pilot pattern for gathering channel state information at the receiver and completes with possible future research directions.
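    As a toy illustration of the codebook-based precoding mentioned above (not the Vienna simulator's implementation, and written in Python rather than the simulator's MATLAB), the receiver can evaluate each codebook entry against the estimated channel and feed back only the index of the best one. The 2-antenna codebook below is an assumption for illustration.

```python
import numpy as np

# Small illustrative codebook of rank-1 precoding vectors for 2 transmit antennas.
codebook = [np.array([1, 1]) / np.sqrt(2),
            np.array([1, -1]) / np.sqrt(2),
            np.array([1, 1j]) / np.sqrt(2),
            np.array([1, -1j]) / np.sqrt(2)]

rng = np.random.default_rng(4)
# Random Rayleigh-fading 2x2 channel as a stand-in for the estimated channel
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)

# Pick the codeword that maximizes received signal power ||H w||^2;
# only `best` (a few bits) needs to be fed back to the transmitter.
gains = [np.linalg.norm(H @ w) ** 2 for w in codebook]
best = int(np.argmax(gains))
print(f"feed back index {best}, gain {gains[best]:.3f}")
```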

  14. Digital simulation of a communication link for Pioneer Saturn Uranus atmospheric entry probe, part 1

    NASA Technical Reports Server (NTRS)

    Hinrichs, C. A.

    1975-01-01

    A digital simulation study is presented for a candidate modulator/demodulator design in an atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the conditions of an outer planet atmospheric probe. The simulation results indicate that the mean channel error rate with and without scintillation are similar to theoretical characterizations of the link. The simulation gives information for calculating other channel statistics and generates a quantized symbol stream on magnetic tape from which error correction decoding is analyzed. Results from the magnetic tape data analyses are also included. The receiver and bit synchronizer are modeled in the simulation at the level of hardware component parameters rather than at the loop equation level and individual hardware parameters are identified. The atmospheric scintillation amplitude and phase are modeled independently. Normal and log normal amplitude processes are studied. In each case the scintillations are low pass filtered. The receiver performance is given for a range of signal to noise ratios with and without the effects of scintillation. The performance is reviewed for critical reciever parameter variations.

  15. Electrical and thermal conductance quantization in nanostructures

    NASA Astrophysics Data System (ADS)

    Nawrocki, Waldemar

    2008-10-01

    In the paper, problems of electron transport in mesoscopic structures and nanostructures are considered. The electrical conductance of nanowires was measured in a simple experimental system. Investigations were performed in air at room temperature by measuring the conductance between two vibrating metal wires with a standard oscilloscope. Conductance quantization in units of G0 = 2e²/h = (12.9 kΩ)⁻¹, up to five quanta of conductance, has been observed for nanowires formed in many metals. The explanation of this universal phenomenon is the formation of a nanometer-sized wire (nanowire) between macroscopic metallic contacts, which induces, according to the theory proposed by Landauer, the quantization of conductance. Thermal problems in nanowires are also discussed in the paper.
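    As a quick check of the quoted conductance quantum, using the SI values of e and h:

```latex
\[
G_0 \;=\; \frac{2e^2}{h}
    \;=\; \frac{2\,(1.602\times10^{-19}\,\mathrm{C})^2}{6.626\times10^{-34}\,\mathrm{J\,s}}
    \;\approx\; 7.75\times10^{-5}\,\mathrm{S}
    \;=\; (12.9\,\mathrm{k\Omega})^{-1},
\]
% so the five observed quanta correspond to conductance steps up to
% 5 G_0 \approx 3.9 \times 10^{-4} S.
```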

  16. On the quantization of the massless Bateman system

    NASA Astrophysics Data System (ADS)

    Takahashi, K.

    2018-03-01

    The so-called Bateman system for the damped harmonic oscillator is reduced to a genuine dual dissipation system (DDS) by setting the mass to zero. We explore herein the condition under which the canonical quantization of the DDS is consistently performed. The roles of the observable and auxiliary coordinates are discriminated. The results show that the complete and orthogonal Fock space of states can be constructed on the stable vacuum if an anti-Hermitian representation of the canonical Hamiltonian is adopted. The amplitude of the one-particle wavefunction is consistent with the classical solution. The fields can be quantized as bosonic or fermionic. For bosonic systems, the quantum fluctuation of the field is directly associated with the dissipation rate.

  17. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovchavtsev, A. P., E-mail: kap@isp.nsc.ru; Tsarenko, A. V.; Guzev, A. A.

    The influence of electron energy quantization in a space-charge region on the accumulation capacitance of InAs-based metal-oxide-semiconductor capacitors (MOSCAPs) has been investigated by modeling and comparison with experimental data from Au/anodic layer(4-20 nm)/n-InAs(111)A MOSCAPs. The accumulation capacitance of the MOSCAPs has been calculated by solving the Poisson equation under different assumptions and by the self-consistent solution of the Schrödinger and Poisson equations with quantization taken into account. It was shown that quantization should be taken into consideration in MOSCAP accumulation-capacitance calculations for the correct determination of the interface state density by the Terman method and for the evaluation of the gate dielectric thickness from capacitance-voltage measurements.

  19. Canonical methods in classical and quantum gravity: An invitation to canonical LQG

    NASA Astrophysics Data System (ADS)

    Reyes, Juan D.

    2018-04-01

    Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background independent canonical quantization of Einstein’s general relativity theory. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, Hamiltonian classical methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and/or non-specialists curious about LQG.

  20. Fractional quantization of the magnetic flux in cylindrical unconventional superconductors.

    PubMed

    Loder, F; Kampf, A P; Kopp, T

    2013-07-26

    The magnetic flux threading a conventional superconducting ring is typically quantized in units of Φ0=hc/2e. The factor of 2 in the denominator of Φ0 originates from the existence of two different types of pairing states with minima of the free energy at even and odd multiples of Φ0. Here we show that spatially modulated pairing states exist with energy minima at fractional flux values, in particular, at multiples of Φ0/2. In such states, condensates with different center-of-mass momenta of the Cooper pairs coexist. The proposed mechanism for fractional flux quantization is discussed in the context of cuprate superconductors, where hc/4e flux periodicities were observed.
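    For scale, the conventional flux quantum quoted above evaluates, in SI units, to:

```latex
\[
\Phi_0 \;=\; \frac{h}{2e}
       \;=\; \frac{6.626\times10^{-34}\,\mathrm{J\,s}}{2\times 1.602\times10^{-19}\,\mathrm{C}}
       \;\approx\; 2.07\times10^{-15}\,\mathrm{Wb},
\]
% so the fractional plateaus discussed above sit at multiples of
% \Phi_0/2 \approx 1.03 \times 10^{-15} Wb (equivalently, an hc/4e
% periodicity in the Gaussian units used in the abstract).
```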

  1. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects of signal quantization on system operation in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
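    A minimal sketch of this kind of simulation-based quantization-error analysis is shown below. It is not the paper's program; the first-order filter, the crude mantissa-rounding model, and the bit widths are illustrative assumptions.

```python
import numpy as np

def quantize_float(x, mantissa_bits):
    """Round x to a float with the given number of mantissa bits (crude model)."""
    m, e = np.frexp(x)                        # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def filter_run(x, a, mantissa_bits=None):
    """First-order IIR filter y[n] = a*y[n-1] + x[n], optionally quantized."""
    y, state = np.empty_like(x), 0.0
    for n, xn in enumerate(x):
        acc = a * state + xn
        if mantissa_bits is not None:         # model roundoff after each update
            acc = quantize_float(acc, mantissa_bits)
        y[n] = state = acc
    return y

rng = np.random.default_rng(2)
x = rng.normal(size=4096)
exact = filter_run(x, a=0.95)                 # double-precision reference
for bits in (8, 12, 16):
    err = filter_run(x, a=0.95, mantissa_bits=bits) - exact
    print(f"{bits:2d}-bit mantissa: rms error {np.sqrt(np.mean(err**2)):.2e}, "
          f"max error {np.abs(err).max():.2e}")
```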

  2. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T = 0. As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  3. Consistency of certain constitutive relations with quantum electromagnetism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horsley, S. A. R.

    2011-12-15

    Recent work by Philbin [New J. Phys. 12, 123008 (2010)] has provided a Lagrangian theory that establishes a general method for the canonical quantization of the electromagnetic field in any dispersive, lossy, linear dielectric. Working from this theory, we extend the Lagrangian description to reciprocal and nonreciprocal magnetoelectric (bianisotropic) media, showing that some versions of the constitutive relations are inconsistent with a real Lagrangian, and hence with quantization. This amounts to a restriction on the magnitude of the magnetoelectric coupling. Moreover, from the point of view of quantization, moving media are shown to be fundamentally different from stationary magnetoelectrics, despite the formal similarity in the constitutive relations.

  4. Quantization of higher abelian gauge theory in generalized differential cohomology

    NASA Astrophysics Data System (ADS)

    Szabo, R.

    We review and elaborate on some aspects of the quantization of certain classes of higher abelian gauge theories using techniques of generalized differential cohomology. Particular emphasis is placed on the examples of generalized Maxwell theory and Cheeger-Simons cohomology, and of Ramond-Ramond fields in Type II superstring theory and differential K-theory.

  5. Radiation dose-rate meter using an energy-sensitive counter

    DOEpatents

    Kopp, Manfred K.

    1988-01-01

    A radiation dose-rate meter is provided which uses an energy-sensitive detector and combines charge quantization and pulse-rate measurement to monitor radiation dose rates. The charge from each detected photon is quantized by level-sensitive comparators so that the resulting total output pulse rate is proportional to the dose-rate.
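    A toy numerical sketch of the charge-quantization idea follows; the photon spectrum, charge yield, and comparator threshold are assumed values for illustration, not the patent's design.

```python
import numpy as np

# Each photon's collected charge is compared against stacked comparator
# thresholds, so a photon depositing k threshold-units of charge contributes
# k output pulses; the summed pulse rate then tracks the energy deposited per
# second, i.e., the dose rate (approximately, because of the floor operation).
rng = np.random.default_rng(5)
photon_energies = rng.exponential(scale=60.0, size=1000)   # keV, toy spectrum
charge_per_kev = 0.044                                     # fC/keV, assumed
threshold = 1.0                                            # fC per comparator step

pulses = np.floor(photon_energies * charge_per_kev / threshold)
print("pulses/s:", pulses.sum(),
      "~ proportional to deposited energy:", round(photon_energies.sum(), 1), "keV/s")
```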

  6. Quantized Chiral Magnetic Current from Reconnections of Magnetic Flux.

    PubMed

    Hirono, Yuji; Kharzeev, Dmitri E; Yin, Yi

    2016-10-21

    We introduce a new mechanism for the chiral magnetic effect that does not require an initial chirality imbalance. The chiral magnetic current is generated by reconnections of magnetic flux that change the magnetic helicity of the system. The resulting current is entirely determined by the change of magnetic helicity, and it is quantized.

  7. Quantized Chiral Magnetic Current from Reconnections of Magnetic Flux

    DOE PAGES

    Hirono, Yuji; Kharzeev, Dmitri E.; Yin, Yi

    2016-10-20

    We introduce a new mechanism for the chiral magnetic effect that does not require an initial chirality imbalance. The chiral magnetic current is generated by reconnections of magnetic flux that change the magnetic helicity of the system. The resulting current is entirely determined by the change of magnetic helicity, and it is quantized.

  8. Dirac’s magnetic monopole and the Kontsevich star product

    NASA Astrophysics Data System (ADS)

    Soloviev, M. A.

    2018-03-01

    We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star-products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant ħ. We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich’s graphs.

  9. Monopoles for gravitation and for higher spin fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunster, Claudio; Portugues, Ruben; Cnockaert, Sandrine

    2006-05-15

    We consider massless higher spin gauge theories with both electric and magnetic sources, with a special emphasis on the spin two case. We write the equations of motion at the linear level (with conserved external sources) and introduce Dirac strings so as to derive the equations from a variational principle. We then derive a quantization condition that generalizes the familiar Dirac quantization condition, and which involves the conserved charges associated with the asymptotic symmetries for higher spins. Next we discuss briefly how the result extends to the nonlinear theory. This is done in the context of gravitation, where the Taub-NUT solution provides the exact solution of the field equations with both types of sources. We rederive, in analogy with electromagnetism, the quantization condition from the quantization of the angular momentum. We also observe that the Taub-NUT metric is asymptotically flat at spatial infinity in the sense of Regge and Teitelboim (including their parity conditions). It follows, in particular, that one can consistently consider in the variational principle configurations with different electric and magnetic masses.

  10. Conductance Quantization in Resistive Random Access Memory

    NASA Astrophysics Data System (ADS)

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-10-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performances, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interface, and multiple RS mechanisms including the formation/rupture of nanoscale to atomic-sized conductive filament (CF) incorporated in RS layer. Conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism in mesoscopic dimension. In this review paper, the operating principles of RRAM are introduced first, followed by the summarization of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material system. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.

  11. Combinatorial quantization of the Hamiltonian Chern-Simons theory II

    NASA Astrophysics Data System (ADS)

    Alekseev, Anton Yu.; Grosse, Harald; Schomerus, Volker

    1996-01-01

    This paper further develops the combinatorial approach to quantization of the Hamiltonian Chern Simons theory advertised in [1]. Using the theory of quantum Wilson lines, we show how the Verlinde algebra appears within the context of quantum group gauge theory. This allows us to discuss flatness of quantum connections so that we can give a mathematically rigorous definition of the algebra of observables A CS of the Chern Simons model. It is a *-algebra of “functions on the quantum moduli space of flat connections” and comes equipped with a positive functional ω (“integration”). We prove that this data does not depend on the particular choices which have been made in the construction. Following ideas of Fock and Rosly [2], the algebra A CS provides a deformation quantization of the algebra of functions on the moduli space along the natural Poisson bracket induced by the Chern Simons action. We evaluate the volume of the quantized moduli space and prove that it coincides with the Verlinde number. This answer is also interpreted as the partition function of the lattice Yang-Mills theory corresponding to a quantum gauge group.

  12. Conductance Quantization in Resistive Random Access Memory.

    PubMed

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-12-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performances, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interface, and multiple RS mechanisms including the formation/rupture of nanoscale to atomic-sized conductive filament (CF) incorporated in RS layer. Conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism in mesoscopic dimension. In this review paper, the operating principles of RRAM are introduced first, followed by the summarization of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material system. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.

  13. Thin noble metal films on Si (111) investigated by optical second-harmonic generation and photoemission

    NASA Astrophysics Data System (ADS)

    Pedersen, K.; Kristensen, T. B.; Pedersen, T. G.; Morgen, P.; Li, Z.; Hoffmann, S. V.

    2002-05-01

    Thin noble metal films (Ag, Au and Cu) on Si (111) have been investigated by optical second-harmonic generation (SHG) in combination with synchrotron radiation photoemission spectroscopy. The valence band spectra of Ag films show a quantization of the sp-band in the 4-eV energy range from the Fermi level down to the onset of the d-bands. For Cu and Au the corresponding energy range is much narrower and quantization effects are less visible. Quantization effects in SHG are observed as oscillations in the signal as a function of film thickness. The oscillations are strongest for Ag and less pronounced for Cu, in agreement with valence band photoemission spectra. In the case of Au, a reacted layer floating on top of the Au film masks the observation of quantum well levels by photoemission. However, SHG shows a well-developed quantization of levels in the Au film below the reacted layer. For Ag films, the relation between film thickness and photon energy of the SHG resonances indicates different types of resonances, some of which involve both quantum well and substrate states.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guedes, Carlos; Oriti, Daniele; Raasakka, Matti

    The phase space given by the cotangent bundle of a Lie group appears in the context of several models for physical systems. A representation for the quantum system in terms of non-commutative functions on the (dual) Lie algebra, and a generalized notion of (non-commutative) Fourier transform, different from standard harmonic analysis, has been recently developed, and found several applications, especially in the quantum gravity literature. We show that this algebra representation can be defined on the sole basis of a quantization map of the classical Poisson algebra, and identify the conditions for its existence. In particular, the corresponding non-commutative star-product carried by this representation is obtained directly from the quantization map via deformation quantization. We then clarify under which conditions a unitary intertwiner between such algebra representation and the usual group representation can be constructed giving rise to the non-commutative plane waves and consequently, the non-commutative Fourier transform. The compact groups U(1) and SU(2) are considered for different choices of quantization maps, such as the symmetric and the Duflo map, and we exhibit the corresponding star-products, algebra representations, and non-commutative plane waves.

  15. The fundamental role of quantized vibrations in coherent light harvesting by cryptophyte algae

    NASA Astrophysics Data System (ADS)

    Kolli, Avinash; O'Reilly, Edward J.; Scholes, Gregory D.; Olaya-Castro, Alexandra

    2012-11-01

    The influence of fast vibrations on energy transfer and conversion in natural molecular aggregates is an issue of central interest. This article shows the important role of high-energy quantized vibrations and their non-equilibrium dynamics for energy transfer in photosynthetic systems with highly localized excitonic states. We consider the cryptophyte antennae protein phycoerythrin 545 and show that coupling to quantized vibrations, which are quasi-resonant with excitonic transitions is fundamental for biological function as it generates non-cascaded transport with rapid and wider spatial distribution of excitation energy. Our work also indicates that the non-equilibrium dynamics of such vibrations can manifest itself in ultrafast beating of both excitonic populations and coherences at room temperature, with time scales in agreement with those reported in experiments. Moreover, we show that mechanisms supporting coherent excitonic dynamics assist coupling to selected modes that channel energy to preferential sites in the complex. We therefore argue that, in the presence of strong coupling between electronic excitations and quantized vibrations, a concrete and important advantage of quantum coherent dynamics is precisely to tune resonances that promote fast and effective energy distribution.

  16. "We'll Get to You When We Get to You": Exploring Potential Contributions of Health Care Staff Behaviors to Patient Perceptions of Discrimination and Satisfaction.

    PubMed

    Tajeu, Gabriel S; Cherrington, Andrea L; Andreae, Lynn; Prince, Candice; Holt, Cheryl L; Halanych, Jewell H

    2015-10-01

    We qualitatively assessed patients' perceptions of discrimination and patient satisfaction in the health care setting specific to interactions with nonphysician health care staff. We conducted 12 focus-group interviews with African American and European American participants, stratified by race and gender, from June to November 2008. We used a topic guide to facilitate discussion and identify factors contributing to perceived discrimination and analyzed transcripts for relevant themes using a codebook. We enrolled 92 participants: 55 African Americans and 37 European Americans, all of whom reported perceived discrimination and lower patient satisfaction as a result of interactions with nonphysician health care staff. Perceived discrimination was associated with 2 main characteristics: insurance or socioeconomic status and race. Both verbal and nonverbal communication style on the part of nonphysician health care staff were related to individuals' perceptions of how they were treated. The behaviors of nonphysician health care staff in the clinical setting can potentially contribute to patients' perceptions of discrimination and lowered patient satisfaction. Future interventions to reduce health care discrimination should include a focus on staff cultural competence and customer service skills.

  17. “We’ll Get to You When We Get to You”: Exploring Potential Contributions of Health Care Staff Behaviors to Patient Perceptions of Discrimination and Satisfaction

    PubMed Central

    Cherrington, Andrea L.; Andreae, Lynn; Prince, Candice; Holt, Cheryl L.; Halanych, Jewell H.

    2015-01-01

    Objectives. We qualitatively assessed patients’ perceptions of discrimination and patient satisfaction in the health care setting specific to interactions with nonphysician health care staff. Methods. We conducted 12 focus-group interviews with African American and European American participants, stratified by race and gender, from June to November 2008. We used a topic guide to facilitate discussion and identify factors contributing to perceived discrimination and analyzed transcripts for relevant themes using a codebook. Results. We enrolled 92 participants: 55 African Americans and 37 European Americans, all of whom reported perceived discrimination and lower patient satisfaction as a result of interactions with nonphysician health care staff. Perceived discrimination was associated with 2 main characteristics: insurance or socioeconomic status and race. Both verbal and nonverbal communication style on the part of nonphysician health care staff were related to individuals’ perceptions of how they were treated. Conclusions. The behaviors of nonphysician health care staff in the clinical setting can potentially contribute to patients’ perceptions of discrimination and lowered patient satisfaction. Future interventions to reduce health care discrimination should include a focus on staff cultural competence and customer service skills. PMID:26270291

  18. U.S. hookah tobacco smoking establishments advertised on the internet.

    PubMed

    Primack, Brian A; Rice, Kristen R; Shensa, Ariel; Carroll, Mary V; DePenna, Erica J; Nakkash, Rima; Barnett, Tracey E

    2012-02-01

    Establishments dedicated to hookah tobacco smoking recently have proliferated and helped introduce hookah use to U.S. communities. To conduct a comprehensive, qualitative assessment of websites promoting these establishments. In June 2009, a systematic search process was initiated to access the universe of websites representing major hookah tobacco smoking establishments. In 2009-2010, codebook development followed an iterative paradigm involving three researchers and resulted in a final codebook consisting of 36 codes within eight categories. After two independent coders had nearly perfect agreement (Cohen's κ = 0.93) on double-coding the data in the first 20% of sites, the coders divided the remaining sites and coded them independently. A thematic approach to the synthesis of findings and selection of exemplary quotations was used. The search yielded a sample of 144 websites originating from states in all U.S. regions. Among the hookah establishments promoted on the websites, 79% served food and 41% served alcohol. Of the websites, none required age verification, <1% included a tobacco-related warning on the first page, and 4% included a warning on any page. Although mention of the word tobacco was relatively uncommon (appearing on the first page of only 26% sites and on any page of 58% of sites), the promotion of flavorings, pleasure, relaxation, product quality, and cultural and social aspects of hookah smoking was common. Websites may play a role in enhancing or propagating misinformation related to hookah tobacco smoking. Health education and policy measures may be valuable in countering this misinformation.
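    For reference, the inter-coder agreement statistic quoted above is Cohen's kappa, a standard chance-corrected agreement measure:

```latex
\[
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\]
% where p_o is the observed proportion of agreement between the two coders and
% p_e the agreement expected by chance; the reported \kappa = 0.93 therefore
% indicates near-perfect agreement beyond chance.
```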

  19. Optically-synchronized encoder and multiplexer scheme for interleaved photonics analog-to-digital conversion

    NASA Astrophysics Data System (ADS)

    Villa, Carlos; Kumavor, Patrick; Donkor, Eric

    2008-04-01

    Photonic analog-to-digital converters (ADCs) utilize a train of optical pulses to sample an electrical input waveform applied to an electro-optic modulator or a reverse-biased photodiode. In the former, the resulting train of amplitude-modulated optical pulses is detected (converted to electrical form) and quantized using a conventional electronic ADC, as at present there are no practical, cost-effective optical quantizers available with performance that rivals electronic quantizers. In the latter, the electrical samples are directly quantized by the electronic ADC. In both cases, however, the sampling rate is limited by the speed with which the electronic ADC can quantize the electrical samples. One way to increase the sampling rate by a factor of N is the time-interleaved technique, which consists of a parallel array of N electrical ADCs with the same sampling rate but different sampling phases, each operating at a quantization rate of fs/N, where fs is the aggregate sampling rate. In a system without real-time operation, the N channels' digital outputs are stored in memory and then aggregated (multiplexed) to obtain the digital representation of the analog input waveform. For real-time systems, reducing the storage time in the multiplexing process is desired to improve the time response of the ADC. The complete elimination of memories comes at the expense of concurrent timing and synchronization in the aggregation of the digital signals, which becomes critical for a good digital representation of the analog waveform. In this paper we propose and demonstrate a novel optically synchronized encoder and multiplexer scheme for interleaved photonic ADCs that utilizes the N optical signals used to sample different phases of an analog input signal to synchronize the multiplexing of the resulting N digital output channels into a single digital output port. As a proof of concept, four 320-Megasample/s, 12-bit digital signals were multiplexed to form an aggregate 1.28-Gigasample/s single digital output signal.
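    The interleaving arithmetic can be sketched in a few lines. This toy model (ideal uniform quantizers, no timing skew) mirrors the 4 x 320 MS/s = 1.28 GS/s proof of concept above, but it is not the demonstrated optical hardware.

```python
import numpy as np

# N-way time interleaving: each of the N slower quantizers samples the same
# analog waveform at fs/N with a distinct phase offset; the multiplexer then
# reassembles one stream at the aggregate rate fs.
fs, N, bits = 1.28e9, 4, 12
t = np.arange(4096) / fs
analog = np.sin(2 * np.pi * 10e6 * t)           # 10 MHz test tone

def quantize(v, bits):
    """Ideal uniform mid-tread quantizer over [-1, 1]."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(v, -1, 1) * levels).astype(int)

# Channel k digitizes samples k, k+N, k+2N, ... (sampling phase k/fs)
channels = [quantize(analog[k::N], bits) for k in range(N)]

# Synchronized multiplexing: re-interleave the N outputs into one stream
codes = np.empty(len(t), dtype=int)
for k, ch in enumerate(channels):
    codes[k::N] = ch

# With ideal channels this matches a single ADC running at the full rate fs
assert np.array_equal(codes, quantize(analog, bits))
```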

  20. Cryogenic and radiation hard ASIC design for large format NIR/SWIR detector

    NASA Astrophysics Data System (ADS)

    Gao, Peng; Dupont, Benoit; Dierickx, Bart; Müller, Eric; Verbruggen, Geert; Gielis, Stijn; Valvekens, Ramses

    2014-10-01

    An ASIC has been developed for the control and data quantization of large-format NIR/SWIR detector arrays. Both cryogenic operation and the space radiation environment were considered during the design. The ASIC can therefore be integrated inside the cryogenic chamber, which significantly reduces the number of long wires going in and out of the chamber, benefiting EMI and noise performance as well as the power consumption of the cooling system and interfacing circuits. In this paper, we describe the development of this prototype ASIC for image sensor driving and signal processing, as well as its testing at both room and cryogenic temperatures.

  1. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCrCb video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone (H.261) streams can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 x 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit, and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied, which can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio, and system data can easily be performed by the host computer. Having relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
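    The DCT-plus-quantization kernel that the shared multiplier implements can be illustrated with a minimal round trip. The flat quantizer step below is an assumption for illustration, not the encoder's rate-controlled quantization tables.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix C, so C @ block @ C.T is the 2-D DCT."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()
rng = np.random.default_rng(3)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128   # one pixel block

q = 16                                   # flat quantizer step (illustrative)
coeffs = C @ block @ C.T                 # forward DCT
levels = np.round(coeffs / q)            # quantization (what the VLC unit codes)
recon = C.T @ (levels * q) @ C           # inverse quantization + IDCT

# RMS reconstruction error is about q/sqrt(12), since C is orthonormal
print(np.sqrt(np.mean((block - recon) ** 2)))
```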

  2. FAST TRACK COMMUNICATION: Quantization over boson operator spaces

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaž; Seligman, Thomas H.

    2010-10-01

    The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.

  3. Quantized Vector Potential and the Photon Wave-function

    NASA Astrophysics Data System (ADS)

    Meis, C.; Dahoo, P. R.

    2017-12-01

    The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a $k$-mode and $\lambda$-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian $\tilde{H} = -i\hbar c\,\vec{\nabla}$…

  4. A heat kernel proof of the index theorem for deformation quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2017-11-01

    We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.

  5. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.

  6. Phase-Quantized Block Noncoherent Communication

    DTIC Science & Technology

    2013-07-01

    ...in a carrier asynchronous system. Specifically, we consider transmission over the block noncoherent additive white Gaussian noise channel, and...block noncoherent channel. Several results, based on the symmetry inherent in the channel model, are provided to characterize this transition density.

  7. Tachyon field in loop quantum cosmology: An example of traversable singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Lifang; Zhu Jianyang

    2009-06-15

    Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.

  8. Quantized magnetoresistance in atomic-size contacts.

    PubMed

    Sokolov, Andrei; Zhang, Chunjuan; Tsymbal, Evgeny Y; Redepenning, Jody; Doudin, Bernard

    2007-03-01

    When the dimensions of a metallic conductor are reduced so that they become comparable to the de Broglie wavelengths of the conduction electrons, the absence of scattering results in ballistic electron transport and the conductance becomes quantized. In ferromagnetic metals, the spin angular momentum of the electrons results in spin-dependent conductance quantization and various unusual magnetoresistive phenomena. Theorists have predicted a related phenomenon known as ballistic anisotropic magnetoresistance (BAMR). Here we report the first experimental evidence for BAMR by observing a stepwise variation in the ballistic conductance of cobalt nanocontacts as the direction of an applied magnetic field is varied. Our results show that BAMR can be positive and negative, and exhibits symmetric and asymmetric angular dependences, consistent with theoretical predictions.

  9. Obliquely propagating ion acoustic solitary structures in the presence of quantized magnetic field

    NASA Astrophysics Data System (ADS)

    Iqbal Shaukat, Muzzamal

    2017-10-01

    The effects of linear and nonlinear propagation of electrostatic waves have been studied in degenerate magnetoplasma, taking into account the effects of electron trapping and finite temperature with a quantizing magnetic field. The formation of solitary structures has been investigated by employing the small amplitude approximation for both fully and partially degenerate quantum plasma. It is observed that the inclusion of a quantizing magnetic field significantly affects the propagation characteristics of the solitary wave. Importantly, the Zakharov-Kuznetsov equation under consideration has been found to allow the formation of compressive solitary structures only. The present investigation may be beneficial for understanding the propagation of nonlinear electrostatic structures in dense astrophysical environments such as those found in white dwarfs.

  10. There are many ways to spin a photon: Half-quantization of a total optical angular momentum

    PubMed Central

    Ballantine, Kyle E.; Donegan, John F.; Eastham, Paul R.

    2016-01-01

    The angular momentum of light plays an important role in many areas, from optical trapping to quantum information. In the usual three-dimensional setting, the angular momentum quantum numbers of the photon are integers, in units of the Planck constant ħ. We show that, in reduced dimensions, photons can have a half-integer total angular momentum. We identify a new form of total angular momentum, carried by beams of light, comprising an unequal mixture of spin and orbital contributions. We demonstrate the half-integer quantization of this total angular momentum using noise measurements. We conclude that for light, as is known for electrons, reduced dimensionality allows new forms of quantization. PMID:28861467

  11. Third Quantization and Quantum Universes

    NASA Astrophysics Data System (ADS)

    Kim, Sang Pyo

    2014-01-01

    We study the third quantization of the Friedmann-Robertson-Walker cosmology with N-minimal massless fields. The third quantized Hamiltonian for the Wheeler-DeWitt equation in the minisuperspace consists of an infinite number of intrinsic time-dependent, decoupled oscillators. The Hamiltonian has a pair of invariant operators for each universe with conserved momenta of the fields that play the role of the annihilation and creation operators and that construct various quantum states for the universe. The closed universe exhibits an interesting feature of transitions from stable states to tachyonic states depending on the conserved momenta of the fields. In the classically forbidden unstable regime, the quantum states have googolplex-growing position and conjugate-momentum dispersions, which defy any measurement of the position of the universe.

  12. Validation of a quantized-current source with 0.2 ppm uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Friederike; Fricke, Lukas, E-mail: lukas.fricke@ptb.de; Scherer, Hansjörg

    2015-09-07

    We report on high-accuracy measurements of quantized current, sourced by a tunable-barrier single-electron pump at frequencies f up to 1 GHz. The measurements were performed with an ultrastable picoammeter instrument, traceable to the Josephson and quantum Hall effects. Current quantization according to I = ef with e being the elementary charge was confirmed at f = 545 MHz with a total relative uncertainty of 0.2 ppm, improving the state of the art by about a factor of 5. The accuracy of a possible future quantum current standard based on single-electron transport was experimentally validated to be better than the best (indirect) realization of the ampere within the present SI.
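    A magnitude check of the quoted operating point, using the SI value of e:

```latex
\[
I \;=\; e f \;=\; (1.602\times10^{-19}\,\mathrm{C})\times(545\times10^{6}\,\mathrm{s^{-1}})
  \;\approx\; 87.3\,\mathrm{pA},
\]
% so the stated 0.2 ppm relative uncertainty corresponds to resolving the
% current at the level of roughly 1.7 \times 10^{-17} A (about 17 aA).
```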

  13. Quantized topological magnetoelectric effect of the zero-plateau quantum anomalous Hall state

    DOE PAGES

    Wang, Jing; Lian, Biao; Qi, Xiao-Liang; ...

    2015-08-10

    The topological magnetoelectric effect in a three-dimensional topological insulator is a novel phenomenon, where an electric field induces a magnetic field in the same direction, with a universal coefficient of proportionality quantized in units of $e^2/2h$. Here in this paper, we propose that the topological magnetoelectric effect can be realized in the zero-plateau quantum anomalous Hall state of magnetic topological insulators or a ferromagnet-topological insulator heterostructure. The finite-size effect is also studied numerically, where the magnetoelectric coefficient is shown to converge to a quantized value when the thickness of the topological insulator film increases. We further propose a device setup to eliminate nontopological contributions from the side surface.

  14. Quantum no-singularity theorem from geometric flows

    NASA Astrophysics Data System (ADS)

    Alsaleh, Salwa; Alasfar, Lina; Faizal, Mir; Ali, Ahmed Farag

    2018-04-01

    In this paper, we analyze the classical geometric flow as a dynamical system. We obtain an action for this system, such that its equation of motion is the Raychaudhuri equation. This action will be used to quantize this system. As the Raychaudhuri equation is the basis for deriving the singularity theorems, we will be able to understand the effects that such a quantization will have on the classical singularity theorems. Thus, quantizing the geometric flow, we can demonstrate that a quantum space-time is complete (nonsingular). This is because the existence of a conjugate point is a necessary condition for the occurrence of singularities, and we will be able to demonstrate that such conjugate points cannot occur due to such quantum effects.

  15. Part-based deep representation for product tagging and search

    NASA Astrophysics Data System (ADS)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFTs are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to the traditional features, our DCNN representation has more discriminative power with fewer dimensions. Moreover, we incorporate the part-based model into the framework to overcome the negative effects of bad alignment and cluttered background, and hence the descriptive ability of the deep representation is further enhanced. Finally, we collect and contribute a well-labeled shoe image database, i.e., the TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.

  16. 2005 Service Academies: Sexual Assault Survey: Administration, Datasets and Codebook

    DTIC Science & Technology

    2005-10-01

    [Survey codebook variable-listing excerpt: items SB026-SB038 covering the situation with greatest effect, the semester and location of occurrence, and retaliation by officers in the chain of command or other academy personnel, with dataset column positions. Full layout omitted.]

  17. The 1984 ARI Survey of Army Recruits: Codebook for Summer 84 USAR and ARNG Survey Respondents

    DTIC Science & Technology

    1986-05-01

    [Survey codebook excerpt: raw-data card/column layout and SAS variables T259-T265 for television-viewing items (major league baseball regular season and playoffs, World Series, NBA basketball, college basketball, NHL hockey, professional wrestling, car races). Full layout omitted.]

  18. Diffuse optical microscopy for quantification of depth-dependent epithelial backscattering in the cervix

    NASA Astrophysics Data System (ADS)

    Bodenschatz, Nico; Lam, Sylvia; Carraro, Anita; Korbelik, Jagoda; Miller, Dianne M.; McAlpine, Jessica N.; Lee, Marette; Kienle, Alwin; MacAulay, Calum

    2016-06-01

    A fiber optic imaging approach using structured illumination is presented for the quantification of almost pure epithelial backscattering. We employ multiple spatially modulated projection patterns and camera-based reflectance capture to image depth-dependent epithelial scattering. The potential diagnostic value of our approach is investigated on ex vivo cervical tissue specimens. Our study indicates a strong backscattering increase in the upper part of the cervical epithelium caused by dysplastic microstructural changes. Quantification of relative depth-dependent backscattering is confirmed as a potentially useful diagnostic feature for the detection of precancerous lesions in cervical squamous epithelium.

  19. Quantum state of the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robles-Perez, Salvador; Gonzalez-Diaz, Pedro F.

    2010-04-15

    A third quantization formalism is applied to a simplified multiverse scenario. A well-defined quantum state of the multiverse is obtained which agrees with standard boundary condition proposals. These states are found to be squeezed, and related to accelerating universes: they share similar properties to those obtained previously by Grishchuk and Sidorov. We also comment on related works that have criticized the third quantization approach.

  20. Hadron Spectra, Decays and Scattering Properties Within Basis Light Front Quantization

    NASA Astrophysics Data System (ADS)

    Vary, James P.; Adhikari, Lekha; Chen, Guangyao; Jia, Shaoyang; Li, Meijian; Li, Yang; Maris, Pieter; Qian, Wenyang; Spence, John R.; Tang, Shuo; Tuchin, Kirill; Yu, Anji; Zhao, Xingbo

    2018-07-01

    We survey recent progress in calculating properties of the electron and hadrons within the basis light front quantization (BLFQ) approach. We include applications to electromagnetic and strong scattering processes in relativistic heavy ion collisions. We present an initial investigation into the glueball states by applying BLFQ with multigluon sectors, introducing future research possibilities on multi-quark and multi-gluon systems.
