Science.gov

Sample records for adaptive vector quantization

  1. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing each centroid as a recursive moving average, where the centroids move after every vector is encoded. When computed over a fixed set of vectors, the resultant centroid is identical to that of the conventional batch centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer is changing definition, or state, after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
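
The recursive moving-average update described above can be sketched as follows (a toy illustration, not Lindsay's actual codec; `encode_adaptive` and its interfaces are hypothetical, and the side-information signaling is omitted):

```python
import numpy as np

def encode_adaptive(vectors, codebook):
    """Adaptive VQ sketch: after each vector is encoded, the winning
    centroid is updated as a recursive moving average (hypothetical
    interface; a real codec would also signal the updates)."""
    codebook = codebook.astype(float).copy()
    counts = np.ones(len(codebook))          # one "virtual" sample per centroid
    indices = []
    for x in vectors:
        i = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
        indices.append(i)
        counts[i] += 1
        # recursive moving average: c <- c + (x - c) / n
        codebook[i] += (x - codebook[i]) / counts[i]
    return indices, codebook
```

Run over a fixed training set, this update reproduces the batch centroid of each cell, which is the equivalence the abstract relies on.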

  2. Multiscale video compression using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Kwon, Heesung; Venkatraman, Mahesh; Nasrabadi, Nasser M.

    1998-10-01

    We investigate the use of vector quantizers (VQs) with memory to encode image sequences. A multiscale video coding technique using adaptive finite-state vector quantization (FSVQ) is presented. In this technique, a small codebook (subcodebook) is generated for each input vector from a much larger codebook (supercodebook) by the selection (through a reordering procedure) of a set of appropriate codevectors that best represents the input vector. The subcodebook therefore dynamically adapts to the characteristics of the motion-compensated frame difference signal. Several reordering procedures are introduced, and their performance is evaluated. In adaptive FSVQ, two different methods, predefined thresholding and rate-distortion cost optimization, are used to decide between the supercodebook and subcodebook for encoding a given input vector. A cache-based vector quantizer, a form of adaptive FSVQ, is also presented for very-low-bit-rate video coding. An efficient bit-allocation strategy using quadtree decomposition is used with the cache-based VQ to compress the video signal. The proposed video codec outperforms H.263 in terms of peak signal-to-noise ratio and perceptual quality at very low bit rates, ranging from 5 to 20 kbps. The picture quality of the proposed video codec is a significant improvement over previous codecs in terms of annoying distortions (blocking artifacts and mosquito noise), and is comparable to that of recently developed wavelet-based video codecs. This similarity in picture quality can be explained by the fact that the proposed video codec uses multiscale segmentation and subsequent variable-rate coding, which are conceptually similar to wavelet-based coding techniques. The simplicity of the encoder and decoder of the proposed codec makes it more suitable than wavelet-based coding for real-time, very-low-bit-rate video applications.

  3. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
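
As a concrete illustration of the basic ideas the survey sketches, a minimal memoryless VQ encoder/decoder might look like this (hypothetical helper names; squared Euclidean distortion assumed):

```python
import numpy as np

def vq_encode(x, codebook):
    """Nearest-codevector search: return the index of the codevector
    with minimum squared Euclidean distortion to x."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))

def vq_decode(index, codebook):
    """The decoder is simply a table lookup into the same codebook."""
    return codebook[index]
```

Only the index is transmitted, so the rate is log2(codebook size) bits per vector; all of the design effort goes into choosing the codebook.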

  4. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Astrophysics Data System (ADS)

    Chen, J.-H.; Gersho, A.

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.

  5. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
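
The forward gain-adaptation scheme described above can be sketched roughly as follows (an illustrative sketch with an assumed RMS gain estimator and hypothetical function names, not the authors' optimized estimator design):

```python
import numpy as np

def gain_adaptive_encode(x, shape_codebook, eps=1e-8):
    """Forward gain adaptation sketch: estimate a gain, normalize the
    input vector, then VQ-code the normalized (reduced dynamic range)
    vector against a gain-normalized codebook."""
    gain = float(np.sqrt(np.mean(x ** 2))) + eps   # RMS gain estimate (assumed)
    shape = x / gain
    idx = int(np.argmin(np.sum((shape_codebook - shape) ** 2, axis=1)))
    return gain, idx

def gain_adaptive_decode(gain, idx, shape_codebook):
    """At the receiver, the VQ decoder output is multiplied by the gain."""
    return gain * shape_codebook[idx]
```

In a forward-adaptive coder the gain itself would also be quantized and transmitted; backward adaptation would instead estimate it from previously decoded samples.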

  6. Radiographic image sequence coding using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Joo, Chang-Hee; Choi, Jong S.

    1990-11-01

    Vector quantization is an effective spatial-domain image coding technique at rates under 1.0 bits per pixel. To achieve comparable quality at lower rates, it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. A finite-state vector quantizer can achieve the same performance as memoryless VQ at lower rates. This paper describes an adaptive finite-state vector quantization scheme for radiographic image sequence coding. A simulation experiment has been carried out with 4x4 blocks of pixels from a sequence of cardiac angiograms consisting of 40 frames of size 256x256 pixels each. At 0.45 bpp, the resulting adaptive FSVQ encoder achieves performance comparable to that of earlier memoryless VQs at 0.8 bpp.

  7. Adaptive vector quantization of MR images using online k-means algorithm

    NASA Astrophysics Data System (ADS)

    Shademan, Azad; Zia, Mohammad A.

    2001-12-01

    The k-means algorithm is widely used to design image codecs based on vector quantization (VQ). In this paper, we focus on an adaptive approach to implementing a VQ technique using the online version of the k-means algorithm, in which the size of the codebook is adapted continuously to the statistical behavior of the image. Based on a statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. Applications of this approach arise in telemedicine, where sequences of highly correlated medical images, e.g. consecutive brain slices, are transmitted over a low-bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images, and simulation results on a sample sequence are given. The proposed method has been compared to the standard k-means algorithm in terms of PSNR, MSE, and the elapsed time to complete the algorithm.
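
A rough sketch of the online k-means adaptation with low-density pruning described above (hypothetical parameters and interfaces; the paper's threshold design from feature-space statistics is reduced here to a simple hit count):

```python
import numpy as np

def online_kmeans_prune(data, codebook, lr=0.1, min_hits=2):
    """Online k-means sketch: the winning codeword moves toward each
    sample; codewords hit fewer than `min_hits` times (low-density
    clusters) are removed, shrinking the codebook adaptively."""
    codebook = codebook.astype(float).copy()
    hits = np.zeros(len(codebook), dtype=int)
    for x in data:
        i = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
        hits[i] += 1
        codebook[i] += lr * (x - codebook[i])   # online (incremental) update
    keep = hits >= min_hits                     # prune low-density clusters
    return codebook[keep]
```

The pruning step is what trades codebook size (and thus bit rate) against distortion as the image statistics evolve.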

  8. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application-Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
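
Frequency-Sensitive Competitive Learning can be sketched as follows (an assumed formulation in which each codeword's distance is scaled by its win count; the learning rate and names are hypothetical):

```python
import numpy as np

def fscl_train(data, codebook, lr=0.05):
    """FSCL sketch: the distortion of each codeword is scaled by its
    win count, so frequent winners are penalized and every codeword
    eventually gets trained (avoiding dead units)."""
    codebook = codebook.astype(float).copy()
    wins = np.ones(len(codebook))
    for x in data:
        d = wins * np.sum((codebook - x) ** 2, axis=1)  # frequency-scaled distance
        i = int(np.argmin(d))
        wins[i] += 1
        codebook[i] += lr * (x - codebook[i])           # move winner toward sample
    return codebook
```

The frequency scaling is the key difference from plain competitive learning, where a few codewords can win everything while the rest are never updated.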

  9. Successive refinement lattice vector quantization.

    PubMed

    Mukherjee, Debargha; Mitra, Sanjit K

    2002-01-01

    Lattice vector quantization (LVQ) solves the complexity problem of LBG-based vector quantizers, yielding very general codebooks. However, a single-stage LVQ, when applied to high-resolution quantization of a vector, may result in very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive uniform quantization of vectors without having to sacrifice the mean-squared-error advantage of lattice quantization. A successive refinement uniform vector quantization methodology is developed, where the codebooks in successive stages are all lattice codebooks, each in the shape of the Voronoi regions of the lattice at the previous stage. Such Voronoi-shaped geometric lattice codebooks are named Voronoi lattice VQs (VLVQ). Measures of efficiency of successive refinement are developed based on the entropy of the indices transmitted by the VLVQs. Additionally, a constructive method for asymptotically optimal uniform quantization is developed using tree-structured subset VLVQs in conjunction with entropy coding. The methodology developed here essentially yields the optimal vector counterpart of scalar "bitplane-wise" refinement. Unfortunately it is not as trivial to implement as in the scalar case. Furthermore, the benefits of asymptotic optimality in tree-structured subset VLVQs remain elusive in practical nonasymptotic situations. Nevertheless, because scalar bitplane-wise refinement is extensively used in modern wavelet image coders, we have applied the VLVQ techniques to successively refine vectors of wavelet coefficients in the vector set-partitioning (VSPIHT) framework. The results are compared against SPIHT and the previous successive approximation wavelet vector quantization (SA-W-VQ) results of Sampson, da Silva and Ghanbari.

  10. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper, an adaptive image segmentation method based on vector quantization is presented; it automatically segments images without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ) implemented by a competitive-learning neural network in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated by similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the `correct' number of clusters, which is also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.

  11. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits the interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of the ACV is an address of an entry in the LBG-codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  12. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.

  13. Perceptual vector quantization for video coding

    NASA Astrophysics Data System (ADS)

    Valin, Jean-Marc; Terriberry, Timothy B.

    2015-03-01

    This paper applies energy conservation principles to the Daala video codec using gain-shape vector quantization to encode a vector of AC coefficients as a length (gain) and direction (shape). The technique originates from the CELT mode of the Opus audio codec, where it is used to conserve the spectral envelope of an audio signal. Conserving energy in video has the potential to preserve textures rather than low-passing them. Explicitly quantizing a gain allows a simple contrast masking model with no signaling cost. Vector quantizing the shape keeps the number of degrees of freedom the same as scalar quantization, avoiding redundancy in the representation. We demonstrate how to predict the vector by transforming the space it is encoded in, rather than subtracting off the predictor, which would make energy conservation impossible. We also derive an encoding of the vector-quantized codewords that takes advantage of their non-uniform distribution. We show that the resulting technique outperforms scalar quantization by an average of 0.90 dB on still images, equivalent to a 24.8% reduction in bitrate at equal quality, while for videos, the improvement averages 0.83 dB, equivalent to a 13.7% reduction in bitrate.
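
The gain-shape decomposition described above might be sketched like this (a simplified illustration, not Daala's coder: a uniform scalar quantizer for the gain is assumed, and the prediction and entropy-coding stages are omitted):

```python
import numpy as np

def gain_shape_quantize(x, shape_codebook, gain_step=0.5):
    """Gain-shape VQ sketch: code the length of the AC-coefficient
    vector as a scalar gain and its direction as the best unit-norm
    codevector, so the reconstruction preserves the vector's energy."""
    gain = np.linalg.norm(x)
    q_gain = round(gain / gain_step) * gain_step       # scalar-quantized gain
    shape = x / gain if gain > 0 else x
    # shape codevectors are assumed unit-norm; pick the max correlation
    idx = int(np.argmax(shape_codebook @ shape))
    return q_gain * shape_codebook[idx]
```

Because the gain is quantized explicitly, a contrast-masking model can act on it directly, which is the "no signaling cost" point made in the abstract.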

  14. Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression

    NASA Astrophysics Data System (ADS)

    Horng, Ming-Huwi

    Vector quantization is a powerful technique in digital image compression applications. Traditional, widely used methods such as the Linde-Buzo-Gray (LBG) algorithm generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the codebook of a vector quantizer. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results show that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
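
For reference, the baseline LBG (generalized Lloyd) iteration that the swarm-based methods aim to improve on can be sketched as follows (hypothetical parameters; random initialization from the training set is assumed):

```python
import numpy as np

def lbg(training, k, iters=20, seed=0):
    """Plain LBG sketch: alternate nearest-neighbor partition and
    centroid update. It converges, but only to a *local* optimum,
    which is the weakness the swarm-based variants address."""
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), k, replace=False)].astype(float)
    for _ in range(iters):
        # partition: assign each training vector to its nearest codevector
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # update: replace each codevector by the centroid of its cell
        for j in range(k):
            cell = training[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook
```

HBMO-LBG and PSO-LBG keep the same distortion objective but search over many candidate codebooks instead of following a single descent path.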

  15. Optimized vector quantization with fuzzy distortion measure

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Pemmaraju, Suryalakshmi

    1996-06-01

    From the perspective of information theory, the design of vector quantizers (VQs) that optimize the rate-distortion function has been extensively studied. In practice, however, existing VQ algorithms often suffer from a number of serious problems, e.g., a long search process, codebook initialization, and getting trapped in local minima, inherent to most iterative processes. The generalized Lloyd algorithm for designing VQs, with embedded k-means clustering for codebook generation, has recently been used by a number of researchers for efficient image coding by quantizing wavelet-decomposed subimages. We present a new approach to vector quantization that generates such multiresolution codebooks using two different neuro-fuzzy clustering techniques, eliminating the existing problems. These clustering techniques integrate fuzzy optimization constraints from the fuzzy C-means algorithm with self-organizing neural network architectures. In one of the new clustering techniques, a new distance measure has also been introduced. The resulting multiresolution codebooks generated from the wavelet-decomposed images yield significant improvement in the coding process. The signal transformation and vector quantization stages together yield at least a 64:1 bit-rate reduction with good visual quality and acceptable peak signal-to-noise ratio (PSNR) and mean square error (MSE). Additional bit-rate reduction can easily be obtained by employing conventional entropy encoding after the quantization stage. The performance of this new VQ coding technique has been compared to that of the well-known Linde-Buzo-Gray (LBG) VQ for a variety of image classes. The new VQ technique demonstrated superior ability for fast convergence with minimum distortion, at similar bit-rate reduction to the existing VQ technique, for several classes of images/signals, including standard test images and medical images, in terms of mean-squared error (MSE), peak signal-to-noise ratio (PSNR), and visual quality.

  16. Scalar-vector quantization of medical images.

    PubMed

    Mohsenian, N; Shahri, H; Nasrabadi, N M

    1996-01-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. The SVQ is a fixed rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation that is typical of coding schemes using variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS). PACS are currently under study for use in an all-digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired. PMID:18285124

  17. Wavelet-based learning vector quantization for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.; Mirelli, Vincent

    1996-06-01

    An automatic target recognition classifier is constructed that uses a set of dedicated vector quantizers (VQs). The background pixels in each input image are properly clipped out by a set of aspect windows. The extracted target area for each aspect window is then enlarged to a fixed size, after which a wavelet decomposition splits the enlarged extraction into several subbands. A dedicated VQ codebook is generated for each subband of a particular target class at a specific range of aspects. Thus, each codebook consists of a set of feature templates that are iteratively adapted to represent a particular subband of a given target class at a specific range of aspects. These templates are then further trained by a modified learning vector quantization (LVQ) algorithm that enhances their discriminatory characteristics. A recognition rate of 69.0 percent is achieved on a highly cluttered test set.

  18. Logarithmic Adaptive Quantization Projection for Audio Watermarking

    NASA Astrophysics Data System (ADS)

    Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang

    In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermarking embedding procedure, which leads to poor fidelity. Moreover, the conventional methods are sensitive to value-metric scaling attack. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to some conventional quantization methods with a perceptual model, the LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by the difference of the perceptual model used in the embedding and decoding procedure. Experimental results show that the proposed watermarking scheme keeps a better fidelity and is robust against the common signal processing attack. More importantly, the proposed scheme is invariant to value-metric scaling attack.

  19. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of internet technology, large-scale integrated circuits, and computer technology, digital image processing technology has developed greatly. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, offering a higher compression ratio and a simple image-decoding algorithm, and has therefore been widely used in many practical fields. This paper efficiently combines the wavelet analysis method with the En vector quantization encoder and tests it on standard images. The experimental results show a significant improvement in PSNR compared with the LBG algorithm.

  20. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  1. Design and Performance of Tree-Structured Vector Quantizers.

    ERIC Educational Resources Information Center

    Lin, Jianhua; Storer, James A.

    1994-01-01

    Describes the design of optimal tree-structured vector quantizers that minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time. Since the optimal design problem is intractable in most cases, the performance of a general design heuristic based on successive partitioning is analyzed.…

  2. Automatic target recognition using vector quantization and neural networks

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.

    1999-12-01

    We propose an automatic target recognition (ATR) algorithm that uses a set of dedicated vector quantizers (VQs) and multilayer perceptrons (MLPs). For each target class at a specific range of aspects, the background pixels of an input image are first removed. The extracted target area is then subdivided into several subimages. A dedicated VQ codebook is constructed for each of the resulting subimages. Using the K-means algorithm, each VQ codebook learns a set of patterns representing the local features of a particular target for a specific range of aspects. The resulting codebooks are further trained by a modified learning vector quantization algorithm, which enhances the discriminatory power of the codebooks. Each final codebook is expected to give the lowest mean squared error (MSE) for its correct target class and range of aspects. These MSEs are then input to an array of window-level MLPs (WMLPs), where each WMLP is specialized in recognizing its intended target class for a specific range of aspects. The outputs of these WMLPs are manipulated and passed to a target-level MLP, which produces the final recognition results. We trained and tested the proposed ATR algorithm on large and realistic data sets and obtained impressive results using the wavelet-based adaptive product VQ configuration.
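
The MSE-based class decision at the heart of such dedicated-codebook classifiers can be sketched as follows (a bare-bones illustration that replaces the paper's MLP stages with a simple argmin; names are hypothetical):

```python
import numpy as np

def classify_by_codebook(x, codebooks):
    """ATR-style sketch: one codebook per target class; the input is
    assigned to the class whose codebook reconstructs it with the
    lowest MSE (here a plain argmin stands in for the MLP stages)."""
    mses = [np.min(np.sum((cb - x) ** 2, axis=1)) for cb in codebooks]
    return int(np.argmin(mses))
```

The discriminative LVQ training described above sharpens this decision by pushing each codebook's codevectors away from competing classes, not just toward its own.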

  3. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  4. Block adaptive quantization of Magellan SAR data

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Johnson, William T. K.

    1989-01-01

    A report is presented on a data compression scheme that will be used to reduce the SAR data rate on the NASA Magellan mission to Venus. The spacecraft has only one scientific instrument, a radar system for imaging the surface, for altimetric profiling of the planet topography, and for measuring radiation from the planet surface. A straightforward implementation of the scientific requirements of the mission results in a data rate higher than can be accommodated by the available system bandwidth. A data-rate-reduction scheme which includes operation of the radar in burst mode and block-adaptive quantization of the SAR data is selected to satisfy the scientific requirements. Descriptions of the quantization scheme and its hardware implementation are given. Burst-mode SAR operation is also briefly discussed.
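
Block-adaptive quantization of the kind described can be sketched as follows (a generic software illustration with assumed parameters, not the Magellan hardware design): each block is normalized by its own level estimate before a coarse uniform quantizer, and the per-block scale is sent as side information.

```python
import numpy as np

def block_adaptive_quantize(samples, block=16, bits=2):
    """Block-adaptive quantization sketch: normalize each block by its
    RMS level, apply a coarse uniform quantizer, then reconstruct.
    Block size and bit depth here are illustrative, not Magellan's."""
    out = np.empty(len(samples), dtype=float)
    levels = 2 ** (bits - 1)
    for start in range(0, len(samples), block):
        b = np.asarray(samples[start:start + block], dtype=float)
        scale = float(np.sqrt(np.mean(b ** 2))) or 1.0   # RMS of the block
        q = np.clip(np.round(b / scale), -levels, levels - 1)
        out[start:start + block] = q * scale             # reconstructed samples
    return out
```

The per-block scaling is what lets a very coarse quantizer track the wide dynamic range of burst-mode SAR echoes.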

  5. Vector-quantization-based scheme for data embedding for images

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Subbalakshmi, Koduvayur P.

    2004-06-01

    Today, data hiding has become more and more important in a variety of applications, including security. Since Costa's work in the context of communication, quantization-based schemes have been proposed as one class of data hiding schemes. Most of these schemes are based on a uniform scalar quantizer, which is optimal only if the host signal is uniformly distributed. In this paper, we propose pdf-matched embedding schemes, which not only use pdf-matched quantizers but also extend them to multiple dimensions. Specifically, our contributions are as follows. We propose a pdf-matched embedding (PME) scheme that generalizes the probability distribution of the host image and then constructs a pdf-matched quantizer as the starting point. We show experimentally that the proposed pdf-matched quantizer provides better trade-offs between the distortion caused by embedding, robustness to attacks, and embedding capacity. We extend our algorithm to embed a vector of bits in a host signal vector, and show by experiment that our scheme can approach the data hiding capacity more closely by embedding larger-dimension bit vectors in larger-dimension VQs. Two enhancements to our method are proposed, vector flipping and distortion compensation (DC-PME), which serve to further decrease the embedding distortion. For the 1-D case, the PME scheme shows a 1 dB improvement over the QIM method in the robustness-distortion sense, DC-PME is 1 dB better than DC-QIM, and the 4-D vector-quantizer-based PME scheme performs about 3 dB better than the 1-D PME.

  6. Optimal Pruning for Tree-Structured Vector Quantization.

    ERIC Educational Resources Information Center

    Lin, Jianhua; And Others

    1992-01-01

    Analyzes the computational complexity of optimal binary tree pruning for tree-structured vector quantization. Topics discussed include the combinatorial nature of the optimization problem; the complexity of optimal tree pruning; and finding a minimal size pruned tree. (11 references) (LRW)

  7. Hierarchical Vector Quantization with Application to Speech Waveform Coding.

    NASA Astrophysics Data System (ADS)

    Shoham, Yair

    Digital voice communication has long been of great engineering concern due to the vital role of voice communication in human society. Recently, a new and theoretically powerful coding technique, Vector Quantization (VQ), has been used in enhancing existing speech coders and in developing new coding algorithms. VQ operates on a vector of source samples as an elementary unit. The basic coding operation is that of pattern matching, where a finite set of codevectors (the codebook) is searched for the best approximation to the input source vector. However, the applicability of VQ to speech coding is limited by the computational complexity associated with the codebook search, which grows exponentially with the source dimension and the coding rate. A fundamental property of speech is that it is composed of long, highly correlated segments, due to its quasi-periodicity and short-term stationarity. These features of speech cannot be directly exploited by vector quantization because of the complexity problem. Thus, to be able to efficiently exploit the redundancy in speech by a VQ-based coding scheme, a special technique is needed, capable of handling very high dimensional vectors. Such a technique, called Hierarchical Vector Quantization (HVQ), is developed in this work. HVQ is based on representing the source by a multi-level tree structure of low dimensional vectors. The bottom level of the tree contains the data vectors, which are subvectors of the main high dimensional input. Vectors at higher levels contain suitably defined signal parameters which are extracted from the lower levels. These parameters, called features, are used as side information in coding the data vectors. This technique partially exploits the correlation and structure of the main input vector while actually performing low dimensional, low complexity vector quantization. In this work, HVQ is used in quantizing the coefficients of a specially structured transform coder. This coder employs variable

  8. Error-resilient pyramid vector quantization for image compression.

    PubMed

    Hung, A C; Tsern, E K; Meng, T H

    1998-01-01

    Pyramid vector quantization (PVQ) uses the lattice points of a pyramidal shape in multidimensional space as the quantizer codebook. It is a fixed-rate quantization technique that can be used for the compression of Laplacian-like sources arising from transform and subband image coding, where its performance approaches the optimal entropy-coded scalar quantizer without the necessity of variable length codes. In this paper, we investigate the use of PVQ for compressed image transmission over noisy channels, where the fixed-rate quantization reduces the susceptibility to bit-error corruption. We propose a new method of deriving the indices of the lattice points of the multidimensional pyramid and describe how these techniques can also improve the channel noise immunity of general symmetric lattice quantizers. Our new indexing scheme improves channel robustness by up to 3 dB over previous indexing methods, and can be performed with similar computational cost. The final fixed-rate coding algorithm surpasses the performance of typical Joint Photographic Experts Group (JPEG) implementations and exhibits much greater error resilience.
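    The PVQ codebook is the set of integer vectors of dimension L lying on the pyramid of L1 norm K, and its size determines the fixed coding rate. A quick way to compute that size is the standard recursion N(L, K) = N(L-1, K) + N(L-1, K-1) + N(L, K-1), due to Fischer; the sketch below is illustrative and not the indexing scheme proposed in the paper.

```python
# Sketch: size of the PVQ codebook, i.e. the number of integer lattice
# points on the pyramid {x in Z^L : |x_1| + ... + |x_L| = K}. A fixed-rate
# coder then needs ceil(log2(N)) bits per L-dimensional vector.
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def pyramid_points(L, K):
    if K == 0:
        return 1          # only the zero vector
    if L == 0:
        return 0          # no coordinates left but norm still positive
    # split on whether the first coordinate is zero, +/- with |x_1| = 1, etc.
    return (pyramid_points(L - 1, K)
            + pyramid_points(L - 1, K - 1)
            + pyramid_points(L, K - 1))

N = pyramid_points(16, 32)
print(N, ceil(log2(N)))   # codebook size and bits needed at fixed rate
```

    For example, in dimension 2 with K = 1 the pyramid holds the four points (±1, 0) and (0, ±1), and the recursion returns 4.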

  9. Hierarchical vector quantizers with table-lookup encoders

    NASA Astrophysics Data System (ADS)

    Chang, P. C.; Gray, R. M.; May, J.

    This paper presents a technique for the design of vector quantizer (VQ) encoders implemented by table lookups rather than by a minimum distortion search. In a table lookup encoder, input vectors to the encoder are used directly as addresses in code tables to choose the channel symbol codewords. In order to preserve manageable table sizes for large dimension VQs, hierarchical structures are used to quantize the signal successively in stages. The encoder of a hierarchical VQ (HVQ) consists of several stages, each stage being a VQ implemented by a lookup table. Since both the encoder and the decoder are implemented by table lookups, there are no arithmetic computations required in the final VQ implementation. Preliminary simulation results are presented which demonstrate that the degradation using HVQ over full search VQ is less than 1 dB for speech waveform coding.
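    The staged table lookup described above can be illustrated in its simplest one-stage form: each input sample is already a small integer index, and the pair of sample indices addresses a precomputed table holding the nearest codeword index. The 4-level sample alphabet and 4-entry codebook below are illustrative, not from the paper; an HVQ cascades such tables over several stages to keep each table small.

```python
# Sketch of a one-stage table-lookup VQ encoder for 2-D input vectors whose
# samples are digitized to a small alphabet (here 4 levels each). The table
# is filled offline with a full search; at run time, encoding is a single
# lookup with no arithmetic, as in the paper's hierarchical scheme.

LEVELS = [-1.5, -0.5, 0.5, 1.5]                  # sample reproduction levels
CODEBOOK = [(-1.0, -1.0), (1.0, 1.0), (-1.0, 1.0), (1.0, -1.0)]

def nearest(x, codebook):
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))

# Offline: precompute the table over all (i, j) sample-index pairs.
TABLE = [[nearest((LEVELS[i], LEVELS[j]), CODEBOOK)
          for j in range(len(LEVELS))] for i in range(len(LEVELS))]

def encode(i, j):
    """Run-time encoding: pure table lookup, no distortion computation."""
    return TABLE[i][j]

print(encode(3, 3))  # samples (1.5, 1.5) map to codeword (1.0, 1.0)
```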

  10. Indexing and retrieval of color images using vector quantization

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Huang, Changan

    1999-10-01

    Image and video indexing is becoming popular with the increasing volumes of visual information being stored and transmitted in various multimedia applications. An important focus of the upcoming MPEG-7 standard is on indexing and retrieval of multimedia data. The visual information can be indexed using spatial (color, texture, shape, sketch, etc.) and temporal (motion, camera operations, etc.) features. Since multimedia data is likely to be stored in compressed form, indexing the information in the compressed domain entails savings in compute time and storage space. In this paper, we present a novel indexing and retrieval technique using vector quantization of color images. Color is an important feature for indexing the visual information, and several color-based indexing schemes have been reported in the recent literature. Vector Quantization (VQ) is a popular compression technique for low-power applications. Indexing the visual information based on VQ features such as the luminance codebook and labels has also been recently presented in the literature. Previous VQ-based indexing techniques describe the entire image content by modeling the histogram of the image without taking into account the location of colors, which may result in unsatisfactory retrieval. We propose to incorporate spatial information in the content representation in the VQ-compressed domain. We employ the luminance and chrominance codebooks trained and generated from wavelet-vector-quantized (WVQ) images, in which the images are first decomposed using the wavelet transform, followed by vector quantization of the transform coefficients. The labels and the usage maps corresponding to the utilization pattern of codebooks for the individual images serve as indices to the associated color information contained in the images. Hence, the VQ compression parameters serve the purpose of indexing, resulting in joint compression and indexing of the color information. Our simulations indicate superior indexing and

  11. Robust image hashing based on random Gabor filtering and dithered lattice vector quantization.

    PubMed

    Li, Yuenan; Lu, Zheming; Zhu, Ce; Niu, Xiamu

    2012-04-01

    In this paper, we propose a robust hash function based on random Gabor filtering and dithered lattice vector quantization (LVQ). In order to enhance the robustness against rotation manipulations, the conventional Gabor filter is adapted to be rotation invariant, and the rotation-invariant filter is randomized to facilitate secure feature extraction. In particular, a novel dithered-LVQ-based quantization scheme is proposed for robust hashing. The scheme is well suited to robust hashing, with several desirable features including a better tradeoff between robustness and discrimination, higher randomness, and secrecy, which are validated by analytical and experimental results. The performance of the proposed hashing algorithm is evaluated over a test image database under various content-preserving manipulations. The proposed hashing algorithm shows superior robustness and discrimination performance compared with other state-of-the-art algorithms, particularly in its robustness against rotations (of large degrees).
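    Dithered quantization can be sketched in its simplest setting, the scalar lattice of integers scaled by a step Δ; the paper's scheme uses higher-dimensional lattices, but the principle is the same: a secret pseudo-random dither is added before snapping to the lattice, randomizing the quantizer cells. The seed, step, and feature value below are illustrative.

```python
# Sketch of dithered quantization on the simplest lattice (Z scaled by a
# step delta). In a hashing context the dither is derived from a secret
# key, so an attacker cannot predict the cell boundaries.
import random

def dithered_quantize(x, delta, dither):
    """Map x to the index of its dithered quantization cell."""
    return round((x + dither) / delta)

random.seed(42)                      # stands in for a key-dependent seed
dither = random.uniform(0, 1.0)      # per-feature dither, kept secret
feature = 3.37
print(dithered_quantize(feature, delta=1.0, dither=dither))
```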

  12. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
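    The entropy-constrained selection rule underlying ECVQ and EC-RVQ design replaces the minimum-distortion choice with a Lagrangian cost, distortion plus λ times codeword length. A minimal sketch with illustrative codebook and probabilities:

```python
# Entropy-constrained encoding rule: pick the codeword minimizing
# J = d(x, c_i) + lambda * len_i, where len_i = -log2(p_i) is the ideal
# variable-length codeword length. With lambda = 0 this reduces to the
# usual minimum-distortion rule; larger lambda trades distortion for rate.
from math import log2

def ec_encode(x, codebook, probs, lam):
    costs = []
    for c, p in zip(codebook, probs):
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        costs.append(d + lam * (-log2(p)))
    return min(range(len(costs)), key=costs.__getitem__)

codebook = [(0.0, 0.0), (1.0, 1.0)]
probs = [0.9, 0.1]          # the frequent codeword gets a short code
x = (0.6, 0.6)
print(ec_encode(x, codebook, probs, lam=0.0),   # pure distortion: picks 1
      ec_encode(x, codebook, probs, lam=1.0))   # rate penalty: picks 0
```

    Sweeping λ traces out the operational distortion-rate curve of the quantizer.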

  13. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  15. Interframe hierarchical vector quantization using hashing-based reorganized codebook

    NASA Astrophysics Data System (ADS)

    Choo, Chang Y.; Cheng, Che H.; Nasrabadi, Nasser M.

    1995-12-01

    Real-time multimedia communication over the PSTN (Public Switched Telephone Network) or a wireless channel requires video signals to be encoded at bit rates well below 64 kbits/second. Most current work on such very low bit rate video coding is based on the H.261 or H.263 schemes. The H.263 encoding scheme, for example, consists mainly of motion estimation and compensation, the discrete cosine transform, and run-length and variable/fixed length coding. Vector quantization (VQ) is an efficient alternative scheme for coding at very low bit rates. One such VQ coder applied to video coding is interframe hierarchical vector quantization (IHVQ). One problem of IHVQ, and of VQ in general, is the computational complexity of the codebook search. A number of techniques have been proposed to reduce the search time, including tree-structured VQ, finite-state VQ, cache VQ, and hashing-based codebook reorganization. In this paper, we present an IHVQ coder with a hashing-based scheme to reorganize the codebook so that codebook search time, and thus encoding time, can be significantly reduced. We applied the algorithm to the same test environment as in H.263 and evaluated its coding performance. The performance of the proposed scheme is significantly better than that of IHVQ without a hashed codebook. Moreover, the performance of the proposed scheme was comparable to, and often better than, that of H.263, due mainly to the hashing-based reorganized codebook.

  16. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility. PMID:15971780

  17. Round Randomized Learning Vector Quantization for Brain Tumor Imaging.

    PubMed

    Sheikh Abdullah, Siti Norul Huda; Bohani, Farah Aqilah; Nayef, Baher H; Sahran, Shahnorbanun; Al Akash, Omar; Iqbal Hussain, Rizuana; Ismail, Fuad

    2016-01-01

    Brain magnetic resonance imaging (MRI) classification into normal and abnormal is a critical and challenging task. Owing to that, several medical imaging classification techniques have been devised, among which Learning Vector Quantization (LVQ) is one of the most promising. The main goal of this paper is to enhance the performance of the LVQ technique in order to achieve higher detection accuracy for brain tumors in MRIs. The classical way of selecting the winning code vector in LVQ is to measure the distance between the input vector and the codebook vectors using the Euclidean distance function. In order to improve the winner selection technique, a round-off function is employed along with the Euclidean distance function. Moreover, in competitive learning classifiers, the fitted model is highly dependent on the class distribution. Therefore, this paper proposes a multiresampling technique through which a better class distribution can be achieved. This multiresampling is executed by using random selection via preclassification. The test data samples used are brain tumor magnetic resonance images collected from the Universiti Kebangsaan Malaysia Medical Center and UCI benchmark data sets. Comparative studies with LVQ1, Multipass LVQ, Hierarchical LVQ, Multilayer Perceptron, and Radial Basis Function showed that the proposed methods give promising results. PMID:27516807
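    The classical LVQ1 winner selection and update referenced above can be sketched as follows: the winning prototype (nearest by Euclidean distance) moves toward the training sample when the class labels agree, and away otherwise. The two-prototype codebook, labels, and learning rate are illustrative.

```python
# Sketch of one LVQ1 training step: select the winner by (squared)
# Euclidean distance, then attract or repel it depending on label agreement.
def lvq1_step(sample, label, codebook, classes, lr=0.1):
    win = min(range(len(codebook)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(sample, codebook[i])))
    sign = 1.0 if classes[win] == label else -1.0
    codebook[win] = [w + sign * lr * (s - w) for s, w in zip(sample, codebook[win])]
    return win

codebook = [[0.0, 0.0], [2.0, 2.0]]   # one prototype per class (illustrative)
classes = ["normal", "abnormal"]
w = lvq1_step([0.4, 0.2], "normal", codebook, classes)
print(w, codebook[w])   # winner 0 moves toward the matching-class sample
```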

  19. Quad-tree product vector quantization of images

    NASA Astrophysics Data System (ADS)

    Chiu, Chung-Yen; Baker, Richard L.

    1989-06-01

    Variable rate image coding schemes are an efficient way to achieve low bit rates while maintaining acceptable image quality. This paper describes several ways to design variable rate product vector quantizers (VQ) which use a quad-tree data structure to communicate the VQ's block size. The first is a direct encoding method which uses VQs having previously specified rates. The second uses a threshold decision rule together with a method to compute the threshold to keep average distortion below a given level. This computation is based on the relationship between the quantizer performance function and the source variance. The third design uses a new algorithm to determine stepwise optimum VQ codebook rates to minimize rate while limiting distortion. Quad-trees are used in all cases to communicate block sizes to the receiver. Simulations show that these variable rate VQs encode over 70 percent of the Lena image at a very low rate while maintaining good fidelity. The proposed schemes also preserve edge fidelity, even at low rates.
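    The threshold-based decomposition in the second design above can be sketched as a recursive split decision; a simple distortion proxy (block variance), the threshold, and the block contents below are illustrative.

```python
# Sketch of threshold-driven quad-tree decomposition: a square block (with
# even side length) is coded as one unit if its variance is below a
# threshold, otherwise split into four sub-blocks, recursively down to a
# minimum size. The resulting tree of split/leaf decisions is what the
# quad-tree side information communicates to the receiver.
def variance(block):
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum((v - mean) ** 2 for row in block for v in row) / n

def quadtree(block, threshold, min_size=2):
    size = len(block)
    if size <= min_size or variance(block) <= threshold:
        return ("leaf", size)                     # coded by one VQ codeword
    h = size // 2
    quads = [[row[j:j + h] for row in block[i:i + h]]
             for i in (0, h) for j in (0, h)]
    return ("split", [quadtree(q, threshold, min_size) for q in quads])

flat = [[10] * 4 for _ in range(4)]               # uniform block: no split
edge = [[0] * 4 for _ in range(2)] + [[100] * 4 for _ in range(2)]
print(quadtree(flat, threshold=5.0))              # stays a single leaf
print(quadtree(edge, threshold=5.0))              # splits at the edge
```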

  20. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  1. Recursive optimal pruning with applications to tree structured vector quantizers.

    PubMed

    Kiang, S Z; Baker, R L; Sullivan, G J; Chiu, C Y

    1992-01-01

    A pruning algorithm of P.A. Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  2. Study on adaptive compressed sensing & reconstruction of quantized speech signals

    NASA Astrophysics Data System (ADS)

    Yunyun, Ji; Zhen, Yang

    2012-12-01

    Compressed sensing (CS) has become a research focus in recent years owing to its simultaneous sampling and compression of sparse signals. Speech signals can be considered approximately sparse or compressible in some domains because of their natural characteristics, so there is great promise in applying compressed sensing to speech. This paper addresses three aspects. Firstly, the sparsity and sparsifying matrix for speech signals are analyzed, and an adaptive sparsifying matrix based on the long-term prediction of voiced speech signals is constructed. Secondly, a CS matrix called the two-block diagonal (TBD) matrix is constructed for speech signals based on existing block diagonal matrix theory; its performance is found to be empirically superior to that of the dense Gaussian random matrix when the sparsifying matrix is the DCT basis. Finally, we consider the quantization effect on the projections. Two corollaries about the impact of adaptive and nonadaptive quantization on reconstruction performance with two different matrices, the TBD matrix and the dense Gaussian random matrix, are derived. We find that adaptive quantization and the TBD matrix are two effective ways to mitigate the quantization effect on the reconstruction of speech signals in the framework of CS.

  3. Reducing and filtering point clouds with enhanced vector quantization.

    PubMed

    Ferrari, Stefano; Ferrigno, Giancarlo; Piuri, Vincenzo; Borghese, N Alberto

    2007-01-01

    Modern scanners are able to deliver huge quantities of three-dimensional (3-D) data points sampled on an object's surface in a short time. These data have to be filtered and their cardinality reduced to come up with a mesh manageable at interactive rates. We introduce here a novel procedure to accomplish these two tasks, which is based on an optimized version of soft vector quantization (VQ). The resulting technique has been termed enhanced vector quantization (EVQ) since it introduces several improvements with respect to classical soft VQ approaches. Those approaches rely on computationally expensive iterative optimization; local computation is introduced here, by means of an adequate partitioning of the data space called a hyperbox (HB), to reduce the computational time so that it is linear in the number of data points N, saving more than 80% of time in real applications. Moreover, the algorithm can be fully parallelized, thus leading to an implementation that is sublinear in N. The voxel side and the other parameters are automatically determined from the data distribution on the basis of Zador's criterion. This makes the algorithm completely automatic. Because the only parameter to be specified is the compression rate, the procedure is suitable even for nontrained users. Results obtained in reconstructing faces of both humans and puppets as well as artifacts from point clouds publicly available on the web are reported and discussed, in comparison with other methods available in the literature. EVQ has been conceived as a general procedure, suited for VQ applications with large data sets whose data space has relatively low dimensionality.

  4. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    NASA Astrophysics Data System (ADS)

    Demirciler, Kemal; Ortega, Antonio

    2005-12-01

    This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
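    The Gibbs soft assignment that the reduced-complexity measures approximate can be sketched as follows: each point associates with every codevector with probability proportional to exp(-d/T), nearly uniform at high temperature and hardening into the nearest-neighbor rule as T approaches zero. The codebook, data point, and temperatures below are illustrative.

```python
# Sketch of the Gibbs (soft) assignment at the heart of deterministic
# annealing. The reduced-complexity variants in the paper replace this
# exponential with cheaper approximating distributions.
from math import exp

def gibbs_assignment(x, codebook, T):
    d = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in codebook]
    w = [exp(-di / T) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

codebook = [(0.0, 0.0), (1.0, 1.0)]
x = (0.2, 0.2)
print(gibbs_assignment(x, codebook, T=100.0))  # ~uniform at high temperature
print(gibbs_assignment(x, codebook, T=0.01))   # ~hard assignment to codeword 0
```

    Annealing lowers T gradually, letting the codebook reorganize globally before assignments freeze, which is how DA evades poor local minima.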

  5. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments, subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results have demonstrated that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performances with smaller memory usages.

  6. U_h(g)-invariant quantization of coadjoint orbits and vector bundles over them

    NASA Astrophysics Data System (ADS)

    Donin, Joseph

    2001-04-01

    Let M be a coadjoint semisimple orbit of a simple Lie group G, and let U_h(g) be a quantum group corresponding to G. We construct a universal family of U_h(g)-invariant quantizations of the sheaf of functions on M and describe all such quantizations. We also describe all two-parameter U_h(g)-invariant quantizations on M, which can be considered as U_h(g)-invariant quantizations of the Kirillov-Kostant-Souriau (KKS) Poisson bracket on M. We also consider how those quantizations relate to the natural polarizations of M with respect to the KKS bracket. Using polarizations, we quantize the sheaves of sections of vector bundles on M as one- and two-sided U_h(g)-invariant modules over a quantized function sheaf.

  7. Novel hybrid classified vector quantization using discrete cosine transform for image compression

    NASA Astrophysics Data System (ADS)

    Al-Fayadh, Ali; Hussain, Abir Jaafar; Lisboa, Paulo; Al-Jumeily, Dhiya

    2009-04-01

    We present a novel image compression technique using a classified vector quantizer and singular value decomposition for the efficient representation of still images. The proposed method is called hybrid classified vector quantization. It involves a simple but efficient classifier-based gradient method in the spatial domain, which employs only one threshold to determine the class of the input image block, and uses three AC coefficients of the discrete cosine transform to determine the orientation of the block without employing any threshold. The proposed technique is benchmarked against standard vector quantizers generated using the k-means algorithm, standard classified vector quantizer schemes, and JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct good visual quality images with a higher peak signal-to-noise ratio than the benchmarked techniques, or be competitive with them.

  8. Optimisation of Block-Adaptive Quantization for SAR Raw Data

    NASA Astrophysics Data System (ADS)

    Parraga Niebla, C.; Krieger, G.

    In SAR systems using a satellite platform, the amount of raw data to be transmitted to the ground for processing is huge, and effort has to be spent to reduce it. One technique that can be applied here is block-adaptive quantization. For SAR systems, the raw data set is organised as a two-dimensional complex array (in-phase and quadrature) whose axes correspond to the range and azimuth of the SAR image, normally using 8-bit coding per pixel, which generates a large amount of data to be transmitted and processed. In the case of satellites with a store-and-forward function, data storage becomes a problem since the buffer capacity and downlink bandwidth are limited. Therefore, there is a need to reduce the raw data set to be transmitted. One approach to solve this problem is to reduce the number of levels for amplitude coding. The Block-Adaptive Quantization algorithm consists of (i) dividing the data set into blocks and (ii) adapting the quantization threshold levels and reconstruction values to the statistics of the signal within each block in order to better fit the dynamic range, thereby reducing the required number of bits for each block. Assuming non-uniform quantization, knowledge of the SAR raw data statistical properties (which can be assumed to be complex Gaussian distributed) can be applied to optimise the threshold values and reconstruction levels to the probability density function (pdf) of the signal. Like every lossy compression technique, the Block-Adaptive Quantization algorithm loses information as the number of bits is reduced. The effect of this information loss is investigated in detail in this paper to find the right balance between compression rate and information loss, in order to keep the processing quality for different remote sensing applications (SAR processing, interferometry, polarimetry) at a sufficient level. Furthermore, the selection of an optimum block size to be treated as statistically stationary is an issue for systematic
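    The per-block adaptation described above can be sketched minimally, assuming a plain uniform 2-bit quantizer on the normalized samples rather than the Gaussian-optimized (Lloyd-Max) levels an operational BAQ would use; the block values and step size are illustrative.

```python
# Sketch of block-adaptive quantization: estimate each block's standard
# deviation (sent as side information), normalize the samples by it, and
# apply a fixed low-bit quantizer to the roughly unit-variance result.
from math import sqrt, floor

def baq_block(samples, step=1.0):
    sigma = sqrt(sum(s * s for s in samples) / len(samples)) or 1.0
    # 2-bit uniform quantization of the normalized samples: indices 0..3
    idx = [min(3, max(0, floor((s / sigma) / step) + 2)) for s in samples]
    recon = [((i - 2) + 0.5) * step * sigma for i in idx]  # mid-cell decode
    return sigma, idx, recon

sigma, idx, recon = baq_block([3.0, -1.0, 0.5, -2.5])
print(idx)   # 2-bit indices transmitted along with sigma
```

    Because sigma is re-estimated per block, the same four threshold cells track the local dynamic range instead of the global one, which is the source of BAQ's rate savings.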

  9. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization while maintaining a reasonably high peak signal-to-noise ratio (PSNR), we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results confirm the accuracy of our models. PMID:18262939

  10. Vector Quantization of Harmonic Magnitudes in Speech Coding Applications—A Survey and New Technique

    NASA Astrophysics Data System (ADS)

    Chu, Wai C.

    2004-12-01

    A harmonic coder extracts the harmonic components of a signal and represents them efficiently using a few parameters. Harmonic coding has proven quite successful, and several standardized speech and audio coders are based on it. One of the key issues in harmonic coder design is the quantization of harmonic magnitudes, for which many proposals have appeared in the literature. The objective of this paper is to provide a survey of the various techniques that have appeared in the literature for vector quantization of harmonic magnitudes, with emphasis on those adopted by the major speech coding standards; these include constant magnitude approximation, partial quantization, dimension conversion, and variable-dimension vector quantization (VDVQ). In addition, a refined VDVQ technique is proposed, and experimental data are provided to demonstrate its effectiveness.

  11. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.

  12. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    PubMed

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we make a comparative study of a new compression approach based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root mean square difference, and signal-to-noise ratio. The results show that the method based on the DWT is more efficient than the method based on the DCT.
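
    The evaluation metrics named above have standard definitions; a minimal sketch (with a synthetic test signal rather than EMG data):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference (PRD), a standard
    distortion metric in biomedical signal compression."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB; equivalent to -20*log10(PRD/100)."""
    return -20.0 * np.log10(prd(original, reconstructed) / 100.0)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 50 * t)                      # toy "signal"
x_hat = x + 0.01 * np.cos(2 * np.pi * 120 * t)      # toy reconstruction error
```

    For this toy case the error amplitude is 1% of the signal's, so PRD ≈ 1% and SNR ≈ 40 dB; the compression factor is simply (original bits)/(compressed bits).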

  13. Cascade of FISDW Phases: Wave Vector Quantization and its Consequences

    NASA Astrophysics Data System (ADS)

    Héritier, M.

    We discuss the formation of field-induced spin-density-wave phases in the organic conductors (TMTSF)2X in terms of the so-called quantized nesting model (QNM), suggested by Héritier, Montambaux, and Lederer on the basis of the Gor'kov-Lebed theory. The QNM, developed by Lebed, Maki et al., Yamaji, Poilblanc et al., and Yakovenko et al., is able to account for the experimentally observed three-dimensional quantum Hall effect.

  14. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s.sub.n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s.sub.n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s.sub.n from the receiver codebook vector selected by the transmitted vector index.

  15. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
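
    The core idea of entropy-constrained selection can be sketched with a toy encoder that minimizes a Lagrangian cost J = d + λ·r instead of pure distortion (the codebook, code lengths, and λ below are made-up values, not from the paper):

```python
import numpy as np

def ec_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained codeword selection: pick the codeword that
    minimizes J = ||x - c_i||^2 + lambda * len_i, where len_i is the
    entropy-code length of index i in bits."""
    costs = np.sum((codebook - x) ** 2, axis=1) + lam * code_lengths
    return int(np.argmin(costs))

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.96, 1.04]])
lengths = np.array([1.0, 2.0, 5.0])     # short, medium and long codes
x = np.array([0.95, 1.05])

nearest = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
chosen = ec_encode(x, codebook, lengths, lam=0.1)
```

    Here the plain nearest neighbour is index 2, but its long code makes index 1 cheaper overall; sweeping λ traces out the rate-distortion trade-off that the entropy constraint enforces.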

  16. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is designed using visual masking by luminance and contrast, together with an error-pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
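
    The mechanics of quantizing DCT coefficients by a matrix can be shown with a JPEG-style toy example (the 2 × 2 block and matrix entries are illustrative, not the patent's visually optimized values):

```python
import numpy as np

def quantize_dct_block(coeffs, qmatrix):
    """Divide each DCT coefficient by the matching quantization-matrix
    entry and round; larger entries coarsen or zero out coefficients
    judged less visible."""
    return np.round(coeffs / qmatrix).astype(int)

def dequantize_dct_block(q, qmatrix):
    """Decoder side: multiply the integer indices back by the matrix."""
    return q * qmatrix

coeffs = np.array([[240.0, 18.0], [-12.0, 3.0]])   # toy 2x2 "DCT block"
qmatrix = np.array([[16.0, 24.0], [24.0, 40.0]])
q = quantize_dct_block(coeffs, qmatrix)
rec = dequantize_dct_block(q, qmatrix)
```

    An image-adaptive scheme like the one above chooses the qmatrix entries per image so that the rounding errors stay just below visibility at the target bit rate.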

  17. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding in which we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus requires a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most state-of-the-art coders in the literature.
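
    The backward-adaptation principle (no side information, because the adaptation is driven only by decoder-visible outputs) can be sketched with a scalar quantizer; the step-update rule and its constants here are illustrative assumptions, not the paper's classifier-based scheme:

```python
import numpy as np

def backward_adaptive_quantize(x, step0=0.5, beta=0.95, target=1.5):
    """Uniform scalar quantizer whose step size is updated from the
    quantized outputs only, so an identical decoder stays in sync
    without any transmitted parameters."""
    step = step0
    indices, rec = [], []
    for v in x:
        q = int(np.round(v / step))
        indices.append(q)
        rec.append(q * step)
        # grow the step when large reconstructions occur, shrink otherwise
        step = max(1e-6, beta * step + (1 - beta) * abs(q * step) / target)
    return np.array(indices), np.array(rec)

rng = np.random.default_rng(5)
# the signal variance changes halfway through; the quantizer tracks it
x = np.concatenate([rng.normal(0, 0.2, 200), rng.normal(0, 2.0, 200)])
idx, rec = backward_adaptive_quantize(x)
```

    A decoder that repeats the same update rule on the received indices reproduces rec exactly, which is the defining property of backward adaptation.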

  18. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

    Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. In this paper we present a novel training algorithm for vector quantization that instead uses the convergence of the entropy sequence of each region sequence as the stopping condition. Compared with the well-known LBG algorithm, it is simple, fast, and easy to understand and control. We test the performance of the algorithm on the standard test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is only a small fraction of that of LBG.
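
    For reference, the classic LBG/generalized Lloyd iteration with the conventional distortion-based stopping rule looks as follows (a sketch; the paper's variant replaces this stopping rule with convergence of the entropy of the region sequence):

```python
import numpy as np

def lbg(data, k, tol=1e-4, max_iter=100):
    """Alternate nearest-neighbour partitioning and centroid updates
    until the distortion sequence converges (relative change < tol)."""
    codebook = data[::max(1, len(data) // k)][:k].copy()   # spread-out seeds
    prev = np.inf
    distortion = np.inf
    for _ in range(max_iter):
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        distortion = d2[np.arange(len(data)), assign].mean()
        if prev - distortion < tol * prev:     # distortion-based stop
            break
        prev = distortion
        for j in range(k):
            members = data[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, distortion

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
codebook, distortion = lbg(data, k=2)
```

    The entropy-based variant would instead track the entropy of the region (cell) populations across iterations and stop when that sequence converges.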

  19. A Hashing-Based Search Algorithm for Coding Digital Images by Vector Quantization

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau

    1989-11-01

    This paper describes a fast algorithm to compress digital images by vector quantization. Vector quantization relies heavily on searching to build codebooks and to classify blocks of pixels into code indices. The proposed algorithm uses hashing, localized search, and multi-stage search to accelerate the searching process. The average of pixel values in a block is used as the feature for hashing and intermediate screening. Experimental results using monochrome images are presented. This algorithm compares favorably with other methods with regard to processing time, and has comparable or better mean square error measurements than some of them. The major advantages of the proposed algorithm are its speed, good quality of the reconstructed images, and flexibility.
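
    The use of the block mean as a screening feature can be sketched as follows (a simplification: the paper's hash buckets and multi-stage search are collapsed here into a sorted-mean window, and all sizes are illustrative):

```python
import numpy as np

def mean_screened_search(x, codebook, sorted_means, order, radius):
    """Only codewords whose mean pixel value lies within `radius` of the
    input block's mean are examined with a full distance computation."""
    m = x.mean()
    lo = np.searchsorted(sorted_means, m - radius)
    hi = np.searchsorted(sorted_means, m + radius)
    cand = order[lo:hi] if hi > lo else order   # fall back to full search
    d2 = ((codebook[cand] - x) ** 2).sum(axis=1)
    return int(cand[d2.argmin()])

rng = np.random.default_rng(2)
codebook = rng.uniform(0, 255, (256, 16))       # flattened 4x4 blocks
order = np.argsort(codebook.mean(axis=1))
sorted_means = codebook.mean(axis=1)[order]
x = codebook[7] + rng.normal(0, 1.0, 16)        # block close to codeword 7
best = mean_screened_search(x, codebook, sorted_means, order, radius=8.0)
```

    Because the mean is a cheap 1-D feature, most of the 256 full 16-D distance computations are skipped, while a block this close to its codeword is still matched correctly.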

  20. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.

  1. Application Of Entropy-Constrained Vector Quantization To Waveform Coding Of Images

    NASA Astrophysics Data System (ADS)

    Chou, Philip A.

    1989-11-01

    An algorithm recently introduced to design vector quantizers for optimal joint performance with entropy codes is applied to waveform coding of monochrome images. Experiments show that when such entropy-constrained vector quantizers (ECVQs) are followed by optimal entropy codes, they outperform standard vector quantizers (VQs) that are also followed by optimal entropy codes, by several dB at equivalent bit rates. Two image sources are considered in these experiments: twenty-five 256 × 256 magnetic resonance (MR) brain scans produced by a General Electric Signa at Stanford University, and six 512 × 512 (luminance component) images from the standard USC image database. The MR images are blocked into 2 × 2 components, and the USC images are blocked into 4 × 4 components. Both sources are divided into training and test sequences. Under the mean squared error distortion measure, entropy-coded ECVQ shows an improvement over entropy-coded standard VQ by 3.83 dB on the MR test sequence at 1.29 bits/pixel, and by 1.70 dB on the USC test sequence at 0.40 bits/pixel. Further experiments, in which memory is added to both ECVQ and VQ systems, are in progress.

  2. Entropy-constrained vector quantization of images in the transform domain

    NASA Astrophysics Data System (ADS)

    Lee, Jong Seok; Kim, Rin C.; Lee, Sang Uk

    1994-09-01

    In this paper, two image coding techniques employing an entropy-constrained vector quantizer (ECVQ) in the transform domain are presented. In both techniques, the transformed DCT coefficients are rearranged into Mandala blocks for vector quantization. The first technique is based on an unstructured ECVQ designed separately for each Mandala block, while the second employs a structured ECVQ, i.e., an entropy-constrained lattice vector quantizer (ECLVQ). In the ECLVQ, unlike the conventional lattice VQ combined with entropy coding, we take into account both distortion and entropy in the encoding. Moreover, in order to further improve performance, the ECLVQ parameters are optimized according to the input image statistics. We also reduce the size of the variable word-length code table by decomposing each lattice codeword into its magnitude and sign information. The performances of both techniques are evaluated on real images, and it is found that the proposed techniques provide a 1-2 dB gain over the DCT-classified VQ at bit rates in the range of 0.3-0.5 bits per pixel.

  3. Multiobjective Image Color Quantization Algorithm Based on Self-Adaptive Hybrid Differential Evolution

    PubMed Central

    Xia, Xuewen

    2016-01-01

    In recent years, some researchers considered image color quantization as a single-objective problem and applied heuristic algorithms to solve it. This paper establishes a multiobjective image color quantization model with intracluster distance and intercluster separation as its objectives. Inspired by a multipopulation idea, a multiobjective image color quantization algorithm based on self-adaptive hybrid differential evolution (MoDE-CIQ) is then proposed to solve this model. Two numerical experiments on four common test images are conducted to analyze the effectiveness and competitiveness of the multiobjective model and the proposed algorithm. PMID:27738423

  4. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-search and tree-search VQ. For tree-search VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PEs) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE.
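
    The binary tree search itself (independent of the systolic hardware) can be sketched as follows; the tiny hand-built tree over four 1-D codewords is purely illustrative:

```python
import numpy as np

def bts_vq_encode(x, tree):
    """Binary tree-searched VQ: at each internal node compare the input
    against two test vectors and descend toward the nearer one, giving
    O(log N) distance computations instead of a full search over N
    codewords.  `tree` maps a node id to either ('leaf', index) or
    ('node', left_vec, right_vec, left_id, right_id)."""
    node = tree[0]
    path = []                                   # the transmitted bit path
    while node[0] == 'node':
        _, lv, rv, lid, rid = node
        go_left = np.sum((x - lv) ** 2) <= np.sum((x - rv) ** 2)
        path.append(0 if go_left else 1)
        node = tree[lid if go_left else rid]
    return node[1], path

# A hand-built depth-2 tree over the four 1-D codewords [-3, -1, 1, 3]:
tree = {
    0: ('node', np.array([-2.0]), np.array([2.0]), 1, 2),
    1: ('node', np.array([-3.0]), np.array([-1.0]), 3, 4),
    2: ('node', np.array([1.0]), np.array([3.0]), 5, 6),
    3: ('leaf', 0), 4: ('leaf', 1), 5: ('leaf', 2), 6: ('leaf', 3),
}
idx, path = bts_vq_encode(np.array([0.8]), tree)
```

    In the systolic realization described above, the per-level comparisons map onto a pipeline of identical processing elements, one per tree level.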

  5. Fast Encoding Method for Image Vector Quantization Based on Multiple Appropriate Features to Estimate Euclidean Distance

    NASA Astrophysics Data System (ADS)

    Pan, Zhibin; Kotani, Koji; Ohmi, Tadahiro

    The encoding process of finding the best-matched codeword (winner) for a given input vector in image vector quantization (VQ) is computationally very expensive because of the many k-dimensional Euclidean distance computations it requires. In order to speed up VQ encoding, it is beneficial to first estimate how large the Euclidean distance between the input vector and a candidate codeword is by using appropriate low-dimensional features of a vector, instead of computing the Euclidean distance directly. If the estimated Euclidean distance is large enough, the current candidate codeword cannot be the winner, so it can be rejected safely, avoiding the actual Euclidean distance computation. The sum (1-D), L2 norm (1-D), and partial sums (2-D) of a vector are used together as the features in this paper because they are the three simplest features. Four estimates of the Euclidean distance between the input vector and a codeword are then connected to each other by the Cauchy-Schwarz inequality to realize codeword rejection. For typical standard images with very different details (Lena, F-16, Pepper and Baboon), the number of remaining unavoidable Euclidean distance computations is clearly reduced, and the total computational cost including all overhead is also clearly lower than that of the state-of-the-art EEENNS method, while a full-search (FS) equivalent PSNR is maintained.
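
    A simplified two-feature version of this rejection idea can be sketched as follows. The two lower bounds are standard: d² ≥ (Σx − Σc)²/k follows from the Cauchy-Schwarz inequality, and d ≥ |‖x‖ − ‖c‖| from the reverse triangle inequality (the partial-sum feature and the paper's exact test ordering are omitted):

```python
import numpy as np

def fast_vq_encode(x, codebook, sums, norms):
    """Feature-based codeword rejection for VQ encoding: check two cheap
    lower bounds on the squared Euclidean distance before doing the full
    k-dimensional computation, rejecting any codeword whose bound already
    exceeds the best distance found so far."""
    k = len(x)
    sx, nx = x.sum(), np.linalg.norm(x)
    best, best_d2, tested = -1, np.inf, 0
    for i, c in enumerate(codebook):
        if (sx - sums[i]) ** 2 / k >= best_d2:   # sum-feature bound
            continue
        if (nx - norms[i]) ** 2 >= best_d2:      # norm-feature bound
            continue
        tested += 1
        d2 = np.sum((x - c) ** 2)
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best, tested

rng = np.random.default_rng(3)
codebook = rng.uniform(0, 255, (512, 16))
sums = codebook.sum(axis=1)                      # precomputed 1-D features
norms = np.linalg.norm(codebook, axis=1)
x = codebook[42] + rng.normal(0, 2.0, 16)
best, tested = fast_vq_encode(x, codebook, sums, norms)
```

    Because both estimates are true lower bounds, the result is identical to a full search; only the number of exact distance computations drops.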

  6. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to extracting Raman spectral information. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
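
    The underlying classifier can be sketched with standard LVQ1 (a generic sketch on synthetic 2-D data, not the authors' network or spectra): the nearest prototype is pulled toward a sample of its own class and pushed away otherwise.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Standard LVQ1 training: move the winning prototype toward
    same-class samples and away from other-class samples."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for xi, label in zip(X, y):
            j = int(np.argmin(((P - xi) ** 2).sum(axis=1)))
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (xi - P[j])
    return P

def lvq_classify(x, P, proto_labels):
    return proto_labels[int(np.argmin(((P - x) ** 2).sum(axis=1)))]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
proto_labels = np.array([0, 1])
P = lvq1_train(X, y, np.array([[0.5, 0.5], [1.5, 1.5]]), proto_labels)
acc = float(np.mean([lvq_classify(xi, P, proto_labels) == yi
                     for xi, yi in zip(X, y)]))
```

    In the Raman application, the 2-D points would be replaced by PCA-reduced spectral feature vectors with class labels "normal" and "tumor".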

  7. Reconfigurable VLSI implementation for learning vector quantization with on-chip learning circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyu; An, Fengwei; Chen, Lei; Jürgen Mattausch, Hans

    2016-04-01

    As an alternative to conventional single-instruction-multiple-data (SIMD) mode solutions with massive parallelism for self-organizing-map (SOM) neural network models, this paper reports a memory-based proposal for learning vector quantization (LVQ), which is a variant of SOM. A dual-mode LVQ system, enabling both on-chip learning and classification, is implemented by using a reconfigurable pipeline with parallel p-word input (R-PPPI) architecture. As a consequence of the reuse of R-PPPI for solving the most severe computational demands in both modes, power dissipation and Si-area consumption can be dramatically reduced in comparison to previous LVQ implementations. In addition, the designed LVQ ASIC has high flexibility with respect to feature-vector dimensionality and reference-vector number, allowing the execution of many different machine-learning applications. The fabricated test chip in 180 nm CMOS with parallel 8-word inputs and 102 Kbit on-chip memory achieves low power consumption of 66.38 mW (at 75 MHz and 1.8 V) and a high learning speed of (R + 1) × ⌈d/8⌉ + 10 clock cycles per d-dimensional sample vector, where R is the reference-vector number.

  8. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires only 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  9. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur edges; thus, the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system based on the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
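
    The edge-preserving pyramid idea can be demonstrated in one dimension (a sketch with a median filter in place of the linear low-pass; the crude sample-repeat expansion is an illustrative simplification):

```python
import numpy as np

def median_filter1d(x, size=3):
    """Simple edge-replicated 1-D median filter."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + size]) for i in range(len(x))])

def median_pyramid(x, levels=2):
    """Laplacian-style pyramid with a median filter as the (nonlinear)
    low-pass: edges survive into the coarse signal, so the detail
    signals stay small and localized."""
    details, cur = [], x.astype(float)
    for _ in range(levels):
        low = median_filter1d(cur)[::2]        # filter, then decimate
        up = np.repeat(low, 2)[:len(cur)]      # crude expansion
        details.append(cur - up)               # detail = exact residual
        cur = low
    return cur, details

def reconstruct(coarse, details):
    cur = coarse
    for d in reversed(details):
        cur = np.repeat(cur, 2)[:len(d)] + d
    return cur

x = np.array([0, 0, 0, 0, 10, 10, 10, 10], dtype=float)  # a step edge
coarse, details = median_pyramid(x)
x_rec = reconstruct(coarse, details)
```

    For this step edge the detail signals are exactly zero: the median low-pass passes the edge unblurred, unlike a linear filter, which is why the detail images left for VQ coding stay small and localized.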

  10. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time-consuming nearest-Euclidean-distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the neuron number in the competition layer, the weight values, and the input and output numbers are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.

  11. [The Identification of Lettuce Varieties by Using Unsupervised Possibilistic Fuzzy Learning Vector Quantization and Near Infrared Spectroscopy].

    PubMed

    Wu, Xiao-hong; Cai, Pei-qiang; Wu, Bin; Sun, Jun; Ji, Gang

    2016-03-01

    To solve the noise sensitivity problem of fuzzy learning vector quantization (FLVQ), unsupervised possibilistic fuzzy learning vector quantization (UPFLVQ) is proposed based on unsupervised possibilistic fuzzy clustering (UPFC). UPFLVQ uses the fuzzy membership values and typicality values of UPFC to update the learning rate of the learning vector quantization network and the cluster centers. UPFLVQ is an unsupervised machine learning algorithm and can classify without learning samples. UPFLVQ was applied to the identification of lettuce varieties by near infrared spectroscopy (NIS). Short-wave and long-wave near infrared spectra of three types of lettuce were collected with a FieldSpec@3 portable spectrometer in the wavelength range of 350-2 500 nm. When the near infrared spectra were compressed by principal component analysis (PCA), the first three principal components explained 97.50% of the total variance in the spectra. After fuzzy c-means (FCM) clustering was performed and its cluster centers were taken as the initial cluster centers of UPFLVQ, UPFLVQ could classify lettuce varieties using the terminal fuzzy membership values and typicality values. The experimental results showed that UPFLVQ together with NIS provides an effective method for the identification of lettuce varieties, with advantages such as fast testing, a high accuracy rate, and non-destructive operation. UPFLVQ is a clustering algorithm combining UPFC and FLVQ, and it does not require any learning samples for the identification of lettuce varieties by NIS. It is suitable for linearly separable data clustering and provides a novel method for fast and nondestructive identification of lettuce varieties. PMID:27400511

  13. Light-Front Quantization of the Vector Schwinger Model with a Photon Mass Term in Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2016-07-01

    In this talk, we study the light-front quantization (LFQ) of the vector Schwinger model with a photon mass term in Faddeevian regularization, describing two-dimensional electrodynamics with massless fermions but with a mass term for the U(1) gauge field. This theory is gauge-non-invariant (GNI). We construct a gauge-invariant (GI) theory using the Stueckelberg mechanism and then recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions (GFCs). We then study the LFQ of this new GI theory.

  14. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology

    PubMed Central

    Hipp, Jason D.; Cheng, Jerome Y.; Toner, Mehmet; Tompkins, Ronald G.; Balis, Ulysses J.

    2011-01-01

    Introduction: Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. Results: In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. 
Conclusion: With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their

  15. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences

    PubMed Central

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using an RVM-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023

  16. Recognition of Manual Actions Using Vector Quantization and Dynamic Time Warping

    NASA Astrophysics Data System (ADS)

    Martin, Marcel; Maycock, Jonathan; Schmidt, Florian Paul; Kramer, Oliver

    The recognition of manual actions, i.e., hand movements, hand postures and gestures, plays an important role in human-computer interaction, while belonging to a category of particularly difficult tasks. Using a Vicon system to capture 3D spatial data, we investigate the recognition of manual actions in tasks such as pouring a cup of milk and writing into a book. We propose recognizing sequences in multidimensional time-series by first learning a smooth quantization of the data, and then using a variant of dynamic time warping to recognize short sequences of prototypical motions in a long unknown sequence. An experimental analysis validates our approach. Short manual actions are successfully recognized and the approach is shown to be spatially invariant. We also show that the approach speeds up processing while not decreasing recognition performance.
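    The matching step can be sketched with a minimal dynamic time warping distance. The toy sequences below are assumptions standing in for quantized motion prototypes; recognition in a long unknown sequence would slide this comparison over candidate subsequences.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-stretched copy of a motion prototype still matches perfectly,
# while an unrelated sequence does not.
proto = [0, 1, 2, 3, 2, 1, 0]
slow  = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
d_same = dtw_distance(proto, slow)
d_diff = dtw_distance(proto, [3, 3, 3, 3, 3, 3, 3])
```

The warping path absorbs differences in execution speed, which is exactly why DTW suits actions performed at varying tempos.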

  17. Soft learning vector quantization and clustering algorithms based on non-Euclidean norms: single-norm algorithms.

    PubMed

    Karayiannis, Nicolaos B; Randolph-Gips, Mary M

    2005-03-01

    This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on a weighted norm to measure the distance between the feature vectors and their prototypes. The development of LVQ and clustering algorithms is based on the minimization of a reformulation function under the constraint that the generalized mean of the norm weights be constant. According to the proposed formulation, the norm weights can be computed from the data in an iterative fashion together with the prototypes. An error analysis provides some guidelines for selecting the parameter involved in the definition of the generalized mean in terms of the feature variances. The algorithms produced from this formulation are easy to implement and almost as fast as clustering algorithms relying on the Euclidean norm. An experimental evaluation on four data sets indicates that the proposed algorithms consistently outperform clustering algorithms relying on the Euclidean norm and are strong competitors to non-Euclidean algorithms that are computationally more demanding.
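    The core ingredient, a weighted norm whose weights are constrained through their mean, can be sketched as follows. The inverse-variance weight choice is an illustrative assumption, not the paper's reformulation-function update; only the weighted distance and the mean-weight constraint mirror the abstract.

```python
import numpy as np

def weighted_sq_dist(x, v, w):
    """Weighted squared norm: sum_j w_j * (x_j - v_j)**2."""
    return float(np.sum(w * (x - v) ** 2))

# Toy data with one low-variance and one high-variance feature.
X = np.array([[0.0, 0.0], [0.1, 5.0], [-0.1, -5.0], [1.0, 0.2]])

# Illustrative weight choice (an assumption, not the paper's update rule):
# inverse-variance weights rescaled so their mean is 1, mirroring the
# constraint that the generalized mean of the norm weights be constant.
w = 1.0 / X.var(axis=0)
w = w / w.mean()

proto = X.mean(axis=0)
d = [weighted_sq_dist(x, proto, w) for x in X]
```

With such weights the high-variance feature no longer dominates the distance, which is the practical motivation for replacing the plain Euclidean norm.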

  18. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  19. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
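    The vector definition used in the study, an array of pixels from the same location in each channel, and the nearest-codeword quantization step can be sketched as follows. The toy data cube and codebook are assumptions; codebook design (e.g., by a clustering procedure) is not shown.

```python
import numpy as np

def multispectral_vectors(cube):
    """Stack each pixel's values across channels into one vector.
    cube: (channels, rows, cols) -> (rows*cols, channels)."""
    ch, rows, cols = cube.shape
    return cube.reshape(ch, rows * cols).T

def quantize(vectors, codebook):
    """Replace each vector by the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
cube = rng.normal(size=(7, 8, 8))          # 7 channels, 8x8 pixels
vecs = multispectral_vectors(cube)         # 64 vectors of length 7
codebook = vecs[:4].copy()                 # toy 4-entry codebook
idx = quantize(vecs, codebook)             # one index per pixel location
```

The index stream `idx` is what a lossless entropy coder (Huffman coding in the study) would then compress further.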

  20. Adaptive support vector regression for UAV flight control.

    PubMed

    Shin, Jongho; Jin Kim, H; Kim, Youdan

    2011-01-01

    This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model.
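    A minimal illustration of the off-line identification step: fitting a support vector regressor to a known nonlinear map, as the paper's I-SVR does for the feedback-linearized inverse dynamics. The target function, kernel, and hyperparameters below are assumptions for the demo; because SVR solves a convex QP, the fit does not depend on initialization, which is the "global solution" advantage the abstract cites.

```python
import numpy as np
from sklearn.svm import SVR

# Fit SVR to a known nonlinear map standing in for inverse dynamics.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=300)   # noisy observations

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
pred = float(model.predict(np.array([[0.5]]))[0])
err = abs(pred - np.sin(1.5))               # error at an unseen input
```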

  1. Adaptive Segmentation and Feature Quantization of Sublingual Veins of Healthy Humans

    NASA Astrophysics Data System (ADS)

    Yan, Zifei; Li, Naimin

    The Sublingual Vein Diagnosis, one part of Tongue Diagnosis, plays an important role in assessing the health status of humans. This paper focuses on establishing a feature quantization framework for the inspection of sublingual veins of healthy humans, composed of two parts: the segmentation of sublingual veins of healthy humans and the feature quantization of them. First, a novel technique of sublingual vein segmentation is proposed. The Sublingual Vein Color Model, which combines Bayesian decision theory with the CIEYxy color space, is established based on a large number of labeled sublingual images. Experiments show that the proposed method performs well on the segmentation of images from healthy humans with weak color contrast between the sublingual vein and the tongue proper. Then a chromatic system in conformity with the diagnostic standard of Traditional Chinese Medicine (TCM) doctors is established to describe the chromatic feature of sublingual veins. Experimental results show that the geometrical and chromatic features quantized by the proposed framework are consistent with the diagnostic standard summarized by TCM doctors for healthy humans.

  2. Discriminative Common Spatial Pattern Sub-bands Weighting Based on Distinction Sensitive Learning Vector Quantization Method in Motor Imagery Based Brain-computer Interface

    PubMed Central

    Jamaloo, Fatemeh; Mikaeili, Mohammad

    2015-01-01

    Common spatial pattern (CSP) is a method commonly used to enhance the effects of event-related desynchronization and event-related synchronization present in multichannel electroencephalogram-based brain-computer interface (BCI) systems. In the present study, a novel CSP sub-band feature selection has been proposed based on the discriminative information of the features. In addition, a distinction-sensitive learning vector quantization based weighting of the selected features has been considered. Finally, after the classification of the weighted features using a support vector machine classifier, the performance of the suggested method has been compared with existing methods based on frequency band selection on the same BCI competition datasets. The results show that the proposed method yields superior results on the “ay” subject dataset compared against existing approaches such as sub-band CSP, filter bank CSP (FBCSP), discriminative FBCSP, and sliding window discriminative CSP. PMID:26284171

  3. Adaptive segmentation of an x-ray CT image using vector quantization

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Qian, Wei; Clarke, Laurence P.

    1997-04-01

    This paper is part of a feasibility study of using an image segmentation method to automatically identify the tumor or target boundaries in each axial slice, or to assist an expert physician in manually drawing these boundaries. A two-stage segmentation method is proposed. In the first stage, the outlying bone structure is removed from the raw CT data and the brain parenchymal area is extracted. Then a VQ-based method is applied for the segmentation of the soft tissue inside the brain area. Representative results for two sets of x-ray CT axial slice images from two patients are presented. Problems and further modifications are discussed.

  4. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM. PMID:27143958

  5. Trypanosoma cruzi: adaptation to its vectors and its hosts

    PubMed Central

    Noireau, François; Diosque, Patricio; Jansen, Ana Maria

    2009-01-01

    American trypanosomiasis is a parasitic zoonosis that occurs throughout Latin America. The etiological agent, Trypanosoma cruzi, is able to infect almost all tissues of its mammalian hosts and spreads in the environment in multifarious transmission cycles that may or may not be connected. This biological plasticity, which is probably the result of the considerable heterogeneity of the taxon, exemplifies a successful adaptation of a parasite resulting in distinct outcomes of infection and a complex epidemiological pattern. In the 1990s, most endemic countries strengthened national control programs to interrupt the transmission of this parasite to humans. However, many obstacles remain to the effective control of the disease. Current knowledge of the different components involved in the elaborate system that is American trypanosomiasis (the protozoan parasite T. cruzi, the triatomine vectors and the many reservoirs of infection), as well as of the interactions existing within the system, is still incomplete. The Triatominae probably evolved from predatory reduviids in response to the availability of a vertebrate food source. However, the basic mechanisms of adaptation of some of them to artificial ecotopes remain poorly understood. Nevertheless, these adaptations seem to be associated with behavioral plasticity, a reduction in the genetic repertoire and increasing developmental instability. PMID:19250627

  6. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
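    After expansion in the vector mode functions, the quantized Hamiltonian takes the standard oscillator form (a textbook result, stated here for orientation; the paper's specific decomposition formula is not reproduced):

```latex
\hat{H} \;=\; \sum_{\mathbf{k},s} \hbar\,\omega_{\mathbf{k}}
\left( \hat{a}^{\dagger}_{\mathbf{k},s}\,\hat{a}_{\mathbf{k},s} + \tfrac{1}{2} \right),
\qquad
\omega_{\mathbf{k}} = c\,\lvert\mathbf{k}\rvert,
\qquad
k_i = \frac{n_i \pi}{L_i},\quad n_i = 0, 1, 2, \dots
```

where the allowed wave numbers follow from the perfect-conductor boundary conditions on a rectangular cavity with side lengths L_i, and s labels the polarization.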

  7. Fourth quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2013-12-01

    In this Letter we will analyze the creation of the multiverse. We will first calculate the wave function for the multiverse using third quantization. Then we will fourth-quantize this theory. We will show that there is no single vacuum state for this theory. Thus, we can end up with a multiverse, even after starting from a vacuum state. This will be used as a possible explanation for the creation of the multiverse. We also analyze the effect of interactions in this fourth-quantized theory.

  8. Learning Vector Quantization Neural Networks Improve Accuracy of Transcranial Color-coded Duplex Sonography in Detection of Middle Cerebral Artery Spasm—Preliminary Report

    PubMed Central

    Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S.; Mariak, Zenon; Melhem, Elias R.

    2009-01-01

    To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25–50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm. PMID:18704768
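    The classifier family can be illustrated with a minimal LVQ1 training loop. The 1-D velocity-like data and the single prototype per class are assumptions for the demo, standing in for the paper's Kohonen-layer network trained on TCCS velocity data.

```python
import numpy as np

def lvq1_train(X, y, protos, labels, lr=0.1, epochs=20):
    """LVQ1: pull the winning prototype toward same-class samples,
    push it away from different-class samples."""
    P = protos.astype(float).copy()
    for _ in range(epochs):
        for x, t in zip(X, y):
            i = int(np.argmin(((P - x) ** 2).sum(axis=1)))   # winner
            sign = 1.0 if labels[i] == t else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

# Two 1-D velocity-like clusters standing in for "no spasm" vs "spasm".
X = np.array([[0.8], [1.0], [1.2], [2.8], [3.0], [3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
P = lvq1_train(X, y, protos=np.array([[1.5], [2.5]]), labels=[0, 1])
pred = [int(((P - x) ** 2).sum(axis=1).argmin()) for x in X]
```

Classification is by nearest prototype, so the trained prototypes act as a compact, interpretable decision rule.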

  9. Alice in microbes' land: adaptations and counter-adaptations of vector-borne parasitic protozoa and their hosts.

    PubMed

    Caljon, Guy; De Muylder, Géraldine; Durnez, Lies; Jennes, Wim; Vanaerschot, Manu; Dujardin, Jean-Claude

    2016-09-01

    In the present review, we aim to provide a general introduction to different facets of the arms race between pathogens and their hosts/environment, emphasizing its evolutionary aspects. We focus on vector-borne parasitic protozoa, which have to adapt to both invertebrate and vertebrate hosts. Using Leishmania, Trypanosoma and Plasmodium as main models, we review successively (i) the adaptations and counter-adaptations of parasites and their invertebrate host, (ii) the adaptations and counter-adaptations of parasites and their vertebrate host and (iii) the impact of human interventions (chemotherapy, vaccination, vector control and environmental changes) on these adaptations. We conclude by discussing the practical impact this knowledge can have on translational research and public health. PMID:27400870

  11. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which makes it compatible with the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
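    The indexing idea, quantizing each descriptor to a bit-vector and using it as a key into an inverted file of boxes, can be sketched as below. The per-dimension median thresholding is an assumed stand-in for the paper's actual binary quantization scheme.

```python
import numpy as np

def binarize(desc, thresholds):
    """Quantize a float descriptor into a compact bit string (one bit/dim)."""
    bits = (desc > thresholds).astype(int)
    return "".join(map(str, bits))

# Build a toy inverted file: bit-vector key -> list of (image_id, box_id).
rng = np.random.default_rng(2)
descs = rng.normal(size=(6, 8))            # 6 SIFT-like descriptors, 8 dims
thresholds = np.median(descs, axis=0)      # per-dimension split points
inverted = {}
for box_id, d in enumerate(descs):
    inverted.setdefault(binarize(d, thresholds), []).append(("img0", box_id))

# Query: an identical descriptor hashes to the same posting list.
query_key = binarize(descs[3], thresholds)
hits = inverted[query_key]
```

Because each posting list is addressed by a short bit string, the whole index fits in main memory, which is the efficiency argument in the abstract.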

  12. Stokes vector analysis of adaptive optics images of the retina.

    PubMed

    Song, Hongxin; Zhao, Yanming; Qi, Xiaofeng; Chui, Yuenping Toco; Burns, Stephen A

    2008-01-15

    A high-resolution Stokes vector imaging polarimeter was developed to measure the polarization properties at the cellular level in living human eyes. The application of this cellular level polarimetric technique to in vivo retinal imaging has allowed us to measure depolarization in the retina and to improve the retinal image contrast of retinal structures based on their polarization properties. PMID:18197217

  13. Use of a local cone model to predict essential CSF light adaptation behavior used in the design of luminance quantization nonlinearities

    NASA Astrophysics Data System (ADS)

    Daly, Scott; Golestaneh, S. A.

    2015-03-01

    The human visual system's luminance nonlinearity ranges continuously from square-root behavior in the very dark, gamma-like behavior in dim ambient light, cube-root in office lighting, and logarithmic for daylight ranges. Early display quantization nonlinearities were developed from bipartite luminance JND data. More advanced approaches considered spatial frequency behavior and used the Barten light-adaptive contrast sensitivity function (CSF), modeled across a range of light adaptation, to determine the luminance nonlinearity (e.g., DICOM, referred to as a GSDF (grayscale standard display function)). A recent approach for a GSDF, also referred to as an electrical-to-optical transfer function (EOTF), using that light-adaptive CSF model improves on this by tracking the CSF at its most sensitive spatial frequency, which changes with adaptation level. We explored the cone photoreceptor's contribution to the behavior of this maximum sensitivity of the CSF as a function of light adaptation, despite the CSF's variation with spatial frequency and the fact that the cone's nonlinearity is a point process. We found that parameters of a local cone model could fit the maximum sensitivity of the CSF model, across all frequencies, and are within the ranges of parameters commonly accepted for psychophysically tuned cone models. Thus, a link between the spatial frequency and luminance dimensions has been made for a key neural component. This provides a better theoretical foundation for the recently designed visual signal format using the aforementioned EOTF.

  14. Quantization noise filtering in ADPCM systems

    NASA Astrophysics Data System (ADS)

    Gibson, J. D.

    1980-08-01

    Differential pulse code modulation (DPCM) systems utilizing adaptive quantizers and fixed or adaptive predictors are effective methods for voice encoding at data rates of 9.6 to 40 kbits/s. The principal performance limitation on these systems is the presence of quantization noise in the receiver output and the predictor feedback loop. An approach to reducing the quantization noise using sequential filtering methods based on estimation theory concepts is described. Several different filter structures are presented and the efficacy of the approach is illustrated via system simulations using actual speech. Signal-to-quantization noise ratio, sound spectrograms, and subjective listening tests are used for system performance evaluations.
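    The system being filtered can be sketched with a toy DPCM loop: a one-tap predictor plus an adaptive uniform quantizer whose step size tracks the code magnitude. The predictor coefficient, step adaptation rule, and 3-bit range are illustrative assumptions, not the paper's design; the quantization noise visible in `recon` is what the proposed sequential filters aim to reduce.

```python
import numpy as np

def dpcm_encode(x, pred_coef=0.9, step0=0.1):
    """DPCM loop: one-tap predictor plus an adaptive uniform quantizer
    whose step size grows on large codes and shrinks on small ones."""
    step, x_hat = step0, 0.0
    codes, recon = [], []
    for s in x:
        pred = pred_coef * x_hat                      # predict from reconstruction
        e = s - pred                                  # prediction error
        q = int(np.clip(np.round(e / step), -3, 3))   # 3-bit code
        x_hat = pred + q * step                       # decoder-tracked reconstruction
        codes.append(q)
        recon.append(x_hat)
        step = max(step * (1.4 if abs(q) >= 2 else 0.9), 1e-3)  # adapt step
    return codes, np.array(recon)

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 5 * t)                 # toy "speech" signal
codes, recon = dpcm_encode(x)
snr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - recon) ** 2))
```

Note that the quantization error enters the predictor feedback loop through `x_hat`, which is exactly why the paper targets noise both at the receiver output and inside the loop.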

  15. Adaptation of orientation vectors of otolith-related central vestibular neurons to gravity.

    PubMed

    Eron, Julia N; Cohen, Bernard; Raphan, Theodore; Yakushin, Sergei B

    2008-09-01

    Behavioral experiments indicate that central pathways that process otolith-ocular and perceptual information have adaptive capabilities. Because polarization vectors of otolith afferents are directly related to the electro-mechanical properties of the hair cell bundle, it is unlikely that they change their direction of excitation. This indicates that the adaptation must take place in central pathways. Here we demonstrate for the first time that otolith polarization vectors of canal-otolith convergent neurons in the vestibular nuclei have adaptive capability. A total of 10 vestibular-only and vestibular-plus-saccade neurons were recorded extracellularly in two monkeys before and after they were in side-down positions for 2 h. The spatial characteristics of the otolith input were determined from the response vector orientation (RVO), which is the projection of the otolith polarization vector, onto the head horizontal plane. The RVOs had no specific orientation before animals were in side-down positions but moved toward the gravitational axis after the animals were tilted for extended periods. Vector reorientations varied from 0 to 109 degrees and were linearly related to the original deviation of the RVOs from gravity in the position of adaptation. Such reorientation of central polarization vectors could provide the basis for changes in perception and eye movements related to prolonged head tilts relative to gravity or in microgravity.

  17. Error compensation in random vector double step saccades with and without global adaptation.

    PubMed

    Zerr, Paul; Thakkar, Katharine N; Uzunbajakau, Siarhei; Van der Stigchel, Stefan

    2016-10-01

    In saccade sequences without visual feedback, endpoint errors pose a problem for subsequent saccades. Accurate error compensation has previously been demonstrated in double step saccades (DSS) and is thought to rely on a copy of the saccade motor vector. However, these studies typically use fixed target vectors on each trial, calling into question the generalizability of the findings due to the high stimulus predictability. We present a random walk DSS paradigm (random target vector amplitudes and directions) to provide a more complete, realistic and generalizable description of error compensation in saccade sequences. We regressed the vector between the endpoint of the second saccade and the endpoint of a hypothetical second saccade that does not take first saccade error into account on the ideal compensation vector. This provides a direct and complete estimation of error compensation in DSS. We observed error compensation with varying stimulus displays that was comparable to previous findings. We also employed this paradigm to extend experiments that showed accurate compensation for systematic undershoots after specific-vector saccade adaptation. Utilizing the random walk paradigm for saccade adaptation by Rolfs et al. (2010) together with our random walk DSS paradigm we now also demonstrate transfer of adaptation from reactive to memory guided saccades for global saccade adaptation. We developed a new, generalizable DSS paradigm with unpredictable stimuli and successfully employed it to verify, replicate and extend previous findings, demonstrating that endpoint errors are compensated for saccades in all directions and variable amplitudes. PMID:27543803

  18. A mariner transposon vector adapted for mutagenesis in oral streptococci

    PubMed Central

    Nilsson, Martin; Christiansen, Natalia; Høiby, Niels; Twetman, Svante; Givskov, Michael; Tolker-Nielsen, Tim

    2014-01-01

    This article describes the construction and characterization of a mariner-based transposon vector designed for use in oral streptococci, but with a potential use in other Gram-positive bacteria. The new transposon vector, termed pMN100, contains the temperature-sensitive origin of replication repATs-pWV01, a selectable kanamycin resistance gene, a Himar1 transposase gene regulated by a xylose-inducible promoter, and an erythromycin resistance gene flanked by himar inverted repeats. The pMN100 plasmid was transformed into Streptococcus mutans UA159 and transposon mutagenesis was performed via a protocol established to perform high numbers of separate transpositions despite a low frequency of transposition. The distribution of transposon inserts in 30 randomly picked mutants suggested that mariner transposon mutagenesis is unbiased in S. mutans. A generated transposon mutant library containing 5000 mutants was used in a screen to identify genes involved in the production of sucrose-dependent extracellular matrix components. Mutants with transposon inserts in genes encoding glycosyltransferases and the competence-related secretory locus were predominantly found in this screen. PMID:24753509

  19. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  20. Quantization of Algebraic Reduction

    SciTech Connect

    Śniatycki, Jędrzej

    2007-11-14

    For a Poisson algebra obtained by algebraic reduction of symmetries of a quantizable system we develop an analogue of geometric quantization based on the quantization structure of the original system.

  1. Energy-saving technology of vector controlled induction motor based on the adaptive neuro-controller

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Karandeev, D.

    2015-10-01

    The ongoing evolution of the power system towards a Smart Grid implies an important role for intelligent technologies, but poses strict requirements on their control schemes to preserve stability and controllability. This paper presents an adaptive neuro-controller for the vector control of an induction motor within the Smart Grid. The validity and effectiveness of the proposed energy-saving technology for a vector controlled induction motor based on an adaptive neuro-controller are verified by simulation results at different operating conditions over a wide speed range of the induction motor.

  2. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By successfully incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields a stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing.

  3. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
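    The threshold-to-quantization-step logic can be sketched as below. The frequency mapping f = r * 2^(-level) is from the abstract; the parabolic-in-log-frequency threshold model and its parameters a, k, f0 are illustrative placeholders (the paper's model also depends on orientation and color channel, omitted here), as is the rule that a "perceptually lossless" step equals twice the detection threshold.

```python
import math

def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of DWT level `level` on a display
    with visual resolution r pixels/degree: f = r * 2**(-level)."""
    return r * 2.0 ** (-level)

def detection_threshold(f, a=0.5, k=0.5, f0=4.0):
    """Threshold rising away from a most-sensitive frequency f0 (parabola in
    log-log coordinates); a, k, f0 are illustrative, not fitted values."""
    return a * 10.0 ** (k * (math.log10(f) - math.log10(f0)) ** 2)

# A "perceptually lossless" uniform quantization step keeps the maximum
# quantization error (step / 2) at or below the detection threshold.
r = 32.0   # display visual resolution in pixels/degree
steps = {level: 2.0 * detection_threshold(wavelet_spatial_frequency(r, level))
         for level in range(1, 5)}
```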

  4. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  5. Regularized estimate of the weight vector of an adaptive antenna array

    NASA Astrophysics Data System (ADS)

    Ermolayev, V. T.; Flaksman, A. G.; Sorokin, I. S.

    2013-02-01

    We consider an adaptive antenna array (AAA) with the maximum signal-to-noise ratio (SNR) at the output. The antenna configuration is assumed to be arbitrary. A rigorous analytical solution for the optimal weight vector of the AAA is obtained if the input process is defined by the noise correlation matrix and the useful-signal vector. On the basis of this solution, the regularized estimate of the weight vector is derived by using a limited number of input noise samples, which can be either greater or smaller than the number of array elements. Computer simulation results of adaptive signal processing indicate small losses in the SNR compared with the optimal SNR value. It is shown that the computing complexity of the proposed estimate is proportional to the number of noise samples, the number of external noise sources, and the squared number of array elements.
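    The estimation problem above can be sketched numerically. The max-SNR weight vector w = R^(-1) s is standard; the diagonal-loading regularization shown here is an illustrative stand-in for the paper's regularized estimate (the loading level `eps` and the toy steering vector are assumptions), chosen because the sample covariance from fewer snapshots than array elements is rank-deficient and cannot be inverted directly.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                            # number of array elements
K = 3                            # noise snapshots, here fewer than M
s = np.ones(M, dtype=complex)    # useful-signal (steering) vector, toy choice

# Sample noise covariance from K snapshots; it is rank-deficient when K < M,
# so the unregularized inverse does not exist.
X = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
R_hat = X @ X.conj().T / K

# Regularized (diagonally loaded) estimate of the max-SNR weight vector
# w = (R_hat + eps * I)^(-1) s; the loading level eps is illustrative.
eps = 0.1 * np.trace(R_hat).real / M
w = np.linalg.solve(R_hat + eps * np.eye(M), s)
```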

  6. Third quantization

    NASA Astrophysics Data System (ADS)

    Seligman, Thomas H.; Prosen, Tomaž

    2010-12-01

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. Here we concentrate on the bosonic case, where the unboundedness of the operators requires the definition of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  7. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
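    The core idea, replacing the random acceleration coefficients of standard PSO with fitness-derived ones, can be sketched as below on a toy minimization problem. The rank-based rule for c1 and c2 is an illustrative assumption (the abstract says only that the coefficients are computed from particle fitness values, not how), and `sphere` is a stand-in objective rather than an SVM parameter search.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy fitness function (lower is better); stands in for SVM validation error."""
    return float(np.sum(x ** 2))

n, dim, iters = 10, 2, 50
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    g = pbest[np.argmin(pbest_f)].copy()
    # Fitness-based acceleration coefficients (an illustrative rule, not the
    # paper's exact formula): better-ranked particles get a stronger
    # cognitive pull, worse-ranked particles a stronger social pull.
    rank = (f - f.min()) / (np.ptp(f) + 1e-12)   # 0 = best, 1 = worst
    c1 = 1.0 + (1.0 - rank)                      # cognitive coefficient in [1, 2]
    c2 = 1.0 + rank                              # social coefficient in [1, 2]
    vel = 0.7 * vel + c1[:, None] * (pbest - pos) + c2[:, None] * (g - pos)
    pos = np.clip(pos + vel, -5, 5)

best = sphere(g)
```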

  8. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  9. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
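    The numerical integration underlying streamline computation can be illustrated with a plain fourth-order Runge-Kutta integrator; this sketch deliberately omits the AMR-specific difficulties the paper addresses (cell-centered data, level-boundary discontinuities) and uses an analytic vector field so the result can be checked against the exact circular streamline.

```python
import numpy as np

def rk4_step(v, x, h):
    """One fourth-order Runge-Kutta step of dx/dt = v(x)."""
    k1 = v(x)
    k2 = v(x + 0.5 * h * k1)
    k3 = v(x + 0.5 * h * k2)
    k4 = v(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def streamline(v, x0, h=0.05, n=200):
    """Trace a streamline from seed x0 by repeated RK4 steps."""
    pts = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        pts.append(rk4_step(v, pts[-1], h))
    return np.array(pts)

# Circular vector field: exact streamlines are circles about the origin,
# so the integrator should preserve the radius to high accuracy.
circ = lambda x: np.array([-x[1], x[0]])
pts = streamline(circ, [1.0, 0.0])
radii = np.linalg.norm(pts, axis=1)
```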

  10. Stereo matching based on adaptive support-weight approach in RGB vector space.

    PubMed

    Geng, Yingnan; Zhao, Yan; Chen, Hexin

    2012-06-01

    Gradient similarity is a simple, yet powerful, data descriptor which shows robustness in stereo matching. In this paper, an RGB vector space is defined for stereo matching. Based on the adaptive support-weight approach, a matching algorithm is proposed which uses pixel gradient similarity, color similarity, and proximity in the RGB vector space to compute the corresponding support-weights and dissimilarity measurements. The experimental results are evaluated on the Middlebury stereo benchmark, showing that our algorithm outperforms other stereo matching algorithms and that the algorithm with gradient similarity can achieve better results in stereo matching. PMID:22695592
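    The adaptive support-weight aggregation can be sketched as below, following the general Yoon-Kweon form w = exp(-(d_color/gamma_c + d_position/gamma_p)); the gamma values are illustrative and the gradient-similarity term this paper adds is noted but omitted for brevity, so this is a minimal sketch rather than the paper's full cost.

```python
import numpy as np

def support_weights(patch, gamma_c=10.0, gamma_p=7.0):
    """Adaptive support-weights for a square patch, combining colour
    similarity and spatial proximity to the centre pixel (gradient
    similarity could be folded in the same way).  patch: (H, W, 3)."""
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    # Euclidean colour distance to the centre pixel in RGB vector space
    dc = np.linalg.norm(patch - patch[cy, cx], axis=2)
    yy, xx = np.mgrid[0:h, 0:w]
    dp = np.hypot(yy - cy, xx - cx)          # spatial proximity
    return np.exp(-(dc / gamma_c + dp / gamma_p))

def aggregated_cost(left_patch, right_patch):
    """Support-weighted sum of absolute colour differences between patches."""
    wl = support_weights(left_patch)
    wr = support_weights(right_patch)
    e = np.sum(np.abs(left_patch - right_patch), axis=2)   # raw dissimilarity
    return np.sum(wl * wr * e) / np.sum(wl * wr)

patch = np.full((5, 5, 3), 128.0)
same = aggregated_cost(patch, patch)   # identical patches cost nothing
```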

  11. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; De Fine Licht, J. C.; Duhem, L.; Elvira, V. D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Sehgal, R.; Shadura, O.; Wenzel, S.

    2015-05-01

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics, requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; we present here its initial results.

  12. Design of smart composite platforms for adaptive thrust vector control and adaptive laser telescope for satellite applications

    NASA Astrophysics Data System (ADS)

    Ghasemi-Nejhad, Mehrdad N.

    2013-04-01

    This paper presents the design of smart composite platforms for adaptive thrust vector control (TVC) and an adaptive laser telescope for satellite applications. To eliminate disturbances, the proposed adaptive TVC and telescope systems will be mounted on two analogous smart composite platforms with simultaneous precision positioning (pointing) and vibration suppression (stabilizing), SPPVS, with micro-radian pointing resolution, and then mounted on a satellite in two different locations. The adaptive TVC system provides SPPVS with large tip-tilt to potentially eliminate the gimbal systems. The smart composite telescope will be mounted on a smart composite platform with SPPVS and then mounted on a satellite. The laser communication is intended for the Geosynchronous orbit. The high degree of directionality increases the security of the laser communication signal (as opposed to a diffused RF signal), but also requires sophisticated subsystems for transmission and acquisition. The shorter wavelength of the optical spectrum increases the data transmission rates, but laser systems require large amounts of power, which increases the mass and complexity of the supporting systems. In addition, laser communication in the Geosynchronous orbit requires an accurate platform with SPPVS capabilities. Therefore, this work also addresses the design of an active composite platform to be used to simultaneously point and stabilize an intersatellite laser communication telescope with micro-radian pointing resolution. The telescope is a Cassegrain receiver that employs two mirrors, one concave (primary) and the other convex (secondary). The distance, as well as the horizontal and axial alignment of the mirrors, must be precisely maintained or else the optical properties of the system will be severely degraded. The alignment will also have to be maintained during thruster firings, which will require vibration suppression capabilities of the system as well. 
The innovative platform has been

  13. Evidence of local adaptation in plant virus effects on host-vector interactions.

    PubMed

    Mauck, K E; De Moraes, C M; Mescher, M C

    2014-07-01

    host and apparently maladaptive with respect to virus transmission (e.g., host plant quality for aphids was significantly improved in this instance, and aphid dispersal was reduced). Taken together, these findings provide evidence of adaptation by CMV to local hosts (including reduced infectivity and replication in novel versus native hosts) and further suggest that such adaptation may extend to effects on host-plant traits mediating interactions with aphid vectors. Thus, these results are consistent with the hypothesis that virus effects on host-vector interactions can be adaptive, and they suggest that multi-host pathogens may exhibit adaptation with respect to these and other effects on host phenotypes, perhaps especially in homogeneous monocultures.

  14. Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters

    NASA Astrophysics Data System (ADS)

    Masullo, Alessandro; Theunissen, Raf

    2016-03-01

    The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance subsequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will subsequently be compared. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuation in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
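    The baseline these adaptive schemes build on is the normalized median test of Westerweel and Scarano: each vector is compared against the median of its 8 neighbours, normalized by the median residual plus a small noise floor. The sketch below implements that fixed-neighbourhood baseline for one velocity component (the adaptive neighbourhood sizing and magnitude/angle residual of this paper are not reproduced); eps = 0.1 and threshold 2 are the commonly used values.

```python
import numpy as np

def normalized_median_test(u, eps=0.1, thresh=2.0):
    """Universal outlier detection (Westerweel & Scarano) on a 2-D scalar
    velocity-component field, using the 8 neighbours of each interior vector."""
    h, w = u.shape
    outlier = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = np.delete(u[i - 1:i + 2, j - 1:j + 2].ravel(), 4)  # 8 neighbours
            um = np.median(nb)                       # neighbourhood median
            rm = np.median(np.abs(nb - um))          # median residual
            outlier[i, j] = abs(u[i, j] - um) / (rm + eps) > thresh
    return outlier

# Uniform flow with one spurious vector: only that vector should be flagged.
u = np.ones((7, 7))
u[3, 3] = 10.0
flags = normalized_median_test(u)
```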

  15. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) deteriorates, especially in the presence of large direction-of-arrival (DOA) errors. To improve the robustness of the traditional approach, we propose a new approach that iteratively searches for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived in the form of a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer show a decreasing trend. Additionally, other array imperfections also degrade the performance of the beamformer in practice. To cover several kinds of mismatches together, adaptive flat ellipsoid models, kept as tight as possible, are introduced in our method. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008

  16. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  17. Intrusive versus domiciliated triatomines and the challenge of adapting vector control practices against Chagas disease

    PubMed Central

    Waleckx, Etienne; Gourbière, Sébastien; Dumonteil, Eric

    2015-01-01

    Chagas disease prevention remains mostly based on triatomine vector control to reduce or eliminate house infestation with these bugs. The level of adaptation of triatomines to human housing is a key part of vector competence and needs to be precisely evaluated to allow for the design of effective vector control strategies. In this review, we examine how the domiciliation/intrusion level of different triatomine species/populations has been defined and measured and discuss how these concepts may be improved for a better understanding of their ecology and evolution, as well as for the design of more effective control strategies against a large variety of triatomine species. We suggest that a major limitation of current criteria for classifying triatomines into sylvatic, intrusive, domiciliary and domestic species is that these are essentially qualitative and do not rely on quantitative variables measuring population sustainability and fitness in their different habitats. However, such assessments may be derived from further analysis and modelling of field data. Such approaches can shed new light on the domiciliation process of triatomines and may represent a key tool for decision-making and the design of vector control interventions. PMID:25993504

  18. An adaptive online learning approach for Support Vector Regression: Online-SVR-FID

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zio, Enrico

    2016-08-01

    Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and incremental and decremental learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and incremental learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
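    The drift-gated update idea, admitting a new sample into the feature-vector set only when its prediction error exceeds a tolerance, can be sketched with a minimal kernel regressor. This is an illustrative stand-in, not Online-SVR-FID itself: the class name, the ridge solve (recomputed from scratch rather than incrementally), and the single-tolerance drift criterion are all assumptions made for brevity.

```python
import numpy as np

class OnlineKernelRegressor:
    """Minimal online kernel regressor in the spirit of Online-SVR-FID
    (illustrative, not the paper's algorithm): a new sample joins the
    feature-vector set only when its prediction error exceeds a tolerance,
    keeping the model small and limiting noise fitting."""

    def __init__(self, gamma=1.0, ridge=1e-3, tol=0.1):
        self.gamma, self.ridge, self.tol = gamma, ridge, tol
        self.X, self.y = [], []

    def _k(self, A, B):
        d2 = ((np.asarray(A)[:, None] - np.asarray(B)[None, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def predict(self, x):
        if not self.X:
            return 0.0
        K = self._k(self.X, self.X) + self.ridge * np.eye(len(self.X))
        alpha = np.linalg.solve(K, np.asarray(self.y))
        return float(self._k([x], self.X) @ alpha)

    def update(self, x, y):
        if abs(self.predict(x) - y) > self.tol:   # pattern drift detected
            self.X.append(np.asarray(x, dtype=float))
            self.y.append(float(y))
            return True
        return False

model = OnlineKernelRegressor(tol=0.05)
added = [model.update(np.array([t]), np.sin(t)) for t in np.linspace(0, 3, 40)]
```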

  19. Support vector machine with adaptive composite kernel for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Wei; Du, Qian

    2015-05-01

    With the improvement of spatial resolution of hyperspectral imagery, it is more reasonable to include spatial information in classification. The resulting spectral-spatial classification outperforms the traditional hyperspectral image classification with spectral information only. Among many spectral-spatial classifiers, support vector machine with composite kernel (SVM-CK) can provide superior performance, with one kernel for spectral information and the other for spatial information. In the original SVM-CK, the spatial information is retrieved by spatial averaging of pixels in a local neighborhood, and used in classifying the central pixel. Obviously, not all the pixels in such a local neighborhood may belong to the same class. Thus, we investigate the performance of Gaussian lowpass filter and an adaptive filter with weights being assigned based on the similarity to the central pixel. The adaptive filter can significantly improve classification accuracy while the Gaussian lowpass filter is less time-consuming and less sensitive to the window size.
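    The composite kernel construction can be sketched as a weighted sum of a spectral kernel and a spatial kernel computed on filtered neighbourhood features. The RBF kernels, the weight mu, and the similarity-weighted mean standing in for the adaptive filter over a local window are illustrative assumptions; the resulting matrix could be passed to any kernel SVM as a precomputed kernel.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(spec, spat, mu=0.6, gamma=1.0):
    """Weighted-sum composite kernel (SVM-CK): mu * K_spectral + (1 - mu) * K_spatial.
    spec: per-pixel spectra; spat: per-pixel spatial features, e.g. the
    neighbourhood mean after a Gaussian lowpass or adaptive filter."""
    return mu * rbf(spec, spec, gamma) + (1 - mu) * rbf(spat, spat, gamma)

rng = np.random.default_rng(2)
spec = rng.standard_normal((6, 4))   # 6 pixels, 4 bands (toy sizes)
# Adaptive spatial feature: similarity-weighted mean of each pixel's
# "neighbours" (here the full toy set stands in for a local window).
wts = rbf(spec, spec, gamma=0.5)
spat = wts @ spec / wts.sum(1, keepdims=True)
K = composite_kernel(spec, spat)
```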

  20. Adaptive Developmental Delay in Chagas Disease Vectors: An Evolutionary Ecology Approach

    PubMed Central

    Menu, Frédéric; Ginoux, Marine; Rajon, Etienne; Lazzari, Claudio R.; Rabinovich, Jorge E.

    2010-01-01

    Background The developmental time of vector insects is important in population dynamics, evolutionary biology, epidemiology and in their responses to global climatic change. In the triatomines (Triatominae, Reduviidae), vectors of Chagas disease, evolutionary ecology concepts, which may allow for a better understanding of their biology, have not been applied. Despite delays in molting observed in some triatomine individuals, no effort was made to explain this variability. Methodology We applied four methods: (1) an e-mail survey sent to 30 researchers with experience in triatomines, (2) a statistical description of the developmental time of eleven triatomine species, (3) an analysis of the relationship between developmental time pattern and climatic inter-annual variability, (4) a mathematical optimization model of the evolution of developmental delay (diapause). Principal Findings 85.6% of responses reported prolonged developmental times in 5th instar nymphs, with 20 species identified with remarkable developmental delays. The developmental time analysis showed some degree of bi-modal pattern in the development time of the 5th instars in nine out of eleven species, but no trend between development time pattern and climatic inter-annual variability was observed. Our optimization model predicts that the developmental delays could be due to an adaptive risk-spreading diapause strategy, only if survival throughout the diapause period and the probability of random occurrence of “bad” environmental conditions are sufficiently high. Conclusions/Significance Developmental delay may not be a simple non-adaptive phenotypic plasticity in development time, and could be a form of adaptive diapause associated with a physiological mechanism related to the postponement of the initiation of reproduction, as an adaptation to environmental stochasticity through a spreading of risk (bet-hedging) strategy. We identify a series of parameters that can be measured in the field and laboratory to test
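    The qualitative prediction, that diapause pays only when diapause survival and the frequency of bad years are high enough, can be illustrated with a toy bet-hedging model maximizing expected log (geometric-mean) fitness. All parameter values and the two-environment fitness structure are hypothetical illustrations, not the paper's model.

```python
import numpy as np

def log_geometric_fitness(d, p_bad, s_diapause, w_good=2.0, w_bad=0.1):
    """Expected log fitness of a strategy putting fraction d of offspring
    into diapause (toy bet-hedging model; all parameters hypothetical).
    Non-diapausers reproduce with fitness w_good in good years and w_bad in
    bad years; diapausers survive to the next season with prob. s_diapause."""
    w_good_year = (1 - d) * w_good + d * s_diapause
    w_bad_year = (1 - d) * w_bad + d * s_diapause
    return (1 - p_bad) * np.log(w_good_year) + p_bad * np.log(w_bad_year)

d_grid = np.linspace(0, 1, 101)
# Frequent bad years plus high diapause survival favour partial diapause ...
best_risky = d_grid[np.argmax(log_geometric_fitness(d_grid, 0.4, 0.9))]
# ... while rare bad years favour no diapause at all.
best_safe = d_grid[np.argmax(log_geometric_fitness(d_grid, 0.01, 0.9))]
```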

  1. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
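    The encoding rule at the heart of entropy-constrained VQ can be sketched as follows: each input vector is assigned to the codevector minimizing distortion plus lambda times the codeword length, so larger lambda trades fidelity for rate. The tiny codebook and length values here are illustrative; in ECVQ both come out of the training algorithm.

```python
import numpy as np

def ecvq_encode(x, codebook, lengths, lam):
    """Entropy-constrained VQ encoding: pick the codevector minimising
    distortion + lambda * codeword length.  (A minimal sketch; in ECVQ the
    codebook and codeword lengths are produced by the training algorithm.)"""
    d2 = ((codebook - x) ** 2).sum(axis=1)       # squared-error distortion
    return int(np.argmin(d2 + lam * lengths))

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
lengths = np.array([1.0, 2.0, 2.0])              # e.g. -log2(codeword probability)
x = np.array([0.6, 0.6])
idx_low = ecvq_encode(x, codebook, lengths, lam=0.1)   # distortion dominates
idx_high = ecvq_encode(x, codebook, lengths, lam=1.0)  # rate penalty dominates
```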

  2. Robust vibration suppression of an adaptive circular composite plate for satellite thrust vector control

    NASA Astrophysics Data System (ADS)

    Yan, Su; Ma, Kougen; Ghasemi-Nejhad, Mehrdad N.

    2008-03-01

    In this paper, a novel application of adaptive composite structures, a University of Hawaii at Manoa (UHM) smart composite platform, is developed for the Thrust Vector Control (TVC) of satellites. The device top plate of the UHM platform is an adaptive circular composite plate (ACCP) that utilizes integrated sensors/actuators and controllers to suppress low frequency vibrations during the thruster firing as well as to potentially isolate dynamic responses from the satellite structure bus. Since the disturbance due to the satellite thruster firing can be estimated, a combined strategy of an adaptive disturbance observer (DOB) and feed-forward control is proposed for vibration suppression of the ACCP with multi-sensors and multi-actuators. Meanwhile, the effects of the DOB cut-off frequency and the relative degree of the low-pass filter on the DOB performance are investigated. Simulations and experimental results show that a higher relative degree of the low-pass filter with the required cut-off frequency enhances the DOB performance for high-order system control. Further, although an increase of the filter cut-off frequency can guarantee a sufficient stability margin, it may cause an undesirable increase of the control bandwidth. The effectiveness of the proposed adaptive DOB with feed-forward control strategy is verified through simulations and experiments using the ACCP system.

  3. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error; based on this, a modified iterated extended Kalman filter (IEKF), named the adaptive iterated extended Kalman filter (AIEKF), is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (longitude, latitude and altitude). Compared with the EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124

  4. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error; based on this, a modified iterated extended Kalman filter (IEKF), named the adaptive iterated extended Kalman filter (AIEKF), is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (longitude, latitude and altitude). Compared with the EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively.

  5. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a current research focus in intelligent robotics. The main problem of the target tracking process in mobile robots is environmental uncertainty: target state estimation is complicated by illumination changes, target shape changes, complex backgrounds, occlusion and other factors, all of which affect tracking robustness. To further improve target tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on the mixture saliency of image features, including color, brightness and motion features; common characteristics extracted during execution are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination. PMID:24363779

  6. Separable quantizations of Stäckel systems

    NASA Astrophysics Data System (ADS)

    Błaszak, Maciej; Marciniak, Krzysztof; Domański, Ziemowit

    2016-08-01

    In this article we prove that many Hamiltonian systems that cannot be separably quantized in the classical approach of Robertson and Eisenhart can be separably quantized if we extend the class of admissible quantizations through a suitable choice of a Riemann space adapted to the Poisson geometry of the system. Specifically, we prove that for every Stäckel system quadratic in momenta (defined on a 2n-dimensional Poisson manifold) whose Stäckel matrix consists of monomials in the position coordinates, there exist infinitely many quantizations, parametrized by n arbitrary functions, that turn this system into a quantum separable Stäckel system.

  7. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for the measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). The adaptation of the SVM for use in this application was achieved through dynamic loss coefficient adjustment. From there, the objective function for energy use per unit of display area, total energy consumption (TEC) / total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
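    The EWMA-based elimination of outlying data points mentioned above can be sketched roughly as follows. The deviation-tracking rule and the thresholds are illustrative assumptions, not the authors' preprocessing pipeline.

```python
def ewma_filter(samples, alpha=0.2, k=5.0):
    """Flag points whose deviation from the EWMA exceeds k times the
    EWMA of the absolute deviation; return (kept, flagged)."""
    kept, flagged = [], []
    mean = samples[0]
    dev = 0.0
    for x in samples:
        d = abs(x - mean)
        if dev > 0 and d > k * dev:
            flagged.append(x)          # outlier: do not update the averages
            continue
        kept.append(x)
        mean = alpha * x + (1 - alpha) * mean
        dev = alpha * d + (1 - alpha) * dev
    return kept, flagged

# A stable sensor trace with one gross outlier.
readings = [10.0, 10.2, 9.9, 10.1, 25.0, 10.0, 9.8]
kept, flagged = ewma_filter(readings)
```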

  8. An adaptive supervisory sliding fuzzy cerebellar model articulation controller for sensorless vector-controlled induction motor drive systems.

    PubMed

    Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang

    2015-01-01

    This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes--the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC--were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450

  9. An Adaptive Supervisory Sliding Fuzzy Cerebellar Model Articulation Controller for Sensorless Vector-Controlled Induction Motor Drive Systems

    PubMed Central

    Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang

    2015-01-01

    This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450

  10. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

    In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by self-adaptively selecting the lead orientation vector. Its real-valued formulation, obtained by Forward-Backward (FB) smoothing and a real-valued inverse covariance matrix, is also proposed; it reduces the computational complexity and can distinguish coherent signals. Simulation results show that the two new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experiments using a MEMS vector hydrophone array in lake trials show the engineering practicability of the two new algorithms.

  11. Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation

    PubMed Central

    Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2016-01-01

    Content-based image retrieval (CBIR) techniques have currently gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset. PMID:27273091

  12. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises if the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map . We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat; which means that in those cases quantization is unique.

  13. A physically motivated quantization of the electromagnetic field

    NASA Astrophysics Data System (ADS)

    Bennett, Robert; Barlow, Thomas M.; Beige, Almut

    2016-01-01

    The notion that the electromagnetic field is quantized is usually inferred from observations such as the photoelectric effect and the black-body spectrum. However accounts of the quantization of this field are usually mathematically motivated and begin by introducing a vector potential, followed by the imposition of a gauge that allows the manipulation of the solutions of Maxwell’s equations into a form that is amenable for the machinery of canonical quantization. By contrast, here we quantize the electromagnetic field in a less mathematically and more physically motivated way. Starting from a direct description of what one sees in experiments, we show that the usual expressions of the electric and magnetic field observables follow from Heisenberg’s equation of motion. In our treatment, there is no need to invoke the vector potential in a specific gauge and we avoid the commonly used notion of a fictitious cavity that applies boundary conditions to the field.

  14. Lagrange structure and quantization

    NASA Astrophysics Data System (ADS)

    Kazinski, Peter O.; Lyakhovich, Simon L.; Sharapov, Alexey A.

    2005-07-01

    A path-integral quantization method is proposed for dynamical systems whose classical equations of motion do not necessarily follow from the action principle. The key new notion behind this quantization scheme is the Lagrange structure which is more general than the lagrangian formalism in the same sense as Poisson geometry is more general than the symplectic one. The Lagrange structure is shown to admit a natural BRST description which is used to construct an AKSZ-type topological sigma-model. The dynamics of this sigma-model in d+1 dimensions, being localized on the boundary, are proved to be equivalent to the original theory in d dimensions. As the topological sigma-model has a well defined action, it is path-integral quantized in the usual way that results in quantization of the original (not necessarily lagrangian) theory. When the original equations of motion come from the action principle, the standard BV path-integral is explicitly deduced from the proposed quantization scheme. The general quantization scheme is exemplified by several models including the ones whose classical dynamics are not variational.

  15. Adaptive mutation in nuclear export protein allows stable transgene expression in a chimaeric influenza A virus vector.

    PubMed

    Kuznetsova, Irina; Shurygina, Anna-Polina; Wolf, Brigitte; Wolschek, Markus; Enzmann, Florian; Sansyzbay, Abylay; Khairullin, Berik; Sandybayev, Nurlan; Stukova, Marina; Kiselev, Oleg; Egorov, Andrej; Bergmann, Michael

    2014-02-01

    The development of influenza virus vectors with long insertions of foreign sequences remains difficult due to the small size and instable nature of the virus. Here, we used the influenza virus inherent property of self-optimization to generate a vector stably expressing long transgenes from the NS1 protein ORF. This was achieved by continuous selection of bright fluorescent plaques of a GFP-expressing vector during multiple passages in mouse B16f1 cells. The newly generated vector acquired stability in IFN-competent cell lines and in vivo in murine lungs. Although improved vector fitness was associated with the appearance of four coding mutations in the polymerase (PB2), haemagglutinin and non-structural (NS) segments, the stability of the transgene expression was dependent primarily on the single mutation Q20R in the nuclear export protein (NEP). Importantly, a longer insert, such as a cassette of 1299 nt encoding two Mycobacterium tuberculosis Esat6 and Ag85A proteins, could substitute for the GFP transgene. Thus, the inherent property of the influenza virus to adapt can also be used to adjust a vector backbone to give stable expression of long transgenes. PMID:24222196

  16. Semi-logarithmic and hybrid quantization of Laplacian source in a wide range of variances

    NASA Astrophysics Data System (ADS)

    Savić, Milan S.; Perić, Zoran H.; Panić, Stefan R.; Mosić, Aleksandar V.

    2012-12-01

    A novel semilogarithmic hybrid quantizer for non-uniform scalar quantization of a Laplacian source, consisting of a uniform quantizer and a companding quantizer, is introduced. The uniform quantizer has unit gain in the area around zero; the companding quantizer is defined by a novel logarithmic characteristic. An analysis of the classic semilogarithmic A-law for various values of the A parameter is also provided, and a comparison with the classic semilogarithmic A-law is performed. The main advantage of the hybrid quantizer is that the numbers of representation levels for the uniform and companding quantizers are not an unambiguously determined function of the A parameter value, as is the case with the classic semilogarithmic A companding characteristic. It is shown that, by using the hybrid quantizer, the average signal-to-quantization-noise ratio (SQNR) obtained with the classic A companding law can be exceeded by 0.47 dB. The numbers of representation levels of the hybrid quantizer are adapted to the input signal variances in order to achieve high SQNR over a wide range of signal volumes (variances); with this adaptation, an average SQNR gain of 2.52 dB over the classic A companding law can be achieved. Forward adaptation of the hybrid quantizer is analyzed; the obtained performance corresponds to the adaptive classic A companding law case, because optimal values of the parameter A are chosen during the adaptation process, but a possible advantage lies in the simpler practical realization of hybrid quantizers. For any other A parameter value the proposed hybrid quantizer provides better results; for A = 50 the hybrid model has a higher SQNR value by 0.79 dB.
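    The abstract benchmarks the hybrid quantizer against the classic semilogarithmic A-law. For orientation, a plain A-law compandor with mid-rise uniform quantization in the companded domain can be sketched as follows; the novel hybrid characteristic itself is not reproduced here, and A = 87.6 with a 256-level grid are conventional choices, not values from the paper.

```python
import math

def a_law_compress(x, A=87.6):
    """Classic A-law compressor characteristic on |x| <= 1."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)

def a_law_expand(y, A=87.6):
    """Inverse of the compressor."""
    ay = abs(y)
    if ay < 1.0 / (1.0 + math.log(A)):
        x = ay * (1.0 + math.log(A)) / A
    else:
        x = math.exp(ay * (1.0 + math.log(A)) - 1.0) / A
    return math.copysign(x, y)

def quantize(x, levels=256, A=87.6):
    """Compress, uniformly quantize the companded value (mid-rise), expand."""
    y = a_law_compress(x, A)
    step = 2.0 / levels
    yq = (math.floor(y / step) + 0.5) * step
    return a_law_expand(yq, A)
```

Because the uniform grid lives in the companded domain, the effective step in the signal domain shrinks near zero, which is what keeps the SQNR roughly constant over a wide range of input variances.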

  17. Quantization Effects on Complex Networks

    PubMed Central

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    The edge weights in many complex networks we construct are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that a larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon, with peak values decreasing in a power-law relationship, in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049

  18. Quantization Effects on Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-05-01

    The edge weights in many complex networks we construct are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that a larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon, with peak values decreasing in a power-law relationship, in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws.

  19. Quantization Effects on Complex Networks.

    PubMed

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    The edge weights in many complex networks we construct are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that a larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon, with peak values decreasing in a power-law relationship, in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049

  20. Adaptive quarter-pel motion estimation and motion vector coding algorithm for the H.264/AVC standard

    NASA Astrophysics Data System (ADS)

    Jung, Seung-Won; Park, Chun-Su; Ha, Le Thanh; Ko, Sung-Jea

    2009-11-01

    We present an adaptive quarter-pel (Qpel) motion estimation (ME) method for H.264/AVC. Instead of applying Qpel ME to all macroblocks (MBs), the proposed method selectively performs Qpel ME at the MB level. To reduce the bit rate, we also propose a motion vector (MV) encoding technique that adaptively selects a different variable length coding (VLC) table according to the accuracy of the MV. Experimental results show that the proposed method can achieve about 3% average bit rate reduction.
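    The selective Qpel decision can be illustrated with a 1-D toy: search integer positions, refine at half-pel, and run the quarter-pel stage only when the half-pel cost is still above a threshold. Linear interpolation, the SAD cost, and the threshold are illustrative simplifications of the H.264/AVC scheme, not the authors' implementation.

```python
def interp(ref, pos):
    """Linearly interpolate ref at fractional position pos."""
    i = int(pos)
    f = pos - i
    return ref[i] * (1 - f) + ref[i + 1] * f

def sad(cur, ref, mv):
    """Sum of absolute differences between cur and ref shifted by mv."""
    return sum(abs(c - interp(ref, i + mv)) for i, c in enumerate(cur))

def motion_search(cur, ref, qpel_threshold=1.0):
    """Integer -> half-pel -> (optional) quarter-pel search.

    Quarter-pel refinement is skipped adaptively when the half-pel
    cost is already below qpel_threshold, mimicking the block-level
    decision described in the abstract.
    """
    best_mv = min(range(0, len(ref) - len(cur)),
                  key=lambda m: sad(cur, ref, m))
    # half-pel refinement around the best integer position
    best_mv = min([best_mv - 0.5, best_mv, best_mv + 0.5],
                  key=lambda m: sad(cur, ref, m))
    if sad(cur, ref, best_mv) <= qpel_threshold:
        return best_mv                      # quarter-pel stage skipped
    return min([best_mv - 0.25, best_mv, best_mv + 0.25],
               key=lambda m: sad(cur, ref, m))

ref = [float(v) for v in [0, 4, 8, 12, 16, 12, 8, 4, 0, 0]]
# Current block = ref sampled at offset 2.25 (a true quarter-pel shift).
cur = [interp(ref, 2.25 + i) for i in range(4)]
mv = motion_search(cur, ref)

cur2 = ref[3:7]          # exact integer shift: quarter-pel stage is skipped
mv2 = motion_search(cur2, ref)
```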

  1. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
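    The nonlinear pooling step described above (errors scaled by visual thresholds, then pooled over the image) is commonly realized as a Minkowski beta-norm. The sketch below assumes beta = 4 and invented per-frequency thresholds; a scaled error near 1.0 sits at the threshold of visibility.

```python
def perceptual_error(quant_errors, thresholds, beta=4.0):
    """Minkowski (beta-norm) pooling of threshold-scaled errors.

    Each DCT-coefficient quantization error is divided by its visual
    threshold before the nonlinear pooling.
    """
    return sum(abs(e / t) ** beta
               for e, t in zip(quant_errors, thresholds)) ** (1.0 / beta)

thresholds = [2.0, 4.0, 8.0, 16.0]      # illustrative per-frequency thresholds
visible    = perceptual_error([4.0, 4.0, 8.0, 16.0], thresholds)
invisible  = perceptual_error([0.2, 0.4, 0.8, 1.6], thresholds)
```

With a large beta the pooled value is dominated by the worst scaled error, which matches the intuition that one clearly visible artifact spoils the block.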

  2. Improving the textural characterization of trabecular bone structure to quantify its changes: the locally adapted scaling vector method

    NASA Astrophysics Data System (ADS)

    Raeth, Christoph W.; Mueller, Dirk; Boehm, Holger F.; Rummeny, Ernst J.; Link, Thomas M.; Monetti, Roberto

    2005-04-01

    We extend the recently introduced scaling vector method (SVM) to improve the textural characterization of oriented trabecular bone structures in the context of osteoporosis. Using the concept of scaling vectors one obtains non-linear structural information from data sets, which can account for global anisotropies. In this work we present a method which allows us to determine the local directionalities in images by using scaling vectors. Thus it becomes possible to better account for local anisotropies and to implement this knowledge in the calculation of the scaling properties of the image. By applying this adaptive technique, a refined quantification of the image structure is possible: we test and evaluate our new method using realistic two-dimensional simulations of bone structures, which model the effect of osteoblasts and osteoclasts on the local change of relative bone density. The partial differential equations involved in the model are solved numerically using cellular automata (CA). Different realizations with slightly varying control parameters are considered. Our results show that even small changes in the trabecular structures, which are induced by variation of a control parameter of the system, become discernible by applying the locally adapted scaling vector method. The results are superior to those obtained by isotropic and/or bulk measures. These findings may be especially important for monitoring the treatment of patients, where the early recognition of (drug-induced) changes in the trabecular structure is crucial.

  3. Quantization of Black Holes

    NASA Astrophysics Data System (ADS)

    He, Xiao-Gang; Ma, Bo-Qiang

    We show that black holes can be quantized in an intuitive and elegant way with results in agreement with conventional knowledge of black holes by using Bohr's idea of quantizing the motion of an electron inside the atom in quantum mechanics. We find that properties of black holes can also be derived from an ansatz of quantized entropy ΔS = 4πk ΔR/λ̄, which was suggested in a previous work to unify the black hole entropy formula and Verlinde's conjecture to explain gravity as an entropic force. Such an ansatz also explains gravity as an entropic force from a quantum effect. This suggests a way to unify gravity with quantum theory. Several interesting and surprising results of black holes are given, from which we predict the existence of primordial black holes ranging from Planck scale both in size and energy to big ones in size but with low energy behaviors.

  4. Aedes aegypti (L.) in Latin American and Caribbean region: With growing evidence for vector adaptation to climate change?

    PubMed

    Chadee, Dave D; Martinez, Raymond

    2016-04-01

    Within Latin America and the Caribbean region the impact of climate change has been associated with the effects of rainfall and temperature on seasonal outbreaks of dengue, but few studies have been conducted on the impacts of climate on the behaviour and ecology of Aedes aegypti mosquitoes. This study was conducted to examine the adaptive behaviours currently being employed by A. aegypti mosquitoes exposed to the force of climate change in LAC countries. The literature on the association between climate and dengue incidence is small and sometimes speculative. Few laboratory and field studies have identified research gaps. Laboratory and field experiments were designed and conducted to better understand the container preferences, climate-associated adaptive behaviour, ecology and the effects of different temperatures and light regimens on the life history of A. aegypti mosquitoes. A. aegypti adaptive behaviours and changes in container preferences demonstrate how complex dengue transmission dynamics are in different ecosystems. The use of underground drains and septic tanks represents a major behaviour change identified and compounds an already difficult task to control A. aegypti populations. A business as usual approach will exacerbate the problem and lead to more frequent outbreaks of dengue and chikungunya in LAC countries unless both area-wide and targeted vector control approaches are adopted. The current evidence and the results from proposed transdisciplinary research on dengue within different ecosystems will help guide the development of new vector control strategies and foster a better understanding of climate change impacts on vector-borne disease transmission.

  5. Aedes aegypti (L.) in Latin American and Caribbean region: With growing evidence for vector adaptation to climate change?

    PubMed

    Chadee, Dave D; Martinez, Raymond

    2016-04-01

    Within Latin America and the Caribbean region the impact of climate change has been associated with the effects of rainfall and temperature on seasonal outbreaks of dengue, but few studies have been conducted on the impacts of climate on the behaviour and ecology of Aedes aegypti mosquitoes. This study was conducted to examine the adaptive behaviours currently being employed by A. aegypti mosquitoes exposed to the force of climate change in LAC countries. The literature on the association between climate and dengue incidence is small and sometimes speculative. Few laboratory and field studies have identified research gaps. Laboratory and field experiments were designed and conducted to better understand the container preferences, climate-associated adaptive behaviour, ecology and the effects of different temperatures and light regimens on the life history of A. aegypti mosquitoes. A. aegypti adaptive behaviours and changes in container preferences demonstrate how complex dengue transmission dynamics are in different ecosystems. The use of underground drains and septic tanks represents a major behaviour change identified and compounds an already difficult task to control A. aegypti populations. A business as usual approach will exacerbate the problem and lead to more frequent outbreaks of dengue and chikungunya in LAC countries unless both area-wide and targeted vector control approaches are adopted. The current evidence and the results from proposed transdisciplinary research on dengue within different ecosystems will help guide the development of new vector control strategies and foster a better understanding of climate change impacts on vector-borne disease transmission. PMID:26796862

  6. On Quantizable Odd Lie Bialgebras

    NASA Astrophysics Data System (ADS)

    Khoroshkin, Anton; Merkulov, Sergei; Willwacher, Thomas

    2016-09-01

    Motivated by the obstruction to the deformation quantization of Poisson structures in infinite dimensions, we introduce the notion of a quantizable odd Lie bialgebra. The main result of the paper is a construction of the highly non-trivial minimal resolution of the properad governing such Lie bialgebras, and its link with the theory of so-called quantizable Poisson structures.

  7. Quantized Algebra I Texts

    ERIC Educational Resources Information Center

    DeBuvitz, William

    2014-01-01

    I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…

  8. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines that will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
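    The adaptive selection of candidate analytical lines by intensity and width at half height can be sketched as a simple peak filter. The thresholds and the toy spectrum below are invented for illustration and are not values from the paper.

```python
def select_lines(wavelengths, intensities, min_intensity=100.0, max_fwhm=1.0):
    """Pick candidate analytical lines: local intensity maxima that are
    bright enough and narrow enough (width at half height)."""
    lines = []
    for i in range(1, len(intensities) - 1):
        if not (intensities[i - 1] < intensities[i] > intensities[i + 1]):
            continue                      # not a local peak
        if intensities[i] < min_intensity:
            continue                      # too weak to be a reliable line
        half = intensities[i] / 2.0
        lo = hi = i
        while lo > 0 and intensities[lo] > half:
            lo -= 1
        while hi < len(intensities) - 1 and intensities[hi] > half:
            hi += 1
        fwhm = wavelengths[hi] - wavelengths[lo]
        if fwhm <= max_fwhm:
            lines.append(wavelengths[i])
    return lines

# Toy spectrum: a strong narrow line near 425.4 nm, a weak one near 427.0 nm.
wl = [425.0 + 0.2 * k for k in range(21)]        # 425.0 .. 429.0 nm
inten = [10.0] * 21
inten[2] = 500.0                                 # strong narrow peak (kept)
inten[10] = 50.0                                 # weak peak (rejected)
picked = select_lines(wl, inten)
```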

  9. Uniform quantized electron gas.

    PubMed

    Høye, Johan S; Lomba, Enrique

    2016-10-19

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). Then to improve it we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot occupy the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies. PMID:27546166

  10. Uniform quantized electron gas

    NASA Astrophysics Data System (ADS)

    Høye, Johan S.; Lomba, Enrique

    2016-10-01

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions, where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well-known RPA (random phase approximation). To improve it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well-known results of a standard parameterization of Monte Carlo correlation energies.

  11. Consistent quantization of massive chiral electrodynamics in four dimensions

    SciTech Connect

    Andrianov, A.; Bassetto, A.; Soldati, R.

    1989-10-09

    We discuss the quantization of a four-dimensional model in which a massive Abelian vector field interacts with chiral massless fermions. We show that, by introducing extra scalar fields, a renormalizable unitary S matrix can be obtained in a suitably defined Hilbert space of physical states.

  12. Development of a novel plasmid vector pTIO-1 adapted for electrotransformation of Porphyromonas gingivalis.

    PubMed

    Tagawa, Junpei; Inoue, Tetsuyoshi; Naito, Mariko; Sato, Keiko; Kuwahara, Tomomi; Nakayama, Masaaki; Nakayama, Koji; Yamashiro, Takashi; Ohara, Naoya

    2014-10-01

    We report here the construction of a plasmid vector designed for the efficient electrotransformation of the periodontal pathogen Porphyromonas gingivalis. The novel Escherichia coli-Bacteroides/P. gingivalis shuttle vector, designated pTIO-1, is based on the 11.0-kb E. coli-Bacteroides conjugative shuttle vector, pVAL-1 (a pB8-51 derivative). To construct pTIO-1, the pB8-51 origin of replication and erythromycin resistance determinant of pVAL-1 were cloned into the E. coli cloning vector pBluescript II SK(-), and non-functional regions were deleted. pTIO-1 has an almost complete multiple cloning site from pBluescript II SK(-). The size of pTIO-1 is 4.5 kb, which is convenient for routine gene manipulation. pTIO-1 was introduced into P. gingivalis via electroporation, and erythromycin-resistant transformants carrying pTIO-1 were obtained. We characterized the transformation efficiency, copy number, host range, stability, and insert size capacity of pTIO-1. Efficient plasmid electrotransformation of P. gingivalis will facilitate functional analysis and expression of P. gingivalis genes, including the virulence factors of this bacterium.

  13. Impacts of Climate Change on Vector Borne Diseases in the Mediterranean Basin - Implications for Preparedness and Adaptation Policy.

    PubMed

    Negev, Maya; Paz, Shlomit; Clermont, Alexandra; Pri-Or, Noemie Groag; Shalom, Uri; Yeger, Tamar; Green, Manfred S

    2015-06-15

    The Mediterranean region is vulnerable to climatic changes. A warming trend exists in the basin, with changes in rainfall patterns. It is expected that vector-borne diseases (VBD) in the region will be influenced by climate change, since weather conditions influence their emergence. For some diseases (e.g., West Nile virus) the linkage between emergence and climate change has recently been proven; for others (such as dengue) the risk of local transmission is real. Consequently, adaptation and preparation for changing patterns of VBD distribution are crucial in the Mediterranean basin. We analyzed six representative Mediterranean countries and found that they have started to prepare for this threat, but preparation levels differ among them, and policy mechanisms are limited and basic. Furthermore, cross-border cooperation is not stable and depends on international frameworks. The Mediterranean countries should improve their adaptation plans and develop more cross-sectoral, multidisciplinary and participatory approaches. In addition, based on experience from existing local networks in advancing national legislation and trans-border cooperation, we outline recommendations for a regional cooperation framework. We suggest that a stable and neutral framework is required, and that it should address the characteristics and needs of the African, Asian and European countries around the Mediterranean in order to ensure participation. Such a regional framework is essential to reduce the risk of VBD transmission, since the vectors of infectious diseases know no political borders.

  14. Impacts of Climate Change on Vector Borne Diseases in the Mediterranean Basin — Implications for Preparedness and Adaptation Policy

    PubMed Central

    Negev, Maya; Paz, Shlomit; Clermont, Alexandra; Pri-Or, Noemie Groag; Shalom, Uri; Yeger, Tamar; Green, Manfred S.

    2015-01-01

    The Mediterranean region is vulnerable to climatic changes. A warming trend exists in the basin, with changes in rainfall patterns. It is expected that vector-borne diseases (VBD) in the region will be influenced by climate change, since weather conditions influence their emergence. For some diseases (e.g., West Nile virus) the linkage between emergence and climate change has recently been proven; for others (such as dengue) the risk of local transmission is real. Consequently, adaptation and preparation for changing patterns of VBD distribution are crucial in the Mediterranean basin. We analyzed six representative Mediterranean countries and found that they have started to prepare for this threat, but preparation levels differ among them, and policy mechanisms are limited and basic. Furthermore, cross-border cooperation is not stable and depends on international frameworks. The Mediterranean countries should improve their adaptation plans and develop more cross-sectoral, multidisciplinary and participatory approaches. In addition, based on experience from existing local networks in advancing national legislation and trans-border cooperation, we outline recommendations for a regional cooperation framework. We suggest that a stable and neutral framework is required, and that it should address the characteristics and needs of the African, Asian and European countries around the Mediterranean in order to ensure participation. Such a regional framework is essential to reduce the risk of VBD transmission, since the vectors of infectious diseases know no political borders. PMID:26084000

  16. Aquaporin water channel AgAQP1 in the malaria vector mosquito Anopheles gambiae during blood feeding and humidity adaptation

    PubMed Central

    Liu, Kun; Tsujimoto, Hitoshi; Cha, Sung-Jae; Agre, Peter; Rasgon, Jason L.

    2011-01-01

    Altered patterns of malaria endemicity reflect, in part, changes in feeding behavior and climate adaptation of mosquito vectors. Aquaporin (AQP) water channels are found throughout nature and confer high-capacity water flow through cell membranes. The genome of the major malaria vector mosquito Anopheles gambiae contains at least seven putative AQP sequences. Anticipating that transmembrane water movements are important during the life cycle of A. gambiae, we identified and characterized the A. gambiae aquaporin 1 (AgAQP1) protein that is homologous to AQPs known in humans, Drosophila, and sap-sucking insects. When expressed in Xenopus laevis oocytes, AgAQP1 transports water but not glycerol. Similar to mammalian AQPs, water permeation of AgAQP1 is inhibited by HgCl2 and tetraethylammonium, with Tyr185 conferring tetraethylammonium sensitivity. AgAQP1 is more highly expressed in adult female A. gambiae mosquitoes than in males. Expression is high in gut, ovaries, and Malpighian tubules where immunofluorescence microscopy reveals that AgAQP1 resides in stellate cells but not principal cells. AgAQP1 expression is up-regulated in fat body and ovary by blood feeding but not by sugar feeding, and it is reduced by exposure to a dehydrating environment (42% relative humidity). RNA interference reduces AgAQP1 mRNA and protein levels. In a desiccating environment (<20% relative humidity), mosquitoes with reduced AgAQP1 protein survive significantly longer than controls. These studies support a role for AgAQP1 in water homeostasis during blood feeding and humidity adaptation of A. gambiae, a major mosquito vector of human malaria in sub-Saharan Africa. PMID:21444767

  17. How the Malaria Vector Anopheles gambiae Adapts to the Use of Insecticide-Treated Nets by African Populations

    PubMed Central

    Ndiath, Mamadou Ousmane; Mazenot, Catherine; Sokhna, Cheikh; Trape, Jean-François

    2014-01-01

    Background Insecticide-treated bed nets have been recommended and proven efficient as a measure to protect African populations from the malaria mosquito vector Anopheles spp. This study evaluates the consequences of bed net use on vector resistance to insecticides, feeding behavior, and malaria transmission in Dielmo village, Senegal, where LLINs were offered to all villagers in July 2008. Methods Adult mosquitoes were collected monthly from January 2006 to December 2011 by human landing catches (HLC) and by pyrethroid spray catches (PSC). A randomly selected sub-sample of 15–20% of An. gambiae s.l. collected each month was used to investigate the molecular forms of the An. gambiae complex, kdr mutations, and the Plasmodium falciparum circumsporozoite (CSP) rate. Malaria prevalence and gametocytaemia in Dielmo villagers were measured quarterly. Results Insecticide-susceptible mosquitoes (wild kdr genotype) presented a reduced lifespan after LLIN implementation but rapidly adapted their feeding behavior, becoming more exophagous and zoophilic, and biting earlier during the night. In the meantime, insecticide-resistant specimens (kdr L1014F genotype) increased in frequency in the population, with an unchanged lifespan and feeding behavior. P. falciparum prevalence and gametocyte rate in villagers decreased dramatically after LLIN deployment. The malaria infection rate tended to zero in susceptible mosquitoes, whereas it increased markedly in kdr homozygote mosquitoes. Conclusion Dramatic changes in vector populations and their behavior occurred after the deployment of LLINs due to the extraordinary adaptive skills of An. gambiae s.l. mosquitoes. However, despite the increasing proportion of insecticide-resistant mosquitoes and their almost exclusive responsibility for malaria transmission, the P. falciparum gametocyte reservoir continued to decrease three years after the deployment of LLINs. PMID:24892677

  18. Adaptation of plants to altered shoot orientation relative to the gravity vector.

    PubMed

    Smolyanina, S O; Berkovich, Yu A; Ivanov, V B

    2004-07-01

    Wheat Triticum aestivum L., carrot Daucus carota L., Chinese cabbage Brassica pekinensis Rupr., and African marigold Tagetes patula L. were grown with natural and inverted shoot orientation relative to the gravity vector in the Earth's gravitational field. The light vector was set either parallel or opposite to the gravity vector. Plants grew in special pots fitted with plane or cylindrical hydrophilic porous membranes; the membrane allowed the water potential in the root zone to be stabilized at a fixed level. Seeds were placed in a fiber ion-exchange artificial soil overlaying horizontal hydrophilic plates of porous titanium or anchored to porous metal-ceramic tubes. Plants grew at a PPF of 550 +/- 20 micromoles/(m2 s) under 24-hr lighting, with the water potential at the membrane surface held at (-1.00) +/- 0.08 kPa. Normal plants were obtained at both the natural and the inverted shoot orientation in all experiments. The wheat plants yielded healthy, germinating seeds regardless of plant orientation. In the inverted orientation, no negative effect on plant biomass accumulation was observed, although the shoot-to-root mass ratio increased considerably; likewise, no decrease in carrot root crop mass was revealed. The results demonstrate a substantial dependence of the morphological and physiological characteristics of higher plants on the gravity factor.

  19. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time, the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.
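
    The LS-SVR core of the approach can be sketched in a few lines: with a squared loss and equality constraints, training reduces to a single linear solve over the kernel matrix. The Gaussian-kernel width, regularization constant, and toy 1-D data below are illustrative choices, not the paper's beamforming setup:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    # Least-squares SVR: the KKT conditions form one (n+1)x(n+1) linear system.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                  # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha + b

# Toy 1-D regression: recover sin(x) from noisy samples.
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 40)
b, alpha = lssvr_fit(X, y)
pred = lssvr_predict(X, b, alpha, X)
print("max abs error:", np.abs(pred - np.sin(X[:, 0])).max())
```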

  20. Quantization of Generally Covariant Systems

    NASA Astrophysics Data System (ADS)

    Sforza, Daniel M.

    2000-12-01

    Finite dimensional models that mimic the constraint structure of Einstein's General Relativity are quantized in the framework of the BRST and Dirac canonical formalisms. The first system to be studied is one featuring a constraint quadratic in the momenta (the "super-Hamiltonian") and a set of constraints linear in the momenta (the "supermomentum" constraints). The starting point is to realize that the ghost contributions to the supermomentum constraint operators can be read in terms of the natural volume induced by the constraints in the orbits. This volume plays a fundamental role in the construction of the quadratic sector of the nilpotent BRST charge. It is shown that the quantum theory is invariant under scaling of the super-Hamiltonian. As long as the system has an intrinsic time, this property translates into a contribution of the potential to the kinetic term. In this aspect, the results substantially differ from other works where the scaling invariance is forced by introducing a coupling to the curvature. The contribution of the potential, far from being unnatural, is beautifully justified in the light of Jacobi's principle. Then, it is shown that the obtained results can be extended to systems with extrinsic time. In this case, if the metric has a conformal temporal Killing vector and the potential exhibits a suitable behavior with respect to it, the role played by the potential in the case of intrinsic time is now played by the norm of the Killing vector. Finally, the results for the previous cases are extended to a system featuring two super-Hamiltonian constraints. This step is extremely important due to the fact that General Relativity features an infinite number of such constraints satisfying a nontrivial algebra among themselves.

  1. Quantization of Inequivalent Classical Hamiltonians.

    ERIC Educational Resources Information Center

    Edwards, Ian K.

    1979-01-01

    Shows how the quantization of a Hamiltonian which is not canonically related to the energy is ambiguous and thereby results in conflicting physical interpretations. Concludes that only the Hamiltonian corresponding to the total energy of a classical system or one canonically related to it is suitable for consistent quantization. (GA)

  2. Coherent state quantization of quaternions

    SciTech Connect

    Muraleetharan, B.; Thirulogasanthar, K.

    2015-08-15

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, the quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  3. Coherent state quantization of quaternions

    NASA Astrophysics Data System (ADS)

    Muraleetharan, B.; Thirulogasanthar, K.

    2015-08-01

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, the quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  4. Visual optimization of DCT quantization matrices for individual images

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1993-01-01

    Many image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). However, these standards do not specify the actual DCT quantization matrix. We have previously provided mathematical formulae to compute a perceptually lossless quantization matrix. Here we show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual 'channel'. For a given quantization matrix, the DCT quantization errors are adjusted by contrast sensitivity, light adaptation, and contrast masking, and are pooled non-linearly over the blocks of the image. This yields an 8x8 'perceptual error matrix'. A second non-linear pooling over the perceptual error matrix yields the total perceptual error. With this model we may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
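
    The role the quantization matrix plays can be illustrated with the standard quantize/dequantize round trip on one 8x8 block; the block here is a toy ramp and the matrix is the familiar JPEG example luminance table, not one of the custom perceptually optimized matrices described above:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

# Example JPEG luminance quantization table (Annex K of the JPEG spec).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

C = dct_matrix()
block = np.arange(64, dtype=float).reshape(8, 8) - 128   # toy 8x8 pixel block
coeffs = C @ block @ C.T                                 # forward 2-D DCT
quantized = np.round(coeffs / Q)                         # lossy step the matrix controls
reconstructed = C.T @ (quantized * Q) @ C                # dequantize + inverse DCT
print("max reconstruction error:", np.abs(reconstructed - block).max())
```

Larger entries in Q discard more of the corresponding frequency band; an image-optimized matrix reshapes this trade-off per image.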

  5. A vector-product information retrieval system adapted to heterogeneous, distributed computing environments

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.

  6. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    NASA Astrophysics Data System (ADS)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

    The artificial immune system (AIS) algorithm based on the clonal selection method can be defined as a soft computing method, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts the classification accuracy rate. In this study, an AIS based on Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and feature subset selection without degrading the SVM classification accuracy. Several public datasets from the University of California Irvine machine learning (UCI) repository are employed to calculate the classification accuracy rate in order to evaluate the AISACS approach, which was then compared with the grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.
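
    The grid search baseline mentioned above is straightforward to reproduce with scikit-learn; the parameter ranges and the iris dataset below are illustrative choices, not those of the study:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Exhaustive grid search over the RBF-SVM hyperparameters C and gamma,
# using cross-validated accuracy as the fitness criterion.
X, y = load_iris(return_X_y=True)
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("cv accuracy: %.3f" % grid.best_score_)
```

Population-based searches such as AISACS or a GA explore the same space but evaluate far fewer candidate points, which is where the running-time advantage comes from.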

  7. Structural analysis of human proximal femur for the prediction of biomechanical strength in vitro: the locally adapted scaling vector method

    NASA Astrophysics Data System (ADS)

    Monetti, Roberto A.; Boehm, Holger; Mueller, Dirk; Rummeny, Ernst; Link, Thomas; Raeth, Christoph

    2005-04-01

    We introduce an image structure analysis technique suitable for cases where anisotropy plays an important role. The so-called Locally Adapted Scaling Vector Method (LSVM) comprises two steps. First, a procedure to estimate the local main orientation at every point of the image is applied. These orientations are then incorporated in a structure characterization procedure. We apply this methodology to high-resolution magnetic resonance images (HRMRI) of human proximal femur specimens in vitro. We extract a 3D local texture measure to establish correlations with the biomechanical properties of bone specimens quantified via the bone maximum compressive strength. The purpose is to compare our results with the prediction of bone strength using similar isotropic texture measures, bone mineral density, and standard 2D morphometric parameters. Our findings suggest that anisotropic texture measures are superior in cases where directional properties are relevant.

  8. Fast subpel motion estimation for H.264/advanced video coding with an adaptive motion vector accuracy decision

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Jung, Bongsoo; Jung, Jooyoung; Jeon, Byeungwoo

    2012-11-01

    The quarter-pel motion vector accuracy supported by H.264/advanced video coding (AVC) in motion estimation (ME) and compensation (MC) provides high compression efficiency. However, it also increases the computational complexity. While various well-known fast integer-pel ME methods are already available, the lack of a good fast subpel ME method leaves the computational complexity relatively high. This paper presents one way of solving the complexity problem of subpel ME by making adaptive motion vector (MV) accuracy decisions in inter-mode selection. The proposed MV accuracy decision is made using inter-mode selection of a macroblock with two decision criteria. Pixels are classified as stationary (and/or homogeneous) or nonstationary (and/or nonhomogeneous). In order to avoid unnecessary interpolation and processing, a proper subpel ME level is chosen among four different combinations, each of which has a different MV accuracy and number of subpel ME iterations, based on the classification. Simulation results using the open source x264 software encoder show that, without any noticeable degradation (-0.07 dB on average), the proposed method reduces total encoding time and subpel ME time by 51.78% and 76.49% on average, respectively, as compared to the conventional full-pel pixel search.
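
    A toy version of such a decision rule, assuming invented variance and SAD thresholds rather than the paper's two criteria, might look like:

```python
import numpy as np

def subpel_level(block, prev_block, var_thresh=25.0, sad_thresh=2.0):
    # Hypothetical sketch: flat (homogeneous) and nearly static (stationary)
    # macroblocks skip fine subpel refinement; thresholds are invented here.
    homogeneous = block.var() < var_thresh                        # flat texture
    stationary = np.abs(block - prev_block).mean() < sad_thresh   # little motion
    if homogeneous and stationary:
        return "full-pel only"    # skip subpel interpolation entirely
    if homogeneous or stationary:
        return "half-pel"         # one refinement pass
    return "quarter-pel"          # full H.264/AVC accuracy

flat = np.zeros((16, 16))
textured = np.arange(256, dtype=float).reshape(16, 16)
print(subpel_level(flat, flat))                # -> full-pel only
print(subpel_level(textured, textured + 10))   # -> quarter-pel
```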

  9. First quantized electrodynamics

    SciTech Connect

    Bennett, A.F.

    2014-06-15

    The parametrized Dirac wave equation represents position and time as operators, and can be formulated for many particles. It thus provides, unlike field-theoretic Quantum Electrodynamics (QED), an elementary and unrestricted representation of electrons entangled in space or time. The parametrized formalism leads directly and without further conjecture to the Bethe–Salpeter equation for bound states. The formalism also yields the Uehling shift of the hydrogenic spectrum, the anomalous magnetic moment of the electron to leading order in the fine structure constant, the Lamb shift, and the axial anomaly of QED. Highlights: first-quantized electrodynamics of the parametrized Dirac equation is developed; unrestricted entanglement in time is made explicit; Bethe and Salpeter's equation for relativistic bound states is derived without further conjecture; one-loop scattering corrections and the axial anomaly are derived using a partial summation; wide utility of semi-classical Quantum Electrodynamics is argued.

  10. Quantized beam shifts in graphene

    SciTech Connect

    de Melo Kort-Kamp, Wilton Junior; Sinitsyn, Nikolai; Dalvit, Diego Alejandro Roberto

    2015-10-08

    We predict the existence of quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts for light beams impinging on a graphene-on-substrate system in an external magnetic field. In the quantum Hall regime the Imbert-Fedorov and photonic spin Hall shifts are quantized in integer multiples of the fine structure constant α, while the Goos-Hänchen ones are quantized in multiples of α². We investigate the influence of magnetic field, temperature, and material dispersion and dissipation on these shifts. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field.

  11. Minimum distortion quantizers [determined by the Max algorithm]

    NASA Technical Reports Server (NTRS)

    Jones, H. W., Jr.

    1977-01-01

    The well-known algorithm of Max is used to determine the minimum distortion quantizers for normal, two-sided exponential, and specialized two-sided gamma input distributions and for mean-square, magnitude, and relative magnitude error distortion criteria. The optimum equally-spaced and unequally-spaced quantizers are found, with the resulting quantizer distortion and entropy. The quantizers, and the quantizers with entropy coding, are compared to the rate distortion bounds for mean-square and magnitude error.
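
    Max's algorithm is simple to sketch in its sample-based (Lloyd) form: alternate between recomputing the decision thresholds as midpoints of the representation levels and recomputing each level as the conditional mean of its cell. For a unit-variance normal input and four levels this approaches Max's tabulated minimum mean-square distortion:

```python
import numpy as np

def lloyd_max(samples, levels, iters=100):
    # Alternate threshold / centroid updates until the MSE quantizer converges.
    reps = np.quantile(samples, np.linspace(0, 1, 2 * levels + 1)[1::2])
    for _ in range(iters):
        edges = (reps[:-1] + reps[1:]) / 2          # decision thresholds
        cells = np.digitize(samples, edges)         # assign samples to cells
        reps = np.array([samples[cells == j].mean() for j in range(levels)])
    return reps, edges

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 100_000)                       # unit-variance normal input
reps, edges = lloyd_max(x, levels=4)
mse = ((x - reps[np.digitize(x, edges)]) ** 2).mean()
print("4-level MSE:", mse)   # Max's tabulated optimum for N(0,1) is about 0.1175
```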

  12. The quantized D-transformation.

    PubMed

    Saraceno, M.; Vallejos, R. O.

    1996-06-01

    We construct a new example of a quantum map, the quantized version of the D-transformation, which is the natural extension to two dimensions of the tent map. The classical, quantum and semiclassical behavior is studied. We also exhibit some relationships between the quantum versions of the D-map and the parity projected baker's map. The method of construction allows a generalization to dissipative maps which includes the quantization of a horseshoe. (c) 1996 American Institute of Physics.

  13. Quantized string models

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Tseytlin, A. A.

    1982-10-01

    We discuss and compare the Lorentz-covariant path integral quantization of three Bose string models, namely the Nambu, Eguchi, and Brink-Di Vecchia-Howe-Polyakov (BDHP) ones. Along with a critical review of the subject, with some uncertainties and ambiguities clearly stated, various new results are presented. We work out the form of the BDHP string ansatz for the Wilson average and prove a formal inequivalence of the exact Nambu and BDHP models for any space-time dimension d. The above three models, known to be equivalent on the classical level, are shown to be equivalent in a semiclassical approximation near a minimal surface and also in the leading 1/d approximation for the static q̄q potential. We analyse the scattering amplitudes predicted by the BDHP string and find that, when exactly calculated for d < 26, they differ from the old dual ones and possess a non-linear spectrum which may be considered free from tachyons in the ground state.

  14. Quantized visual awareness

    PubMed Central

    Escobar, W. A.

    2013-01-01

    The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that correspond to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 select the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom. PMID:24319436

  15. Adaptive Square-Root Cubature-Quadrature Kalman Particle Filter for satellite attitude determination using vector observations

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Pourtakdoust, Seid H.

    2014-12-01

    A novel algorithm is presented in this study for the estimation of spacecraft attitudes and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for the new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
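
    For contrast with the proposal-distribution idea above, a minimal bootstrap particle filter, whose proposal is just the process dynamics, can be sketched on a scalar toy model; all dimensions and noise levels are illustrative, not the satellite setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar random-walk state observed in noise; the bootstrap PF propagates
# particles through the dynamics and weights them by measurement likelihood.
n_steps, n_particles = 50, 500
true_x = np.cumsum(rng.normal(0, 0.1, n_steps))       # hidden state trajectory
meas = true_x + rng.normal(0, 0.5, n_steps)           # noisy observations

particles = rng.normal(0, 1, n_particles)
estimates = []
for z in meas:
    particles += rng.normal(0, 0.1, n_particles)      # propagate through dynamics
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)   # measurement likelihood
    w /= w.sum()
    estimates.append(w @ particles)                   # posterior-mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)   # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print("filter RMSE:", rmse)
```

A CQPF-style filter would replace the bare dynamics proposal with an SR-CQKF update that conditions each particle on the latest measurement, which is exactly the limitation this bootstrap form exhibits.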

  16. Periodic roads and quantized wheels

    NASA Astrophysics Data System (ADS)

    de Campos Valadares, Eduardo

    2016-08-01

    We propose a simple approach to determine all possible wheels that can roll smoothly without slipping on a periodic roadbed, while maintaining the center of mass at a fixed height. We also address the inverse problem, that of obtaining the roadbed profile compatible with a specific wheel and all other related "quantized wheels." The role of symmetry is highlighted, which might preclude the center of mass from remaining at a fixed height. A straightforward consequence of such geometric quantization is that the gravitational potential energy and the moment of inertia are discrete, suggesting a parallelism between macroscopic wheels and nano-systems, such as carbon nanotubes.
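
    The geometric condition behind these "quantized wheels" can be summarized as follows (a standard roads-and-wheels relation in our own notation, not quoted from the paper): for a wheel rolling without slipping on a road profile y(x), with its center held at fixed height h, the wheel's polar radius r(θ) and turning rate satisfy

```latex
r(\theta) = h - y(x), \qquad
\frac{d\theta}{dx} = \frac{1}{h - y(x)}, \qquad
\int_0^{L} \frac{dx}{h - y(x)} = \frac{2\pi}{N}, \quad N = 1, 2, \dots
```

    where L is the road period. Each integer N (one wheel revolution per N road periods) selects one admissible wheel, which is the sense in which the family is "quantized."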

  17. Speech recognition in reverberant and noisy environments employing multiple feature extractors and i-vector speaker adaptation

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Gupta, Vishwa; Kenny, Patrick; Dumouchel, Pierre

    2015-12-01

    The REVERB challenge provides a common framework for the evaluation of feature extraction techniques in the presence of both reverberation and additive background noise. State-of-the-art speech recognition systems perform well in controlled environments, but their performance degrades in realistic acoustic conditions, especially in real as well as simulated reverberant environments. In this contribution, we utilize multiple feature extractors, including the conventional mel filterbank, multi-taper spectrum estimation-based mel filterbank, robust mel and compressive gammachirp filterbank, iterative deconvolution-based dereverberated mel filterbank, and maximum likelihood inverse filtering-based dereverberated mel-frequency cepstral coefficient features, for speech recognition with multi-condition training data. To improve speech recognition performance, we combine their results using ROVER (Recognizer Output Voting Error Reduction). For the two- and eight-channel tasks, to benefit from the multi-channel data, we also use ROVER, instead of a multi-microphone signal processing method, to reduce the word error rate by selecting the best-scoring word at each channel. As in previous work, we also apply i-vector-based speaker adaptation, which was found to be effective. In a speech recognition task, speaker adaptation tries to reduce the mismatch between the training and test speakers. Speech recognition experiments are conducted on the REVERB challenge 2014 corpora using the Kaldi recognizer. In our experiments, we use both utterance-based batch processing and full batch processing. In the single-channel task, full batch processing reduced the word error rate (WER) from 10.0 to 9.3 % on SimData as compared to utterance-based batch processing. Using full batch processing, we obtained an average WER of 9.0 and 23.4 % on the SimData and RealData, respectively, for the two-channel task, whereas for the eight-channel task on the SimData and RealData, the average WERs found were 8
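
    The voting step of ROVER can be sketched as follows. Real ROVER first aligns the hypotheses into a word transition network by dynamic programming; this simplified sketch assumes the hypotheses are already aligned and equal in length, and simply takes a weighted majority at each slot.

```python
from collections import defaultdict

def rover_vote(hypotheses, weights=None):
    """Pick the best-scoring word at each aligned slot across recognizers.

    hypotheses: list of equal-length word lists (assumed pre-aligned;
    real ROVER builds the alignment with dynamic programming).
    weights: optional per-recognizer confidence weights.
    """
    if weights is None:
        weights = [1.0] * len(hypotheses)
    combined = []
    for slot in zip(*hypotheses):
        scores = defaultdict(float)
        for word, w in zip(slot, weights):
            scores[word] += w
        combined.append(max(scores, key=scores.get))
    return combined
```

    With per-channel recognizer outputs as the hypotheses, this is the same mechanism the authors use in place of multi-microphone signal processing.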

  18. The Bartonella quintana extracytoplasmic function sigma factor RpoE has a role in bacterial adaptation to the arthropod vector environment.

    PubMed

    Abromaitis, Stephanie; Koehler, Jane E

    2013-06-01

    Bartonella quintana is a vector-borne bacterial pathogen that causes fatal disease in humans. During the infectious cycle, B. quintana transitions from the hemin-restricted human bloodstream to the hemin-rich body louse vector. Because extracytoplasmic function (ECF) sigma factors often regulate adaptation to environmental changes, we hypothesized that a previously unstudied B. quintana ECF sigma factor, RpoE, is involved in the transition from the human host to the body louse vector. The genomic context of B. quintana rpoE identified it as a member of the ECF15 family of sigma factors found only in alphaproteobacteria. ECF15 sigma factors are believed to be the master regulators of the general stress response in alphaproteobacteria. In this study, we examined the B. quintana RpoE response to two stressors that are encountered in the body louse vector environment, a decreased temperature and an increased hemin concentration. We determined that the expression of rpoE is significantly upregulated at the body louse (28°C) versus the human host (37°C) temperature. rpoE expression also was upregulated when B. quintana was exposed to high hemin concentrations. In vitro and in vivo analyses demonstrated that RpoE function is regulated by a mechanism involving the anti-sigma factor NepR and the response regulator PhyR. The ΔrpoE ΔnepR mutant strain of B. quintana established that RpoE-mediated transcription is important in mediating the tolerance of B. quintana to high hemin concentrations. We present the first analysis of an ECF15 sigma factor in a vector-borne human pathogen and conclude that RpoE has a role in the adaptation of B. quintana to the hemin-rich arthropod vector environment.

  19. The Bartonella quintana Extracytoplasmic Function Sigma Factor RpoE Has a Role in Bacterial Adaptation to the Arthropod Vector Environment

    PubMed Central

    Abromaitis, Stephanie

    2013-01-01

    Bartonella quintana is a vector-borne bacterial pathogen that causes fatal disease in humans. During the infectious cycle, B. quintana transitions from the hemin-restricted human bloodstream to the hemin-rich body louse vector. Because extracytoplasmic function (ECF) sigma factors often regulate adaptation to environmental changes, we hypothesized that a previously unstudied B. quintana ECF sigma factor, RpoE, is involved in the transition from the human host to the body louse vector. The genomic context of B. quintana rpoE identified it as a member of the ECF15 family of sigma factors found only in alphaproteobacteria. ECF15 sigma factors are believed to be the master regulators of the general stress response in alphaproteobacteria. In this study, we examined the B. quintana RpoE response to two stressors that are encountered in the body louse vector environment, a decreased temperature and an increased hemin concentration. We determined that the expression of rpoE is significantly upregulated at the body louse (28°C) versus the human host (37°C) temperature. rpoE expression also was upregulated when B. quintana was exposed to high hemin concentrations. In vitro and in vivo analyses demonstrated that RpoE function is regulated by a mechanism involving the anti-sigma factor NepR and the response regulator PhyR. The ΔrpoE ΔnepR mutant strain of B. quintana established that RpoE-mediated transcription is important in mediating the tolerance of B. quintana to high hemin concentrations. We present the first analysis of an ECF15 sigma factor in a vector-borne human pathogen and conclude that RpoE has a role in the adaptation of B. quintana to the hemin-rich arthropod vector environment. PMID:23564167

  20. The relative magnitude of transgene-specific adaptive immune responses induced by human and chimpanzee adenovirus vectors differs between laboratory animals and a target species.

    PubMed

    Dicks, Matthew D J; Guzman, Efrain; Spencer, Alexandra J; Gilbert, Sarah C; Charleston, Bryan; Hill, Adrian V S; Cottingham, Matthew G

    2015-02-25

    Adenovirus vaccine vectors generated from new viral serotypes are routinely screened in pre-clinical laboratory animal models to identify the most immunogenic and efficacious candidates for further evaluation in clinical human and veterinary settings. Here, we show that studies in a laboratory species do not necessarily predict the hierarchy of vector performance in other mammals. In mice, after intramuscular immunization, HAdV-5 (Human adenovirus C) based vectors elicited cellular and humoral adaptive responses of higher magnitudes compared to the chimpanzee adenovirus vectors ChAdOx1 and AdC68 from species Human adenovirus E. After HAdV-5 vaccination, transgene specific IFN-γ(+) CD8(+) T cell responses reached peak magnitude later than after ChAdOx1 and AdC68 vaccination, and exhibited a slower contraction to a memory phenotype. In cattle, cellular and humoral immune responses were at least equivalent, if not higher, in magnitude after ChAdOx1 vaccination compared to HAdV-5. Though we have not tested protective efficacy in a disease model, these findings have important implications for the selection of candidate vectors for further evaluation. We propose that vaccines based on ChAdOx1 or other Human adenovirus E serotypes could be at least as immunogenic as current licensed bovine vaccines based on HAdV-5.

  1. Area potentials and deformation quantization.

    SciTech Connect

    Curtright, T. L.; Polychronakos, A. P.; Zachos, C. K.; High Energy Physics; Univ. of Miami; Rockefeller Univ.; Univ. of Ioannina

    2002-04-01

    Systems built out of N-body interactions, beyond 2-body interactions, are formulated on the plane, and investigated classically and quantum mechanically (in phase space). Their Wigner functions--the density matrices in phase-space quantization--are given and analyzed.

  2. Geometric Quantization and Foliation Reduction

    NASA Astrophysics Data System (ADS)

    Skerritt, Paul

    A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether "quantization commutes with reduction." Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kahler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kahler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds. In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kahler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or "admissible", values of momentum. We first propose a reduction procedure for the prequantum geometric structures that "covers" symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems. We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces. Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees

  3. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  4. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  5. On quantization of matrix models

    NASA Astrophysics Data System (ADS)

    Starodubtsev, Artem

    2002-12-01

    The issue of non-perturbative background independent quantization of matrix models is addressed. The analysis is carried out by considering a simple matrix model which is a matrix extension of ordinary mechanics reduced to 0 dimension. It is shown that this model has an ordinary mechanical system evolving in time as a classical solution. But in this treatment the action principle admits a natural modification which results in algebraic relations describing quantum theory. The origin of quantization is similar to that in Adler's generalized quantum dynamics. The problem with extension of this formalism to many degrees of freedom is solved by packing all the degrees of freedom into a single matrix. The possibility to apply this scheme to various matrix models is discussed.

  6. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
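
    A common member of the reduced-computation motion-search family the abstract alludes to is the three-step search, sketched below for a single block with sum-of-absolute-differences matching. This is an illustrative stand-in, not the authors' exact variable stage search.

```python
import numpy as np

def three_step_search(ref, cur, top, left, bsize=8, step=4):
    """Estimate the motion vector of one block by coarse-to-fine search.

    ref, cur: 2-D grayscale frames; (top, left): block position in cur.
    Returns (dy, dx) minimizing the sum of absolute differences (SAD).
    """
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    dy = dx = 0
    while step >= 1:
        best = None
        for cy in (-step, 0, step):
            for cx in (-step, 0, step):
                y, x = top + dy + cy, left + dx + cx
                if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                    continue  # candidate block would leave the frame
                cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
                sad = int(np.abs(block - cand).sum())
                if best is None or sad < best[0]:
                    best = (sad, cy, cx)
        dy += best[1]
        dx += best[2]
        step //= 2
    return dy, dx
```

    Halving the step each stage checks roughly 25 candidates instead of the 81 a full ±4 search would need, which is the kind of computational saving the paper quantifies.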

  7. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
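
    The quantizer family the standard applies per wavelet subband is a uniform scalar quantizer with a dead zone around zero. The sketch below follows that general scheme; the bin width q, dead-zone width z, and reconstruction offset are illustrative parameters, not the values the FBI specification assigns to each subband.

```python
import numpy as np

def quantize(coeffs, q, z):
    """Dead-zone uniform scalar quantizer: coefficients within the dead
    zone (half-width z/2) map to 0; others to signed bin indices.
    q is the bin width, z the dead-zone width (illustrative values)."""
    c = np.asarray(coeffs, dtype=float)
    idx = np.zeros(c.shape, dtype=int)
    pos = c > z / 2
    neg = c < -z / 2
    idx[pos] = np.floor((c[pos] - z / 2) / q).astype(int) + 1
    idx[neg] = np.ceil((c[neg] + z / 2) / q).astype(int) - 1
    return idx

def dequantize(idx, q, z, c=0.44):
    """Reconstruct at an offset c into each bin (0.44 is a commonly
    quoted centroid offset for WSQ-style decoders; assumed here)."""
    idx = np.asarray(idx)
    out = np.zeros(idx.shape, dtype=float)
    out[idx > 0] = (idx[idx > 0] - c) * q + z / 2
    out[idx < 0] = (idx[idx < 0] + c) * q - z / 2
    return out
```

    Coefficients in the dead zone are discarded entirely (index 0), which is where most of the compression of near-zero wavelet detail coefficients comes from; all other coefficients are reconstructed to within one bin width.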

  8. Bloch theory and quantization of magnetic systems

    NASA Astrophysics Data System (ADS)

    Gruber, Michael J.

    2000-06-01

    Quantizing the motion of particles on a Riemannian manifold in the presence of a magnetic field poses the problems of existence and uniqueness of quantizations. Both of them are considered since the early days of geometric quantization but there is still some structural insight to gain from spectral theory. Following the work of Asch et al. (Magnetic Bloch analysis and Bochner Laplacians, J. Geom. Phys. 13 (3) (1994) 275-288) for the 2-torus we describe the relation between quantization on the manifold and Bloch theory on its covering space for more general compact manifolds.

  9. Quantized vortices in interacting gauge theories

    NASA Astrophysics Data System (ADS)

    Butera, Salvatore; Valiente, Manuel; Öhberg, Patrik

    2016-01-01

    We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit with a laser configuration such that the single-particle zeroth-order vector potential corresponds to a constant synthetic magnetic field. We find a new exotic type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the rotational properties of this system in the Thomas-Fermi limit, focusing in particular on the physical conditions that make the existence of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. We point out that two different physical interpretations can be given to this new nonlinearity: firstly it can be seen as a local modification of the mean field coupling constant, whose value depends on the angular momentum of the condensate. Secondly, it can be interpreted as a density modulated angular velocity given to the cloud. Looking at the problem from both of these viewpoints, we show that the effect of the new nonlinearity is to induce a rotation to the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud.

  10. Quantized vortices in interacting gauge theories

    NASA Astrophysics Data System (ADS)

    Butera, Salvatore; Valiente, Manuel; Ohberg, Patrik

    2015-05-01

    We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit with a laser configuration such that the single-particle vector potential corresponds to a constant synthetic magnetic field. We find a new type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the physical conditions that make the nucleation of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. Two different physical interpretations can be given to this new nonlinearity: first, it can be seen as a local modification of the mean-field coupling constant, whose value depends on the angular momentum of the condensate; second, it can be interpreted as a density-modulated angular velocity given to the cloud. In the Thomas-Fermi limit, we show that the effect of the new nonlinearity is to induce a rotation of the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud. The authors acknowledge support from CM-DTC and EPSRC.

  11. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  12. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system is developed using a simple FSVQ in which the current state is determined only by the previous channel symbol. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.

  13. Predictive search algorithm for vector quantization of images

    NASA Astrophysics Data System (ADS)

    Kuo, Chung-Ming; Hsieh, Chaur-Heh; Weng, Shiuh-Ku

    2002-05-01

    We present a fast predictive search algorithm for vector quantization (VQ) based on a wavelet transform and a weighted average Kalman filter (WAKF). With the proposed algorithm, the minimum-distortion codeword can be found by searching only a portion of the wavelet-transformed codebook. If the minimum-distortion codeword falls within a predicted search area obtained by the WAKF algorithm, a relative address that is shorter than the absolute address for a full search range is sent to the decoder. Simulation results indicate that the proposed algorithm achieves a significant reduction in computation and about a 30% bit-rate reduction, as compared to conventional full-search VQ. In addition, the reconstructed quality is equivalent to that of the full-search algorithm.
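
    The underlying idea of restricting the codeword search to a predicted region can be sketched as follows. Here the "prediction" is simply the previously selected index, a deliberately crude stand-in for the paper's WAKF-based prediction, and the codebook is assumed to be ordered so that similar codewords have nearby indices.

```python
import numpy as np

def predictive_vq_encode(vectors, codebook, window=8):
    """Nearest-codeword search restricted to a window around a predicted
    index. Indices found inside the window can be transmitted relative
    to the prediction, using fewer bits than an absolute address."""
    indices = []
    pred = 0
    for v in vectors:
        lo = max(0, pred - window)
        hi = min(len(codebook), pred + window + 1)
        # Squared-error distortion over the windowed codebook only.
        d = np.sum((codebook[lo:hi] - v) ** 2, axis=1)
        best = lo + int(np.argmin(d))
        indices.append(best)
        pred = best
    return indices
```

    A relative address within the window costs about log2(2*window + 1) bits instead of log2(len(codebook)), which is the source of the bit-rate saving the abstract reports.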

  14. Breathers on quantized superfluid vortices.

    PubMed

    Salman, Hayder

    2013-10-18

    We consider the propagation of breathers along a quantized superfluid vortex. Using the correspondence between the local induction approximation (LIA) and the nonlinear Schrödinger equation, we identify a set of initial conditions corresponding to breather solutions of vortex motion governed by the LIA. These initial conditions, which give rise to a long-wavelength modulational instability, result in the emergence of large amplitude perturbations that are localized in both space and time. The emergent structures on the vortex filament are analogous to loop solitons but arise from the dual action of bending and twisting of the vortex. Although the breather solutions we study are exact solutions of the LIA equations, we demonstrate through full numerical simulations that their key emergent attributes carry over to vortex dynamics governed by the Biot-Savart law and to quantized vortices described by the Gross-Pitaevskii equation. The breather excitations can lead to self-reconnections, a mechanism that can play an important role within the crossover range of scales in superfluid turbulence. Moreover, the observation of breather solutions on vortices in a field model suggests that these solutions are expected to arise in a wide range of other physical contexts from classical vortices to cosmological strings. PMID:24182275
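
    The LIA-NLS correspondence invoked here is the Hasimoto transformation; in dimensionless units (our notation, up to sign and scaling conventions), the filament's curvature κ(s,t) and torsion τ(s,t) combine into a complex field that obeys the focusing cubic nonlinear Schrödinger equation:

```latex
\psi(s,t) = \kappa(s,t)\,\exp\!\Big(i\!\int_0^{s}\!\tau(s',t)\,ds'\Big),
\qquad
i\,\psi_t = -\,\psi_{ss} - \tfrac{1}{2}\,|\psi|^2\,\psi .
```

    Breather solutions of this NLS equation, mapped back through the transformation, supply the initial conditions for the vortex motion studied in the paper.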

  15. Breathers on Quantized Superfluid Vortices

    NASA Astrophysics Data System (ADS)

    Salman, Hayder

    2013-10-01

    We consider the propagation of breathers along a quantized superfluid vortex. Using the correspondence between the local induction approximation (LIA) and the nonlinear Schrödinger equation, we identify a set of initial conditions corresponding to breather solutions of vortex motion governed by the LIA. These initial conditions, which give rise to a long-wavelength modulational instability, result in the emergence of large amplitude perturbations that are localized in both space and time. The emergent structures on the vortex filament are analogous to loop solitons but arise from the dual action of bending and twisting of the vortex. Although the breather solutions we study are exact solutions of the LIA equations, we demonstrate through full numerical simulations that their key emergent attributes carry over to vortex dynamics governed by the Biot-Savart law and to quantized vortices described by the Gross-Pitaevskii equation. The breather excitations can lead to self-reconnections, a mechanism that can play an important role within the crossover range of scales in superfluid turbulence. Moreover, the observation of breather solutions on vortices in a field model suggests that these solutions are expected to arise in a wide range of other physical contexts from classical vortices to cosmological strings.

  16. Weak associativity and deformation quantization

    NASA Astrophysics Data System (ADS)

    Kupriyanov, V. G.

    2016-09-01

    Non-commutativity and non-associativity are quite natural in string theory. For open strings, non-commutativity appears due to the presence of a non-vanishing background two-form on the world volume of the Dirichlet brane, while in closed string theory flux compactifications with a non-vanishing three-form also lead to non-geometric backgrounds. In this paper, working in the framework of deformation quantization, we study the violation of associativity, imposing the condition that the associator of three elements should vanish whenever any two of them are equal. The corresponding star products are called alternative and satisfy properties important for physical applications, such as the Moufang identities, the alternative identities, Artin's theorem, etc. The condition of alternativity is invariant under gauge transformations, just as in the associative case. The price to pay is a restriction on the non-associative algebras that can be represented by an alternative star product: they should satisfy the Malcev identity. An example of a nontrivial Malcev algebra is the algebra of imaginary octonions. For this case we construct an explicit expression for the non-associative and alternative star product. We also discuss the quantization of Malcev-Poisson algebras of general form, study its properties, and provide the lower-order expression for the alternative star product. To conclude, we define integration on the algebra of alternative star products and show that the integrated associator vanishes.

  17. Quantization of higher spin fields

    SciTech Connect

    Wagenaar, J. W.; Rijken, T. A

    2009-11-15

    In this article we quantize (massive) higher spin (1 ≤ j ≤ 2) fields by means of Dirac's constrained Hamilton procedure, both in the situation where they are totally free and where they are coupled to (an) auxiliary field(s). A full constraint analysis and quantization is presented by determining and discussing all constraints and Lagrange multipliers and by giving all equal-time (anti)commutation relations. We also construct the relevant propagators. In the free case we obtain the well-known propagators and show that they are not covariant, which is also well known. In the coupled case we do obtain covariant propagators (in the spin-3/2 case this requires b=0) and show that they have a smooth massless limit connecting perfectly to the massless case (with auxiliary fields). We notice that in our system the massive spin-3/2 and spin-2 propagators coupled to conserved currents only have a smooth limit to the pure massless spin propagator when there are ghosts in the massive case.

  18. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that −log ψ, −log φ are plurisubharmonic, and z ∈ Ω a point at which −log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where Φ(x, y) is an almost-analytic extension of φ (with Φ(x, x) = φ(x)), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and −log φ, −log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on Berezin-Toeplitz quantization.

  19. Integral quantizations with two basic examples

    SciTech Connect

    Bergeron, H.; Gazeau, J.P.

    2014-05-15

    The paper concerns integral quantization, a procedure based on operator-valued measures and resolution of the identity. We insist on covariance properties in the important case where group representation theory is involved. We also insist on the inherent probabilistic aspects of this classical–quantum map. The approach includes and generalizes coherent state quantization. Two applications based on group representation are carried out. The first concerns the Weyl–Heisenberg group and the Euclidean plane viewed as the corresponding phase space. We show that a world of quantizations exists which yield the canonical commutation rule and the usual quantum spectrum of the harmonic oscillator. The second concerns the affine group of the real line and gives rise to an interesting regularization of the dilation origin in the half-plane viewed as the corresponding phase space. -- Highlights: •Original approach to quantization based on (positive) operator-valued measures. •Includes Berezin–Klauder–Toeplitz and Weyl–Wigner quantizations. •Infinitely many such quantizations produce the canonical commutation rule. •The set of objects to be quantized is enlarged to include singular functions or distributions. •Illuminating examples are given, such as the quantum angle and affine or wavelet quantization.
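
    In symbols (our notation, not the paper's), the procedure takes a family of positive operators M(x) on phase space X that resolves the identity and maps a classical observable f to an operator; standard coherent-state quantization is the special case M(z) = |z⟩⟨z|:

```latex
\int_X M(x)\, d\nu(x) = I, \qquad
f \;\longmapsto\; A_f = \int_X f(x)\, M(x)\, d\nu(x),
\qquad \text{e.g.}\quad
A_f = \int_{\mathbb{C}} f(z)\, |z\rangle\langle z|\, \frac{d^2 z}{\pi}.
```

    The resolution of the identity guarantees that the constant function 1 is mapped to the identity operator, which underlies the probabilistic reading of the map stressed in the abstract.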

  20. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
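
    A Granule description of the kind described above can be generated with a few lines of XML construction. The element names below follow the general SPASE pattern (ResourceID, parent link, access URL) but are illustrative assumptions; the actual SPASE schema defines the required elements, ordering, and namespaces.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, start, stop, url):
    """Build a minimal SPASE-style Granule record associating one data
    file with a parent resource. Element names are illustrative, not
    taken from the normative SPASE schema."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    ET.SubElement(granule, "StartDate").text = start
    ET.SubElement(granule, "StopDate").text = stop
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(spase, encoding="unicode")
```

    Run nightly over a file listing, a generator like this is the mechanical core of the blanket-registration workflow the abstract describes.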

  1. Quantized conic sections; quantum gravity

    SciTech Connect

    Noyes, H.P.

    1993-03-15

    Starting from free relativistic particles whose position and velocity can only be measured to a precision ΔrΔv ≡ ±k/2 m²·s⁻¹, we use the relativistic conservation laws to define the relative motion of the coordinate r = r₁ − r₂ of two particles of mass m₁, m₂ and relative velocity v = βc = (k₁ − k₂)/(k₁ + k₂) in terms of the conic-section equation v² = Γ[2/r ± 1/a], where "+" corresponds to hyperbolic and "−" to elliptical trajectories. The equation is quantized by expressing Kepler's Second Law as conservation of angular momentum per unit mass in units of k. The principal quantum number is n ≡ j + 1/2, with "square" A²/T² = (n − 1)n k² ≡ ℓ(ℓ + 1)k², where ℓ = n − 1 is the angular-momentum quantum number for circular orbits. In a sense, we obtain "spin" from this quantization. Since Γ/a cannot reach c² without predicting either circular or asymptotic velocities equal to the limiting velocity for particulate motion, we can also quantize velocities in terms of the principal quantum number by defining βₙ² = vₙ²/c² = (1/n²)(Γ/c²a) = (1/nN_Γ)². For charges Z₁e, Z₂e of the same sign and α ≡ e²/mₑκc, we find that Γ/c²a = Z₁Z₂α. The characteristic Coulomb parameter η(n) ≡ Z₁Z₂α/βₙ = Z₁Z₂nN_Γ then specifies the penetration factor C²(η) = 2πη/(e^(2πη) − 1). For unlike charges, with η still taken as positive, C²(−η) = 2πη/(1 − e^(−2πη)).
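
    The two penetration factors quoted in the abstract are easy to check numerically. A small standard-library sketch (the numeric η values are illustrative, not taken from the paper) shows that both factors approach 1 as η → 0, while for large η the like-sign factor is exponentially suppressed and the unlike-sign factor grows roughly as 2πη:

```python
import math

def penetration_like(eta):
    # C^2(eta) = 2*pi*eta / (exp(2*pi*eta) - 1), charges of the same sign
    x = 2 * math.pi * eta
    return x / (math.exp(x) - 1)

def penetration_unlike(eta):
    # C^2(-eta) = 2*pi*eta / (1 - exp(-2*pi*eta)), unlike charges (eta > 0)
    x = 2 * math.pi * eta
    return x / (1 - math.exp(-x))

# Small eta: both factors tend to 1 (Coulomb barrier irrelevant).
print(penetration_like(0.001), penetration_unlike(0.001))
# Large eta: like-sign tunneling suppressed, unlike-sign attraction enhanced.
print(penetration_like(2.0), penetration_unlike(2.0))
```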

  2. Flux Quantization Without Cooper Pairs

    NASA Astrophysics Data System (ADS)

    Kadin, Alan

    2013-03-01

    It is universally accepted that the superconducting flux quantum h/2e requires the existence of a phase-coherent macroscopic wave function of Cooper pairs, each with charge 2e. On the contrary, we assert that flux quantization can be better understood in terms of single-electron quantum states, localized on the scale of the coherence length and organized into a real-space phase-antiphase structure. This packing configuration is consistent with the Pauli exclusion principle for single-electron states, maintains long-range phase coherence, and is compatible with much of the BCS formalism. This also accounts for h/2e in the Josephson effect, without Cooper pairs. Experimental evidence for this alternative picture may be found in deviations from h/2e in loops and devices much smaller than the coherence length. A similar phase-antiphase structure may also account for superfluids, without the need for boson condensation.

  3. Quantized ionic conductance in nanopores

    SciTech Connect

    Zwolak, Michael; Lagerqvist, Johan; Di Ventra, Massimilliano

    2009-01-01

    Ionic transport in nanopores is a fundamentally and technologically important problem in view of its ubiquitous occurrence in biological processes and its impact on DNA sequencing applications. Using microscopic calculations, we show that ion transport may exhibit strong nonlinearities as a function of the pore radius, reminiscent of the conductance quantization steps observed as a function of the transverse cross section of quantum point contacts. In the present case, however, the conductance steps originate from the breakup of the hydration layers that form around ions in aqueous solution. Once in the pore, the water molecules form wavelike structures due to multiple scattering at the surface of the pore walls and interference with the radial waves around the ion. We discuss these effects as well as the conditions under which the step-like features in the ionic conductance should be experimentally observable.

  4. Variable-rate colour image quantization based on quadtree segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Y. C.; Li, C. Y.; Chuang, J. C.; Lo, C. C.

    2011-09-01

    A novel variable-sized block encoding with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality: typically, a larger palette yields higher image quality at a larger storage cost. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold-control policy for quadtree segmentation is used. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality than CIQ using multiple palettes of different sizes.
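
    Threshold-controlled quadtree segmentation of the kind described above can be illustrated generically (this is a sketch of the general technique, not the authors' exact encoder): a block is split while its colour variance exceeds a threshold, so smooth regions stay as large, cheap blocks and busy regions are subdivided.

```python
import numpy as np

def quadtree_blocks(img, x, y, size, threshold, min_size=2):
    """Recursively split a square block while its variance exceeds the
    threshold; return a list of (x, y, size) leaf blocks."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_blocks(img, x + dx, y + dy, half,
                                      threshold, min_size)
    return leaves

# One noisy quadrant forces splits there; the flat quadrants stay whole.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:4, :4] = rng.normal(0, 10, (4, 4))
leaves = quadtree_blocks(img, 0, 0, 8, threshold=1.0)
print(leaves)  # small blocks in the noisy quadrant, 4x4 blocks elsewhere
```

    Raising the threshold yields fewer, larger blocks and hence a lower bit rate, which is the adaptive rate control the abstract describes.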

  5. Combining support vector machines with linear quadratic regulator adaptation for the online design of an automotive active suspension system

    NASA Astrophysics Data System (ADS)

    Chiou, J.-S.; Liu, M.-T.

    2008-02-01

    As a powerful machine-learning approach to pattern recognition problems, the support vector machine (SVM) is known to generalize well and, more importantly, to work well in high-dimensional feature spaces. This paper presents a nonlinear active suspension controller that achieves a high level of performance by compensating for actuator dynamics. A linear quadratic regulator (LQR) is used to ensure optimal control of the nonlinear system: the LQR solves the state-feedback problem, while an SVM estimates and examines the state. The two are then combined into an output-feedback control design. Real-time simulation demonstrates that an active suspension using the combined SVM-LQR controller provides passengers with a much more comfortable ride and better road handling.
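
    The LQR half of such a design can be sketched independently of the SVM state estimator. The following is a minimal sketch using SciPy's Riccati solver on a hypothetical two-state (displacement/velocity) toy model, not the authors' suspension model; the A, B, Q, R numbers are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical quarter-car-style toy model: state x = [displacement, velocity].
A = np.array([[0.0, 1.0],
              [-50.0, -5.0]])    # spring and damper terms (made-up values)
B = np.array([[0.0],
              [1.0]])            # actuator enters through acceleration
Q = np.diag([100.0, 1.0])        # penalize displacement more than velocity
R = np.array([[0.01]])           # cheap control effort

# Solve the continuous-time algebraic Riccati equation and form u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print("LQR gain:", K)

# The closed-loop poles of A - B K should all have negative real parts.
poles = np.linalg.eigvals(A - B @ K)
print("closed-loop pole real parts:", poles.real)
```

    In an output-feedback arrangement like the paper's, the state fed into u = -Kx would come from an estimator (here, the SVM) rather than direct measurement.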

  6. Genome of Rhodnius prolixus, an insect vector of Chagas disease, reveals unique adaptations to hematophagy and parasite infection.

    PubMed

    Mesquita, Rafael D; Vionette-Amaral, Raquel J; Lowenberger, Carl; Rivera-Pomar, Rolando; Monteiro, Fernando A; Minx, Patrick; Spieth, John; Carvalho, A Bernardo; Panzera, Francisco; Lawson, Daniel; Torres, André Q; Ribeiro, Jose M C; Sorgine, Marcos H F; Waterhouse, Robert M; Montague, Michael J; Abad-Franch, Fernando; Alves-Bezerra, Michele; Amaral, Laurence R; Araujo, Helena M; Araujo, Ricardo N; Aravind, L; Atella, Georgia C; Azambuja, Patricia; Berni, Mateus; Bittencourt-Cunha, Paula R; Braz, Gloria R C; Calderón-Fernández, Gustavo; Carareto, Claudia M A; Christensen, Mikkel B; Costa, Igor R; Costa, Samara G; Dansa, Marilvia; Daumas-Filho, Carlos R O; De-Paula, Iron F; Dias, Felipe A; Dimopoulos, George; Emrich, Scott J; Esponda-Behrens, Natalia; Fampa, Patricia; Fernandez-Medina, Rita D; da Fonseca, Rodrigo N; Fontenele, Marcio; Fronick, Catrina; Fulton, Lucinda A; Gandara, Ana Caroline; Garcia, Eloi S; Genta, Fernando A; Giraldo-Calderón, Gloria I; Gomes, Bruno; Gondim, Katia C; Granzotto, Adriana; Guarneri, Alessandra A; Guigó, Roderic; Harry, Myriam; Hughes, Daniel S T; Jablonka, Willy; Jacquin-Joly, Emmanuelle; Juárez, M Patricia; Koerich, Leonardo B; Lange, Angela B; Latorre-Estivalis, José Manuel; Lavore, Andrés; Lawrence, Gena G; Lazoski, Cristiano; Lazzari, Claudio R; Lopes, Raphael R; Lorenzo, Marcelo G; Lugon, Magda D; Majerowicz, David; Marcet, Paula L; Mariotti, Marco; Masuda, Hatisaburo; Megy, Karine; Melo, Ana C A; Missirlis, Fanis; Mota, Theo; Noriega, Fernando G; Nouzova, Marcela; Nunes, Rodrigo D; Oliveira, Raquel L L; Oliveira-Silveira, Gilbert; Ons, Sheila; Orchard, Ian; Pagola, Lucia; Paiva-Silva, Gabriela O; Pascual, Agustina; Pavan, Marcio G; Pedrini, Nicolás; Peixoto, Alexandre A; Pereira, Marcos H; Pike, Andrew; Polycarpo, Carla; Prosdocimi, Francisco; Ribeiro-Rodrigues, Rodrigo; Robertson, Hugh M; Salerno, Ana Paula; Salmon, Didier; Santesmasses, Didac; Schama, Renata; Seabra-Junior, Eloy S; Silva-Cardoso, Livia; Silva-Neto, Mario 
A C; Souza-Gomes, Matheus; Sterkel, Marcos; Taracena, Mabel L; Tojo, Marta; Tu, Zhijian Jake; Tubio, Jose M C; Ursic-Bedoya, Raul; Venancio, Thiago M; Walter-Nuno, Ana Beatriz; Wilson, Derek; Warren, Wesley C; Wilson, Richard K; Huebner, Erwin; Dotson, Ellen M; Oliveira, Pedro L

    2015-12-01

    Rhodnius prolixus not only has served as a model organism for the study of insect physiology, but also is a major vector of Chagas disease, an illness that affects approximately seven million people worldwide. We sequenced the genome of R. prolixus, generated assembled sequences covering 95% of the genome (∼ 702 Mb), including 15,456 putative protein-coding genes, and completed comprehensive genomic analyses of this obligate blood-feeding insect. Although immune-deficiency (IMD)-mediated immune responses were observed, R. prolixus putatively lacks key components of the IMD pathway, suggesting a reorganization of the canonical immune signaling network. Although both Toll and IMD effectors controlled intestinal microbiota, neither affected Trypanosoma cruzi, the causal agent of Chagas disease, implying the existence of evasion or tolerance mechanisms. R. prolixus has experienced an extensive loss of selenoprotein genes, with its repertoire reduced to only two proteins, one of which is a selenocysteine-based glutathione peroxidase, the first found in insects. The genome contained actively transcribed, horizontally transferred genes from Wolbachia sp., which showed evidence of codon use evolution toward the insect use pattern. Comparative protein analyses revealed many lineage-specific expansions and putative gene absences in R. prolixus, including tandem expansions of genes related to chemoreception, feeding, and digestion that possibly contributed to the evolution of a blood-feeding lifestyle. The genome assembly and these associated analyses provide critical information on the physiology and evolution of this important vector species and should be instrumental for the development of innovative disease control methods. PMID:26627243

  7. Genome of Rhodnius prolixus, an insect vector of Chagas disease, reveals unique adaptations to hematophagy and parasite infection

    PubMed Central

    Mesquita, Rafael D.; Vionette-Amaral, Raquel J.; Lowenberger, Carl; Rivera-Pomar, Rolando; Monteiro, Fernando A.; Minx, Patrick; Spieth, John; Carvalho, A. Bernardo; Panzera, Francisco; Lawson, Daniel; Torres, André Q.; Ribeiro, Jose M. C.; Sorgine, Marcos H. F.; Waterhouse, Robert M.; Abad-Franch, Fernando; Alves-Bezerra, Michele; Amaral, Laurence R.; Araujo, Helena M.; Aravind, L.; Atella, Georgia C.; Azambuja, Patricia; Berni, Mateus; Bittencourt-Cunha, Paula R.; Braz, Gloria R. C.; Calderón-Fernández, Gustavo; Carareto, Claudia M. A.; Christensen, Mikkel B.; Costa, Igor R.; Costa, Samara G.; Dansa, Marilvia; Daumas-Filho, Carlos R. O.; De-Paula, Iron F.; Dias, Felipe A.; Dimopoulos, George; Emrich, Scott J.; Esponda-Behrens, Natalia; Fampa, Patricia; Fernandez-Medina, Rita D.; da Fonseca, Rodrigo N.; Fontenele, Marcio; Fronick, Catrina; Fulton, Lucinda A.; Gandara, Ana Caroline; Garcia, Eloi S.; Genta, Fernando A.; Giraldo-Calderón, Gloria I.; Gomes, Bruno; Gondim, Katia C.; Granzotto, Adriana; Guarneri, Alessandra A.; Guigó, Roderic; Harry, Myriam; Hughes, Daniel S. T.; Jablonka, Willy; Jacquin-Joly, Emmanuelle; Juárez, M. Patricia; Koerich, Leonardo B.; Lange, Angela B.; Latorre-Estivalis, José Manuel; Lavore, Andrés; Lawrence, Gena G.; Lazoski, Cristiano; Lazzari, Claudio R.; Lopes, Raphael R.; Lorenzo, Marcelo G.; Lugon, Magda D.; Marcet, Paula L.; Mariotti, Marco; Masuda, Hatisaburo; Megy, Karine; Missirlis, Fanis; Mota, Theo; Noriega, Fernando G.; Nouzova, Marcela; Nunes, Rodrigo D.; Oliveira, Raquel L. L.; Oliveira-Silveira, Gilbert; Ons, Sheila; Orchard, Ian; Pagola, Lucia; Paiva-Silva, Gabriela O.; Pascual, Agustina; Pavan, Marcio G.; Pedrini, Nicolás; Peixoto, Alexandre A.; Pereira, Marcos H.; Pike, Andrew; Polycarpo, Carla; Prosdocimi, Francisco; Ribeiro-Rodrigues, Rodrigo; Robertson, Hugh M.; Salerno, Ana Paula; Salmon, Didier; Santesmasses, Didac; Schama, Renata; Seabra-Junior, Eloy S.; Silva-Cardoso, Livia; Silva-Neto, Mario A. 
C.; Souza-Gomes, Matheus; Sterkel, Marcos; Taracena, Mabel L.; Tojo, Marta; Tu, Zhijian Jake; Tubio, Jose M. C.; Ursic-Bedoya, Raul; Venancio, Thiago M.; Walter-Nuno, Ana Beatriz; Wilson, Derek; Warren, Wesley C.; Wilson, Richard K.; Huebner, Erwin; Dotson, Ellen M.; Oliveira, Pedro L.

    2015-01-01

    Rhodnius prolixus not only has served as a model organism for the study of insect physiology, but also is a major vector of Chagas disease, an illness that affects approximately seven million people worldwide. We sequenced the genome of R. prolixus, generated assembled sequences covering 95% of the genome (∼702 Mb), including 15,456 putative protein-coding genes, and completed comprehensive genomic analyses of this obligate blood-feeding insect. Although immune-deficiency (IMD)-mediated immune responses were observed, R. prolixus putatively lacks key components of the IMD pathway, suggesting a reorganization of the canonical immune signaling network. Although both Toll and IMD effectors controlled intestinal microbiota, neither affected Trypanosoma cruzi, the causal agent of Chagas disease, implying the existence of evasion or tolerance mechanisms. R. prolixus has experienced an extensive loss of selenoprotein genes, with its repertoire reduced to only two proteins, one of which is a selenocysteine-based glutathione peroxidase, the first found in insects. The genome contained actively transcribed, horizontally transferred genes from Wolbachia sp., which showed evidence of codon use evolution toward the insect use pattern. Comparative protein analyses revealed many lineage-specific expansions and putative gene absences in R. prolixus, including tandem expansions of genes related to chemoreception, feeding, and digestion that possibly contributed to the evolution of a blood-feeding lifestyle. The genome assembly and these associated analyses provide critical information on the physiology and evolution of this important vector species and should be instrumental for the development of innovative disease control methods. PMID:26627243

  9. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    PubMed

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-01

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information. PMID:26805849
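
    The key attribute can be sketched as follows. This is a simplified illustration, not the authors' implementation: for each neighborhood, a plane is fitted through the centroid with the smallest-variance SVD direction as normal, and the attribute is the magnitude of the normal position vector, i.e. the perpendicular distance of the fitted plane from the origin, which all neighborhoods on the same plane share.

```python
import numpy as np

def plane_attribute(points):
    """Fit a plane to a neighborhood of 3-D points and return the magnitude
    of the normal position vector (the plane's distance from the origin)."""
    centroid = points.mean(axis=0)
    # Right singular vector of least variance = unit plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    # Project the centroid onto the normal: distance of the plane from origin.
    return abs(normal @ centroid)

# Two different neighborhoods sampled from the same plane z = 5
# produce the same attribute value, so they group into one segment.
rng = np.random.default_rng(1)
xy1, xy2 = rng.uniform(0, 10, (2, 30, 2))
patch1 = np.column_stack([xy1, np.full(30, 5.0)])
patch2 = np.column_stack([xy2, np.full(30, 5.0)])
print(plane_attribute(patch1), plane_attribute(patch2))  # both ≈ 5.0
```

    Because this attribute is a single scalar per point, the accumulator array used for clustering stays one-dimensional, which is the dimension reduction the abstract refers to.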

  10. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods.

    PubMed

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-22

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes' high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information.

  11. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    PubMed Central

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-01

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information. PMID:26805849

  12. Loop quantization of Schwarzschild interior revisited

    NASA Astrophysics Data System (ADS)

    Singh, Parampreet; Corichi, Alejandro

    2016-03-01

    Several studies of different inequivalent loop quantizations have shown that there exists no fully satisfactory quantum theory for the Schwarzschild interior: existing quantizations fail either through dependence on the fiducial structure or through the lack of a classical limit. Here we put forward a novel viewpoint to construct a quantum theory that overcomes all of the known problems of the existing quantizations. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying the vacuum Einstein equations is recovered on the ``other side'' of the bounce. We argue that such a metric represents the interior region of a white-hole spacetime, but one for which the corresponding ``white-hole mass'' differs from the original black-hole mass. We compare the differences in physical implications with other quantizations.

  13. Topologies on quantum topoi induced by quantization

    SciTech Connect

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider the effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of these geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that among them there exists a canonical one, which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  14. Towards quantized current arbitrary waveform synthesis

    NASA Astrophysics Data System (ADS)

    Mirovsky, P.; Fricke, L.; Hohls, F.; Kaestner, B.; Leicht, Ch.; Pierz, K.; Melcher, J.; Schumacher, H. W.

    2013-06-01

    The generation of ac-modulated quantized current waveforms using a semiconductor non-adiabatic single-electron pump is demonstrated. In standard operation, the single-electron pump generates a quantized output current of I = ef, where e is the charge of the electron and f is the pumping frequency. Suitable frequency modulation of f allows the generation of ac-modulated output currents with different characteristics. By sinusoidal and sawtooth-like modulation of f, correspondingly modulated quantized current waveforms with kHz modulation frequencies and peak currents of up to 100 pA are obtained. Such ac quantized current sources could find applications ranging from precision ac metrology to on-chip signal generation.
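
    The relation I = ef and its frequency modulation are simple to sketch numerically. A standard-library illustration follows; the 500 MHz carrier frequency and 50% modulation depth are illustrative numbers, not values from the paper:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge e, in coulombs

def pump_current(freq_hz):
    """Quantized output current of a single-electron pump: I = e * f."""
    return E_CHARGE * freq_hz

def sinusoidal_waveform(f0, depth, f_mod, t):
    """Current when the pumping frequency is sinusoidally modulated:
    f(t) = f0 * (1 + depth * sin(2*pi*f_mod*t))."""
    return pump_current(f0 * (1 + depth * math.sin(2 * math.pi * f_mod * t)))

# A 500 MHz pump delivers about 80 pA; 50% modulation at 1 kHz sweeps
# the quantized current between roughly 40 pA and 120 pA.
print(pump_current(500e6) * 1e12, "pA")
```

    Because each cycle still transfers exactly one electron, the instantaneous current stays quantized even while its envelope follows the modulation waveform.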

  15. Color quantization and processing by Fibonacci lattices.

    PubMed

    Mojsilovic, A; Soljanin, E

    2001-01-01

    Color quantization is the sampling of three-dimensional (3-D) color spaces (such as RGB or Lab), which results in a discrete subset of colors known as a color codebook or palette. It is extensively used for display, transfer, and storage of natural images in Internet-based applications, computer graphics, and animation. We propose a sampling scheme that provides a uniform quantization of the Lab space. The idea is based on several results from number theory and phyllotaxy. The sampling algorithm is highly systematic and allows easy design of universal (image-independent) color codebooks for a given set of parameters. The codebook structure allows fast quantization and ordered dither of color images. The display quality of images quantized by the proposed color codebooks is comparable with that of image-dependent quantizers. Most importantly, the quantized images are more amenable to the type of processing used for grayscale ones. Methods for processing grayscale images cannot simply be extended to color images, because they rely on the fact that each gray level is described by a single number and that a relation of full order can easily be established on the set of those numbers; color spaces such as RGB or Lab are, on the other hand, 3-D. The proposed color quantization, i.e., color space sampling and numbering of sampled points, makes methods for processing grayscale images extendible to color images. We illustrate possible processing of color images by first introducing the basic average and difference operations and then implementing edge detection and compression of color-quantized images. PMID:18255513
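
    The phyllotaxy connection can be illustrated with the classic golden-angle spiral construction. This is a simplified sketch of near-uniform disc sampling, e.g. of the a-b chroma plane of Lab at fixed lightness, not the authors' full number-theoretic codebook design:

```python
import math

# Golden angle ≈ 137.5°, the angular step found in plant phyllotaxy.
GOLDEN_ANGLE = 2 * math.pi * (1 - 1 / ((1 + math.sqrt(5)) / 2))

def fibonacci_lattice(n_points, radius=100.0):
    """Spiral sampling of a disc: point i sits at angle i*GOLDEN_ANGLE and
    radius proportional to sqrt(i), giving near-uniform point density."""
    pts = []
    for i in range(n_points):
        r = radius * math.sqrt(i / max(n_points - 1, 1))
        theta = i * GOLDEN_ANGLE
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def quantize(point, palette):
    """Map a 2-D colour coordinate to its nearest lattice (palette) point."""
    return min(palette,
               key=lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2)

palette = fibonacci_lattice(64)
print(quantize((10.0, -20.0), palette))
```

    Because every point's position is a closed-form function of its index, the palette is image-independent and each sampled colour can be addressed by a single integer, which is what makes grayscale-style single-number processing applicable.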

  16. pAUL: A Gateway-Based Vector System for Adaptive Expression and Flexible Tagging of Proteins in Arabidopsis

    PubMed Central

    Lyska, Dagmar; Engelmann, Kerstin; Meierhoff, Karin; Westhoff, Peter

    2013-01-01

    Determination of protein function requires tools that allow its detection and/or purification. As generation of specific antibodies is often laborious and insufficient, protein tagging using epitopes that are recognized by commercially available antibodies and matrices appears more promising. Also, proper spatial and temporal expression of tagged proteins is required to prevent falsification of results. We developed a new series of binary Gateway cloning vectors named pAUL1-20 for C- and N-terminal in-frame fusion of proteins to four different tags: a single (i) HA epitope and (ii) Strep-tagIII, (iii) both epitopes combined into a double tag, and (iv) a triple tag consisting of the double tag extended by a Protein A tag possessing a 3C protease cleavage site. Expression can be driven by either the 35S CaMV promoter or, for C-terminal fusions, promoters from genes encoding the chloroplast biogenesis factors HCF107, HCF136, or HCF173. Fusions of the four promoters to the GUS gene showed that the endogenous promoter sequences are functional and drive expression more moderately and consistently throughout different transgenic lines than the 35S CaMV promoter. By testing complementation of mutants affected in the chloroplast biogenesis factors HCF107 and HCF208, we found that the effect of different promoters and tags on protein function strongly depends on the protein itself. Single-step and tandem affinity purification of HCF208 via different tags confirmed the integrity of the cloned tags. PMID:23326506

  17. Controlling charge quantization with quantum fluctuations

    NASA Astrophysics Data System (ADS)

    Jezouin, S.; Iftikhar, Z.; Anthore, A.; Parmentier, F. D.; Gennser, U.; Cavanna, A.; Ouerghi, A.; Levkivskyi, I. P.; Idrisov, E.; Sukhorukov, E. V.; Glazman, L. I.; Pierre, F.

    2016-08-01

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal–semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  18. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-01

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  1. Introducing Vectors.

    ERIC Educational Resources Information Center

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…

  2. ECG compression using uniform scalar dead-zone quantization and conditional entropy coding.

    PubMed

    Chen, Jianhua; Wang, Fuyan; Zhang, Yufeng; Shi, Xinling

    2008-05-01

    A new wavelet-based method for the compression of electrocardiogram (ECG) data is presented. A discrete wavelet transform (DWT) is applied to the digitized ECG signal. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer, and then the quantized coefficients are decomposed into four symbol streams, representing a binary significance stream, the signs, the positions of the most significant bits, and the residual bits. An adaptive arithmetic coder with several different context models is employed for the entropy coding of these symbol streams. Simulation results on several records from the MIT-BIH arrhythmia database show that the proposed coding algorithm outperforms some recently developed ECG compression algorithms.
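The uniform scalar dead-zone quantization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the midpoint reconstruction rule are assumptions.

```python
import numpy as np

def deadzone_quantize(x, step):
    # Uniform scalar dead-zone quantizer: the zero bin spans (-step, step),
    # twice the width of the other bins, which suppresses small (noisy)
    # wavelet coefficients and lengthens zero runs for the entropy coder.
    x = np.asarray(x, dtype=float)
    return (np.sign(x) * np.floor(np.abs(x) / step)).astype(int)

def deadzone_dequantize(q, step):
    # Reconstruct each nonzero coefficient at the midpoint of its bin.
    q = np.asarray(q, dtype=float)
    return np.sign(q) * (np.abs(q) + 0.5) * step

coeffs = np.array([0.4, -0.9, 1.5, -2.3, 6.0])
q = deadzone_quantize(coeffs, step=1.0)
print(q.tolist())  # -> [0, 0, 1, -2, 6]
```

The indices `q` would then be split into the significance, sign, and residual symbol streams for arithmetic coding.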

  3. Rate quantization modeling for rate control of MPEG video coding and recording

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Liu, Bede

    1995-04-01

For MPEG video coding and recording applications, it is important to select quantization parameters at the slice and macroblock levels to produce nearly constant-quality images for a given bit-count budget. A well-designed rate control strategy can improve overall image quality for video transmission over a constant-bit-rate channel and fulfill the editing requirements of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage medium using at most the same number of bits. In this paper, we develop a feedback method with a rate-quantization model that can be adapted to changes in picture activity. The model is used for quantization parameter selection at the frame and slice levels. The extra computations needed are modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
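The abstract does not give the model's exact form. Purely as an illustration of the feedback idea, a first-order rate-quantization model R(Q) ≈ X/Q, with the activity parameter X refitted from the bits actually produced, might look like this (all names and the model form are assumptions):

```python
def update_activity(bits_used, q_used):
    # Feedback step: refit the model parameter X from the bits actually
    # produced at the quantization parameter that was used (R(Q) ~ X/Q).
    return bits_used * q_used

def select_q(x_model, bit_budget, q_min=1, q_max=31):
    # Invert the model so the predicted rate meets the bit budget,
    # clamped to the legal MPEG quantizer range.
    q = round(x_model / max(bit_budget, 1))
    return max(q_min, min(q_max, q))

x = update_activity(bits_used=120_000, q_used=8)  # picture activity estimate
print(select_q(x, bit_budget=60_000))  # -> 16
```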

  4. A comparative study of artificial neural network, adaptive neuro fuzzy inference system and support vector machine for forecasting river flow in the semiarid mountain region

    NASA Astrophysics Data System (ADS)

    He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun

    2014-02-01

Data-driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these models still perform well in the small river basins of semiarid mountain regions, which have complicated topography. In this study, the potential of three different data-driven methods, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), was assessed for forecasting river flow in the semiarid mountain region of northwestern China. The models analyzed different combinations of antecedent river flow values, and the appropriate input vector was selected based on an analysis of residuals. The performance of the ANN, ANFIS and SVM models on the training and validation sets was compared with the observed data. The model consisting of three antecedent flow values was selected as the best-fit model for river flow forecasting. To evaluate the results of the ANN, ANFIS and SVM models more rigorously, four standard quantitative statistical performance measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria does not vary substantially between the training and validation periods; the performance of all three models in river flow forecasting was satisfactory. A detailed comparison of overall performance indicated that the SVM model performed better than ANN and ANFIS on the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
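The four evaluation measures named above are standard and can be computed directly; a small sketch (function name is illustrative, and MARE assumes nonzero observations):

```python
import numpy as np

def forecast_metrics(obs, sim):
    # R: correlation coefficient; RMSE: root mean squared error;
    # NS: Nash-Sutcliffe efficiency; MARE: mean absolute relative error.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    mare = np.mean(np.abs(obs - sim) / np.abs(obs))
    return r, rmse, ns, mare

obs = [10.0, 12.0, 9.0, 14.0]   # observed flows (toy values)
sim = [11.0, 12.0, 8.0, 14.0]   # simulated flows (toy values)
r, rmse, ns, mare = forecast_metrics(obs, sim)
```

A perfect forecast yields RMSE = 0, NS = 1 and MARE = 0, which is why NS close to 1 is read as a satisfactory model.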

  5. Recovery of quantized compressed sensing measurements

    NASA Astrophysics Data System (ADS)

    Tsagkatakis, Grigorios; Tsakalides, Panagiotis

    2015-03-01

Compressed Sensing (CS) is a novel mathematical framework that has revolutionized modern signal and image acquisition architectures ranging from one-pixel cameras, to range imaging and medical ultrasound imaging. According to CS, a sparse signal, or a signal that can be sparsely represented in an appropriate collection of elementary examples, can be recovered from a small number of random linear measurements. However, real-life systems may introduce non-linearities in the encoding in order to achieve a particular goal. Quantization of the acquired measurements is an example of such a non-linearity, introduced in order to reduce storage and communications requirements. In this work, we consider the case of scalar quantization of CS measurements and propose a novel recovery mechanism that enforces the constraints associated with the quantization process during recovery. The proposed recovery mechanism, termed Quantized Orthogonal Matching Pursuit (Q-OMP), is based on a modification of the OMP greedy sparsity-seeking algorithm in which the process of quantization is explicitly considered during decoding. Simulation results on the recovery of images acquired by a CS approach reveal that the modified framework is able to achieve significantly higher reconstruction performance compared to its naive counterpart under a wide range of sampling rates and sensing parameters, at a minimum cost in computational complexity.
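For reference, plain OMP, the greedy baseline that Q-OMP modifies, can be sketched as below. The quantization-consistency constraints of Q-OMP itself are not reproduced here; this is only the unmodified algorithm.

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal Matching Pursuit: repeatedly pick the column of A most
    # correlated with the residual, then re-fit all selected columns by
    # least squares. Q-OMP would additionally enforce that the re-fitted
    # measurements stay inside the quantization bins of y.
    residual, support = y.astype(float).copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: identity sensing matrix, 2-sparse signal.
A = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 1.0])
print(omp(A, y, sparsity=2).tolist())  # -> [0.0, 3.0, 0.0, 1.0]
```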

  6. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and they performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). Using the MARS model decreased the average root mean square error (RMSE) of the LSSVM and M5Tree models by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.

  7. [Vector control and malaria control].

    PubMed

    Carnevale, P; Mouchet, J

    1990-01-01

Vector control is an integral part of malaria control. By limiting parasite transmission, vector control must be considered one of the main preventive measures: it prevents transmission of Plasmodium from man to vector and from vector to man. But vector control must be adapted to the local situation to be efficient and feasible. The targets of vector control can be the larval and/or adult stages. In both cases three main methods are currently available: physical (source reduction), chemical (insecticides) and biological tools. Antilarval control is useful only in some particular circumstances (unstable malaria, islands, oases...). Anti-adult control is mainly based upon house spraying, while pyrethroid-treated bed nets are advocated for their efficiency, simple technique and low price. Vector control measures could seem limited but can be very efficient if political will is combined with the right choice of adapted measures, good training of the personnel involved and broad information of the population concerned with vector control.

  8. Smooth big bounce from affine quantization

    NASA Astrophysics Data System (ADS)

    Bergeron, Hervé; Dapor, Andrea; Gazeau, Jean Pierre; Małkiewicz, Przemysław

    2014-04-01

    We examine the possibility of dealing with gravitational singularities on a quantum level through the use of coherent state or wavelet quantization instead of canonical quantization. We consider the Robertson-Walker metric coupled to a perfect fluid. It is the simplest model of a gravitational collapse, and the results obtained here may serve as a useful starting point for more complex investigations in the future. We follow a quantization procedure based on affine coherent states or wavelets built from the unitary irreducible representation of the affine group of the real line with positive dilation. The main issue of our approach is the appearance of a quantum centrifugal potential allowing for regularization of the singularity, essential self-adjointness of the Hamiltonian, and unambiguous quantum dynamical evolution.

  9. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-10-12

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  10. Single Abrikosov vortices as quantized information bits

    NASA Astrophysics Data System (ADS)

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-10-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex.

  11. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  12. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Wang, Qing; He, Zhoucan

The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we treat the quantization step as a nearest neighbor search in a large visual vocabulary, and propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to the vocabulary-tree-based methods, which represent the state of the art. Consequently, the object retrieval performance on large-scale databases can be significantly improved by our method.

  13. Observation of Quantized and Partial Quantized Conductance in Polymer-Suspended Graphene Nanoplatelets

    NASA Astrophysics Data System (ADS)

    Kang, Yuhong; Ruan, Hang; Claus, Richard O.; Heremans, Jean; Orlowski, Marius

    2016-04-01

Quantized conductance is observed at zero magnetic field and room temperature in metal-insulator-metal structures with submicron-sized graphene nanoplatelets embedded in a 3-hexylthiophene (P3HT) polymer layer. In devices with a medium concentration of graphene platelets, integer multiples of G₀ = 2e²/h (the inverse of 12.91 kΩ) are observed, and in some devices partial quantization, including a series of (n/7) × G₀ steps. Such an organic memory device exhibits reliable memory operation with an on/off ratio of more than 10. We attribute the quantized conductance to the existence of a 1-D electron waveguide along the conductive path. The partial quantized conductance likely results from an imperfect transmission coefficient due to impedance mismatch of the first waveguide modes.
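The quoted value of the conductance quantum can be checked directly from the exact SI (2019) constants:

```python
E = 1.602176634e-19   # elementary charge e, in coulombs (exact, SI 2019)
H = 6.62607015e-34    # Planck constant h, in joule-seconds (exact, SI 2019)

G0 = 2 * E**2 / H     # conductance quantum 2e^2/h, in siemens
R0 = 1.0 / G0         # corresponding resistance, in ohms

print(round(R0 / 1e3, 2))  # -> 12.91 (kilo-ohms, matching the abstract)
```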

  14. Deterministic Quantization by Dynamical Boundary Conditions

    SciTech Connect

    Dolce, Donatello

    2010-06-15

    We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.

  15. Nonlinear ADC with digitally selectable quantizing characteristics

    SciTech Connect

    Lygouras, J.N.

    1988-10-01

In this paper a method is presented for generating linear or nonlinear functions digitally. The nonlinear analog-to-digital conversion (NLADC) is accomplished using pulse width modulation (PWM) of the analog input voltage. The conversion is done according to a special quantizing characteristic function (Q.C.F.), which depends on the specific application. This special Q.C.F., sampled, quantized and coded, has been stored in an EPROM. The quantizing characteristic can be any monotonically increasing function of any type (e.g., linear, square, exponential, etc.), resulting in a very flexible linear or nonlinear A/D converter. More than one Q.C.F. can be stored in the EPROM. Such an NLADC could be used for the expansion or compression of the dynamic range in nuclear science measurements, in robotics for Cartesian-space path planning, as in the case of pulse code modulation (PCM) nonlinear quantization, etc. The corresponding nonlinear digital-to-analog converter is also described.
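The EPROM lookup idea can be illustrated with a small sketch. The table layout and function names are assumptions for illustration, not the paper's hardware design:

```python
def build_table(qcf, n_codes, v_max):
    # Sample and scale a monotone quantizing characteristic function,
    # mimicking the stored EPROM contents: table[k] is the lowest input
    # voltage that maps to output code k.
    return [qcf(k / (n_codes - 1)) * v_max for k in range(n_codes)]

def convert(v_in, table):
    # The PWM conversion reduced to its effect: emit the largest code
    # whose threshold the input voltage has reached.
    code = 0
    for k, threshold in enumerate(table):
        if v_in >= threshold:
            code = k
    return code

# A square-law characteristic gives fine resolution at low voltages and
# coarse resolution at high voltages over a 0-10 V range.
table = build_table(lambda u: u * u, n_codes=8, v_max=10.0)
print(convert(5.0, table))   # -> 4
print(convert(10.0, table))  # -> 7
```

Swapping in a different `qcf` (exponential, logarithmic, ...) changes the converter's characteristic without any other change, which is the flexibility the abstract describes.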

  16. Bolometric Device Based on Fluxoid Quantization

    NASA Technical Reports Server (NTRS)

    Bonetti, Joseph A.; Kenyon, Matthew E.; Leduc, Henry G.; Day, Peter K.

    2010-01-01

The device exploits the temperature dependence of fluxoid quantization in a superconducting loop. The sensitivity of the device is expected to surpass that of other superconducting-based bolometric devices, such as superconducting transition-edge sensors and superconducting nanowire devices. Just as important, the proposed device has advantages in sample fabrication.

  17. Deformation quantization and boundary value problems

    NASA Astrophysics Data System (ADS)

    Tarkhanov, Nikolai

    2016-11-01

    We describe a natural construction of deformation quantization on a compact symplectic manifold with boundary. On the algebra of quantum observables a trace functional is defined which as usual annihilates the commutators. This gives rise to an index as the trace of the unity element. We formulate the index theorem as a conjecture and examine it by the classical harmonic oscillator.

  18. Hysteresis in a quantized superfluid 'atomtronic' circuit.

    PubMed

    Eckel, Stephen; Lee, Jeffrey G; Jendrzejewski, Fred; Murray, Noel; Clark, Charles W; Lobb, Christopher J; Phillips, William D; Edwards, Mark; Campbell, Gretchen K

    2014-02-13

    Atomtronics is an emerging interdisciplinary field that seeks to develop new functional methods by creating devices and circuits where ultracold atoms, often superfluids, have a role analogous to that of electrons in electronics. Hysteresis is widely used in electronic circuits-it is routinely observed in superconducting circuits and is essential in radio-frequency superconducting quantum interference devices. Furthermore, it is as fundamental to superfluidity (and superconductivity) as quantized persistent currents, critical velocity and Josephson effects. Nevertheless, despite multiple theoretical predictions, hysteresis has not been previously observed in any superfluid, atomic-gas Bose-Einstein condensate. Here we directly detect hysteresis between quantized circulation states in an atomtronic circuit formed from a ring of superfluid Bose-Einstein condensate obstructed by a rotating weak link (a region of low atomic density). This contrasts with previous experiments on superfluid liquid helium where hysteresis was observed directly in systems in which the quantization of flow could not be observed, and indirectly in systems that showed quantized flow. Our techniques allow us to tune the size of the hysteresis loop and to consider the fundamental excitations that accompany hysteresis. The results suggest that the relevant excitations involved in hysteresis are vortices, and indicate that dissipation has an important role in the dynamics. Controlled hysteresis in atomtronic circuits may prove to be a crucial feature for the development of practical devices, just as it has in electronic circuits such as memories, digital noise filters (for example Schmitt triggers) and magnetometers (for example superconducting quantum interference devices). PMID:24522597

  20. Quantization of the chiral soliton in medium

    NASA Astrophysics Data System (ADS)

    Nagai, S.; Sawado, N.; Shiiki, N.

    2006-01-01

Chiral solitons coupled with quarks in medium are studied based on the Wigner-Seitz approximation. The chiral quark soliton model is used to obtain the classical soliton solutions. To investigate the nucleon and Δ in matter, semi-classical quantization is performed by the cranking method. Saturation is observed for both nucleon matter and Δ matter.

  1. Multiverse in the Third Quantized Formalism

    NASA Astrophysics Data System (ADS)

    Mir, Faizal

    2014-11-01

In this paper we will analyze the third quantization of gravity in path integral formalism. We will use the time-dependent version of the Wheeler-DeWitt equation to analyze the multiverse in this formalism. We will propose a mechanism for baryogenesis to occur in the multiverse, without violating the baryon number conservation.

  2. The Angular Momentum Dilemma and Born-Jordan Quantization

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-10-01

The rigorous equivalence of the Schrödinger and Heisenberg pictures requires that one uses Born-Jordan quantization in place of Weyl quantization. We confirm this by showing that the much discussed "angular momentum dilemma" disappears if one uses Born-Jordan quantization. We argue that the latter is the only physically correct quantization procedure. We also briefly discuss a possible redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution associated with Born-Jordan quantization, and which has proven to be successful in time-frequency analysis.

  3. Quantized fiber dynamics for extended elementary objects involving gravitation

    NASA Astrophysics Data System (ADS)

    Drechsler, W.

    1992-08-01

The geometro-stochastic quantization of a gauge theory for extended objects based on the (4, 1)-de Sitter group is used for the description of quantized matter in interaction with gravitation. In this context a Hilbert bundle ℋ over curved space-time B is introduced, possessing the standard fiber ℋ_η̄^(ρ), a resolution kernel Hilbert space (with resolution generator η̃ and generalized coherent state basis) carrying a spin-zero phase space representation of G = SO(4, 1) belonging to the principal series of unitary irreducible representations determined by the parameter ρ. The bundle ℋ, associated to the de Sitter frame bundle P(B, G), provides a geometric arena with built-in fundamental length parameter R (taken to be of the order of 10⁻¹³ cm, characterizing hadron physics) yielding, in the presence of gravitation, a quantum kinematical framework for the geometro-stochastic description of spinless matter described in terms of generalized quantum mechanical wave functions, Ψ_x^(ρ)(ξ, ζ), defined on ℋ. By going over to a nonlinear realization of the de Sitter group with the help of a section ξ(x) on the soldered bundle E, associated to P, with homogeneous fiber V′₄ ≃ G/H, one is able to recover gravitation in a de Sitter gauge invariant manner as a gauge theory related to the Lorentz subgroup H of G. ξ(x) plays the dual role of a symmetry-reducing and an extension field. After introducing covariant bilinear source currents in the fields Ψ_x^(ρ)(ξ, ζ) and their adjoints, determined by G-invariant integration over the local fibers in ℋ, a quantum fiber dynamical (QFD) framework is set up for the dynamics at small distances in B, determining the geometric quantities beyond the classical metric of Einstein's theory through a set of current-curvature field equations representing the source equations for axial vector torsion and the de Sitter boost contributions to the bundle connection (the latter defining the soldering forms of

  4. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
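
    The efficiency of the VSELP search comes from the codebook structure: each codevector is a signed sum of M basis vectors, so the codebook of 2^M entries is determined by only M vectors. A minimal sketch of that structure (sizes and the brute-force search are illustrative; the actual coder uses recursive sign-flip updates rather than full enumeration):

```python
import numpy as np

def vector_sum_codebook(basis):
    """Enumerate all 2^M codevectors c = sum_i theta_i * b_i, theta_i = +/-1,
    where the rows of `basis` are the M basis vectors."""
    M, L = basis.shape
    codebook = []
    for code in range(2 ** M):
        signs = np.array([1 if (code >> i) & 1 else -1 for i in range(M)])
        codebook.append(signs @ basis)
    return np.array(codebook)

def best_codevector(codebook, target):
    """Analysis-by-synthesis style criterion: maximize <c,t>^2 / <c,c>."""
    corr = codebook @ target
    energy = np.sum(codebook ** 2, axis=1)
    return int(np.argmax(corr ** 2 / energy))

rng = np.random.default_rng(0)
basis = rng.standard_normal((4, 40))   # M = 4 basis vectors, 40-sample subframe
target = rng.standard_normal(40)       # target excitation (illustrative)
cb = vector_sum_codebook(basis)
idx = best_codevector(cb, target)
print(cb.shape)  # (16, 40)
```

    Note that complementary sign patterns give negated codevectors, which is one of the symmetries the efficient search exploits.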

  5. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-08-15

Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air, followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then
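
    The clustering step can be illustrated with a plain fuzzy c-means update on scalar gray levels (a generic FCM sketch, not the paper's adaptive variant; the data and the quantile initialization are illustrative):

```python
import numpy as np

def fuzzy_cmeans_1d(x, c, m=2.0, iters=50):
    """Standard fuzzy c-means on scalar gray levels x: alternate the membership
    and center updates; returns sorted cluster centers and memberships."""
    centers = np.quantile(x, (np.arange(c) + 0.5) / c)    # deterministic init
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # (c, n) distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                         # fuzzy memberships
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)                 # weighted centroids
    order = np.argsort(centers)
    return centers[order], u[order]

# Two synthetic intensity populations standing in for darker vs brighter tissue
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 300), rng.normal(200, 5, 300)])
centers, u = fuzzy_cmeans_1d(x, c=2)
print(centers)   # close to [50, 200]
```

    In the paper's pipeline the number of clusters c is itself chosen adaptively per mammogram; here it is fixed for brevity.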

  6. Loop quantization of the Schwarzschild black hole.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2013-05-24

    We quantize spherically symmetric vacuum gravity without gauge fixing the diffeomorphism constraint. Through a rescaling, we make the algebra of Hamiltonian constraints Abelian, and therefore the constraint algebra is a true Lie algebra. This allows the completion of the Dirac quantization procedure using loop quantum gravity techniques. We can construct explicitly the exact solutions of the physical Hilbert space annihilated by all constraints. New observables living in the bulk appear at the quantum level (analogous to spin in quantum mechanics) that are not present at the classical level and are associated with the discrete nature of the spin network states of loop quantum gravity. The resulting quantum space-times resolve the singularity present in the classical theory inside black holes. PMID:23745855

  7. Quantized magnetoresistance in atomic-size contacts.

    PubMed

    Sokolov, Andrei; Zhang, Chunjuan; Tsymbal, Evgeny Y; Redepenning, Jody; Doudin, Bernard

    2007-03-01

    When the dimensions of a metallic conductor are reduced so that they become comparable to the de Broglie wavelengths of the conduction electrons, the absence of scattering results in ballistic electron transport and the conductance becomes quantized. In ferromagnetic metals, the spin angular momentum of the electrons results in spin-dependent conductance quantization and various unusual magnetoresistive phenomena. Theorists have predicted a related phenomenon known as ballistic anisotropic magnetoresistance (BAMR). Here we report the first experimental evidence for BAMR by observing a stepwise variation in the ballistic conductance of cobalt nanocontacts as the direction of an applied magnetic field is varied. Our results show that BAMR can be positive and negative, and exhibits symmetric and asymmetric angular dependences, consistent with theoretical predictions. PMID:18654248

  9. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
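
    The hierarchy sequence quoted above arises, in the combinatorial-hierarchy construction, as the cumulative sums of the Mersenne chain 3, 7, 127, 2^127 - 1 (each term is 2 raised to the previous term, minus one); a minimal check:

```python
def combinatorial_hierarchy(levels=4):
    """Cumulative sums of the Mersenne chain m -> 2**m - 1, starting at m = 2:
    chain 3, 7, 127, 2**127 - 1 gives sums 3, 10, 137, 2**127 + 136."""
    m, total, sums = 2, 0, []
    for _ in range(levels):
        m = 2 ** m - 1
        total += m
        sums.append(total)
    return sums

seq = combinatorial_hierarchy()
print(seq[:3])   # [3, 10, 137]
```

    The third cumulative sum, 137, is what connects the construction to the inverse fine-structure constant in this line of work.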

  10. Quantization of the nonlinear sigma model revisited

    NASA Astrophysics Data System (ADS)

    Nguyen, Timothy

    2016-08-01

    We revisit the subject of perturbatively quantizing the nonlinear sigma model in two dimensions from a rigorous, mathematical point of view. Our main contribution is to make precise the cohomological problem of eliminating potential anomalies that may arise when trying to preserve symmetries under quantization. The symmetries we consider are twofold: (i) diffeomorphism covariance for a general target manifold; (ii) a transitive group of isometries when the target manifold is a homogeneous space. We show that there are no anomalies in case (i) and that (ii) is also anomaly-free under additional assumptions on the target homogeneous space, in agreement with the work of Friedan. We carry out some explicit computations for the O(N)-model. Finally, we show how a suitable notion of the renormalization group establishes the Ricci flow as the one loop renormalization group flow of the nonlinear sigma model.

  11. Quantized Mechanics of Nanotubes and Bundles

    NASA Astrophysics Data System (ADS)

    Pugno, Nicola M.

    In this chapter, the mechanics of carbon nanotubes and related bundles is reviewed, with an eye to their application as ultra-sharp tips for scanning probe "nanoscopy". In particular, the role of thermodynamically unavoidable, atomistic defects with different sizes and shapes on the fracture strength, fatigue life, and elasticity is quantified, thanks to new quantized fracture mechanics approaches. The reader is introduced in a simple way to such innovative treatments at the beginning of the chapter.

  12. Capture and transparency in coarse quantized images.

    PubMed

    Morrone, M C; Burr, D C

    1997-09-01

This study examines the effect of coarse quantization (blocking) on image recognition, and explores possible mechanisms. Thresholds for noise corruption showed that coarse quantization drastically reduces the recognizability of both faces and letters, well beyond the levels expected from equivalent blurring. Phase-shifting the spurious high frequencies introduced by the blocking (with an operation designed to leave overall contrast, local contrast, and feature localization unaffected) greatly improved recognizability of both faces and letters. For large phase shifts, the low spatial frequencies appear in transparency behind a grid structure of checks or lines. We also studied a simpler example of blocking, the checkerboard, which can be considered a coarsely quantized diagonal sinusoidal plaid. When one component of the plaid was contrast-inverted, it was seen in transparency against the checkerboard, while the other remained "captured" within the block structure. If the higher harmonics are then phase-shifted by pi, the contrast-reversed fundamental becomes captured and the other is seen in transparency. Intermediate phase shifts of the higher harmonics cause intermediate effects, which we measured by adjusting the relative contrast of the fundamentals until neither orientation dominated. The contrast match varied considerably with the phase of the higher harmonics, over a range of about 1.5 log units. Simulations with the local energy model predicted qualitatively the results on the recognizability of both faces and letters, and quantitatively the apparent orientation of the modified checkerboard pattern. More generally, the model predicts the conditions under which an image will be "captured" by coarse quantization, or seen in transparency.
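
    Coarse quantization of this kind replaces each N x N block of pixels by its mean, which is exactly what introduces the spurious high frequencies discussed above; a minimal sketch:

```python
import numpy as np

def block_quantize(img, n):
    """Replace each n x n block of img by its mean gray level (pixelation)."""
    h, w = img.shape
    assert h % n == 0 and w % n == 0
    blocks = img.reshape(h // n, n, w // n, n)
    means = blocks.mean(axis=(1, 3), keepdims=True)       # one mean per block
    return np.broadcast_to(means, blocks.shape).reshape(h, w)

img = np.arange(64, dtype=float).reshape(8, 8)
coarse = block_quantize(img, 4)
print(coarse[0, 0])   # 13.5, the mean of the top-left 4x4 block
```

    The blocky output has the same low-frequency content as a blurred image but adds the sharp block edges whose phase the study manipulates.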

  13. Light-Front Quantization of Gauge Theories

    SciTech Connect

Brodsky, Stanley

    2002-12-01

Light-front wavefunctions provide a frame-independent representation of hadrons in terms of their physical quark and gluon degrees of freedom. The light-front Hamiltonian formalism provides new nonperturbative methods for obtaining the QCD spectrum and eigensolutions, including resolvent methods, variational techniques, and discretized light-front quantization. A new method for quantizing gauge theories in light-cone gauge using Dirac brackets to implement constraints is presented. In the case of the electroweak theory, this method of light-front quantization leads to a unitary and renormalizable theory of massive gauge particles, automatically incorporating the Lorentz and 't Hooft conditions as well as the Goldstone boson equivalence theorem. Spontaneous symmetry breaking is represented by the appearance of zero modes of the Higgs field leaving the light-front vacuum equal to the perturbative vacuum. I also discuss an ''event amplitude generator'' for automatically computing renormalized amplitudes in perturbation theory. The importance of final-state interactions for the interpretation of diffraction, shadowing, and single-spin asymmetries in inclusive reactions such as deep inelastic lepton-hadron scattering is emphasized.

  14. Single Abrikosov vortices as quantized information bits.

    PubMed

    Golod, T; Iovan, A; Krasnov, V M

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex. PMID:26456592

  15. Single Abrikosov vortices as quantized information bits

    PubMed Central

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logics. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for creation of high-density digital cryoelectronics. In this work we provide a proof of concept for Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between 0 and 1 states, a short access time, a scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent for such a device due to the quantized nature of the vortex. PMID:26456592

  16. Quantized Nambu-Poisson manifolds and n-Lie algebras

    NASA Astrophysics Data System (ADS)

    DeBellis, Joshua; Sämann, Christian; Szabo, Richard J.

    2010-12-01

We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of ℝ^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  17. Analysis and Design of Logarithmic-type Dynamic Quantizer

    NASA Astrophysics Data System (ADS)

    Sugie, Toshiharu; Okamoto, Tetsuro

    This paper is concerned with quantized feedback control in the case where logarithmic-type dynamic quantizers are adopted instead of conventional static (memoryless) ones. First, when the plant and the state feedback controller are given, the admissible coarsest quantization density which guarantees quadratic stability of the closed loop system is given in a closed form, which does not depend on the choice of controller in contrast to the static quantizer case. Second, when the plant, the state feedback controller and the coarseness of the quantization density are given, we provide a design method of the dynamic quantizers via convex optimization. Third, these results are extended to the case of output feedback control systems. Finally, some numerical examples are given to demonstrate the effectiveness of the proposed method.
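
    The static part of a logarithmic quantizer maps an input to the nearest level ±u0·ρ^k, with the density parameter ρ in (0, 1) controlling how coarse the grid is (levels get denser toward zero). A sketch of that level mapping only (the dynamic quantizers analyzed in the paper additionally wrap this in a feedback filter, which is not shown; u0 and ρ are illustrative):

```python
import math

def log_quantize(u, rho=0.5, u0=1.0):
    """Nearest logarithmic level sign(u) * u0 * rho**k, rounding in the log
    domain; rho in (0, 1) sets the quantization density."""
    if u == 0.0:
        return 0.0
    k = round(math.log(abs(u) / u0, rho))   # nearest exponent on the rho-grid
    return math.copysign(u0 * rho ** k, u)

for u in (0.8, 0.3, -0.1):
    print(u, log_quantize(u))   # 1.0, 0.25, -0.125
```

    Because the levels scale geometrically, the relative (not absolute) quantization error stays bounded, which is what makes quadratic stabilization with a coarse quantizer possible.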

  18. Quantized Nambu-Poisson manifolds and n-Lie algebras

    SciTech Connect

    DeBellis, Joshua; Saemann, Christian; Szabo, Richard J.

    2010-12-15

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R{sup n} by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  19. Imaginary-scaling versus indefinite-metric quantization of the Pais-Uhlenbeck oscillator

    NASA Astrophysics Data System (ADS)

    Mostafazadeh, Ali

    2011-11-01

    Using the Pais-Uhlenbeck oscillator as a toy model, we outline a consistent alternative to the indefinite-metric quantization scheme that does not violate unitarity. We describe the basic mathematical structure of this method by giving an explicit construction of the Hilbert space of state vectors and the corresponding creation and annihilation operators. The latter satisfy the usual bosonic commutation relation and differ from those of the indefinite-metric theories by a sign in the definition of the creation operator. This change of sign achieves a definitization of the indefinite metric that gives life to the ghost states without changing their contribution to the energy spectrum.

  20. Detection of perturbed quantization class stego images based on possible change modes

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Liu, Fenlin; Yang, Chunfang; Luo, Xiangyang; Song, Xiaofeng

    2015-11-01

To improve the detection performance for perturbed quantization (PQ) class [PQ, energy-adaptive PQ (PQe), and texture-adaptive PQ (PQt)] stego images, a detection method based on possible change modes is proposed. First, by using the relationship between the changeable coefficients used for carrying secret messages and the second quantization steps, the modes having even second quantization steps are identified as possible change modes. Second, by referencing the existing features, modified features that accurately capture the embedding changes restricted to possible change modes are extracted. Next, feature sensitivity analyses based on the modifications performed before and after the embedding are carried out. These analyses show that the modified features are more sensitive than the original features. Experimental results indicate that the detection performance of the modified features is better than that of the corresponding original features for three typical feature models [Cartesian calibrated Pevný features (ccPEV), Cartesian calibrated co-occurrence matrix features (CF), and JPEG rich model (JRM)], and that the integrated feature consisting of enhanced histogram features (EHF) and the modified JRM outperforms two current state-of-the-art feature models, namely, the phase aware projection model (PHARM) and the Gabor rich model (GRM).
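
    The first step reduces to a parity check on the second quantization steps; a sketch (the step values below are illustrative, not taken from the paper):

```python
def possible_change_modes(q2_table):
    """Return the DCT mode indices whose second quantization step is even;
    per the method above, only those modes can carry PQ embedding changes."""
    return [idx for idx, step in enumerate(q2_table) if step % 2 == 0]

# Illustrative second-quantization steps for the first few zig-zag modes
q2 = [16, 11, 12, 14, 12, 10, 16, 14]
print(possible_change_modes(q2))   # [0, 2, 3, 4, 5, 6, 7]
```

    Restricting feature extraction to these modes is what makes the modified features more sensitive than features computed over all modes.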

  1. Stochastic variational method as quantization scheme: Field quantization of the complex Klein-Gordon equation

    NASA Astrophysics Data System (ADS)

    Koide, T.; Kodama, T.

    2015-09-01

The stochastic variational method (SVM) is the generalization of the variational approach to systems described by stochastic variables. In this paper, we investigate the applicability of SVM as an alternative field-quantization scheme, by considering the complex Klein-Gordon equation. There, the Euler-Lagrange equation for the stochastic field variables leads to the functional Schrödinger equation, which can be interpreted as the Euler (ideal fluid) equation in the functional space. The present formulation is a quantization scheme based on commutable variables, so that there appears no ambiguity associated with the ordering of operators, e.g., in the definition of Noether charges.

  2. Adaptive codebook selection schemes for image classification in correlated channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia Chang; Liu, Xiang Lian; Liu, Kuan-Fu

    2015-09-01

The multiple-input multiple-output (MIMO) system with the use of transmit and receive antenna arrays achieves diversity and array gains via transmit beamforming. In the absence of full channel state information (CSI) at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent back to the transmitter over a low-rate feedback channel, called limited feedback beamforming. One of the key problems in Vector Quantization (VQ) is how to generate a good codebook such that the distortion between the original image and the reconstructed image is minimized. In this paper, a novel adaptive codebook selection scheme for image classification is proposed, taking both the spatial and the temporal correlation inherent in the channel into consideration. The new codebook selection algorithm selects two codebooks from among the discrete Fourier transform (DFT) codebook, the generalized Lloyd algorithm (GLA) codebook, and the Grassmannian codebook to be combined and used as candidates for the original image and the reconstructed image during image transmission. The channel is estimated and divided into four regions based on the spatial and temporal correlation of the channel, and an appropriate codebook is assigned to each region. The proposed method can efficiently reduce the required feedback information under spatially and temporally correlated channels, where the codebook for each region is chosen adaptively. Simulation results show that in the case of temporally and spatially correlated channels, the bit-error-rate (BER) performance can be improved substantially by the proposed algorithm compared to one with only a single codebook.
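
    Limited feedback beamforming of this kind quantizes the ideal beamforming vector against a shared codebook by maximizing the beamforming gain |c^H w| and feeding back only the winning index; a DFT-codebook sketch (antenna count and codebook size are illustrative):

```python
import numpy as np

def dft_codebook(nt, bits):
    """Columns of an oversampled DFT matrix as unit-norm beamforming codewords."""
    n = 2 ** bits
    k = np.arange(nt)[:, None] * np.arange(n)[None, :]
    return np.exp(2j * np.pi * k / n) / np.sqrt(nt)   # shape (nt, n)

def quantize_beamformer(w, codebook):
    """Feed back the index of the codeword best aligned with w."""
    gains = np.abs(codebook.conj().T @ w)
    return int(np.argmax(gains))

cb = dft_codebook(nt=4, bits=4)           # 4 tx antennas, 4 feedback bits
w = cb[:, 5] * np.exp(1j * 0.3)           # a codeword up to a phase rotation
print(quantize_beamformer(w, cb))         # 5
```

    The adaptive scheme of the paper goes further by switching which codebook (DFT, GLA, or Grassmannian) plays this role depending on the estimated channel correlation region.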

  3. Self-adjointness of the Fourier expansion of quantized interaction field Lagrangians

    PubMed Central

    Paneitz, S. M.; Segal, I. E.

    1983-01-01

Regularity properties significantly stronger than were previously known are developed for four-dimensional non-linear conformally invariant quantized fields. The Fourier coefficients of the interaction Lagrangian in the interaction representation (i.e., evaluated after substitution of the associated quantized free field) are densely defined operators on the associated free field Hilbert space K. These Fourier coefficients are with respect to a natural basis in the universal cosmos M̃, to which such fields canonically and maximally extend from Minkowski space-time M₀, which is covariantly a submanifold of M̃. However, conformally invariant free fields over M₀ and M̃ are canonically identifiable. The kth Fourier coefficient aₖ of the interaction Lagrangian has domain inclusive of all vectors in K to which arbitrary powers of the free Hamiltonian in M̃ are applicable. Its adjoint in the rigorous Hilbert space sense is a₋ₖ in the case of a hermitian Lagrangian. In particular (k = 0) the leading term in the perturbative expansion of the S-matrix for a conformally invariant quantized field in M₀ is a self-adjoint operator. Thus, e.g., if ϕ(x) denotes the free massless neutral scalar field in M₀, then ∫_{M₀} :ϕ(x)⁴: d⁴x is a self-adjoint operator. No coupling constant renormalization is involved here. PMID:16593346

  4. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits.

    PubMed

    Fujii, Mikiya; Yamashita, Koichi

    2015-02-21

    We present a semiclassical quantization condition, i.e., quantum-classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller's trace formula to a nonadiabatic form. The quantum-classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics.

  5. Perceptually optimized quantization tables for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Chen, Heng; Braeckman, Geert; Barbarien, Joeri; Munteanu, Adrian; Schelkens, Peter

    2010-08-01

The H.264/AVC video coding standard currently represents the state-of-the-art in video compression technology. The initial version of the standard only supported a single quantization step size for all the coefficients in a transformed block. Later, support for custom quantization tables was added, which allows the quantization step size to be specified independently for each coefficient in a transformed block. In this way, different quantization can be applied to the high-frequency and low-frequency coefficients, reflecting the human visual system's different sensitivity to high-frequency and low-frequency spatial variations in the signal. In this paper, we design custom quantization tables taking into account the properties of the human visual system as well as the viewing conditions. Our proposed design is based on a model for the human visual system's contrast sensitivity function, which specifies the contrast sensitivity as a function of the spatial frequency of the signal. By calculating the spatial frequencies corresponding to each of the transform's basis functions, taking into account the viewing distance and the dot pitch of the screen, the sensitivity of the human visual system to variations in the transform coefficient corresponding to each basis function can be determined and used to define the corresponding quantization step size. Experimental results, in which video quality is measured using VQM, show that the designed quantization tables yield improved performance compared to uniform quantization and to the default quantization tables provided as part of the reference encoder.
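
    The design procedure can be sketched as: map each basis function to a spatial frequency using the viewing geometry, evaluate a contrast sensitivity model there, and set the step size inversely proportional to the sensitivity. The sketch below uses a Mannos-Sakrison-style CSF with illustrative constants, not the paper's model; note that a band-pass CSF makes the DC step large, which real encoders special-case:

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity vs spatial frequency f (cyc/deg)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def quant_table(n=8, dot_pitch_mm=0.25, distance_mm=500.0, base_step=4.0):
    """Quantization step per (u, v) basis function, inversely proportional to CSF."""
    # basis (u, v) has frequency sqrt(u^2 + v^2) / (2n) cycles/pixel; convert to
    # cycles/degree via pixels-per-degree from the viewing geometry
    ppd = distance_mm * np.tan(np.radians(1.0)) / dot_pitch_mm
    u = np.arange(n)
    f = np.sqrt(u[:, None] ** 2 + u[None, :] ** 2) / (2 * n) * ppd
    sens = np.maximum(csf(f), csf(f).max() * 0.01)   # floor to avoid huge steps
    return base_step * sens.max() / sens

q = quant_table()
print(q[3, 3], q[7, 7])   # mid-frequency steps are finest; high frequencies coarser
```

    The finest step lands on the basis function nearest the CSF peak; everything else is quantized more coarsely in proportion to how poorly it is seen at this viewing distance and dot pitch.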

  6. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits

    NASA Astrophysics Data System (ADS)

    Fujii, Mikiya; Yamashita, Koichi

    2015-02-01

    We present a semiclassical quantization condition, i.e., quantum-classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller's trace formula to a nonadiabatic form. The quantum-classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics.

  7. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits

    SciTech Connect

    Fujii, Mikiya Yamashita, Koichi

    2015-02-21

    We present a semiclassical quantization condition, i.e., quantum–classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller’s trace formula to a nonadiabatic form. The quantum–classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics.

  8. Quantization effects in radiation spectroscopy based on digital pulse processing

    SciTech Connect

    Jordanov, V. T.; Jordanova, K. V.

    2011-07-01

Radiation spectra represent inherently quantized data in the form of stacked channels of equal width. The spectrum is an experimental measurement of the discrete probability density function (PDF) of the detector pulse heights. The quantization granularity of the spectra depends on the total number of channels covering the full range of pulse heights. In analog pulse processing the total number of channels is equal to the total number of digital values produced by a spectroscopy analog-to-digital converter (ADC). In digital pulse processing each detector pulse is sampled and quantized by a fast ADC producing a certain number of quantized numerical values. These digital values are linearly processed to obtain a digital quantity representing the peak of the digitally shaped pulse. Using digital pulse processing it is possible to acquire a spectrum with a total number of channels greater than the number of ADC values. Noise and sample averaging are important in the transformation of ADC-quantized data into spectral quantized data. Analysis of this transformation is performed using an area sampling model of quantization. Spectrum differential nonlinearity (DNL) is shown to be related to the quantization at low noise levels and small numbers of averaged samples. Theoretical analysis and experimental measurements are used to obtain the condition that minimizes the DNL due to quantization. (authors)
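
    The interplay of noise, averaging, and quantization can be simulated directly: draw flat-distributed pulse heights, quantize noisy samples to integer ADC units, average a few samples per event, and histogram into channels finer than one ADC unit; the residual channel-to-channel ripple is the DNL. This is a simulation sketch under illustrative settings, not the authors' area-sampling analysis:

```python
import numpy as np

def spectrum_dnl(noise_sigma, n_avg, seed=0):
    """Worst-case differential nonlinearity of a spectrum whose channels
    (0.25 units wide) are finer than the ADC quantization step (1 unit)."""
    rng = np.random.default_rng(seed)
    n_events = 200_000
    true = rng.uniform(100.0, 164.0, n_events)             # flat true spectrum
    noisy = true[:, None] + rng.normal(0.0, noise_sigma, (n_events, n_avg))
    heights = np.round(noisy).mean(axis=1)                 # quantize, then average
    counts, _ = np.histogram(heights, bins=256, range=(100.0, 164.0))
    interior = counts[8:-8]                                # ignore range edges
    return np.abs(interior - interior.mean()).max() / interior.mean()

# With one nearly noiseless sample, 3 of every 4 fine channels stay empty,
# so the DNL is large; noise dithering plus averaging flattens the spectrum.
print(spectrum_dnl(noise_sigma=0.05, n_avg=1))
print(spectrum_dnl(noise_sigma=0.5, n_avg=4))
```

    The second call illustrates the paper's point: enough noise and sample averaging let the spectrum support more channels than the ADC has codes without gross DNL.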

  9. Gamow vectors explain the shock profile.

    PubMed

    Braidotti, Maria Chiara; Gentilini, Silvia; Conti, Claudio

    2016-09-19

The description of shock waves beyond the shock point is a challenge in nonlinear physics and optics. Finding solutions to the global dynamics of dispersive shock waves is not always possible due to the lack of integrability. Here we propose a new method based on the eigenstates (Gamow vectors) of a reversed harmonic oscillator in a rigged Hilbert space. These vectors allow an analytical formulation for the development of undular bores of shock waves in a nonlinear nonlocal medium. Experiments with a photothermally induced nonlinearity confirm the theoretical predictions: the undulation period as a function of power and the characteristic quantized decays of Gamow vectors. Our results demonstrate that Gamow vectors are a novel and effective paradigm for describing extreme nonlinear phenomena. PMID:27661931

  10. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    SciTech Connect

    Błaszak, Maciej Domański, Ziemowit

    2013-12-15

This paper presents an invariant quantization procedure for classical mechanics on the phase space over a flat configuration space, and then derives the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space. An explicit form of position and momentum operators as well as their appropriate ordering in arbitrary curvilinear coordinates is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed.

  11. Quantization of soluble classical constrained systems

    SciTech Connect

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  12. Phase-space quantization of field theory.

    SciTech Connect

    Curtright, T.; Zachos, C.

    1999-04-20

    In this lecture, a limited introduction of gauge invariance in phase-space is provided, predicated on canonical transformations in quantum phase-space. Exact characteristic trajectories are also specified for the time-propagating Wigner phase-space distribution function: they are especially simple--indeed, classical--for the quantized simple harmonic oscillator. This serves as the underpinning of the field theoretic Wigner functional formulation introduced. Scalar field theory is thus reformulated in terms of distributions in field phase-space. This is a pedagogical selection from work published and reported at the Yukawa Institute Workshop ''Gauge Theory and Integrable Models'', 26-29 January, 1999.

  13. Lattices of quantized vortices in polariton superfluids

    NASA Astrophysics Data System (ADS)

    Boulier, Thomas; Cancellieri, Emiliano; Sangouard, Nicolas D.; Hivet, Romain; Glorieux, Quentin; Giacobino, Élisabeth; Bramati, Alberto

    2016-10-01

    In this review, we focus on recent studies conducted in the quest to observe lattices of quantized vortices in resonantly injected polariton superfluids. In particular, we show how the implementation of optical traps for polaritons allows for the realization of vortex-antivortex lattices in confined geometries, and how the development of a flexible method to inject a controlled orbital angular momentum (OAM) in such systems results in the observation of patterns of same-sign vortices.

  14. Quantization of soluble classical constrained systems

    NASA Astrophysics Data System (ADS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From these, all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  15. Conductance quantization in strongly disordered graphene ribbons

    NASA Astrophysics Data System (ADS)

    Ihnatsenka, S.; Kirczenow, G.

    2009-11-01

    We present numerical studies of conduction in graphene nanoribbons with different types of disorder. We find that even when defect scattering depresses the conductance to values two orders of magnitude lower than 2e²/h, equally spaced conductance plateaus occur at moderately low temperatures due to enhanced electron backscattering near subband edge energies if bulk vacancies are present in the ribbon. This work accounts quantitatively for the surprising conductance quantization observed by Lin [Phys. Rev. B 78, 161409(R) (2008)] in ribbons with such low conductances.

  16. Path integral quantization of generalized quantum electrodynamics

    SciTech Connect

    Bufalo, R.; Pimentel, B. M.; Zambrano, G. E. R.

    2011-02-15

    In this paper, a complete covariant quantization of generalized electrodynamics is shown through the path integral approach. To this goal, we first studied the Hamiltonian structure of the system following Dirac's methodology and then followed the Faddeev-Senjanovic procedure to obtain the transition amplitude. The complete propagators (Schwinger-Dyson-Fradkin equations) in the correct gauge fixing and the generalized Ward-Fradkin-Takahashi identities are also obtained. Afterwards, an explicit calculation of one-loop approximations of all Green's functions and a discussion of the obtained results are presented.

  17. Lorenz gauge quantization in conformally flat spacetimes

    NASA Astrophysics Data System (ADS)

    Cresswell, Jesse C.; Vollick, Dan N.

    2015-04-01

    Recently it was shown that Dirac's method of quantizing constrained dynamical systems can be used to impose the Lorenz gauge condition in a four-dimensional cosmological spacetime. In this paper we use Dirac's method to impose the Lorenz gauge condition in a general four-dimensional conformally flat spacetime and find that there is no particle production. We show that in cosmological spacetimes with dimension D ≠4 there will be particle production when the scale factor changes, and we calculate the particle production due to a sudden change.

  18. Quantum mechanics, gravity and modified quantization relations.

    PubMed

    Calmet, Xavier

    2015-08-01

    In this paper, we investigate a possible energy scale dependence of the quantization rules and, in particular, from a phenomenological point of view, an energy scale dependence of an effective ℏ (reduced Planck's constant). We set a bound on the deviation of the value of ℏ at the muon scale from its usual value using measurements of the anomalous magnetic moment of the muon. Assuming that inflation has taken place, we can conclude that nature is described by a quantum theory at least up to an energy scale of about 10¹⁶ GeV.

  19. Neural nets for adaptive filtering and adaptive pattern recognition

    SciTech Connect

    Widrow, B.; Winter, R.

    1988-03-01

    The fields of adaptive signal processing and adaptive neural networks have been developing independently, but they have the adaptive linear combiner (ALC) in common. With its inputs connected to a tapped delay line, the ALC becomes a key component of an adaptive filter. With its output connected to a quantizer, the ALC becomes an adaptive threshold element, or adaptive neuron. Adaptive threshold elements, in turn, are the building blocks of neural networks. Today neural nets are the focus of widespread research interest. Areas of investigation include pattern recognition and trainable logic. Neural network systems have not yet had the commercial impact of adaptive filtering. The commonality of the ALC to adaptive signal processing and adaptive neural networks suggests the two fields have much to share with each other. This article describes practical applications of the ALC in signal processing and pattern recognition.
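
    As a hedged illustration of the ALC described above, a tapped-delay-line filter trained by the Widrow-Hoff LMS rule can identify an unknown FIR system from input/output samples. This is a minimal sketch; the function name and parameters are illustrative, not from the article.

```python
import numpy as np

def lms_filter(x, d, n_taps, mu):
    """Adaptive linear combiner trained by the LMS rule: the weights of
    a tapped-delay-line filter are nudged along the negative error
    gradient after every sample (Widrow-Hoff update)."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    y_hist, e_hist = [], []
    for xn, dn in zip(x, d):
        buf = np.roll(buf, 1)
        buf[0] = xn                # tapped delay line holds recent inputs
        y = w @ buf                # combiner output
        e = dn - y                 # error against the desired signal
        w += 2 * mu * e * buf      # LMS weight update
        y_hist.append(y)
        e_hist.append(e)
    return w, np.array(y_hist), np.array(e_hist)
```

    With a stationary input and a small step size `mu`, the weights converge toward the unknown system's impulse response.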

  20. Loop quantization of the Schwarzschild interior revisited

    NASA Astrophysics Data System (ADS)

    Corichi, Alejandro; Singh, Parampreet

    2016-03-01

    The loop quantization of the Schwarzschild interior region, as described by a homogeneous anisotropic Kantowski-Sachs model, is re-examined. As several studies of different—inequivalent—loop quantizations have shown, to date there exists no fully satisfactory quantum theory for this model. This fact poses challenges to the validity of some scenarios to address the black hole information problem. Here we put forward a novel viewpoint to construct the quantum theory that builds from some of the models available in the literature. The final picture is a quantum theory that is both independent of any auxiliary structure and possesses a correct low curvature limit. It represents a subtle but non-trivial modification of the original prescription given by Ashtekar and Bojowald. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying vacuum Einstein’s equations is recovered on the ‘other side’ of the bounce. We argue that such a metric represents the interior region of a white-hole spacetime, but for which the corresponding ‘white hole mass’ differs from the original black hole mass. Furthermore, we find that the value of the white hole mass is proportional to the third power of the starting black hole mass.

  1. Bohmian quantization of the big rip

    NASA Astrophysics Data System (ADS)

    Pinto-Neto, Nelson; Pantoja, Diego Moraes

    2009-10-01

    It is shown in this paper that minisuperspace quantization of homogeneous and isotropic geometries with phantom scalar fields, when examined in the light of the Bohm-de Broglie interpretation of quantum mechanics, does not eliminate, in general, the classical big rip singularity present in the classical model. For some values of the Hamilton-Jacobi separation constant present in a class of quantum state solutions of the Wheeler-DeWitt equation, the big rip can be either completely eliminated or may still constitute a future attractor for all expanding solutions. This is contrary to the conclusion presented in [M. P. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D 74, 044022 (2006)], using a different interpretation of the wave function, where the big rip singularity is completely eliminated (“smoothed out”) through quantization, independently of such a separation constant and for all members of the above mentioned class of solutions. This is an example of the very peculiar situation where different interpretations of the same quantum state of a system are predicting different physical facts, instead of just giving different descriptions of the same observable facts: in fact, there is nothing more observable than the fate of the whole Universe.

  2. Second-quantized formulation of geometric phases

    SciTech Connect

    Deguchi, Shinichi; Fujikawa, Kazuo

    2005-07-15

    The level crossing problem and associated geometric terms are neatly formulated by the second-quantized formulation. This formulation exhibits a hidden local gauge symmetry related to the arbitrariness of the phase choice of the complete orthonormal basis set. By using this second-quantized formulation, which does not assume adiabatic approximation, a convenient exact formula for the geometric terms including off-diagonal geometric terms is derived. The analysis of geometric phases is then reduced to a simple diagonalization of the Hamiltonian, and it is analyzed both in the operator and path-integral formulations. If one diagonalizes the geometric terms in the infinitesimal neighborhood of level crossing, the geometric phases become trivial (and thus no monopole singularity) for arbitrarily large but finite time interval T. The integrability of Schroedinger equation and the appearance of the seemingly nonintegrable phases are thus consistent. The topological proof of the Longuet-Higgins' phase-change rule, for example, fails in the practical Born-Oppenheimer approximation where a large but finite ratio of two time scales is involved and T is identified with the period of the slower system. The difference and similarity between the geometric phases associated with level crossing and the exact topological object such as the Aharonov-Bohm phase become clear in the present formulation. A crucial difference between the quantum anomaly and the geometric phases is also noted.

  3. Exciton condensation in microcavities under three-dimensional quantization conditions

    SciTech Connect

    Kochereshko, V. P. Platonov, A. V.; Savvidis, P.; Kavokin, A. V.; Bleuse, J.; Mariette, H.

    2013-11-15

    The dependence of the spectra of the polarized photoluminescence of excitons in microcavities under conditions of three-dimensional quantization on the optical-excitation intensity is investigated. The cascade relaxation of polaritons between quantized states of a polariton Bose condensate is observed.

  4. Faddeev-Jackiw quantization of non-autonomous singular systems

    NASA Astrophysics Data System (ADS)

    Belhadi, Zahir; Bérard, Alain; Mohrbach, Hervé

    2016-10-01

    We extend the quantization à la Faddeev-Jackiw for non-autonomous singular systems. This leads to a generalization of the Schrödinger equation for those systems. The method is exemplified by the quantization of the damped harmonic oscillator and the relativistic particle in an external electromagnetic field.

  5. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective method for generalizing activity weighting to any histogram-based color quantization algorithm is presented, improving image quality. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the weighted root-mean-square error (WRMSE), thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
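
    As a rough sketch of the maximin-style palette selection underlying MinMax-type quantizers (the activity weighting and the LBG refinement of the weighted variant are omitted, and the function names are illustrative):

```python
import numpy as np

def minmax_palette(pixels, k):
    """Greedy maximin palette selection: each new palette entry is the
    pixel farthest from its nearest already-chosen entry, which directly
    minimizes the maximum quantization error."""
    pixels = np.asarray(pixels, dtype=float)
    palette = [pixels[0]]
    # distance from every pixel to its nearest palette entry so far
    d = np.linalg.norm(pixels - palette[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(d))            # farthest pixel becomes a new entry
        palette.append(pixels[idx])
        d = np.minimum(d, np.linalg.norm(pixels - palette[-1], axis=1))
    return np.array(palette)

def quantize(pixels, palette):
    """Map each pixel to the index of its nearest palette color."""
    dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```

    In the weighted variant, the distances would be scaled by per-pixel activity weights before the argmax/argmin steps, so busy regions tolerate larger errors than smooth ones.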

  6. Adaptation, isolation by distance and human-mediated transport determine patterns of gene flow among populations of the disease vector Aedes taeniorhynchus in the Galapagos Islands.

    PubMed

    Bataille, Arnaud; Cunningham, Andrew A; Cruz, Marilyn; Cedeño, Virna; Goodman, Simon J

    2011-12-01

    The black salt-marsh mosquito (Aedes taeniorhynchus) is the only native mosquito in the Galapagos Islands and potentially a major disease vector for Galapagos wildlife. Little is known about its population structure, or how its dynamics may be influenced by human presence in the archipelago. We used microsatellite data to assess the structure and patterns of A. taeniorhynchus gene flow among and within islands, to identify potential barriers to mosquito dispersal, and to investigate human-aided transport of mosquitoes across the archipelago. Our results show that inter-island migration of A. taeniorhynchus occurs frequently on an isolation by distance basis. High levels of inter-island migration were detected amongst the major ports of the archipelago, strongly suggesting the occurrence of human-aided transport of mosquitoes among islands, underlining the need for strict control measures to avoid the transport of disease vectors between islands. The prevalence of filarial nematode infection in Galapagos flightless cormorants is correlated with the population structure and migration patterns of A. taeniorhynchus, suggesting that A. taeniorhynchus is an important vector of this arthropod-borne parasite in the Galapagos Islands. Therefore mosquito population structure in Galapagos may have the potential to influence mosquito-borne parasite population dynamics, and the subsequent impacts of such pathogens on their host species in the islands.

  7. Combined optimal quantization and lossless coding of digital holograms of three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-10-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realized, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantization. We adapt the optimal signal quantization technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry. We complete the compression procedure by applying lossless techniques to the quantized holographic pixels.
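
    The histogram-driven quantizer design described above can be approximated by a standard Lloyd-Max iteration applied independently to the real and imaginary parts of the hologram. The sketch below is a generic illustration under that assumption, not the authors' exact procedure; all names are illustrative.

```python
import numpy as np

def lloyd_max(samples, levels, iters=50):
    """One-dimensional Lloyd-Max quantizer design: alternate between
    nearest-level assignment and centroid (conditional-mean) updates."""
    samples = np.asarray(samples, dtype=float)
    # initialize reproduction levels on a uniform grid over the data range
    reps = np.linspace(samples.min(), samples.max(), levels)
    for _ in range(iters):
        labels = np.argmin(np.abs(samples[:, None] - reps[None, :]), axis=1)
        for j in range(levels):
            sel = samples[labels == j]
            if sel.size:
                reps[j] = sel.mean()
    return reps

def quantize_complex(field, levels):
    """Quantize a complex-valued signal by designing separate Lloyd-Max
    quantizers for its real and imaginary parts."""
    re = lloyd_max(field.real.ravel(), levels)
    im = lloyd_max(field.imag.ravel(), levels)
    qr = re[np.argmin(np.abs(field.real.ravel()[:, None] - re), axis=1)]
    qi = im[np.argmin(np.abs(field.imag.ravel()[:, None] - im), axis=1)]
    return (qr + 1j * qi).reshape(field.shape)
```

    The quantized pixels could then be passed to a lossless coder, mirroring the two-stage compression procedure of the paper.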

  8. Cloning vector

    DOEpatents

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  9. Cloning vector

    DOEpatents

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  10. Kerr Black Hole Entropy and its Quantization

    NASA Astrophysics Data System (ADS)

    Jiang, Ji-Jian; Li, Chuan-An; Cheng, Xie-Feng

    2016-08-01

    By constructing the four-dimensional phase space based on the observable physical quantities of the Kerr black hole and a gauge transformation, the Kerr black hole entropy in the phase space was obtained. Then, treating the corresponding mechanical quantities as operators and quantizing them, the entropy spectrum of the Kerr black hole was obtained. Our results show that the Kerr black hole has an entropy spectrum with equal intervals, in agreement with the idea of Bekenstein. In the limit of a large event horizon, the areas of adjacent event horizons of the black hole have equal intervals. These results are consistent with the results obtained by Dreyer et al. based on loop quantum gravity theory.

  11. Numerical Investigations of Reconnection of Quantized Vortices

    NASA Astrophysics Data System (ADS)

    Rorai, Cecilia; Fisher, Michael E.; Lathrop, Daniel P.; Sreenivasan, Katepalli R.; Kerr, Robert M.

    2011-11-01

    Reconnection of quantized vortices in superfluid helium was conjectured by Feynman in 1955, and first observed experimentally by Bewley et al. (PNAS 105, 13708, 2007). The nature of this phenomenon is quantum mechanical, involving atomically thin vortex cores. At the same time, this phenomenon influences the large scale dynamics, since a tangle of vortices can change topology through reconnection and evolve in time. Numerically, the Gross-Pitaevskii (GP) equation allows detailed predictions of vortex reconnection, as first shown by Koplik and Levine (1993). We have undertaken further calculations to characterize the dynamics of isolated reconnection events. Initial conditions have been analyzed carefully, different geometries have been considered, and a new approach has been proposed. This approach consists of using the diffusion equation associated with the GP equation to set minimum-energy initial vortex profiles. The underlying questions we wish to answer are the universality of vortex reconnection and its effect on energy dissipation to the phonon field.

  12. Quantized topological Hall effect in skyrmion crystal

    NASA Astrophysics Data System (ADS)

    Hamamoto, Keita; Ezawa, Motohiko; Nagaosa, Naoto

    2015-09-01

    We theoretically study the quantized topological Hall effect (QTHE) in skyrmion crystal (SkX) without external magnetic field. The emergent magnetic field in SkX could be gigantic, as much as 4000 T, when its lattice constant is 1 nm. The band structure is not flat but has a finite gap in the low electron-density regime. We also study the conditions to realize the QTHE for the skyrmion size, carrier density, disorder strength, and temperature. Comparing the SkX and the system under the corresponding uniform magnetic field, the former is more fragile against the temperature compared with the latter since the gap is reduced by a factor of 1/5, while they are almost equally robust against the disorder. Therefore, it is expected that the QTHE of the SkX system is realized even with strong disorder at room temperature when the electron density is of the order of 1 per skyrmion.

  13. Quaternionic quantization principle in general relativity and supergravity

    NASA Astrophysics Data System (ADS)

    Kober, Martin

    2016-01-01

    A generalized quantization principle is considered, which incorporates nontrivial commutation relations of the components of the variables of the quantized theory with the components of the corresponding canonical conjugated momenta referring to other space-time directions. The corresponding commutation relations are formulated by using quaternions. At the beginning, this extended quantization concept is applied to the variables of quantum mechanics. The resulting Dirac equation and the corresponding generalized expression for plane waves are formulated and some consequences for quantum field theory are considered. Later, the quaternionic quantization principle is transferred to canonical quantum gravity. Within quantum geometrodynamics as well as the Ashtekar formalism, the generalized algebraic properties of the operators describing the gravitational observables and the corresponding quantum constraints implied by the generalized representations of these operators are determined. The generalized algebra also induces commutation relations of the several components of the quantized variables with each other. Finally, the quaternionic quantization procedure is also transferred to 𝒩 = 1 supergravity. Accordingly, the quantization principle has to be generalized to be compatible with Dirac brackets, which appear in canonical quantum supergravity.

  14. Implementation of digital filters for minimum quantization errors

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1974-01-01

    In this paper a technique is developed for choosing programming forms and bit configurations for digital filters that minimize the quantization errors. The technique applies to digital filters operating in fixed-point arithmetic in either open-loop or closed-loop systems, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum quantization errors, the total bit configuration required in the filter, and the location of the binary decimal point at each quantizer within the filter.
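
    The kind of quantization-error measurement such a program performs can be reproduced by simulating the same filter in floating point and in fixed point with a chosen bit configuration, then comparing the outputs. This is a generic sketch of fixed-point quantization error in a first-order IIR filter, not the paper's program; all names and parameters are illustrative.

```python
import numpy as np

def fixed_point_filter(x, a, b, bits, frac_bits):
    """First-order IIR filter y[n] = b*x[n] + a*y[n-1], computed in
    fixed-point arithmetic with `bits` total bits, `frac_bits` of them
    fractional. The quantizer sits after each accumulate."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1

    def q(v):  # round to the fixed-point grid and saturate
        return min(hi, max(lo, int(round(v * scale)))) / scale

    y, y_prev = [], 0.0
    for xn in x:
        y_prev = q(b * xn + a * y_prev)
        y.append(y_prev)
    return np.array(y)

def float_filter(x, a, b):
    """Reference output of the same filter in double precision."""
    y, y_prev = [], 0.0
    for xn in x:
        y_prev = b * xn + a * y_prev
        y.append(y_prev)
    return np.array(y)
```

    Sweeping `frac_bits` (i.e., moving the binary point) shows the error shrinking as more fractional bits are allocated, which is the trade-off the paper's program optimizes.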

  15. Direct observation of Kelvin waves excited by quantized vortex reconnection.

    PubMed

    Fonda, Enrico; Meichle, David P; Ouellette, Nicholas T; Hormoz, Sahand; Lathrop, Daniel P

    2014-03-25

    Quantized vortices are key features of quantum fluids such as superfluid helium and Bose-Einstein condensates. The reconnection of quantized vortices and subsequent emission of Kelvin waves along the vortices are thought to be central to dissipation in such systems. By visualizing the motion of submicron particles dispersed in superfluid ⁴He, we have directly observed the emission of Kelvin waves from quantized vortex reconnection. We characterize one event in detail, using dimensionless similarity coordinates, and compare it with several theories. Finally, we give evidence for other examples of wavelike behavior in our system.

  16. Modified 8×8 quantization table and Huffman encoding steganography

    NASA Astrophysics Data System (ADS)

    Guo, Yongning; Sun, Shuliang

    2014-10-01

    A new secure steganography method, based on Huffman encoding and modified quantized discrete cosine transform (DCT) coefficients, is presented in this paper. First, the cover image is segmented into 8×8 blocks and a modified DCT transform is applied to each block. Huffman encoding is applied to code the secret image before embedding. The DCT coefficients are quantized by a modified quantization table. An inverse DCT (IDCT) is then conducted on each block. All the blocks are combined together and the stego image is finally obtained. Experiments show that the proposed method is better than the DCT method and Mahender Singh's method in terms of PSNR and capacity.
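
    The block pipeline described above (8×8 segmentation, DCT, table-based quantization, IDCT) can be sketched as follows. The table shown is the standard JPEG luminance table as a stand-in; the paper's modified table and the Huffman embedding step are not reproduced here.

```python
import numpy as np

N = 8
# Orthonormal 8x8 DCT-II basis matrix: C @ C.T == I
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos((2 * n + 1) * k * np.pi / (2 * N))
               for n in range(N)] for k in range(N)])

# Standard JPEG luminance quantization table (the paper uses a modified one)
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]])

def quantize_block(block):
    """Level-shift and 2-D DCT an 8x8 block, then quantize by the table Q."""
    coeffs = C @ (block - 128.0) @ C.T
    return np.round(coeffs / Q).astype(int)

def dequantize_block(qcoeffs):
    """Rescale quantized coefficients and invert the DCT."""
    return C.T @ (qcoeffs * Q) @ C + 128.0
```

    In a steganographic variant, selected quantized coefficients would be perturbed to carry the Huffman-coded secret bits before the blocks are recombined.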

  17. Quantization of Fayet-Iliopoulos parameters in supergravity

    SciTech Connect

    Distler, Jacques; Sharpe, Eric

    2011-04-15

    In this short article we discuss quantization of the Fayet-Iliopoulos parameter in supergravity theories. We argue that, in supergravity, the Fayet-Iliopoulos parameter determines a lift of the group action to a line bundle, and such lifts are quantized. Just as D-terms in rigid N=1 supersymmetry are interpreted in terms of moment maps and symplectic reductions, we argue that in supergravity the quantization of the Fayet-Iliopoulos parameter has a natural understanding in terms of linearizations in geometric invariant theory quotients, the algebro-geometric version of symplectic quotients.

  18. Direct observation of Kelvin waves excited by quantized vortex reconnection

    PubMed Central

    Fonda, Enrico; Meichle, David P.; Ouellette, Nicholas T.; Hormoz, Sahand; Lathrop, Daniel P.

    2014-01-01

    Quantized vortices are key features of quantum fluids such as superfluid helium and Bose–Einstein condensates. The reconnection of quantized vortices and subsequent emission of Kelvin waves along the vortices are thought to be central to dissipation in such systems. By visualizing the motion of submicron particles dispersed in superfluid ⁴He, we have directly observed the emission of Kelvin waves from quantized vortex reconnection. We characterize one event in detail, using dimensionless similarity coordinates, and compare it with several theories. Finally, we give evidence for other examples of wavelike behavior in our system. PMID:24704878

  19. Evolutionary genetics and vector adaptation of recombinant viruses of the western equine encephalitis antigenic complex provides new insights into alphavirus diversity and host switching

    PubMed Central

    Allison, Andrew B.; Stallknecht, David E.; Holmes, Edward C.

    2014-01-01

    Western equine encephalitis virus (WEEV), Highlands J virus (HJV), and Fort Morgan virus (FMV) are the sole representatives of the WEE antigenic complex of the genus Alphavirus, family Togaviridae, that are endemic to North America. All three viruses have their ancestry in a recombination event involving eastern equine encephalitis virus (EEEV) and a Sindbis (SIN)-like virus that gave rise to a chimeric alphavirus that subsequently diversified into the present-day WEEV, HJV, and FMV. Here, we present a comparative analysis of the genetic, ecological, and evolutionary relationships among these recombinant-origin viruses, including the description of a nsP4 polymerase mutation in FMV that allows it to circumvent the host range barrier to Asian tiger mosquito cells, a vector species that is normally refractory to infection. Notably, we also provide evidence that the recombination event that gave rise to these three WEEV antigenic complex viruses may have occurred in North America. PMID:25463613

  20. Evolutionary genetics and vector adaptation of recombinant viruses of the western equine encephalitis antigenic complex provides new insights into alphavirus diversity and host switching.

    PubMed

    Allison, Andrew B; Stallknecht, David E; Holmes, Edward C

    2015-01-01

    Western equine encephalitis virus (WEEV), Highlands J virus (HJV), and Fort Morgan virus (FMV) are the sole representatives of the WEE antigenic complex of the genus Alphavirus, family Togaviridae, that are endemic to North America. All three viruses have their ancestry in a recombination event involving eastern equine encephalitis virus (EEEV) and a Sindbis (SIN)-like virus that gave rise to a chimeric alphavirus that subsequently diversified into the present-day WEEV, HJV, and FMV. Here, we present a comparative analysis of the genetic, ecological, and evolutionary relationships among these recombinant-origin viruses, including the description of a nsP4 polymerase mutation in FMV that allows it to circumvent the host range barrier to Asian tiger mosquito cells, a vector species that is normally refractory to infection. Notably, we also provide evidence that the recombination event that gave rise to these three WEEV antigenic complex viruses may have occurred in North America.

  1. In-loop atom modulus quantization for matching pursuit and its application to video coding.

    PubMed

    De Vleeschouwer, Christophe; Zakhor, Avideh

    2003-01-01

    This paper provides a precise analytical study of the selection and modulus quantization of matching pursuit (MP) coefficients. We demonstrate that an optimal rate-distortion trade-off is achieved by selecting the atoms up to a quality-dependent threshold, and by defining the modulus quantizer in terms of that threshold. In doing so, we take into account quantization error re-injection resulting from inserting the modulus quantizer inside the MP atom computation loop. In-loop quantization not only improves coding performance, but also affects the optimal quantizer design for both uniform and nonuniform quantization. We measure the impact of our work in the context of video coding. For both uniform and nonuniform quantization, the precise understanding of the relation between atom selection and quantization results in significant improvements in terms of coding efficiency. At high bitrates, the proposed nonuniform quantization scheme results in a 0.5 to 2 dB improvement over the previous method.
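
    The coupling of atom selection and modulus quantization can be illustrated with a toy matching pursuit: atoms are selected while the best coefficient modulus exceeds a threshold, the modulus is quantized with a uniform dead-zone quantizer whose step is tied to that threshold, and the quantized (not the exact) coefficient is subtracted from the residual, so the quantization error is re-injected into the loop. This is a hedged sketch with a uniform quantizer only; the function name is illustrative, and the paper also treats nonuniform quantizers.

```python
import numpy as np

def mp_inloop(signal, dictionary, threshold, step):
    """Matching pursuit with in-loop modulus quantization.

    `dictionary` is a matrix whose columns are unit-norm atoms. Atoms are
    picked while the best |<residual, atom>| exceeds `threshold`; the
    modulus is quantized inside the loop, so the residual carries the
    quantization error forward to later atom selections."""
    residual = signal.astype(float).copy()
    atoms = []
    while True:
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        if np.abs(corr[k]) <= threshold:   # quality-dependent stopping rule
            break
        # uniform quantizer with a dead zone below the selection threshold
        q = np.sign(corr[k]) * (threshold + step *
                                (np.floor((np.abs(corr[k]) - threshold) / step) + 0.5))
        residual -= q * dictionary[:, k]   # re-inject quantization error
        atoms.append((k, q))
    return atoms, residual
```

    With `step/2 < threshold`, the residual energy strictly decreases on every pick, so the loop terminates with all residual correlations at or below the threshold.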

  2. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  3. Quantization of maximally charged slowly moving black holes

    NASA Astrophysics Data System (ADS)

    Siopsis, George

    2001-05-01

    We discuss the quantization of a system of slowly moving extreme Reissner-Nordström black holes. In the near-horizon limit, this system has been shown to possess an SL(2,R) conformal symmetry. However, the Hamiltonian appears to have no well-defined ground state. This problem can be circumvented by a redefinition of the Hamiltonian due to de Alfaro, Fubini, and Furlan (DFF). We apply the Faddeev-Popov quantization procedure to show that the Hamiltonian with no ground state corresponds to a gauge in which there is an obstruction at the singularities of moduli space requiring a modification of the quantization rules. The redefinition of the Hamiltonian in the manner of DFF corresponds to a different choice of gauge. The latter is a good gauge leading to standard quantization rules. Thus the DFF trick is a consequence of a standard gauge-fixing procedure in the case of black hole scattering.

  4. Fill-in binary loop pulse-torque quantizer

    NASA Technical Reports Server (NTRS)

    Lory, C. B.

    1975-01-01

    Fill-in binary (FIB) loop provides constant heating of torque generator, an advantage of binary current switching. At the same time, it avoids mode-related dead zone and data delay of binary, an advantage of ternary quantization.

  5. On the vector model of angular momentum

    NASA Astrophysics Data System (ADS)

    Saari, Peeter

    2016-09-01

    Instead of (or in addition to) the common vector diagram with cones, we propose to visualize the peculiarities of quantum mechanical angular momentum by a completely quantized 3D model. It spotlights the discrete eigenvalues and noncommutativity of components of angular momentum and corresponds to outcomes of measurements—real or computer-simulated. The latter can be easily realized by an interactive worksheet of a suitable program package of algebraic calculations. The proposed complementary method of visualization helps undergraduate students to better understand the counterintuitive properties of this quantum mechanical observable.

  6. Lentiviral vectors.

    PubMed

    Giry-Laterrière, Marc; Verhoeyen, Els; Salmon, Patrick

    2011-01-01

    Lentiviral vectors have evolved over the last decade as powerful, reliable, and safe tools for stable gene transfer in a wide variety of mammalian cells. Contrary to other vectors derived from oncoretroviruses, they allow for stable gene delivery into most nondividing primary cells. In particular, lentivectors (LVs) derived from HIV-1 have gradually evolved to display many desirable features aimed at increasing both their safety and their versatility. This is why lentiviral vectors are becoming the most useful and promising tools for genetic engineering, to generate cells that can be used for research, diagnosis, and therapy. This chapter describes protocols and guidelines, for production and titration of LVs, which can be implemented in a research laboratory setting, with an emphasis on standardization in order to improve transposability of results between laboratories. We also discuss latest designs in LV technology.

  7. Chikungunya Virus–Vector Interactions

    PubMed Central

    Coffey, Lark L.; Failloux, Anna-Bella; Weaver, Scott C.

    2014-01-01

    Chikungunya virus (CHIKV) is a mosquito-borne alphavirus that causes chikungunya fever, a severe, debilitating disease that often produces chronic arthralgia. Since 2004, CHIKV has emerged in Africa, Indian Ocean islands, Asia, Europe, and the Americas, causing millions of human infections. Central to understanding CHIKV emergence is knowledge of the natural ecology of transmission and vector infection dynamics. This review presents current understanding of CHIKV infection dynamics in mosquito vectors and its relationship to human disease emergence. The following topics are reviewed: CHIKV infection and vector life history traits including transmission cycles, genetic origins, distribution, emergence and spread, dispersal, vector competence, vector immunity and microbial interactions, and co-infection by CHIKV and other arboviruses. The genetics of vector susceptibility and host range changes, population heterogeneity and selection for the fittest viral genomes, dual host cycling and its impact on CHIKV adaptation, viral bottlenecks and intrahost diversity, and adaptive constraints on CHIKV evolution are also discussed. The potential for CHIKV re-emergence and expansion into new areas and prospects for prevention via vector control are also briefly reviewed. PMID:25421891

  8. On the deformation quantization description of Matrix compactifications

    NASA Astrophysics Data System (ADS)

    García-Compeán, Hugo

    1999-03-01

    Matrix theory compactifications on tori have associated Yang-Mills theories on the dual tori with sixteen supercharges. A non-commutative description of these Yang-Mills theories based on deformation quantization theory is provided. We show that this framework allows a natural generalization of the 'Moyal B-deformation' of the Yang-Mills theories to non-constant background B-fields on curved spaces. This generalization is described through Fedosov's geometry of deformation quantization.

  9. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  10. Multivariate Fronthaul Quantization for Downlink C-RAN

    NASA Astrophysics Data System (ADS)

    Lee, Wonju; Simeone, Osvaldo; Kang, Joonhyuk; Shamai, Shlomo

    2016-10-01

    The Cloud-Radio Access Network (C-RAN) cellular architecture relies on the transfer of complex baseband signals to and from a central unit (CU) over digital fronthaul links to enable the virtualization of the baseband processing functionalities of distributed radio units (RUs). The standard design of digital fronthauling is based on either scalar quantization or on more sophisticated point-to-point compression techniques operating on baseband signals. Motivated by network information-theoretic results, techniques for fronthaul quantization and compression that improve over point-to-point solutions by allowing for joint processing across multiple fronthaul links at the CU have been recently proposed for both the uplink and the downlink. For the downlink, a form of joint compression, known in network information theory as multivariate compression, was shown to be advantageous under a non-constructive asymptotic information-theoretic framework. In this paper, instead, the design of a practical symbol-by-symbol fronthaul quantization algorithm that implements the idea of multivariate compression is investigated for the C-RAN downlink. As compared to current standards, the proposed multivariate quantization (MQ) only requires changes in the CU processing while no modification is needed at the RUs. The algorithm is extended to enable the joint optimization of downlink precoding and quantization, reduced-complexity MQ via successive block quantization, and variable-length compression. Numerical results, which include performance evaluations over standard cellular models, demonstrate the advantages of MQ and the merits of a joint optimization with precoding.

  11. Adaptive VFH

    NASA Astrophysics Data System (ADS)

    Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi

    2011-10-01

    This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm by overcoming the side effects of using static parameters. The new algorithm permits the adaptation of planning parameters to the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each sector type and, afterwards, every section in the map is labelled with the sector type that best represents it. The Player/Stage simulation platform was chosen for running all manner of tests and validating the new algorithm. Even though there is still much work to be carried out, the developed algorithm showed good navigation properties and turned out to be smoother and more effective than the traditional VFH algorithm.
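For context, the core VFH steering decision that AVFH tunes can be sketched as follows (a minimal illustration under our own assumptions: a precomputed polar obstacle histogram, no sector wrap-around, and a single threshold, which is exactly the kind of static parameter the GA adapts per area type):

```python
import numpy as np

def vfh_steer(obstacle_density, target_sector, threshold):
    """Pick the free angular sector closest to the target heading.

    obstacle_density : polar histogram, one obstacle-density cell per sector.
    threshold        : density below which a sector counts as traversable --
                       the key tunable parameter that AVFH adapts per area type.
    Returns the chosen sector index, or None if every sector is blocked."""
    free = np.flatnonzero(obstacle_density < threshold)
    if free.size == 0:
        return None                                  # no candidate valley
    return int(free[np.argmin(np.abs(free - target_sector))])
```

A static threshold that works in open areas can be too permissive in cluttered ones; adapting it per labelled map section is the essence of the AVFH idea.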

  12. Monte Carlo simulation of a quantized universe.

    NASA Astrophysics Data System (ADS)

    Berger, Beverly K.

    1988-08-01

    A Monte Carlo simulation method which yields groundstate wave functions for multielectron atoms is applied to quantized cosmological models. In quantum mechanics, the propagator for the Schrödinger equation reduces to the absolute value squared of the groundstate wave function in the limit of infinite Euclidean time. The wave function of the universe as the solution to the Wheeler-DeWitt equation may be regarded as the zero energy mode of a Schrödinger equation in coordinate time. The simulation evaluates the path integral formulation of the propagator by constructing a large number of paths and computing their contribution to the path integral using the Metropolis algorithm to drive the paths toward a global minimum in the path energy. The result agrees with a solution to the Wheeler-DeWitt equation which has the characteristics of a nodeless groundstate wave function. Oscillatory behavior cannot be reproduced although the simulation results may be physically reasonable. The primary advantage of the simulations is that they may easily be extended to cosmologies with many degrees of freedom. Examples with one, two, and three degrees of freedom (d.f.) are presented.
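The path-integral simulation idea can be illustrated on the simplest quantum-mechanical example (our sketch, not the paper's cosmological model): a Euclidean harmonic-oscillator path updated site-by-site with the Metropolis rule, whose ground state gives ⟨x²⟩ ≈ 0.5 for m = ω = ħ = 1.

```python
import numpy as np

def ground_state_x2(n_time=32, dt=0.5, n_sweeps=4000, n_therm=500,
                    delta=1.0, seed=1):
    """Metropolis estimate of <x^2> in the harmonic-oscillator ground state,
    from the discretized Euclidean path integral with periodic boundary
    conditions (m = omega = hbar = 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_time)                      # cold start

    def local_action(i, xi):
        # Part of the Euclidean action that depends on site i only.
        xl, xr = x[(i - 1) % n_time], x[(i + 1) % n_time]
        return ((xi - xl) ** 2 + (xr - xi) ** 2) / (2 * dt) + dt * 0.5 * xi ** 2

    samples = []
    for sweep in range(n_sweeps):
        for i in range(n_time):
            xi_new = x[i] + rng.uniform(-delta, delta)
            dS = local_action(i, xi_new) - local_action(i, x[i])
            if dS < 0 or rng.random() < np.exp(-dS):  # Metropolis accept
                x[i] = xi_new
        if sweep >= n_therm:                          # skip thermalization
            samples.append(np.mean(x ** 2))
    return float(np.mean(samples))
```

The same machinery scales to several degrees of freedom by widening the state vector, which is the advantage the abstract emphasizes for minisuperspace cosmologies.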

  13. Interactions between unidirectional quantized vortex rings

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Evans, M. L.; Brown, R. A.; Walmsley, P. M.; Golov, A. I.

    2016-08-01

    We have used the vortex filament method to numerically investigate the interactions between pairs of quantized vortex rings that are initially traveling in the same direction but with their axes offset by a variable impact parameter. The interaction of two circular rings of comparable radii produces outcomes that can be categorized into four regimes, dependent only on the impact parameter; the two rings can either miss each other on the inside or outside or reconnect leading to final states consisting of either one or two deformed rings. The fraction of energy that went into ring deformations and the transverse component of velocity of the rings are analyzed for each regime. We find that rings of very similar radius only reconnect for a very narrow range of the impact parameter, much smaller than would be expected from the geometrical cross-section alone. In contrast, when the radii of the rings are very different, the range of impact parameters producing a reconnection is close to the geometrical value. A second type of interaction considered is the collision of circular rings with a highly deformed ring. This type of interaction appears to be a productive mechanism for creating small vortex rings. The simulations are discussed in the context of experiments on colliding vortex rings and quantum turbulence in superfluid helium in the zero-temperature limit.
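The vortex filament method referred to above evaluates the Biot-Savart integral as a sum over straight segments between discretization points. A minimal sketch (our simplification: the exact straight-segment formula with a crude near-singularity cutoff; κ defaults to the circulation quantum of superfluid ⁴He, ≈ 9.97×10⁻⁸ m²/s):

```python
import numpy as np

def biot_savart_velocity(point, filament, kappa=9.97e-8):
    """Velocity induced at `point` by a closed vortex filament, given as an
    ordered (n, 3) array of nodes, summing the exact Biot-Savart contribution
    of each straight segment between consecutive nodes."""
    v = np.zeros(3)
    n = len(filament)
    for i in range(n):
        r1 = filament[i] - point
        r2 = filament[(i + 1) % n] - point
        a, b = np.linalg.norm(r1), np.linalg.norm(r2)
        denom = a * b * (a * b + r1 @ r2)
        if denom > 1e-30:                 # skip (nearly) singular segments
            v += (kappa / (4 * np.pi)) * (a + b) / denom * np.cross(r1, r2)
    return v
```

For a unit ring this sum reproduces the textbook on-axis value κ/(2R) at the center, a quick sanity check before simulating interacting loops.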

  14. Light-cone quantization and hadron structure

    SciTech Connect

    Brodsky, S.J.

    1996-04-01

    Quantum chromodynamics provides a fundamental description of hadronic and nuclear structure and dynamics in terms of elementary quark and gluon degrees of freedom. In practice, the direct application of QCD to reactions involving the structure of hadrons is extremely complex because of the interplay of nonperturbative effects such as color confinement and multi-quark coherence. In this talk, the author will discuss light-cone quantization and the light-cone Fock expansion as a tractable and consistent representation of relativistic many-body systems and bound states in quantum field theory. The Fock state representation in QCD includes all quantum fluctuations of the hadron wavefunction, including far off-shell configurations such as intrinsic strangeness and charm and, in the case of nuclei, hidden color. The Fock state components of the hadron with small transverse size, which dominate hard exclusive reactions, have small color dipole moments and thus diminished hadronic interactions. Thus QCD predicts minimal absorptive corrections, i.e., color transparency for quasi-elastic exclusive reactions in nuclear targets at large momentum transfer. In other applications, such as the calculation of the axial, magnetic, and quadrupole moments of light nuclei, the QCD relativistic Fock state description provides new insights which go well beyond the usual assumptions of traditional hadronic and nuclear physics.

  15. Dynamics of Quantized Vortices Before Reconnection

    NASA Astrophysics Data System (ADS)

    Andryushchenko, V. A.; Kondaurova, L. P.; Nemirovskii, S. K.

    2016-04-01

    The main goal of this paper is to investigate numerically the dynamics of quantized vortex loops just before reconnection at finite temperature, when mutual friction essentially changes the evolution of the lines. Modeling is performed on the basis of the vortex filament method using the full Biot-Savart equation. It was discovered that the initial position of the vortices and the temperature strongly affect the time dependence of the minimum distance δ(t) between the tips of two vortex loops. In particular, in some cases, the shrinking and collapse of vortex loops due to mutual friction occur earlier than the reconnection, thereby canceling the latter. However, this relationship takes a universal square-root form δ(t) = √((κ/2π)(t* − t)) at distances smaller than those satisfying the Schwarz reconnection criterion, when the nonlocal contribution to the Biot-Savart equation becomes about equal to the local contribution. In the "universal" stage, the nearest parts of the vortices form a pyramid-like structure with angles that depend neither on the initial configuration nor on the temperature.
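The universal square-root form is easy to check numerically: quadrupling the time remaining to reconnection doubles the minimum separation (the default κ ≈ 9.97×10⁻⁸ m²/s is the circulation quantum of superfluid ⁴He; the function name is ours):

```python
import numpy as np

def delta_min(t, t_star, kappa=9.97e-8):
    """Universal pre-reconnection separation of two vortex tips:
    delta(t) = sqrt((kappa / 2*pi) * (t_star - t)), valid for t < t_star."""
    return np.sqrt(kappa / (2.0 * np.pi) * (t_star - t))
```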

  16. Causal Poisson bracket via deformation quantization

    NASA Astrophysics Data System (ADS)

    Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.

    2016-06-01

    Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through an appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.

  17. Quantization of β-Fermi-Pasta-Ulam Lattice with Nearest and Next-nearest Neighbour Interactions

    NASA Astrophysics Data System (ADS)

    Dey, Bishwajyoti

    2015-03-01

    We quantize the β-Fermi-Pasta-Ulam (FPU) model with nearest and next-nearest neighbour (NNN) interactions using a number conserving approximation and a numerically exact diagonalization method. Our numerical mean field bi-phonon spectrum shows excellent agreement with the analytic mean field results of Ivic and Tsironis, except for the wave vector at the midpoint of the Brillouin zone. We then relax the mean field approximation and calculate the eigenvalue spectrum of the full Hamiltonian. We show the existence of multi-phonon bound states and analyze the properties of these states by varying the system parameters. From the calculation of the spatial correlation function we then show that these multi-phonon bound states are particle like states with finite spatial correlation. Accordingly we identify these multi-phonon bound states as the quantum equivalent of the breather solutions of the corresponding classical FPU model. The four-phonon spectrum of the system is then obtained and its properties are studied. We then generalize the study to an extended range interaction and quantize the β-FPU model with NNN interactions. We analyze the effects of the NNN interactions on the eigenvalue spectrum and the correlation functions of the system. I would like to thank DST, India and BCUD, Pune University, Pune for financial support through research projects.

  18. Polarization of He II films upon the relative motion of the superfluid component and the quantized vortices

    NASA Astrophysics Data System (ADS)

    Adamenko, I. N.; Nemchenko, E. K.

    2016-04-01

    Theoretical study of the electrical activity of the saturated superfluid helium (He II) film upon the relative motion of the normal and superfluid components in the film was performed. The polarization vector due to the dipole moments of the quantized vortex rings in He II in the field of van der Waals forces was calculated taking into account the relative motion of the normal and superfluid components. An explicit analytical expression for the electric potential difference arising upon the relative motion of the normal and superfluid components in a torsional oscillator was derived. The obtained time, temperature and relative velocity dependences of the potential difference were in agreement with the experimental data.

  19. Charge quantization and the Standard Model from the CP2 and CP3 nonlinear σ-models

    NASA Astrophysics Data System (ADS)

    Hellerman, Simeon; Kehayias, John; Yanagida, Tsutomu T.

    2014-04-01

    We investigate charge quantization in the Standard Model (SM) through a CP2 nonlinear sigma model (NLSM), SU(3)/(SU(2)×U(1)), and a CP3 model, SU(4)/(SU(3)×U(1)). We also generalize to any CPk model. Charge quantization follows from the consistency and dynamics of the NLSM, without a monopole or Grand Unified Theory, as shown in our earlier work on the CP1 model (arXiv:1309.0692). We find that representations of the matter fields under the unbroken non-abelian subgroup dictate their charge quantization under the U(1) factor. In the CP2 model the unbroken group is identified with the weak and hypercharge groups of the SM, and the Nambu-Goldstone boson (NGB) has the quantum numbers of a SM Higgs. There is the intriguing possibility of a connection with the vanishing of the Higgs self-coupling at the Planck scale. Interestingly, with some minor assumptions (no vector-like matter and minimal representations) and starting with a single quark doublet, anomaly cancellation requires the matter structure of a generation in the SM. Similar analysis holds in the CP3 model, with the unbroken group identified with QCD and hypercharge, and the NGB having the up quark as a partner in a supersymmetric model. This can motivate solving the strong CP problem with a vanishing up quark mass.

  20. Quantizing remote sensing radiation field research based on J-C model

    NASA Astrophysics Data System (ADS)

    Zhen, Ming; Bi, Siwen

    2014-03-01

    Remote sensing provides a powerful tool for humans to explore the environment around us from a multidimensional perspective and a macroscopic view. At the core of remote sensing is remote sensing information: the light or electromagnetic-wave signal obtained by the remote sensing platform. Quantum remote sensing reveals remote sensing theories and methods at the quantum level. Quantum remote sensing information concerns how to express and transmit information by quantum states, and quantization of the remote sensing radiation field is its main basis. Based on the J-C model, which describes the interaction between a single-mode light field and a two-level atom, expressions for the operators correlated with the light field can be obtained through the state vector of the atom-light field coupling system and the Schrödinger equation. Both analysis and calculations show that the quantum fluctuation of the light field can be squeezed. Numerical simulation is used to study the variation of the quantum fluctuation, which deepens our understanding of quantum remote sensing information.

  1. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing the variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences at one-group intervals, which eliminates the ±1 counter error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between any periodic signals have been realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can be easily obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
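The ±1 counter error that the method eliminates is easy to reproduce: in a conventional gated counter, the whole-cycle count for the same signal and gate length depends on the initial signal phase relative to the gate (an illustrative sketch, not the authors' instrument):

```python
import math

def counted_cycles(f_signal, gate_time, phase0=0.0):
    """Conventional gated counter: number of complete signal cycles falling
    inside the gate, starting from initial phase `phase0` (in cycles).
    The floor truncation is the source of the classic +/-1 count ambiguity,
    giving a raw resolution of 1/gate_time in frequency."""
    return math.floor(f_signal * gate_time + phase0)
```

Opening the gate at a phase coincidence, as the paper does, removes this phase-dependent ambiguity instead of averaging it away with longer gates.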

  2. Quantized Concentration Gradient in Picoliter Scale

    NASA Astrophysics Data System (ADS)

    Hong, Jong Wook

    2010-10-01

    Generation of concentration gradients is of paramount importance to the success of reactions in cell biology, molecular biology, biochemistry, drug discovery, chemotaxis, cell culture, biomaterials synthesis, and tissue engineering. In the conventional method of conducting reactions, the concentration gradient is achieved by using pipettes, test tubes, 96-well assay plates, and robotic systems. Conventional methods require milliliter or microliter volumes of samples for typical experiments with multiple and sequential reactions. It is a challenge to carry out experiments with precious samples that have strict limitations on the amount available or the price to pay for that amount. In order to overcome this challenge faced by the conventional methods, fluidic devices with micrometer-scale channels have been developed. These devices, however, restrict changes of concentration because the gradient is fixed by the fixed fluidic channels [Jambovane, S.; Duin, E. C.; Kim, S.-K.; Hong, J. W., Determination of Kinetic Parameters, KM and kcat, with a Single Experiment on a Chip. Analytical Chemistry, 81, (9), 3239-3245, 2009; Jambovane, S.; Hong, J. W., Lorenz-like Chaotic System on a Chip. In The 14th International Conference on Miniaturized Systems for Chemistry and Life Sciences (MicroTAS), The Netherlands, October, 2010]. Here, we present a unique microfluidic system that can generate a quantized concentration gradient by using a series of droplets generated by a mechanical-valve-based injection method [Jambovane, S.; Rho, H.; Hong, J., Fluidic Circuit based Predictive Model of Microdroplet Generation through Mechanical Cutting. In ASME International Mechanical Engineering Congress & Exposition, Lake Buena Vista, Florida, USA, October, 2009; Lee, W.; Jambovane, S.; Kim, D.; Hong, J., Predictive Model on Micro Droplet Generation through Mechanical Cutting. Microfluidics and Nanofluidics, 7, (3), 431-438, 2009].
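As a toy illustration of what "quantized" means here (our simplification, not the authors' device model): mixing k sample droplets with n−k buffer droplets of equal volume can only realize the discrete concentration levels c = c₀·k/n, rather than a continuum.

```python
def droplet_concentrations(c0, n_droplets):
    """Quantized gradient from equal-volume droplets: combining k sample
    droplets (stock concentration c0) with (n - k) buffer droplets yields
    c = c0 * k / n, so only n + 1 discrete levels are reachable."""
    return [c0 * k / n_droplets for k in range(n_droplets + 1)]
```

Finer gradations require more droplets per mixture, which is the trade-off a valve-based droplet injector controls directly.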

  3. Remote Sensing and Quantization of Analog Sensors

    NASA Technical Reports Server (NTRS)

    Strauss, Karl F.

    2011-01-01

    This method enables sensing and quantization of analog strain gauges. By manufacturing a piezoelectric sensor stack in parallel (physically) with a piezoelectric actuator stack, the capacitance of the sensor stack varies in exact proportion to the exertion applied by the actuator stack. This, in turn, varies the output frequency of the local sensor oscillator. The output, F(sub out), is fed to a phase detector, which is driven by a stable reference, F(sub ref). The output of the phase detector is a square waveform, D(sub out), whose duty cycle, t(sub W), varies in exact proportion according to whether F(sub out) is higher or lower than F(sub ref). In this design, should F(sub out) be precisely equal to F(sub ref), the waveform has an exact 50/50 duty cycle. The waveform, D(sub out), is generally of very low frequency, suitable for safe transmission over long distances without corruption. The active portion of the waveform, t(sub W), gates a remotely located counter, which is driven by a stable oscillator (source) of such frequency as to give sufficient digitization of t(sub W) to the resolution required by the application. The advantage of this scheme is that it obviates the most common present method of sending either very-low-level signals (viz., direct output from the sensors) across great distances (anything over one-half meter) or widely varying higher frequencies over significant distances, thereby eliminating interference [both in terms of beat-frequency generation and in-situ EMI (electromagnetic interference)] caused by ineffective shielding. It also results in a significant reduction in shielding mass.
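The digitization step at the remote counter can be sketched as follows (illustrative only; the function and parameter names are ours): the active time t(sub W) = duty_cycle / F(sub Dout) gates a counter clocked at the stable source frequency, so the quantization resolution is one source period.

```python
def gate_counts(duty_cycle, f_dout, f_source):
    """Counts accumulated while D_out is active: the gate-open time
    t_w = duty_cycle / f_dout gates a counter clocked at f_source (Hz),
    so the resolution is one source period (1/f_source).  We round to the
    nearest count here; a hardware counter truncates, which is what gives
    the familiar +/-1-count ambiguity."""
    t_w = duty_cycle / f_dout        # seconds of gate-open time
    return round(t_w * f_source)     # whole source cycles in the gate
```

Raising f_source refines the resolution of t(sub W) without changing the low-frequency waveform actually transmitted over the long cable run.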

  4. Perturbation theory in light-cone quantization

    SciTech Connect

    Langnau, A.

    1992-01-01

    A thorough investigation of light-cone properties that are characteristic of higher dimensions is very important. The easiest way of addressing these issues is by first analyzing the perturbative structure of light-cone field theories. Perturbative studies are no substitute for an analysis of problems related to a nonperturbative approach. However, in order to lay the groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least in second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of the Feynman calculations in higher-order perturbation theory, it is desirable to automate Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we elaborate on here is known as light-cone perturbation theory.

  5. Dynamic Quantizer Synthesis Based on Invariant Set Analysis for SISO Systems with Discrete-Valued Input

    NASA Astrophysics Data System (ADS)

    Sawada, Kenji; Shin, Seiichi

    This paper proposes analysis and synthesis methods for dynamic quantizers for linear feedback single-input single-output (SISO) systems with discrete-valued input, in terms of invariant set analysis. First, this paper derives quantizer analysis and synthesis conditions that identify an optimal quantizer within the ellipsoidal invariant set analysis framework. Next, in the case of minimum-phase feedback systems, this paper shows that the structure of the proposed quantizer is also optimal in the sense that the quantizer gives an optimal output approximation property. Finally, this paper points out that the proposed design method can design a stable quantizer for non-minimum-phase feedback systems, through a numerical example.
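Dynamic quantizers of this kind generalize noise-shaping (ΔΣ-type) structures; a first-order error-feedback quantizer is a minimal instance (our illustrative example, not the paper's synthesized quantizer): the quantization error is stored and added to the next input, so the discrete output tracks the real input on average.

```python
def error_feedback_quantize(u_seq, levels):
    """First-order error-feedback (noise-shaping) quantizer: map each real
    input to the nearest discrete level, carrying the running quantization
    error forward so that the accumulated output tracks the accumulated
    input -- the simplest 'dynamic quantizer' with memory."""
    e = 0.0
    out = []
    for u in u_seq:
        v = min(levels, key=lambda lvl: abs(lvl - (u + e)))  # nearest level
        e = (u + e) - v                                      # error carried over
        out.append(v)
    return out
```

A memoryless (static) rounder applied to the same inputs would output all zeros for a constant input of 0.3, losing the signal entirely; the error feedback recovers the correct average.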

  6. Some effects of quantization on a noiseless phase-locked loop. [sampling phase errors

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1979-01-01

    If the VCO of a phase-locked receiver is to be replaced by a digitally programmed synthesizer, the phase error signal must be sampled and quantized. Effects of quantizing after the loop filter (frequency quantization) or before (phase error quantization) are investigated. Constant Doppler or Doppler rate noiseless inputs are assumed. The main result gives the phase jitter due to frequency quantization for a Doppler-rate input. By itself, however, frequency quantization is impractical because it makes the loop dynamic range too small.

  7. Estimation with Wireless Sensor Networks: Censoring and Quantization Perspectives

    NASA Astrophysics Data System (ADS)

    Msechu, Eric James

    In the last decade there has been an increase in application areas for wireless sensor networks (WSNs), which can be attributed to advances in the enabling sensor technology. These advances include integrated-circuit miniaturization and mass production of highly reliable hardware for sensing, processing, and data storage at lower cost. In many emerging applications, massive amounts of data are acquired by a large number of low-cost sensing devices. The design of signal processing algorithms for these WSNs, unlike wireless networks designed for communications, faces a different set of challenges due to the resource constraints sensor nodes must adhere to. These include: (i) limited on-board memory for storage; (ii) a limited energy source, typically based on irreplaceable battery cells; (iii) radios with limited transmission range; and (iv) stringent data rates, either due to a need to save energy or due to limited radio-frequency bandwidth allocated to sensor networks. This work addresses distributed data reduction at sensor nodes using a combination of measurement censoring and measurement quantization. The WSN is envisioned for decentralized estimation of either a vector of unknown parameters in a maximum likelihood framework or a random signal using Bayesian optimality criteria. Early research effort in data-reduction methods involved using a centralized computation platform directing selection of the most informative data and focusing computational and communication resources on the selected data only. Robustness against failure of the central computation unit, as well as the need for iterative data selection and data gathering in some applications (e.g., real-time navigation systems), motivates a rethinking of the centralized data-selection approach. Recently, research focus has been on collaborative signal processing in sensor neighborhoods for the data-reduction step. It is in this spirit that investigation of methods

  8. Canonical quantization theory of general singular QED system of Fermi field interaction with generally decomposed gauge potential

    SciTech Connect

    Zhang, Zhen-Lu; Huang, Yong-Chang

    2014-03-15

    Quantization theory gives rise to transverse phonons for the traditional Coulomb gauge condition and to scalar and longitudinal photons for the Lorentz gauge condition. We describe a new approach to quantize the general singular QED system by decomposing a general gauge potential into two orthogonal components in general field theory, which preserves scalar and longitudinal photons. Using these two orthogonal components, we obtain an expansion of the gauge-invariant Lagrangian density, from which we deduce the two orthogonal canonical momenta conjugate to the two components of the gauge potential. We then obtain the canonical Hamiltonian in the phase space and deduce the inherent constraints. In terms of the naturally deduced gauge condition, the quantization results are exactly consistent with those in the traditional Coulomb gauge condition and superior to those in the Lorentz gauge condition. Moreover, we find that all the nonvanishing quantum commutators are permanently gauge-invariant. A system can only be measured in physical experiments when it is gauge-invariant. The vanishing longitudinal vector potential means that the gauge invariance of the general QED system cannot be retained. This is similar to the nucleon spin crisis dilemma, which is an example of a physical quantity that cannot be exactly measured experimentally. However, the theory here solves this dilemma by keeping the gauge invariance of the general QED system. -- Highlights: •We decompose the general gauge potential into two orthogonal parts according to general field theory. •We identify a new approach for quantizing the general singular QED system. •The results obtained are superior to those for the Lorentz gauge condition. •The theory presented solves dilemmas such as the nucleon spin crisis.

  9. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has an inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid, and electric drive propulsion concepts).

  10. Fine structure constant and quantized optical transparency of plasmonic nanoarrays.

    PubMed

    Kravets, V G; Schedin, F; Grigorenko, A N

    2012-01-01

    Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.

  11. Quantized Space-Time and Black Hole Entropy

    NASA Astrophysics Data System (ADS)

    Ma, Meng-Sen; Li, Huai-Fan; Zhao, Ren

    2014-06-01

    On the basis of Snyder's idea of quantized space-time, we derive a new generalized uncertainty principle and a new modified density of states. Accordingly, we obtain a corrected black hole entropy with a logarithmic correction term by employing the new generalized uncertainty principle. In addition, we recalculate the entropy of spherically symmetric black holes using statistical mechanics. Because of the use of the minimal length in quantized space-time as a natural cutoff, the entanglement entropy we obtained does not have the usual form A/4 but has a coefficient dependent on the minimal length, which shows differences between black hole entropy in quantized space-time and that in continuous space-time.

  12. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
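The basic mechanism being customized can be sketched in a few lines: transform an 8x8 block with the 2-D DCT, then divide each coefficient by the corresponding table entry and round. The table below is a hypothetical illustration (finer steps at low frequencies, coarser at high frequencies), not one of the paper's optimized tables.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal 8x8 DCT-II basis matrix."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] *= 1 / np.sqrt(2)
    return basis * np.sqrt(2 / n)

# Hypothetical "customized" quantization table: step grows with frequency.
CUSTOM_Q = 8 + 4 * np.add.outer(np.arange(8), np.arange(8))

def quantize_block(block, q_table):
    """Forward 2-D DCT followed by element-wise quantization."""
    d = dct_matrix()
    coeffs = d @ block @ d.T
    return np.round(coeffs / q_table).astype(int)

def dequantize_block(q_coeffs, q_table):
    """Rescale the quantized coefficients and apply the inverse 2-D DCT."""
    d = dct_matrix()
    return d.T @ (q_coeffs * q_table) @ d
```

Optimizing the entries of `CUSTOM_Q` per image or video stream, under a quality or rate constraint, is exactly the tradeoff the paper studies.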

  13. Quantization of gauge fields, graph polynomials and graph homology

    SciTech Connect

    Kreimer, Dirk; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

We review the quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This effectively implies a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. -- Highlights: •We derive gauge theory Feynman integrands from scalar field theory with 3-valent vertices. •We clarify the role of graph homology and cycle homology. •We use parametric renormalization and the new corolla polynomial.

  14. Quantized stabilization of wireless networked control systems with packet losses.

    PubMed

    Qu, Feng-Lin; Hu, Bin; Guan, Zhi-Hong; Wu, Yong-Hong; He, Ding-Xin; Zheng, Ding-Fu

    2016-09-01

This paper considers the stabilization of discrete-time linear systems in which wireless networks transmit the sensor and controller information. Based on Markov jump systems, we show that the coarsest quantizer that stabilizes the wireless networked control system (WNCS) is logarithmic in the sense of mean square quadratic stability, and that the stabilization of this system can be transformed into the robust stabilization of an equivalent uncertain system. Moreover, a method of optimal quantizer/controller design in terms of linear matrix inequalities is presented. Finally, a numerical example is provided to illustrate the effectiveness of the developed theoretical results.
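A minimal sketch of the logarithmic quantizer that such coarsest-quantizer results build on: the levels form a geometric ladder ±u0·ρ^i, and the sector condition bounds the relative error by δ = (1 − ρ)/(1 + ρ). The density ρ = 0.5 below is an arbitrary illustrative choice, not one derived from a particular plant.

```python
import math

def log_quantizer(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer with levels {0, +/- u0*rho**i}.

    The sector bound guarantees |q(v) - v| <= delta*|v| with
    delta = (1 - rho)/(1 + rho), the key property behind
    coarsest-quantizer stabilization results.
    """
    if v == 0.0:
        return 0.0
    sign, mag = (1.0, v) if v > 0 else (-1.0, -v)
    delta = (1.0 - rho) / (1.0 + rho)
    # Locate the level u_i = u0*rho**i whose sector contains mag:
    #   u_i/(1 + delta) < mag <= u_i/(1 - delta)
    i = math.floor(math.log(mag * (1.0 - delta) / u0) / math.log(rho))
    level = u0 * rho ** i
    while mag > level / (1.0 - delta):   # guard against boundary slips
        level /= rho
    while mag <= level / (1.0 + delta):
        level *= rho
    return sign * level
```

Coarser quantization means ρ closer to 0 (fewer levels per decade); the paper characterizes the coarsest ρ that still yields mean square quadratic stability.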

  15. Effective Field Theory of Fractional Quantized Hall Nematics

    SciTech Connect

    Mulligan, Michael; Nayak, Chetan; Kachru, Shamit; /Stanford U., Phys. Dept. /SLAC

    2012-06-06

We present a Landau-Ginzburg theory for a fractional quantized Hall nematic state and the transition to it from an isotropic fractional quantum Hall state. This justifies Lifshitz-Chern-Simons theory - which is shown to be its dual - on a more microscopic basis and enables us to compute a ground state wave function in the symmetry-broken phase. In such a state of matter, the Hall resistance remains quantized while the longitudinal DC resistivity due to thermally-excited quasiparticles is anisotropic. We interpret recent experiments at Landau level filling factor ν = 7/3 in terms of our theory.

  16. On precanonical quantization of gravity in spin connection variables

    SciTech Connect

    Kanatchikov, I. V.

    2013-02-21

The basics of precanonical quantization and its relation to the functional Schrödinger picture in QFT are briefly outlined. The approach is then applied to the quantization of Einstein's gravity in vielbein and spin connection variables, and leads to a quantum dynamics described by the covariant Schrödinger equation for the transition amplitudes on the bundle of spin connection coefficients over space-time, which yields a novel quantum description of space-time geometry. A toy model of precanonical quantum cosmology based on the example of a flat FLRW universe is considered.

  17. BRST operator quantization of generally covariant gauge systems

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Sforza, Daniel M.

    1997-04-01

The BRST generator is realized as a Hermitian nilpotent operator for a finite-dimensional gauge system featuring a quadratic super-Hamiltonian and linear supermomentum constraints. As a result, the emerging ordering for the Hamiltonian constraint is not trivial, because the potential must enter the kinetic term in order to obtain a quantization invariant under scaling. Namely, BRST quantization does not lead to the curvature term used in the literature as a means to get that invariance. The inclusion of the potential in the kinetic term, far from being unnatural, is beautifully justified in light of Jacobi's principle.

  18. New Exact Quantization Condition for Toric Calabi-Yau Geometries

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Zhang, Guojun; Huang, Min-xin

    2015-09-01

    We propose a new exact quantization condition for a class of quantum mechanical systems derived from local toric Calabi-Yau threefolds. Our proposal includes all contributions to the energy spectrum which are nonperturbative in the Planck constant, and is much simpler than the available quantization condition in the literature. We check that our proposal is consistent with previous works and implies nontrivial relations among the topological Gopakumar-Vafa invariants of the toric Calabi-Yau geometries. Together with the recent developments, our proposal opens a new avenue in the long investigations at the interface of geometry, topology and quantum mechanics.

  19. New Exact Quantization Condition for Toric Calabi-Yau Geometries.

    PubMed

    Wang, Xin; Zhang, Guojun; Huang, Min-Xin

    2015-09-18

    We propose a new exact quantization condition for a class of quantum mechanical systems derived from local toric Calabi-Yau threefolds. Our proposal includes all contributions to the energy spectrum which are nonperturbative in the Planck constant, and is much simpler than the available quantization condition in the literature. We check that our proposal is consistent with previous works and implies nontrivial relations among the topological Gopakumar-Vafa invariants of the toric Calabi-Yau geometries. Together with the recent developments, our proposal opens a new avenue in the long investigations at the interface of geometry, topology and quantum mechanics. PMID:26430981

  20. Enhanced current quantization in high-frequency electron pumps in a perpendicular magnetic field

    SciTech Connect

    Wright, S. J.; Blumenthal, M. D.; Gumbs, Godfrey; Thorn, A. L.; Pepper, M.; Anderson, D.; Jones, G. A. C.; Nicoll, C. A.; Ritchie, D. A.; Janssen, T. J. B. M.; Holmes, S. N.

    2008-12-15

    We present experimental results of high-frequency quantized charge pumping through a quantum dot formed by the electric field arising from applied voltages in a GaAs/AlGaAs system in the presence of a perpendicular magnetic field B. Clear changes are observed in the quantized current plateaus as a function of applied magnetic field. We report on the robustness in the length of the quantized plateaus and improvements in the quantization as a result of the applied B field.

  1. One-Dimensional Relativistic Dissipative System with Constant Force and its Quantization

    NASA Astrophysics Data System (ADS)

    López, G.; López, X. E.; Hernández, H.

    2006-04-01

For a relativistic particle under a constant force and a linear velocity dissipation force, a constant of motion is found. Difficulties in obtaining a Hamiltonian for this system are pointed out. Thus, the quantization of this system is carried out through the constant of motion, using quantization on the velocity variable. The dissipative relativistic quantum bouncer is outlined within this quantization approach.

  2. Analytic semi-classical quantization of a QCD string with light quarks

    SciTech Connect

    Theodore J. Allen et al.

    2002-08-14

    We perform an analytic semi-classical quantization of the straight QCD string with one end fixed and a massless quark on the other, in the limits of orbital and radial dominant motion. Our results well approximate those of the exact numerical semi-classical quantization as well as our exact numerical canonical quantization.

  3. A quantum-drive-time (QDT) quantization of the Taub cosmology

    SciTech Connect

    Miller, W.A.; Kheyfets, A.

    1994-10-01

    We present here an application of a new quantization scheme. We quantize the Taub cosmology by quantizing only the anisotropy parameter {beta} and imposing the super-Hamiltonian constraint as an expectation-value equation to recover the relationship between the scale factor {Omega} and time t. This approach appears to avoid the problem of time.

  4. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  5. Nonperturbative renormalization of QED in light-cone quantization

    SciTech Connect

    Hiller, J.R.; Brodsky, S.J.

    1996-08-01

    As a precursor to work on QCD, we study the dressed electron in QED non-perturbatively. The calculational scheme uses an invariant mass cutoff, discretized light cone quantization, a Tamm-Dancoff truncation of the Fock space, and a small photon mass. Nonperturbative renormalization of the coupling and electron mass is developed.

  6. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model is proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
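The transform-coding-plus-block-quantization pipeline can be sketched as follows: decorrelate the spectral bands with the Karhunen-Loeve transform (eigenvectors of the sample covariance), then quantize each coefficient with a step tied to its variance. The synthetic four-band source and the step rule below are illustrative assumptions, not the paper's data or bit allocation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic four-band "multispectral" source with strong inter-band correlation.
base = rng.normal(size=(1000, 1))
bands = base @ np.array([[1.0, 0.9, 0.8, 0.7]]) + 0.1 * rng.normal(size=(1000, 4))

# Karhunen-Loeve transform: project onto eigenvectors of the sample covariance.
mean = bands.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(bands - mean, rowvar=False))
order = np.argsort(eigvals)[::-1]                  # decreasing variance
kl_basis, variances = eigvecs[:, order], eigvals[order]

coeffs = (bands - mean) @ kl_basis                 # decorrelated coefficients

# Block quantization: step size proportional to each coefficient's spread,
# so high-variance components get relatively finer treatment of their range.
steps = np.sqrt(np.maximum(variances, 1e-12)) / 4.0
recon = (np.round(coeffs / steps) * steps) @ kl_basis.T + mean
```

Because the KLT concentrates the energy in a few coefficients, most of the quantized values are near zero and cheap to code, which is the source of the compression gain the paper measures against the rate distortion bound.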

  7. Local mesh quantized extrema patterns for image retrieval.

    PubMed

    Koteswara Rao, L; Venkata Rao, D; Reddy, L Pratap

    2016-01-01

In this paper, we propose a new feature descriptor, named local mesh quantized extrema patterns (LMeQEP), for image indexing and retrieval. The standard local quantized patterns collect the spatial relationship in the form of a larger or deeper texture pattern based on the relative variations in the gray values of the center pixel and its neighbors. Directional local extrema patterns explore the directional information in 0°, 90°, 45° and 135° for a pixel positioned at the center. A mesh structure is created from the quantized extrema to derive significant textural information. Initially, the directional quantized data from the mesh structure are extracted to form the LMeQEP of a given image. Then, an RGB color histogram is built and integrated with the LMeQEP to enhance the performance of the system. In order to test the impact of the proposed method, experimentation is done with benchmark image repositories such as MIT VisTex and Corel-1k. Average retrieval rate and average retrieval precision are considered as the evaluation metrics to record the performance level. The results from experiments show a considerable improvement when compared to other recent techniques in image retrieval. PMID:27429886
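The directional-extrema building block described above can be sketched directly. This simplified version only flags whether the center pixel is a strict extremum along each of the four directions; the quantization and mesh stages of the full descriptor are omitted.

```python
import numpy as np

# Offsets for the four directions used by directional local extrema patterns.
DIRECTIONS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def directional_extrema(image, r, c):
    """Per direction, return 1 if the center pixel is a strict local
    extremum (greater or smaller than both neighbors on that line),
    else 0."""
    center = image[r, c]
    pattern = {}
    for angle, (dr, dc) in DIRECTIONS.items():
        a, b = image[r + dr, c + dc], image[r - dr, c - dc]
        extremum = (center > a and center > b) or (center < a and center < b)
        pattern[angle] = 1 if extremum else 0
    return pattern
```

Collecting these four bits over every interior pixel, quantizing them, and histogramming the result gives the kind of texture signature that descriptors in this family compare at retrieval time.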

  8. Quantization method for describing the motion of celestial systems

    NASA Astrophysics Data System (ADS)

    Christianto, Victor; Smarandache, Florentin

    2015-11-01

Criticism arises concerning the use of the quantization method for describing the motion of celestial systems, arguing that the method oversimplifies the problem and cannot explain other phenomena, for instance planetary migration. Using a quantization method as Nottale and Schumacher did, one can expect to predict new exoplanets with remarkable results. The ``conventional'' theories explaining planetary migration normally use fluid theory involving diffusion processes. Gibson has shown that these migration phenomena could be described via a Navier-Stokes approach. Kiehn's argument was based on an exact mapping between the Schrödinger equation and the Navier-Stokes equations, while our method may be interpreted as an oversimplification of the real planetary migration process which took place sometime in the past, providing a useful tool for prediction (e.g., other planetoids, which are likely to be observed in the near future, around 113.8 AU and 137.7 AU). Therefore, the quantization method could be seen as merely a ``plausible'' theory. We would like to emphasize that the quantization method does not have to be the true description of reality with regard to celestial phenomena. It could explain some phenomena, while perhaps lacking an explanation for others.

  9. FAST TRACK COMMUNICATION: Quantization over boson operator spaces

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaž; Seligman, Thomas H.

    2010-10-01

    The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.

  10. Semiclassical Quantization of the Electron-Dipole System.

    ERIC Educational Resources Information Center

    Turner, J. E.

    1979-01-01

    This paper presents a derivation of the number given by Fermi in 1925, in his semiclassical treatment of the motion of an electron in the field of two stationary positive charges, for Bohr quantization of the electron orbits when the stationary charges are positive, and applies it to an electron moving in the field of a stationary dipole.…

  11. Local mesh quantized extrema patterns for image retrieval.

    PubMed

    Koteswara Rao, L; Venkata Rao, D; Reddy, L Pratap

    2016-01-01

In this paper, we propose a new feature descriptor, named local mesh quantized extrema patterns (LMeQEP), for image indexing and retrieval. The standard local quantized patterns collect the spatial relationship in the form of a larger or deeper texture pattern based on the relative variations in the gray values of the center pixel and its neighbors. Directional local extrema patterns explore the directional information in 0°, 90°, 45° and 135° for a pixel positioned at the center. A mesh structure is created from the quantized extrema to derive significant textural information. Initially, the directional quantized data from the mesh structure are extracted to form the LMeQEP of a given image. Then, an RGB color histogram is built and integrated with the LMeQEP to enhance the performance of the system. In order to test the impact of the proposed method, experimentation is done with benchmark image repositories such as MIT VisTex and Corel-1k. Average retrieval rate and average retrieval precision are considered as the evaluation metrics to record the performance level. The results from experiments show a considerable improvement when compared to other recent techniques in image retrieval.

  12. Equivalent Electrical Circuit Representations of AC Quantized Hall Resistance Standards

    PubMed Central

    Cage, M. E.; Jeffery, A.; Matthews, J.

    1999-01-01

We use equivalent electrical circuits to analyze the effects of large parasitic impedances existing in all sample probes on four-terminal-pair measurements of the ac quantized Hall resistance RH. The circuit components include the externally measurable parasitic capacitances, inductances, lead resistances, and leakage resistances of ac quantized Hall resistance standards, as well as components that represent the electrical characteristics of the quantum Hall effect device (QHE). Two kinds of electrical circuit connections to the QHE are described and considered: single-series “offset” and quadruple-series. (We eliminated other connections in earlier analyses because they did not provide the desired accuracy with all sample probe leads attached at the device.) Exact, but complicated, algebraic equations are derived for the currents and measured quantized Hall voltages for these two circuits. Only the quadruple-series connection circuit meets our desired goal of measuring RH for both ac and dc currents with a one-standard-deviation uncertainty of 10⁻⁸ RH or less during the same cool-down with all leads attached at the device. The single-series “offset” connection circuit meets our other desired goal of also measuring the longitudinal resistance Rx for both ac and dc currents during that same cool-down. We will use these predictions to apply small measurable corrections, and uncertainties of the corrections, to ac measurements of RH in order to realize an intrinsic ac quantized Hall resistance standard of 10⁻⁸ RH uncertainty or less.

  13. Classification of mass-spectrometric data in clinical proteomics using learning vector quantization methods.

    PubMed

    Villmann, Thomas; Schleif, Frank-Michael; Kostrzewa, Markus; Walch, Axel; Hammer, Barbara

    2008-03-01

In the present contribution we propose two recently developed classification algorithms for the analysis of mass-spectrometric data: the supervised neural gas and the fuzzy-labeled self-organizing map. The algorithms are inherently regularizing, which is recommended for these spectral data because of their high dimensionality and sparseness for specific problems. Both algorithms are prototype-based, realizing the principle of characteristic representatives, which leads to an easy interpretation of the generated classification model. Further, the fuzzy-labeled self-organizing map is able to process uncertainty in data, and classification results can be obtained as fuzzy decisions. Moreover, this fuzzy classification together with the property of topographic mapping offers the possibility of class similarity detection, which can be used for class visualization. We demonstrate the power of both methods on two examples: the classification of bacteria (Listeria types) and of neoplastic and non-neoplastic cell populations in breast cancer tissue sections.
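Both methods are prototype-based in the sense of learning vector quantization. A minimal LVQ1 sketch illustrates the shared principle of class-labeled prototypes; this is not the paper's supervised neural gas or fuzzy-labeled SOM, and the 2-D data in the usage test below is made up.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=50, seed=0):
    """Basic LVQ1: move the winning prototype toward a same-class sample
    and away from a different-class sample."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            winner = int(np.argmin(np.linalg.norm(P - X[i], axis=1)))
            sign = 1.0 if proto_labels[winner] == y[i] else -1.0
            P[winner] += sign * lr * (X[i] - P[winner])
    return P

def predict_lvq(X, P, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```

The interpretability claimed in the abstract comes directly from this structure: each learned prototype is itself a point in spectrum space and can be inspected as a characteristic representative of its class.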

  14. Eddy Current Signature Classification of Steam Generator Tube Defects Using A Learning Vector Quantization Neural Network

    SciTech Connect

    Gabe V. Garcia

    2005-01-03

    A major cause of failure in nuclear steam generators is degradation of their tubes. Although seven primary defect categories exist, one of the principal causes of tube failure is intergranular attack/stress corrosion cracking (IGA/SCC). This type of defect usually begins on the secondary side surface of the tubes and propagates both inwards and laterally. In many cases this defect is found at or near the tube support plates.

  15. Phase quantization of chaos in the semiclassical regime

    NASA Astrophysics Data System (ADS)

    Takahashi, Satoshi; Takatsuka, Kazuo

    2007-08-01

Since the early stage of the study of Hamilton chaos, semiclassical quantization based on the low-order Wentzel-Kramers-Brillouin theory, the primitive semiclassical approximation to the Feynman path integrals (or the so-called Van Vleck propagator), and their variants have been suffering from difficulties such as divergence in the correlation function, nonconvergence in the trace formula, and so on. These difficulties have been hampering the progress of quantum chaos, and it is widely recognized that the essential drawback of these semiclassical theories commonly originates from the erroneous feature of the amplitude factors in their applications to classically chaotic systems. This forms a clear contrast to the success of the Einstein-Brillouin-Keller quantization condition for regular (integrable) systems. We show here that energy quantization of chaos in the semiclassical regime is, in principle, possible in terms of constructive and destructive interference of phases alone, and the role of the semiclassical amplitude factor is indeed negligibly small, as long as it is not highly oscillatory. To do so, we first sketch the mechanism of semiclassical quantization of the energy spectrum with the Fourier analysis of phase interference in a time correlation function, from which the amplitude factor is practically factored out due to its slowly varying nature. In this argument there is no distinction between integrability and nonintegrability of classical dynamics. Then we present numerical evidence that chaos can be indeed quantized by means of amplitude-free quasicorrelation functions and Heller's frozen Gaussian method. This is called phase quantization. Finally, we revisit the work of Yamashita and Takatsuka [Prog. Theor. Phys. Suppl. 161, 56 (2007)] who have shown explicitly that the semiclassical spectrum is quite insensitive to smooth modification (rescaling) of the amplitude factor. At the same time, we note that the phase quantization naturally breaks down when the…
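The Fourier-analysis step described above (energies read off as peaks of the transformed time-correlation function) can be sketched with synthetic data. The two energy levels and weights below are arbitrary illustrative numbers, not values from the paper.

```python
import numpy as np

# Synthetic correlation function C(t) = sum_n |c_n|^2 exp(-i E_n t), hbar = 1.
E_true = np.array([1.0, 2.5])
weights = np.array([0.6, 0.4])
dt, n = 0.05, 4096
t = np.arange(n) * dt
C = (weights * np.exp(-1j * np.outer(t, E_true))).sum(axis=1)

# Energies appear as peaks of the Fourier transform of C(t).  (Conjugating
# flips numpy's FFT sign convention so the peaks land at positive omega = E_n.)
spec = np.abs(np.fft.fft(np.conj(C)))
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)

found, s = [], spec.copy()
for _ in range(2):                      # pick the two dominant peaks
    k = int(np.argmax(s))
    found.append(omega[k])
    s[max(0, k - 5):k + 6] = 0.0        # suppress the found peak's main lobe
found = sorted(found)
```

Note that a slowly varying amplitude prefactor multiplying C(t) would only broaden the peaks slightly without moving them, which is the amplitude-insensitivity the abstract emphasizes.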

  16. A perceptual quantization strategy for HEVC based on a convolutional neural network trained on natural images

    NASA Astrophysics Data System (ADS)

    Alam, Md Mushfiqul; Nguyen, Tuan D.; Hagan, Martin T.; Chandler, Damon M.

    2015-09-01

Fast prediction models of local distortion visibility and local quality can potentially make modern spatiotemporally adaptive coding schemes feasible for real-time applications. In this paper, a fast convolutional-neural-network-based quantization strategy for HEVC is proposed. Local artifact visibility is predicted via a network trained on data derived from our improved contrast gain control model. The contrast gain control model was trained on our recent database of local distortion visibility in natural scenes [Alam et al. JOV 2014]. Furthermore, a structural facilitation model was proposed to capture the effects of recognizable structures on distortion visibility via the contrast gain control model. Our results provide, on average, an 11% improvement in compression efficiency for the spatial luma channel of HEVC while requiring almost one hundredth of the computational time of an equivalent gain control model. Our work opens the door for similar techniques which may work for different forthcoming compression standards.

  17. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics, nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.

  18. Adaptive Sampling using Support Vector Machines

    SciTech Connect

    D. Mandelli; C. Smith

    2012-11-01

Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: • Timing of events is not explicitly modeled • Ordering of events is preset by the analyst • The modeling of complex accident scenarios is driven by expert judgment For these reasons, there is currently increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.

  19. In search of a new initialization of K-means clustering for color quantization

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2015-12-01

Color quantization is still an important auxiliary operation in the processing of color images. The K-means clustering (KM) used to quantize colors requires an appropriate initialization. In this paper, we propose a combined KM method that uses the results of well-known quantization algorithms such as Wu's, NeuQuant (NQ) and Neural Gas (NG) for initialization. This approach, assessed by three quality indices (PSNR, ΔE and ΔM), improves the results. Experimental results of such combined quantization indicate that the deterministic Wu+KM and random NG+KM approaches lead to the best quantized images.
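The KM refinement stage of such a combined method can be sketched as plain Lloyd iterations over an initial palette. The initial palette here is arbitrary, standing in for the output of Wu's, NeuQuant, or Neural Gas.

```python
import numpy as np

def kmeans_refine(pixels, palette, iters=20):
    """Lloyd iterations refining an initial color palette (the KM stage
    of the combined method; in the paper the initial palette would come
    from Wu's, NeuQuant, or Neural Gas)."""
    palette = palette.astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign every pixel to its nearest palette color.
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for k in range(len(palette)):
            members = pixels[labels == k]
            if len(members):
                palette[k] = members.mean(axis=0)
    return palette, labels
```

Since each Lloyd step only reduces the within-cluster squared error, the quality of the final palette depends heavily on where the initial palette starts, which is exactly why the paper compares different initializers.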

  20. Relativistic Landau–He–McKellar–Wilkens quantization and relativistic bound states solutions for a Coulomb-like potential induced by the Lorentz symmetry breaking effects

    SciTech Connect

    Bakke, K.; Belich, H.

    2013-06-15

    In this work, we discuss the relativistic Landau–He–McKellar–Wilkens quantization and relativistic bound states solutions for a Dirac neutral particle under the influence of a Coulomb-like potential induced by the Lorentz symmetry breaking effects. We present new possible scenarios of studying Lorentz symmetry breaking effects by fixing the space-like vector field background in special configurations. It is worth mentioning that the criterion for studying the violation of Lorentz symmetry is preserving the gauge symmetry. -- Highlights: •Two new possible scenarios of studying Lorentz symmetry breaking effects. •Coulomb-like potential induced by the Lorentz symmetry breaking effects. •Relativistic Landau–He–McKellar–Wilkens quantization. •Exact solutions of the Dirac equation.

  1. Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles

    NASA Astrophysics Data System (ADS)

    Landsman, N. P. Klaas

    2016-09-01

We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which renders the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).

  2. Quantization of electromagnetic field and analysis of Purcell effect based on formalism of scattering matrix

    NASA Astrophysics Data System (ADS)

    Kaliteevski, M. A.; Gubaydullin, A. R.; Ivanov, K. A.; Mazlin, V. A.

    2016-09-01

We have developed a rigorous self-consistent approach for the quantization of the electromagnetic field in inhomogeneous structures. The approach is based on the scattering matrix of the system. Instead of the standard periodic Born-von Karman boundary conditions, we impose the quantization condition that the eigenvalues of the scattering matrix (S-matrix) of the system equal unity (S-quantization). In the trivial case of a uniform medium, the S-quantization boundary condition reduces to the periodic boundary condition. S-quantization allows calculating the modification of the spontaneous emission rate for an arbitrary inhomogeneous structure and direction of the emitted radiation, and it solves the long-standing problem connected with the normalization of quasi-stationary electromagnetic modes. Examples of applying S-quantization to the calculation of the spontaneous emission rate are demonstrated for a Bragg reflector and a microcavity.
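As a minimal numeric illustration of the S-quantization condition in the trivial uniform-medium case: the values below, including the cavity length `L`, are arbitrary choices for the sketch, not parameters from the paper.

```python
import numpy as np

# For a uniform 1D medium, the scattering matrix of a plane wave over
# length L is just the propagation phase S(k) = exp(i*k*L).  The
# S-quantization condition S(k) = 1 then reproduces the familiar
# periodic (Born-von Karman) modes k_n = 2*pi*n/L.
L = 1.0e-6  # assumed cavity length in metres

def allowed_wavevectors(n_modes):
    """First n_modes positive solutions of exp(i*k*L) = 1."""
    return np.array([2 * np.pi * n / L for n in range(1, n_modes + 1)])

ks = allowed_wavevectors(3)
s_eigenvalues = np.exp(1j * ks * L)
print(np.allclose(s_eigenvalues, 1.0))  # True: every mode satisfies S = 1
```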

  3. Finite-Horizon Near-Optimal Output Feedback Neural Network Control of Quantized Nonlinear Discrete-Time Systems With Input Constraint.

    PubMed

    Xu, Hao; Zhao, Qiming; Jagannathan, Sarangapani

    2015-08-01

The output feedback-based near-optimal regulation of uncertain and quantized nonlinear discrete-time systems in affine form with control constraint over finite horizon is addressed in this paper. First, the effect of input constraint is handled using a nonquadratic cost functional. Next, a neural network (NN)-based Luenberger observer is proposed to reconstruct both the system states and the control coefficient matrix so that a separate identifier is not needed. Then, an approximate dynamic programming-based actor-critic framework is utilized to approximate the time-varying solution of the Hamilton-Jacobi-Bellman equation using NNs with constant weights and time-dependent activation functions. A new error term is defined and incorporated in the NN update law so that the terminal constraint error is also minimized over time. Finally, a novel dynamic quantizer for the control inputs with adaptive step size is designed to eliminate the quantization error over time, thus overcoming the drawback of the traditional uniform quantizer. The proposed scheme functions in a forward-in-time manner without an offline training phase. Lyapunov analysis is used to investigate the stability. Simulation results are given to show the effectiveness and feasibility of the proposed method. PMID:25794403
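The role of the adaptive step size can be illustrated with a toy sketch. The geometric shrinkage rule and the input sequence below are assumptions made for illustration only; the paper's actual update law is tied to its Lyapunov analysis.

```python
import numpy as np

def uniform_quantize(u, step):
    """Mid-tread uniform quantizer; error magnitude is at most step/2."""
    return step * np.round(u / step)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, size=50)  # made-up control input sequence
step = 0.5                       # initial quantizer step (assumed)
errors = []
for uk in u:
    q = uniform_quantize(uk, step)
    errors.append(abs(uk - q))
    step *= 0.8                  # assumed geometric shrinkage rule

# The error is bounded by half the current step, so it decays to zero,
# which is the intuition behind eliminating quantization error over time.
print(errors[0] <= 0.25 and errors[-1] <= 0.25 * 0.8**49)  # True
```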

  5. Wave-vector dispersion versus angular-momentum dispersion of collective modes in small metal particles

    NASA Astrophysics Data System (ADS)

    Ekardt, W.

    1987-09-01

    The wave-vector dispersion of collective modes in small particles is investigated within the time-dependent local-density approximation as applied to a self-consistent jellium particle. It is shown that the dispersion of the volume plasmons can be understood from that in an infinite electron gas. For a given multipole an optimum wave vector exists for the quasiresonant excitation of the volume mode but not for the surface mode. It is pointed out that-for the volume modes-the hydrodynamic approximation gives a reasonable first guess for the relation between frequencies and size-quantized wave vectors.

  6. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using embedded zerotree coding with scalar quantization (SQ), we investigate a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving a high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
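A rough sketch of the hybrid SQ/CVQ split on synthetic coefficients: the threshold, scalar step, and quantile codebook below are illustrative stand-ins, not the paper's classified VQ design.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for wavelet coefficients: mostly small, a few big outliers.
coeffs = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 40, 20)])

THRESH = 8.0   # assumed outlier threshold
SQ_STEP = 4.0  # assumed scalar-quantizer step for outliers
outliers = np.abs(coeffs) > THRESH

recon = np.empty_like(coeffs)
# High-magnitude outliers: coarse scalar quantization (SQ).
recon[outliers] = SQ_STEP * np.round(coeffs[outliers] / SQ_STEP)
# Compactly clustered small coefficients: a small quantile codebook acts
# as a 1-D stand-in for classified vector quantization (CVQ).
small = coeffs[~outliers]
codebook = np.quantile(small, np.linspace(0.05, 0.95, 8))
nearest = np.argmin(np.abs(small[:, None] - codebook[None, :]), axis=1)
recon[~outliers] = codebook[nearest]

snr = 10 * np.log10(np.sum(coeffs**2) / np.sum((coeffs - recon)**2))
print(snr > 10.0)  # the split keeps distortion modest on this toy data
```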

  7. Generalized Selection Weighted Vector Filters

    NASA Astrophysics Data System (ADS)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2004-12-01

This paper introduces a class of nonlinear multichannel filters capable of removing impulsive noise in color images. The proposed generalized selection weighted vector filter class constitutes a powerful filtering framework for multichannel signal processing. Previously defined multichannel filters such as the vector median filter, basic vector directional filter, directional-distance filter, weighted vector median filters, and weighted vector directional filters are treated from a global viewpoint using the proposed framework. Robust order-statistic concepts and an increased degree of freedom in filter design make the proposed method attractive for a variety of applications. The introduced multichannel sigmoidal adaptation of the filter parameters, and its modifications, allows the filter parameters to track varying signal and noise statistics. Simulation studies reported in this paper indicate that the proposed filter class is computationally attractive, yields excellent performance, and is able to preserve fine details and color information while efficiently suppressing impulsive noise. This paper is an extended version of the paper by Lukac et al. presented at the 2003 IEEE-EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP '03) in Grado, Italy.
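The vector median filter mentioned above admits a compact sketch; the window shape and pixel values here are hypothetical.

```python
import numpy as np

def vector_median(window):
    """Vector median filter: among the colour vectors in `window`
    (N x 3), return the one minimizing the summed distance to all
    the others (a robust order-statistic selection)."""
    dists = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return window[np.argmin(dists.sum(axis=1))]

# Hypothetical 3x3 RGB neighbourhood with one impulsive (salt) pixel.
window = np.array([[10, 20, 30]] * 8 + [[255, 255, 255]], dtype=float)
print(vector_median(window))  # [10. 20. 30.] -- the impulse is rejected
```

Because the output is always one of the input vectors, no new colours are introduced, which is why such filters preserve colour information.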

  8. New approach of color image quantization based on multidimensional directory

    NASA Astrophysics Data System (ADS)

    Chang, Chin-Chen; Su, Yuan-Yuan

    2003-04-01

Color image quantization is a strategy in which a smaller number of colors is used to represent the image. The objective is to approximate the quality of the original true-color image as closely as possible. The technology is widely used in non-true-color displays and in color printers that cannot reproduce a large number of different colors. The main problem color image quantization faces is how to represent the image well with fewer colors, so it is very important to choose a suitable palette for an indexed color image. In this paper, we propose a new approach which employs the concept of a Multi-Dimensional Directory (MDD) together with the one-cycle LBG algorithm to create a high-quality indexed color image. Compared with approaches such as VQ, ISQ, and Photoshop v.5, our approach not only produces a high-quality image but also shortens the processing time.
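A minimal sketch of LBG-style palette design on toy data; the deterministic initialization, cluster values, and iteration count are assumptions for the example, not the paper's MDD construction.

```python
import numpy as np

def lbg_palette(pixels, k, iters=10):
    """Generalized Lloyd (LBG) design of a k-colour palette from an
    (N, 3) array of RGB pixels."""
    # Deterministic init: k pixels spread evenly through the data.
    palette = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Nearest-palette-entry assignment, then centroid update.
        d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                palette[j] = members.mean(axis=0)
    return palette, labels

# Toy "image": two well-separated colour clusters.
rng = np.random.default_rng(2)
pixels = np.vstack([rng.normal(50, 2, (100, 3)), rng.normal(200, 2, (100, 3))])
palette, labels = lbg_palette(pixels, k=2)
print(np.sort(palette.mean(axis=1)))  # close to [50, 200] on this data
```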

  9. Features of multiphoton-stimulated bremsstrahlung in a quantized field

    NASA Astrophysics Data System (ADS)

    Burenkov, Ivan A.; Tikhonova, Olga V.

    2010-12-01

The process of absorption and emission of external field quanta by a free electron during scattering on a potential centre is investigated in the case of interaction with a quantized electromagnetic field. The analytical expressions for the differential cross-sections and probabilities of different multiphoton channels are obtained. We demonstrate that in the case of a non-classical 'squeezed vacuum' initial field state the probability for the electron to absorb a large number of photons appears to be larger by several orders of magnitude in comparison to the classical field and leads to the formation of a high-energy plateau in the electron energy spectrum. The generalization of the Marcuse effect to the case of the quantized field is worked out. The total probability of energy absorption by the electron from the non-classical light is analysed.

  10. Conformal Loop quantization of gravity coupled to the standard model

    NASA Astrophysics Data System (ADS)

    Pullin, Jorge; Gambini, Rodolfo

    2016-03-01

We consider a local conformal invariant coupling of the standard model to gravity free of any dimensional parameter. The theory is formulated in order to have a quantized version that admits a spin network description at the kinematical level like that of loop quantum gravity. The Gauss constraint, the diffeomorphism constraint and the conformal constraint are automatically satisfied and the standard inner product of the spin-network basis still holds. The resulting theory resembles the Bars-Steinhardt-Turok local conformal theory, except that it admits a canonical quantization in terms of loops. By considering a gauge-fixed version of the theory we show that the Standard Model coupled to gravity is recovered and the Higgs boson acquires mass. This in turn induces, via the standard mechanism, masses for massive bosons, baryons and leptons.

  11. Statistical amplitude scale estimation for quantization-based watermarking

    NASA Astrophysics Data System (ADS)

    Shterev, Ivo D.; Lagendijk, Reginald L.; Heusdens, Richard

    2004-06-01

    Quantization-based watermarking schemes are vulnerable to amplitude scaling. Therefore the scaling factor has to be accounted for either at the encoder, or at the decoder, prior to watermark decoding. In this paper we derive the marginal probability density model for the watermarked and attacked data, when the attack channel consists of amplitude scaling followed by additive noise. The encoder is Quantization Index Modulation with Distortion Compensation. Based on this model we obtain two estimation procedures for the scale parameter. The first approach is based on Fourier Analysis of the probability density function. The estimation of the scaling parameter relies on the structure of the received data. The second approach that we obtain is the Maximum Likelihood estimator of the scaling factor. We study the performance of the estimation procedures theoretically and experimentally with real audio signals, and compare them to other well known approaches for amplitude scale estimation in the literature.
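A sketch of basic QIM embedding and minimum-distance decoding (without distortion compensation), showing why an unknown amplitude scaling matters. The step `DELTA` and signal statistics are illustrative assumptions.

```python
import numpy as np

DELTA = 1.0  # assumed watermark quantizer step

def qim_embed(x, bit):
    """Quantization Index Modulation: embed one bit per sample by
    quantizing onto one of two lattices offset by DELTA/2."""
    offset = bit * DELTA / 2
    return DELTA * np.round((x - offset) / DELTA) + offset

def qim_decode(y):
    """Minimum-distance decoding: pick the nearer of the two lattices."""
    d0 = np.abs(y - qim_embed(y, 0))
    d1 = np.abs(y - qim_embed(y, 1))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(3)
x = rng.normal(0, 5, 32)
bits = rng.integers(0, 2, 32)
y = qim_embed(x, bits)
print((qim_decode(y) == bits).all())  # True on the unattacked channel

# An unknown amplitude scaling desynchronizes the decoder's lattices,
# which is what the scale-estimation procedures above must undo.
ber_scaled = (qim_decode(0.8 * y) != bits).mean()
print(ber_scaled)  # typically nonzero
```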

  12. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
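The kind of floating-point quantization error simulation described above can be sketched for a first-order filter. The mantissa rounding model, filter coefficient, and input below are assumptions for illustration, not the report's program.

```python
import math
import random

def quantize_float(x, mant_bits):
    """Round x to a floating-point value with `mant_bits` mantissa bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**mant_bits) / 2**mant_bits, e)

def filter_rms_error(mant_bits, n=2000, a=0.9, seed=4):
    """RMS difference between an ideal first-order IIR filter
    y[k] = a*y[k-1] + u[k] and one whose output is quantized."""
    rng = random.Random(seed)
    y_ref = y_q = 0.0
    acc = 0.0
    for _ in range(n):
        u = rng.uniform(-1.0, 1.0)
        y_ref = a * y_ref + u
        y_q = quantize_float(a * y_q + u, mant_bits)
        acc += (y_ref - y_q) ** 2
    return (acc / n) ** 0.5

# Fewer mantissa bits -> larger output error, the kind of dependence the
# report's analysis program quantifies for a given bit configuration.
print(filter_rms_error(8) > filter_rms_error(16))  # True
```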

  13. On Group Phase Quantization and Its Physical Characteristics

    NASA Astrophysics Data System (ADS)

    Du, Bao-Qiang; Zhou, Wei; Yu, Jian-Guo; Dong, Shao-Feng

    2011-05-01

The physical characteristics of the phase quantum are further revealed, based on the proposed concepts of the greatest common factor frequency, the least common multiple period, quantized phase shift resolution and equivalent phase comparison frequency. The problem of phase comparison between different frequency signals is then treated in detail. Using the basic principle of phase comparison between different frequencies and the variation law of the group phase difference, a point of view on group phase quantization is presented. The group phase quantum is not only an indivisible individual of the group phase, but also a basic unit composing the group phase difference; in size, it equals the equivalent phase comparison period in phase comparison between different frequencies. Experimental results show not only a high measurement resolution of 10^-12/s in frequency measurement based on the group phase quantum, but also a super-high phase-locking precision of 10^-13/s in an active H atomic clock.

  14. Polymer quantization of the Einstein-Rosen wormhole throat

    SciTech Connect

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2010-01-15

    We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schroedinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.

  15. Novel properties of the q-analogue quantized radiation field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1993-01-01

The 'classical limit' of the q-analog quantized radiation field is studied paralleling conventional quantum optics analyses. The q-generalizations of the phase operator of Susskind and Glogower and that of Pegg and Barnett are constructed. Both generalizations and their associated number-phase uncertainty relations are manifestly q-independent in the |n>_q number basis. However, in the q-coherent state |z>_q basis, the variance of the generic electric field, (ΔE)^2, is found to be increased by a factor λ(z), where λ(z) > 1 if q ≠ 1. At large amplitudes, the amplitude itself would be quantized if the available resolution of unity for the q-analog coherent states is accepted in the formulation. These consequences are remarkable versus the conventional q = 1 limit.

  16. Quantized conductance through the quantum evaporation of bosonic atoms

    NASA Astrophysics Data System (ADS)

    Papoular, D. J.; Pitaevskii, L. P.; Stringari, S.

    2016-08-01

    We analyze theoretically the quantization of conductance occurring with cold bosonic atoms trapped in two reservoirs connected by a constriction with an attractive gate potential. We focus on temperatures slightly above the condensation threshold in the reservoirs. We show that a conductance step occurs, coinciding with the appearance of a condensate in the constriction. Conductance relies on a collective process involving the quantum condensation of an atom into an elementary excitation and the subsequent quantum evaporation of an atom, in contrast with ballistic fermion transport. The value of the bosonic conductance plateau is strongly enhanced compared to fermions and explicitly depends on temperature. We highlight the role of the repulsive interactions between the bosons in preventing them from collapsing into the constriction. We also point out the differences between the bosonic and fermionic thermoelectric effects in the quantized conductance regime.

  17. Precise quantization of anomalous Hall effect near zero magnetic field

    SciTech Connect

    Bestwick, A. J.; Fox, E. J.; Kou, Xufeng; Pan, Lei; Wang, Kang L.; Goldhaber-Gordon, D.

    2015-05-04

    In this study, we report a nearly ideal quantum anomalous Hall effect in a three-dimensional topological insulator thin film with ferromagnetic doping. Near zero applied magnetic field we measure exact quantization in the Hall resistance to within a part per 10,000 and a longitudinal resistivity under 1 Ω per square, with chiral edge transport explicitly confirmed by nonlocal measurements. Deviations from this behavior are found to be caused by thermally activated carriers, as indicated by an Arrhenius law temperature dependence. Using the deviations as a thermometer, we demonstrate an unexpected magnetocaloric effect and use it to reach near-perfect quantization by cooling the sample below the dilution refrigerator base temperature in a process approximating adiabatic demagnetization refrigeration.

  18. Gauge Invariance of Parametrized Systems and Path Integral Quantization

    NASA Astrophysics Data System (ADS)

    de Cicco, Hernán; Simeone, Claudio

    Gauge invariance of systems whose Hamilton-Jacobi equation is separable is improved by adding surface terms to the action functional. The general form of these terms is given for some complete solutions of the Hamilton-Jacobi equation. The procedure is applied to the relativistic particle and toy universes, which are quantized by imposing canonical gauge conditions in the path integral; in the case of empty models, we first quantize the parametrized system called "ideal clock," and then we examine the possibility of obtaining the amplitude for the minisuperspaces by matching them with the ideal clock. The relation existing between the geometrical properties of the constraint surface and the variables identifying the quantum states in the path integral is discussed.

  19. Corrected Hawking Temperature in Snyder's Quantized Space-time

    NASA Astrophysics Data System (ADS)

    Ma, Meng-Sen; Liu, Fang; Zhao, Ren

    2015-06-01

In the quantized space-time of Snyder, generalized uncertainty relation and commutativity are both included. In this paper we analyze the possible form for the corrected Hawking temperature and derive it from both effects. It is shown that the corrected Hawking temperature has a form similar to that of the noncommutative-geometry-inspired Schwarzschild black hole, however with a requirement for the noncommutative parameter 𝜃 and the minimal length a.

  20. Robust image analysis with sparse representation on quantized visual features.

    PubMed

    Bao, Bing-Kun; Zhu, Guangyu; Shen, Jialie; Yan, Shuicheng

    2013-03-01

Recent techniques based on sparse representation (SR) have demonstrated promising performance in high-level visual recognition, exemplified by the highly accurate face recognition under occlusion and other sparse corruptions. Most research in this area has focused on classification algorithms using raw image pixels, and very few have been proposed to utilize the quantized visual features, such as the popular bag-of-words feature abstraction. In such cases, besides the inherent quantization errors, ambiguity associated with visual word assignment and misdetection of feature points, due to factors such as visual occlusions and noise, constitutes the major cause of dense corruptions of the quantized representation. The dense corruptions can jeopardize the decision process by distorting the patterns of the sparse reconstruction coefficients. In this paper, we aim to eliminate the corruptions and achieve robust image analysis with SR. Toward this goal, we introduce two transfer processes (ambiguity transfer and mis-detection transfer) to account for the two major sources of corruption as discussed. By reasonably assuming the rarity of the two kinds of distortion processes, we augment the original SR-based reconstruction objective with l(0) norm regularization on the transfer terms to encourage sparsity and, hence, discourage dense distortion/transfer. Computationally, we relax the nonconvex l(0) norm optimization into a convex l(1) norm optimization problem, and employ the accelerated proximal gradient method, which yields an updating procedure with provable convergence. Extensive experiments on four benchmark datasets, Caltech-101, Caltech-256, Corel-5k, and CMU pose, illumination, and expression, manifest the necessity of removing the quantization corruptions and the various advantages of the proposed framework.
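A sketch of the l1-relaxed reconstruction via plain proximal gradient (ISTA); the paper uses the accelerated variant, and the problem sizes and sparsity pattern here are toy assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=300):
    """Proximal-gradient (ISTA) solver for
    min_x 0.5*||A @ x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery problem.
rng = np.random.default_rng(5)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
# Indices of the three largest recovered coefficients (ideally the
# true support {3, 17, 42}).
print(sorted(np.argsort(-np.abs(x_hat))[:3].tolist()))
```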

  1. Hamiltonian BRST quantization of Chern-Simons gauge theory

    SciTech Connect

Imai, H.; So, H.; Igarashi, Y.; Kitakado, S.; Kubo, J.

    1990-08-30

    This paper quantizes non-abelian gauge theory with only a Chern-Simons term in three dimensions by using the generalized Hamiltonian formalism of Batalin and Fradkin for irreducible first- and second-class constrained systems, and derives a covariant action for the theory which is invariant under the off-shell nilpotent BRST transformation. Some aspects of the theory, finiteness and supersymmetry are discussed.

  2. Torus as phase space: Weyl quantization, dequantization, and Wigner formalism

    NASA Astrophysics Data System (ADS)

    Ligabò, Marilena

    2016-08-01

    The Weyl quantization of classical observables on the torus (as phase space) without regularity assumptions is explicitly computed. The equivalence class of symbols yielding the same Weyl operator is characterized. The Heisenberg equation for the dynamics of general quantum observables is written through the Moyal brackets on the torus and the support of the Wigner transform is characterized. Finally, a dequantization procedure is introduced that applies, for instance, to the Pauli matrices. As a result we obtain the corresponding classical symbols.

  3. Polymer quantization and the saddle point approximation of partition functions

    NASA Astrophysics Data System (ADS)

    Morales-Técotl, Hugo A.; Orozco-Borunda, Daniel H.; Rastgoo, Saeed

    2015-11-01

    The saddle point approximation of the path integral partition functions is an important way of deriving the thermodynamical properties of black holes. However, there are certain black hole models and some mathematically analog mechanical models for which this method cannot be applied directly. This is due to the fact that their action evaluated on a classical solution is not finite and its first variation does not vanish for all consistent boundary conditions. These problems can be dealt with by adding a counterterm to the classical action, which is a solution of the corresponding Hamilton-Jacobi equation. In this work we study the effects of polymer quantization on a mechanical model presenting the aforementioned difficulties and contrast it with the above counterterm method. This type of quantization for mechanical models is motivated by the loop quantization of gravity, which is known to play a role in the thermodynamics of black hole systems. The model we consider is a nonrelativistic particle in an inverse square potential, and we analyze two polarizations of the polymer quantization in which either the position or the momentum is discrete. In the former case, Thiemann's regularization is applied to represent the inverse power potential, but we still need to incorporate the Hamilton-Jacobi counterterm, which is now modified by polymer corrections. In the latter, momentum discrete case, however, such regularization could not be implemented. Yet, remarkably, owing to the fact that the position is bounded, we do not need a Hamilton-Jacobi counterterm in order to have a well-defined saddle point approximation. Further developments and extensions are commented upon in the discussion.

  4. Factors influencing the titer and infectivity of lentiviral vectors.

    PubMed

    Logan, Aaron C; Nightingale, Sarah J; Haas, Dennis L; Cho, Gerald J; Pepper, Karen A; Kohn, Donald B

    2004-10-01

    Lentiviral vectors have undergone several generations of design improvement to enhance their biosafety and expression characteristics, and have been approved for use in human clinical studies. Most preclinical studies with these vectors have employed easily assayed marker genes for the purpose of determining vector titers and transduction efficiencies. Naturally, the adaptation of these vector systems to clinical use will increasingly involve the transfer of genes whose products may not be easily measured, meaning that the determination of vector titer will be more complicated. One method for determining vector titer that can be universally employed on all human immunodeficiency virus type 1-based lentiviral vector supernatants involves the measurement of Gag (p24) protein concentration in vector supernatants by immunoassay. We have studied the effects that manipulation of several variables involved in vector design and production by transient transfection have on vector titer and infectivity. We have determined that manipulation of the amount of transfer vector, packaging, and envelope plasmids used to transfect the packaging cells does not alter vector infectivity, but does influence vector titer. We also found that modifications to the transfer vector construct, such as replacing the internal promoter or transgene, do not generally alter vector infectivity, whereas inclusion of the central polypurine tract in the transfer vector increases vector infectivity on HEK293 cells and human umbilical cord blood CD34+ hematopoietic progenitor cells (HPCs). The infectivities of vector supernatants can also be increased by harvesting at early time points after the initiation of vector production, collection in serum-free medium, and concentration by ultracentrifugation. For the transduction of CD34+ HPCs, we found that the simplest method of increasing vector infectivity is to pseudotype vector particles with the RD114 envelope instead of vesicular stomatitis virus G

  5. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system.
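A simplified sketch of choosing a quantization step from empirical coefficient statistics: the Lagrangian sweep below is a stand-in for the patent's dynamic-programming optimization over a full table, and the Laplacian coefficient model is an assumption.

```python
import numpy as np

def rate_distortion(coeffs, q):
    """Empirical distortion (MSE) and rate (entropy of the quantizer
    indices, bits/sample) for quantization step q."""
    idx = np.round(coeffs / q)
    distortion = np.mean((coeffs - q * idx) ** 2)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    rate = float(-(p * np.log2(p)).sum())
    return distortion, rate

def best_step(coeffs, candidates, lam):
    """Pick the step minimizing the Lagrangian cost D + lam * R."""
    costs = []
    for q in candidates:
        d, r = rate_distortion(coeffs, q)
        costs.append(d + lam * r)
    return candidates[int(np.argmin(costs))]

rng = np.random.default_rng(6)
ac = rng.laplace(0, 4, 5000)  # stand-in for one DCT coefficient's statistics
coarse = best_step(ac, [1, 2, 4, 8, 16], lam=8.0)
fine = best_step(ac, [1, 2, 4, 8, 16], lam=0.5)
print(fine <= coarse)  # a smaller rate penalty selects a finer step
```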

  6. Rotations with Rodrigues' Vector

    ERIC Educational Resources Information Center

    Pina, E.

    2011-01-01

    The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears…
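The mapping from Rodrigues' vector to the rotation matrix can be sketched as follows, under the common convention (assumed here) that the Rodrigues (Gibbs) vector is g = tan(theta/2) times the unit rotation axis.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_rodrigues(g):
    """Rotation matrix from the Rodrigues (Gibbs) vector
    g = tan(theta/2) * axis; algebraically equivalent to
    R = I + sin(theta)*K + (1 - cos(theta))*K^2 with K = skew(axis)."""
    g = np.asarray(g, dtype=float)
    G = skew(g)
    return np.eye(3) + 2.0 / (1.0 + g @ g) * (G + G @ G)

# 90-degree rotation about z: g = tan(45 deg) * (0, 0, 1) = (0, 0, 1).
R = rotation_from_rodrigues([0.0, 0.0, 1.0])
print(np.allclose(R @ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # True
print(np.allclose(R.T @ R, np.eye(3)))                    # True
```

Unlike the angle-axis form, this parametrization is rational in g, which is one reason it is convenient for rotational dynamics.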

  7. Influence of quantized diffractive phase element on the axial uniformity of pseudo-nondiffracting beams

    NASA Astrophysics Data System (ADS)

    Sze, Jyh-Rou; Wei, An-Chi; Wu, Wen-Hong; Chang, Chun-Li

    2015-07-01

The analysis of diffractive phase elements (DPEs) that synthesize pseudo-nondiffracting beams (PNDBs) in different axial regions is described. These elements are designed using a conjugate-gradient algorithm. To meet the requirements of the lithographic fabrication process, the optimized continuous surface profile of each DPE must be quantized into a multilevel structure. In order to analyze the impact of different quantization levels, the axial-illuminance RMS variance of the PNDB for each quantized DPE is calculated and compared. The comparison shows that the axial illuminance of a PNDB produced by a DPE with fewer quantization levels fluctuates more rapidly than one produced by a DPE with more levels. The analyses also show that the axial uniformity of the PNDB of a DPE with a longer focal length is less sensitive to the quantization level.

  8. Block adaptive rate controlled image data compression

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.

    1979-01-01

    A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
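A sketch of the block-adaptive rate-control idea: coarsen the linear quantization of an image line until its first-difference entropy fits the target rate. The right-shift rule and test data are assumptions for illustration; the Rice universal noiseless coder that follows this step in BARC is omitted.

```python
import numpy as np

def diff_entropy(line):
    """Empirical first-difference entropy (bits/sample) of one line."""
    d = np.diff(line.astype(int))
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def barc_quantize_line(line, target_rate):
    """Coarsen the linear quantization (right shift) of one line until
    its first-difference entropy fits within the target rate."""
    shift = 0
    q = line.astype(int)
    while diff_entropy(q) > target_rate and shift < 8:
        shift += 1
        q = line.astype(int) >> shift
    return q, shift

rng = np.random.default_rng(7)
noisy_line = rng.normal(128, 20, 512).astype(np.uint8)  # made-up 8-bit line
q, shift = barc_quantize_line(noisy_line, target_rate=3.0)
print(diff_entropy(q) <= 3.0)  # the adjusted line now fits the budget
```

When the original line's difference entropy is already below the target, no shift is applied and the coding is noiseless, matching the exact-reconstruction condition stated above.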

  9. Length quantization of DNA partially expelled from heads of a bacteriophage T3 mutant

    SciTech Connect

    Serwer, Philip; Wright, Elena T.; Liu, Zheng; Jiang, Wen

    2014-05-15

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. - Graphical abstract: Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  10. [Vector control, perspectives and realities].

    PubMed

    Carnevale, P

    1995-01-01

    In the WHO Global Strategy for Malaria Control, selective and sustainable vector control is one of the measures to be implemented to complement case management and to control epidemics. Vector control can be targeted against larvae and adults, but two points must be recognized: vector control measures must be selected according to the existing eco-epidemiological diversity, which has to be well understood before embarking upon any extensive action; and efficient tools are currently available, both for large-scale and household use. House spraying is still the method of choice for epidemic control but must be carefully considered and used selectively in endemic countries for various well-known reasons. The promotion of personal protection measures for malaria prevention is advocated because insecticide-impregnated mosquito nets and other materials have proved effective in a variety of situations. Implementation, sustainability and large-scale use of impregnated nets imply strong community participation supported by well-motivated community health workers, the availability of suitable materials (insecticide, mosquito nets), intersectoral collaboration at all levels, well-trained health workers from the central to the most peripheral level, and appropriate educational messages adapted and elaborated after Knowledge, Attitude and Practice (KAP) surveys. It must be kept in mind that the impact of vector control activities will be evaluated in epidemiological terms, such as the reduction of malaria morbidity and mortality.

  11. Rotation-Invariant Relations in Vector Meson Decays into Fermion Pairs

    NASA Astrophysics Data System (ADS)

    Faccioli, Pietro; Lourenço, Carlos; Seixas, João

    2010-08-01

    The covariance properties of angular momentum eigenstates imply the existence of a rotation-invariant relation among the parameters of the difermion decay distribution of inclusively observed vector mesons. This relation is a generalization of the Lam-Tung identity, a result specific to Drell-Yan production in perturbative QCD, here shown to be equivalent to the dynamical condition that the dilepton is always produced transversely polarized with respect to quantization axes belonging to the production plane.
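
For reference, the relation in question can be stated in the standard dilepton-decay notation (λ_θ, λ_φ, λ_θφ are the usual polar, azimuthal and mixed anisotropy parameters; this particular form is quoted from general knowledge of the polarization literature, not extracted from the record above):

```latex
W(\cos\theta,\varphi) \;\propto\;
  1 + \lambda_\theta \cos^2\theta
    + \lambda_\varphi \sin^2\theta \cos 2\varphi
    + \lambda_{\theta\varphi} \sin 2\theta \cos\varphi ,
\qquad
\tilde\lambda \;=\; \frac{\lambda_\theta + 3\lambda_\varphi}{1 - \lambda_\varphi} ,
```

with the Lam-Tung identity corresponding to the frame-independent statement λ̃ = 1, i.e. pure transverse polarization with respect to quantization axes lying in the production plane.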

  12. Mapping the dimensionality, density and topology of data: the growing adaptive neural gas.

    PubMed

    Cselényi, Zsolt

    2005-05-01

    Self-organized maps are commonly applied for tasks of cluster analysis, vector quantization or interpolation. The artificial neural network model introduced in this paper is a hybrid model of the growing neural gas model introduced by Fritzke (Fritzke, in Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995) and the adaptive resolution clustering modification for self-organized maps proposed by Firenze (Firenze et al., in International Conference on Artificial Neural Networks, Springer-Verlag, London, 1994). The hybrid model is capable of mapping the distribution, dimensionality and topology of the input data. It has a local performance measure that enables the network to terminate growing in areas of the input space that are mapped by units reaching a performance goal. Therefore the network can accurately map clusters of data appearing on different scales of density. The capabilities of the algorithm are tested using simulated datasets with similar spatial spread but different local density distributions, and a simulated multivariate MR dataset of an anatomical human brain phantom with mild multiple sclerosis lesions. These tests demonstrate the advantages of the model compared to the growing neural gas algorithm when adaptive mapping of areas with low sample density is desirable. PMID:15848269
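
One adaptation step of a growing-neural-gas-style network can be sketched as follows (a simplified illustration of Fritzke's adaptation and edge-aging rules only; unit insertion and the hybrid local performance measure are omitted, and the names and default step sizes are assumptions of this sketch):

```python
import numpy as np

def gng_step(units, edges, ages, errors, x, eps_b=0.05, eps_n=0.006, age_max=50):
    """One adaptation step of a (simplified) growing neural gas.

    `units` is an (n, d) array of codebook vectors, `edges` a set of
    frozenset({i, j}) pairs, and `ages`/`errors` the edge-age and unit-error
    accumulators used by the growth and pruning rules.
    """
    d2 = ((units - x) ** 2).sum(axis=1)
    s1, s2 = (int(i) for i in np.argsort(d2)[:2])  # winner and runner-up
    errors[s1] += d2[s1]                           # accumulate local error on the winner
    units[s1] += eps_b * (x - units[s1])           # move winner toward the input
    for e in list(edges):
        if s1 in e:
            ages[e] = ages.get(e, 0) + 1           # age edges incident to the winner
            (j,) = e - {s1}
            units[j] += eps_n * (x - units[j])     # drag topological neighbours along
            if ages[e] > age_max:                  # prune stale edges
                edges.discard(e)
                ages.pop(e, None)
    edge = frozenset((s1, s2))
    edges.add(edge)                                # create/refresh winner-runner-up edge
    ages[edge] = 0
    return s1
```

In the full algorithm a new unit is periodically inserted between the two highest-error units; the hybrid model described above additionally stops such growth wherever the local performance goal is already met.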

  13. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

    Some model schemes of quantized Fourier and Fresnel protective holograms with visual effects are suggested. The condition for an optimum trade-off among the quality of the reconstructed images, the coefficient of data reduction of the hologram, and the number of iterations in the hologram reconstruction process has been estimated through computer modeling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.
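
The interplay between phase quantization and iteration count can be illustrated with a Gerchberg-Saxton-style loop that re-quantizes the hologram phase on every round (a generic sketch of iterative quantized-hologram design, assuming NumPy FFTs; it is not the authors' model, which additionally involves the secret encoding keys):

```python
import numpy as np

def quantized_fourier_hologram(target, levels=4, iters=20, seed=0):
    """Iteratively design a phase-only Fourier hologram whose phase is
    quantized to `levels` values each round (requires iters >= 1).
    Returns the quantized hologram phase and the replayed intensity."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(np.asarray(target, dtype=float))   # target amplitude
    phase = rng.uniform(0, 2 * np.pi, amp.shape)     # random initial phase
    step = 2 * np.pi / levels
    for _ in range(iters):
        holo_phase = np.angle(np.fft.ifft2(amp * np.exp(1j * phase)))
        holo_phase = step * np.round(holo_phase / step)  # multilevel quantization
        field = np.fft.fft2(np.exp(1j * holo_phase))     # replay the hologram
        phase = np.angle(field)                          # keep phase, re-impose target amplitude
    return holo_phase, np.abs(field) ** 2
```

More quantization levels and more iterations both improve the reconstructed image, which is the trade-off the abstract's computer model estimates.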

  14. Conductance quantization in magnetic nanowires electrodeposited in nanopores

    NASA Astrophysics Data System (ADS)

    Elhoussine, F.; Mátéfi-Tempfli, S.; Encinas, A.; Piraux, L.

    2002-08-01

    Magnetic nanocontacts have been prepared by a templating method that involves the electrodeposition of Ni within the pores of track-etched polymer membranes. The nanocontacts are made at the extremity of a single Ni nanowire either inside or outside the pores. The method is simple, flexible, and controllable as the width of the constriction can be varied reversibly by controlling the potential between the electrodeposited nanowire and a ferromagnetic electrode. At room temperature, the electrical conductance shows quantization steps in units of e²/h, as expected for ferromagnetic metals without spin degeneracy. Our fabrication method enables future investigation of ballistic spin transport phenomena in electrodeposited magnetic nanocontacts.

  15. Energy distribution in quantized mesoscopic RLC electric circuit

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Feng; Fan, Hong-Yi

    2016-09-01

    Quantum information processing experimentally depends on optical-electronic devices. In this paper, we consider a quantized mesoscopic RLC (resistance, inductance and capacitance) electric circuit in the stable case as a quantum statistical ensemble, and calculate the energy distribution (i.e. the energy stored in the inductance and capacitance as well as the energy consumed in the resistance). For this aim, we employ the technique of integration within ordered product (IWOP) of operators to derive the thermo-vacuum state for this mesoscopic system, with which the ensemble-average energy calculation is replaced by evaluating an expectation value in a pure state. This approach is concise and the result we deduce is physically appealing.

  16. Quantization of surface rust by using laser imaging techniques

    NASA Astrophysics Data System (ADS)

    Koyuncu, B.; Yasin, A.; Abu-Rezq, A.

    1995-06-01

    Laser speckle interferometry [1,2] and image processing [3,4] have been used to detect and quantize the rust build-up on metal surfaces under water. Speckle information from the sample metal surface was captured by a CCD camera and a frame grabber card. Software techniques were used to convert the image data files into ASCII files in an appropriate format. Three-dimensional surface plots were generated to define the numerical values for the amount of rust build-up.

  17. Quantized Eigenstates of a Classical Particle in a Ponderomotive Potential

    SciTech Connect

    I.Y. Dodin; N.J. Fisch

    2004-12-21

    The average dynamics of a classical particle under the action of a high-frequency radiation resembles quantum particle motion in a conservative field with an effective de Broglie wavelength λ equal to the particle average displacement on a period of oscillations. In a "quasi-classical" field, with a spatial scale large compared to λ, the guiding center motion is adiabatic. Otherwise, a particle exhibits quantized eigenstates in a ponderomotive potential well, can tunnel through classically forbidden regions and experience reflection from an attractive potential. Discrete energy levels are also found for a "crystal" formed by multiple ponderomotive barriers.

  18. Note on Stochastic Quantization of Field Theories with Bottomless Actions

    NASA Astrophysics Data System (ADS)

    Ito, M.; Morita, K.

    1993-07-01

    It is shown that the kerneled Langevin equation, which has recently been proposed by Tanaka et al. to quantize field theories with bottomless actions, reproduces perturbation-theory results independently of the initial conditions. The effective potential determined approximately from the kerneled Langevin equation is bounded from below. The evolution equation for the two-point correlation function also defines the effective potential for the propagator, which is given for the zero-dimensional "wrong-sign" −λφ⁴ model under the assumption that all cumulants higher than the second vanish.

  19. Formal verification of communication protocols using quantized Horn clauses

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2016-05-01

    The stochastic nature of quantum communication protocols naturally lends itself to expression via probabilistic logic languages. In this work we describe quantized computation using Horn clauses and base the semantics on quantum probability. Turing-computable Horn clauses are very convenient to work with, and the formalism can be extended to the general form of first-order languages. Towards this end we build a Hilbert space of H-interpretations and a corresponding noncommutative von Neumann algebra of bounded linear operators. We demonstrate the expressive power of the language by casting quantum communication protocols as Horn clauses.
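
For readers unfamiliar with Horn-clause semantics, the classical (Boolean) least-model computation that the paper's quantized semantics generalizes looks like this (a textbook sketch for the propositional case; the paper replaces this valuation with operators on a Hilbert space of H-interpretations):

```python
def forward_chain(facts, rules):
    """Least-model computation for propositional Horn clauses.

    `facts` is a set of atoms; each rule is a (body, head) pair meaning
    head :- body_1, ..., body_k. Rules fire until a fixed point is reached."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in model and set(body) <= model:
                model.add(head)
                changed = True
    return model
```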

  20. Canonical Functional Quantization of Pseudo-Photons in Planar Systems

    SciTech Connect

    Ferreira, P. Castelo

    2008-06-25

    Extended U_e(1)×U_g(1) electromagnetism containing both a photon and a pseudo-photon is introduced at the variational level and is justified by the violation of the Bianchi identities in conceptual systems, either in the presence of magnetic monopoles or non-regular external fields, which are not accounted for by the standard Maxwell Lagrangian. A dimensional reduction is carried out that yields a U_e(1)×U_g(1) Maxwell-BF type theory, and a canonical functional quantization in planar systems is considered which may be relevant in Hall systems.

  1. Quantization of scalar fields coupled to point masses

    NASA Astrophysics Data System (ADS)

    Barbero G, J. Fernando; Juárez-Aubry, Benito A.; Margalef-Bentabol, Juan; Villaseñor, Eduardo J. S.

    2015-12-01

    We study the Fock quantization of a compound classical system consisting of point masses and a scalar field. We consider the Hamiltonian formulation of the model by using the geometric constraint algorithm of Gotay, Nester and Hinds. By relying on this Hamiltonian description, we characterize in a precise way the real Hilbert space of classical solutions to the equations of motion and use it to rigorously construct the Fock space of the system. We finally discuss the structure of this space, in particular the impossibility of writing it in a natural way as a tensor product of Hilbert spaces associated with the point masses and the field, respectively.

  2. Quantization conditions and functional equations in ABJ(M) theories

    NASA Astrophysics Data System (ADS)

    Grassi, Alba; Hatsuda, Yasuyuki; Mariño, Marcos

    2016-03-01

    The partition function of ABJ(M) theories on the three-sphere can be regarded as the canonical partition function of an ideal Fermi gas with a non-trivial Hamiltonian. We propose an exact expression for the spectral determinant of this Hamiltonian, which generalizes recent results obtained in the maximally supersymmetric case. As a consequence, we find an exact WKB quantization condition determining the spectrum which is in agreement with numerical results. In addition, we investigate the factorization properties and functional equations for our conjectured spectral determinants. These functional equations relate the spectral determinants of ABJ theories with consecutive ranks of the gauge groups but the same Chern-Simons coupling.

  3. Canonical quantization of general relativity in discrete space-times.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2003-01-17

    It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang. PMID:12570532

  5. Canonical quantization of electromagnetism in spatially dispersive media

    NASA Astrophysics Data System (ADS)

    Horsley, S. A. R.; Philbin, T. G.

    2014-01-01

    We find the action that describes the electromagnetic field in a spatially dispersive, homogeneous medium. This theory is quantized and the Hamiltonian is diagonalized in terms of a continuum of normal modes. It is found that the introduction of nonlocal response in the medium automatically regulates some previously divergent results, and we calculate a finite value for the intensity of the electromagnetic field at a fixed frequency within a homogeneous medium. To conclude we discuss the potential importance of spatial dispersion in taming the divergences that arise in calculations of Casimir-type effects.

  6. Photophysics and photochemistry of quantized ZnO colloids

    SciTech Connect

    Kamat, P.V.; Patrick, B.

    1992-08-06

    The photophysical and photochemical behavior of quantized ZnO colloids in ethanol has been investigated by time-resolved transient absorption and emission measurements. Trapping of electrons at the ZnO surface resulted in broad absorption in the red region. The green emission of ZnO colloids was readily quenched by hole scavengers such as SCN{sup -} and I{sup -}. The photoinduced charge transfer to these hole scavengers was studied by laser flash photolysis. The yield of oxidized product increased considerably when ZnO colloids were coupled with ZnSe. 36 refs., 11 figs., 1 tab.

  7. Gravity quantized: Loop quantum gravity with a scalar field

    SciTech Connect

    Domagala, Marcin; Kaminski, Wojciech; Giesel, Kristina; Lewandowski, Jerzy

    2010-11-15

    "...but we do not have quantum gravity." This phrase is often used when the analysis of a physical problem enters the regime in which quantum gravity effects should be taken into account. In fact, there are several models of the gravitational field coupled to (scalar) fields for which the quantization procedure can be completed using loop quantum gravity techniques. The model we present in this paper consists of the gravitational field coupled to a scalar field. The result has a structure similar to that of the loop quantum cosmology models, except that it involves all the local degrees of freedom of the gravitational field, because no symmetry reduction has been performed at the classical level.

  8. Nucleation of Quantized Vortices from Rotating Superfluid Drops

    NASA Technical Reports Server (NTRS)

    Donnelly, Russell J.

    2001-01-01

    The long-term goal of this project is to study the nucleation of quantized vortices in helium II by investigating the behavior of rotating droplets of helium II in a reduced gravity environment. The objective of this ground-based research grant was to develop new experimental techniques to aid in accomplishing that goal. The development of an electrostatic levitator for superfluid helium, described below, and the successful suspension of charged superfluid drops in modest electric fields was the primary focus of this work. Other key technologies of general low temperature use were developed and are also discussed.

  9. Basis Light-Front Quantization: Recent Progress and Future Prospects

    NASA Astrophysics Data System (ADS)

    Vary, James P.; Adhikari, Lekha; Chen, Guangyao; Li, Yang; Maris, Pieter; Zhao, Xingbo

    2016-08-01

    Light-front Hamiltonian field theory has advanced to the stage of becoming a viable non-perturbative method for solving forefront problems in strong interaction physics. Physics drivers include hadron mass spectroscopy, generalized parton distribution functions, spin structures of the hadrons, inelastic structure functions, hadronization, particle production by strong external time-dependent fields in relativistic heavy ion collisions, and many more. We review selected recent results and future prospects with basis light-front quantization that include fermion-antifermion bound states in QCD, fermion motion in a strong time-dependent external field and a novel non-perturbative renormalization scheme.

  10. Are Bred Vectors The Same As Lyapunov Vectors?

    NASA Astrophysics Data System (ADS)

    Kalnay, E.; Corazza, M.; Cai, M.


  11. Understanding Singular Vectors

    ERIC Educational Resources Information Center

    James, David; Botteron, Cynthia

    2013-01-01

    matrix yields a surprisingly simple, heuristical approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…

  12. Rhotrix Vector Spaces

    ERIC Educational Resources Information Center

    Aminu, Abdulhadi

    2010-01-01

    By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…

  13. Scalar field quantization without divergences in all spacetime dimensions

    NASA Astrophysics Data System (ADS)

    Klauder, John R.

    2011-07-01

    Covariant, self-interacting scalar quantum field theories admit solutions for low enough spacetime dimensions, but when additional divergences appear in higher dimensions, the traditional approach leads to results, such as triviality, that are less than satisfactory. Guided by idealized but soluble nonrenormalizable models, a nontraditional proposal for the quantization of covariant scalar field theories is advanced, which achieves a term-by-term, divergence-free, perturbation analysis of interacting models expanded about a suitable pseudofree theory, which differs from a free theory by an O(ℏ²) counterterm. These positive features are realized within a functional integral formulation by a local, nonclassical, counterterm that effectively transforms parameter changes in the action from generating mutually singular measures, which are the basis for divergences, to equivalent measures, thereby removing all divergences. The use of an alternative model about which to perturb is already supported by properties of the classical theory and is allowed by the inherent ambiguity in the quantization process itself. This procedure not only provides acceptable solutions for models for which no acceptable, faithful solution currently exists, e.g. φₙ⁴ for spacetime dimensions n ≥ 4, but offers a new, divergence-free solution for less-singular models as well, e.g. φₙ⁴ for n = 2, 3. Our analysis implies similar properties for multicomponent scalar models, such as those associated with the Higgs model.

  14. Unified framework for quasispecies evolution and stochastic quantization.

    PubMed

    Bianconi, Ginestra; Rahmede, Christoph

    2011-05-01

    In this paper we provide a unified framework for quasispecies evolution and stochastic quantization. We map the biological evolution described by the quasispecies equation to the stochastic dynamics of an ensemble of particles undergoing a creation-annihilation process. We show that this mapping identifies a natural decomposition of the probability that an individual has a certain genotype into eigenfunctions of the evolutionary operator. This alternative approach to study the quasispecies equation allows for a generalization of the Fisher theorem equivalent to the Price equation. According to this relation the average fitness of an asexual population increases with time proportional to the variance of the eigenvalues of the evolutionary operator. Moreover, from the present alternative formulation of stochastic quantization a novel scenario emerges to be compared with existing approaches. The evolution of an ensemble of particles undergoing diffusion and a creation-annihilation process is parametrized by a variable β that we call the inverse temperature of the stochastic dynamics. We find that the evolution equation at high temperatures is simply related to the Schrödinger equation, but at low temperature it strongly deviates from it. In the presence of additional noise in scattering processes between the particles, the evolution reaches a steady state described by the Bose-Einstein statistics.
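
The quasispecies equation referred to above has the standard (Eigen) form, with xᵢ the frequency of genotype i, fⱼ the fitness of genotype j, and Q the mutation matrix:

```latex
\dot{x}_i \;=\; \sum_j Q_{ij}\, f_j\, x_j \;-\; \phi(t)\, x_i ,
\qquad
\phi(t) \;=\; \sum_j f_j\, x_j ,
```

where the mean fitness φ(t) keeps Σᵢ xᵢ = 1. The eigenfunction decomposition mentioned in the abstract is with respect to the linear evolutionary operator W with entries W_ij = Q_ij f_j.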

  15. Dual approach to circuit quantization using loop charges

    NASA Astrophysics Data System (ADS)

    Ulrich, Jascha; Hassler, Fabian

    2016-09-01

    The conventional approach to circuit quantization is based on node fluxes and traces the motion of node charges on the islands of the circuit. However, for some devices, the relevant physics can be best described by the motion of polarization charges over the branches of the circuit that are in general related to the node charges in a highly nonlocal way. Here, we present a method, dual to the conventional approach, for quantizing planar circuits in terms of loop charges. In this way, the polarization charges are directly obtained as the differences of the two loop charges on the neighboring loops. The loop charges trace the motion of fluxes through the circuit loops. We show that loop charges yield a simple description of the flux transport across phase-slip junctions. We outline a concrete construction of circuits based on phase-slip junctions that are electromagnetically dual to arbitrary planar Josephson junction circuits. We argue that loop charges also yield a simple description of the flux transport in conventional Josephson junctions shunted by large impedances. We show that a mixed circuit description in terms of node fluxes and loop charges yields an insight into the flux decompactification of a Josephson junction shunted by an inductor. As an application, we show that the fluxonium qubit is well approximated as a phase-slip junction for the experimentally relevant parameters. Moreover, we argue that the 0-π qubit is effectively the dual of a Majorana Josephson junction.

  16. Path-memory induced quantization of classical orbits

    PubMed Central

    Fort, Emmanuel; Eddi, Antonin; Boudaoud, Arezki; Moukhtar, Julien; Couder, Yves

    2010-01-01

    A droplet bouncing on a liquid bath can self-propel due to its interaction with the waves it generates. The resulting “walker” is a dynamical association where, at a macroscopic scale, a particle (the droplet) is driven by a pilot-wave field. A specificity of this system is that the wave field itself results from the superposition of the waves generated at the points of space recently visited by the particle. It thus contains a memory of the past trajectory of the particle. Here, we investigate the response of this object to forces orthogonal to its motion. We find that the resulting closed orbits present a spontaneous quantization. This is observed only when the memory of the system is long enough for the particle to interact with the wave sources distributed along the whole orbit. An additional force then limits the possible orbits to a discrete set. The wave-sustained path memory is thus demonstrated to generate a quantization of angular momentum. Because a quantum-like uncertainty was also observed recently in these systems, the nonlocality generated by path memory opens new perspectives.

  17. Quantization of the massive gravitino on FRW spacetimes

    NASA Astrophysics Data System (ADS)

    Schenkel, Alexander; Uhlemann, Christoph F.

    2012-01-01

    In this article we study the quantization and causal properties of a massive spin 3/2 Rarita-Schwinger field on spatially flat Friedmann-Robertson-Walker (FRW) spacetimes. We construct Zuckerman’s universal conserved current and prove that it leads to a positive definite inner product on solutions of the field equation. Based on this inner product, we quantize the Rarita-Schwinger field in terms of a CAR-algebra. The transversal and longitudinal parts constituting the independent on-shell degrees of freedom decouple. We find a Dirac-type equation for the transversal polarizations, ensuring a causal propagation. The equation of motion for the longitudinal part is also of Dirac-type, but with respect to an “effective metric”. We obtain that for all four-dimensional FRW solutions with a matter equation of state p=ωρ and ω∈(-1,1] the light cones of the effective metric are more narrow than the standard cones, which are recovered for the de Sitter case ω=-1. In particular, this shows that the propagation of the longitudinal part, although nonstandard for ω≠-1, is completely causal in cosmological constant, dust and radiation dominated universes.

  18. Size quantization of Dirac fermions in graphene constrictions

    PubMed Central

    Terrés, B.; Chizhova, L. A.; Libisch, F.; Peiro, J.; Jörger, D.; Engels, S.; Girschik, A.; Watanabe, K.; Taniguchi, T.; Rotkin, S. V.; Burgdörfer, J.; Stampfer, C.

    2016-01-01

    Quantum point contacts are cornerstones of mesoscopic physics and central building blocks for quantum electronics. Although the Fermi wavelength in high-quality bulk graphene can be tuned up to hundreds of nanometres, the observation of quantum confinement of Dirac electrons in nanostructured graphene has proven surprisingly challenging. Here we show ballistic transport and quantized conductance of size-confined Dirac fermions in lithographically defined graphene constrictions. At high carrier densities, the observed conductance agrees excellently with the Landauer theory of ballistic transport without any adjustable parameter. Experimental data and simulations for the evolution of the conductance with magnetic field unambiguously confirm the identification of size quantization in the constriction. Close to the charge neutrality point, bias voltage spectroscopy reveals a renormalized Fermi velocity of ∼1.5 × 10⁶ m s⁻¹ in our constrictions. Moreover, at low carrier density transport measurements allow probing the density of localized states at edges, thus offering a unique handle on edge physics in graphene devices. PMID:27198961

  19. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
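
    The embed/decode cycle of QIM can be sketched with a scalar dither quantizer: bit b shifts the quantization lattice by b·Δ/2, and the detector picks whichever lattice is nearer. This is a minimal sketch of the basic mechanism, not the authors' sparse wavelet-domain scheme; the step size `delta` and the example coefficients are arbitrary choices for illustration.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient: bit b selects the lattice
    {k*delta + b*delta/2} and the coefficient snaps to it."""
    coeffs = np.asarray(coeffs, dtype=float)
    offset = np.asarray(bits) * (delta / 2.0)
    return np.round((coeffs - offset) / delta) * delta + offset

def qim_detect(received, delta=8.0):
    """Decode by choosing the lattice nearest to each received value."""
    received = np.asarray(received, dtype=float)
    d0 = np.abs(received - np.round(received / delta) * delta)
    shifted = np.round((received - delta / 2) / delta) * delta + delta / 2
    d1 = np.abs(received - shifted)
    return (d1 < d0).astype(int)

coeffs = np.array([10.3, -4.7, 22.1, 3.3])
bits = np.array([1, 0, 1, 0])
marked = qim_embed(coeffs, bits)
# detection survives any perturbation smaller than delta/4
assert np.array_equal(qim_detect(marked + 1.5), bits)
```

    The sparse variants in the paper apply this kind of quantization only to selected groups of wavelet coefficients, and an error-correcting code such as BCH would then wrap the embedded bit stream.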

  20. Quantization of redshift differences in isolated galaxy pairs

    SciTech Connect

    Tifft, W.G.; Cocke, W.J.

    1989-01-01

    Improved 21 cm data on isolated galaxy pairs are presented which eliminate questions of inhomogeneity in the data on such pairs and reduce observational error to below 5 km/s. Quantization is sharpened, and the zero peak is shown to be displaced from zero to a location near 24 km/s. An exclusion principle is suggested whereby identical redshifts are forbidden in limited volumes. The radio data and data from Schweizer (1987) are combined with the best optical data on close Karachentsev pairs to provide a cumulative sample of 84 of the best differentials now available. New 21 cm observations are used to test for the presence of small differentials in very wide pairs, and the deficiency near zero is found to continue to very wide spacings. A loss of wide pairs by selection bias cannot produce the observed zero deficiency. A new test using pairs selected from the Fisher-Tully catalog is used to demonstrate quantization properties of third components associated with possible pairs. 27 references.

  1. Analysis of the quantum bouncer using polymer quantization

    NASA Astrophysics Data System (ADS)

    Martín-Ruiz, A.; Frank, A.; Urrutia, L. F.

    2015-08-01

    Polymer quantization (PQ) is a background independent quantization scheme that arises in loop quantum gravity. This framework leads to a new short-distance (discretized) structure characterized by a fundamental length. In this paper we use PQ to analyze the problem of a particle bouncing on a perfectly reflecting surface under the influence of Earth's gravitational field. In this scenario, deviations from the usual quantum effects are induced by the spatial discreteness, but not by a new short-range gravitational interaction. We solve the polymer Schrödinger equation in an analytical fashion, and we evaluate numerically the corresponding energy levels. We find that the polymer energy spectrum exhibits a negative shift compared to the one obtained for the quantum bouncer. The comparison of our results with those obtained in the GRANIT experiment leads to an upper bound for the fundamental length scale, namely λ ≪ 0.6 Å. We find polymer corrections to the transition probability between levels, induced by small vibrations, together with the probability of spontaneous emission in the quadrupole approximation.

  2. Universality and quantized response in bosonic mesoscopic tunneling

    NASA Astrophysics Data System (ADS)

    Yin, Shaoyu; Béri, Benjamin

    2016-06-01

    We show that tunneling involving bosonic wires and/or boson integer quantum Hall (bIQH) edges is characterized by features that are far more universal than those in their fermionic counterpart. Considering a pair of minimal geometries, we examine the tunneling conductance as a function of energy (e.g., chemical potential bias) at high and low energy limits, finding a low energy enhancement and a universal high versus zero energy relation that hold for all wire/bIQH edge combinations. Beyond this universality present in all the different topological (bIQH-edge) and nontopological (wire) setups, we also discover a number of features distinguishing the topological bIQH edges, which include a current imbalance to chemical potential bias ratio that is quantized despite the lack of conductance quantization in the bIQH edges themselves. The predicted phenomena require only initial states to be thermal and thus are well suited for tests with ultracold bosons forming wires and bIQH states. For the latter, we highlight a potential realization based on single component bosons in the recently observed Harper-Hofstadter band structure.

  3. A short course on quantum mechanics and methods of quantization

    NASA Astrophysics Data System (ADS)

    Ercolessi, Elisa

    2015-07-01

    These notes collect the lectures given by the author at the "XXIII International Workshop on Geometry and Physics" held in Granada (Spain) in September 2014. The first part of this paper aims at introducing a mathematically oriented reader to the realm of Quantum Mechanics (QM) and then at presenting the geometric structures that underlie the mathematical formalism of QM, which, contrary to what is usually done in Classical Mechanics (CM), are usually not taught in introductory courses. The mathematics related to Hilbert spaces and Differential Geometry is assumed to be known by the reader. In the second part, we concentrate on some quantization procedures that are founded on the geometric structures of QM, as described in the first part, and that represent those most operatively used in modern theoretical physics. We first discuss the so-called Coherent State Approach which, mainly complemented by the "Feynman Path Integral Technique", is the method most widely used in quantum field theory. Finally, we describe the "Weyl Quantization Approach", which is at the origin of modern tomographic techniques, originally used in optics and now in quantum information theory.

  4. q-bosons and the q-analogue quantized field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1995-01-01

    The q-analogue coherent states are used to identify physical signatures for the presence of a q-analogue quantized radiation field in the q-CS classical limits where |z| is large. In this quantum-optics-like limit, the fractional uncertainties of most physical quantities (momentum, position, amplitude, phase) which characterize the quantum field are O(1). They only vanish as O(1/|z|) when q = 1. However, for the number operator N and the N-Hamiltonian for a free q-boson gas, H_N = ħω(N + 1/2), the fractional uncertainties do still approach zero. A signature for q-boson counting statistics is that (ΔN)²/⟨N⟩ → 0 as |z| → ∞. Except for its O(1) fractional uncertainty, the q-generalization of the Hermitian phase operator of Pegg and Barnett, φ_q, still exhibits normal classical behavior. The standard number-phase uncertainty relation, ΔN Δφ_q = 1/2, and the approximate commutation relation, [N, φ_q] = i, still hold for the single-mode q-analogue quantized field. So N and φ_q are almost canonically conjugate operators in the q-CS classical limit. The q-analogue CS's minimize this uncertainty relation for moderate |z|².

  5. Pisot q-coherent states quantization of the harmonic oscillator

    SciTech Connect

    Gazeau, J.P.; Olmo, M.A. del

    2013-03-15

    We revisit the quantized version of the harmonic oscillator obtained through a q-dependent family of coherent states. For each q, 0 < q < 1, […]. - Highlights: • Quantized version of the harmonic oscillator (HO) through a q-family of coherent states. • For q, 0 < q < 1, […].

  6. Pythagorean quantization, action(s) and the arrow of time

    NASA Astrophysics Data System (ADS)

    Schuch, Dieter

    2010-06-01

    Searching for the first well-documented attempts of introducing some kind of "quantization" into the description of nature inevitably leads to the ancient Greeks, in particular Plato and Pythagoras. The question of finding the so-called Pythagorean triples, i.e., right-angled triangles with integer length of all three sides, is, surprisingly, connected with complex nonlinear Riccati equations that occur in time-dependent quantum mechanics. The complex Riccati equation, together with the usual Newtonian equation of the system, leads to a dynamical invariant with the dimension of an action. The relation between this invariant and a conserved "angular momentum" for the motion in the complex plane will be determined. The "Pythagorean quantization" shows similarities with the quantum Hall effect and leads to an interpretation of Sommerfeld's fine structure constant that involves another quantum of action, the "least Coulombic action" e²/c. Since natural evolution is characterized by irreversibility and dissipation, the question of how these aspects can be incorporated into a quantum mechanical description arises. Two effective approaches that also both possess a dynamical invariant (like the one mentioned above) will be discussed. One uses an explicitly time-dependent (linear) Hamiltonian, whereas the other leads to a nonlinear Schrödinger equation with complex logarithmic nonlinearity. Both approaches can be transformed into each other via a non-unitary transformation that involves Schrödinger's original definition of a (complex) action via the wave function.

  7. Topos quantum theory on quantization-induced sheaves

    SciTech Connect

    Nakayama, Kunji

    2014-10-15

    In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have “coarse-graining” effects on the spaces of the presheaf-based ingredients; a lot of different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.

  8. Optical evidence for quantization in transparent amorphous oxide semiconductor superlattice

    NASA Astrophysics Data System (ADS)

    Abe, Katsumi; Nomura, Kenji; Kamiya, Toshio; Hosono, Hideo

    2012-08-01

    We fabricated transparent amorphous oxide semiconductor superlattices composed of In-Ga-Zn-O (a-IGZO) well layers and Ga2O3 (a-Ga2O3) barrier layers, and investigated their optical absorption properties to examine energy quantization in the a-IGZO well layer. The Tauc gap of a-IGZO well layers monotonically increases with decreasing well thickness at ≤5 nm. The thickness dependence of the Tauc gap is quantitatively explained by a Kronig-Penney model employing a conduction band offset of 1.2 eV between the a-IGZO and the a-Ga2O3, and effective masses of 0.35m0 for the a-IGZO well layer and 0.5m0 for the a-Ga2O3 barrier layer, where m0 is the electron rest mass. This result demonstrates the quantization in the a-IGZO well layer. The phase relaxation length of the a-IGZO is estimated to be larger than 3.5 nm.
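
    As a back-of-the-envelope check on the magnitude of such confinement shifts, the infinite-well formula ΔE = ħ²π²/(2m*L²) with the effective mass quoted in the abstract already gives tens of meV at a 5 nm well width. This toy estimate ignores the finite 1.2 eV band offset used in the actual Kronig-Penney fit, so it overestimates the shift; it only illustrates the order of magnitude and the 1/L² trend.

```python
import math

# Toy infinite-well estimate of the confinement shift,
# dE = hbar^2 * pi^2 / (2 * m_eff * L^2), in meV.
hbar = 1.055e-34      # J s
m0 = 9.109e-31        # kg, electron rest mass
m_eff = 0.35 * m0     # a-IGZO well effective mass from the abstract
e = 1.602e-19         # C

def confinement_shift_meV(L_nm):
    L = L_nm * 1e-9
    return hbar ** 2 * math.pi ** 2 / (2 * m_eff * L ** 2) / e * 1e3

# shift grows rapidly below ~5 nm, consistent with the observed onset
assert 40 < confinement_shift_meV(5) < 46
assert confinement_shift_meV(2) > confinement_shift_meV(10) > 0
```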

  9. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  10. On the Stochastic Quantization Method: Characteristics and Applications to Singular Systems

    NASA Technical Reports Server (NTRS)

    Kanenaga, Masahiko; Namiki, Mikio

    1996-01-01

    Introducing the generalized Langevin equation, we extend the stochastic quantization method so as to deal with singular dynamical systems beyond the ordinary territory of quantum mechanics. We also show how the uncertainty relation is built up to the quantum mechanical limit with respect to fictitious time, irrespective of its initial value, within the framework of the usual stochastic quantization method.

  11. On the macroscopic quantization in mesoscopic rings and single-electron devices

    NASA Astrophysics Data System (ADS)

    Semenov, Andrew G.

    2016-05-01

    In this letter we investigate the phenomenon of macroscopic quantization, taking as an example a particle on a ring interacting with a dissipative bath. We demonstrate that even in the presence of the environment there is a macroscopically quantized observable which can take only integer values in the zero-temperature limit. This fact follows from conservation of the total angular momentum combined with momentum quantization for the bare particle on the ring. Remarkably, the model under consideration, including the notion of the quantized observable, can be mapped onto the Ambegaokar-Eckern-Schon model of the single-electron box (SEB). We evaluate the SEB observable that arises after the mapping and reveal new physics, which follows from the macroscopic quantization phenomenon and the existence of an additional conservation law. Some generalizations of the obtained results are also presented.

  12. Flat minimal quantizations of Stäckel systems and quantum separability

    SciTech Connect

    Błaszak, Maciej; Domański, Ziemowit; Silindir, Burcu

    2014-12-15

    In this paper, we consider the problem of quantization of classical Stäckel systems and the problem of separability of related quantum Hamiltonians. First, using the concept of Stäckel transform, natural Hamiltonian systems from a given Riemann space are expressed by some flat coordinates of related Euclidean configuration space. Then, the so-called flat minimal quantization procedure is applied in order to construct an appropriate Hermitian operator in the respective Hilbert space. Finally, we distinguish a class of Stäckel systems which remains separable after any of admissible flat minimal quantizations. - Highlights: • Using Stäckel transform, separable Hamiltonians are expressed by flat coordinates. • The concept of admissible flat minimal quantizations is developed. • The class of Stäckel systems, separable after minimal flat quantization is established. • Separability of related stationary Schrödinger equations is presented in explicit form.

  13. Performance of peaky template matching under additive white Gaussian noise and uniform quantization

    NASA Astrophysics Data System (ADS)

    Horvath, Matthew S.; Rigling, Brian D.

    2015-05-01

    Peaky template matching (PTM) is a special case of a general algorithm known as multinomial pattern matching originally developed for automatic target recognition of synthetic aperture radar data. The algorithm is a model-based approach that first quantizes pixel values into Nq = 2 discrete values, yielding generative Beta-Bernoulli models as class-conditional templates. Here, we consider the case of classification of target chips in AWGN and develop approximations to image-to-template classification performance as a function of the noise power. We focus specifically on the case of a "uniform quantization" scheme, where a fixed number of the largest pixels are quantized high as opposed to using a fixed threshold. This quantization method reduces sensitivity to the scaling of pixel intensities, and quantization in general reduces sensitivity to various nuisance parameters that are difficult to account for a priori. Our performance expressions are verified using forward-looking infrared imagery from the Army Research Laboratory Comanche dataset.
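
    The "uniform quantization" step described above (quantize a fixed number of the largest pixels high, rather than thresholding on intensity) can be sketched as a rank threshold. The function name and the toy chip below are illustrative, not from the paper.

```python
import numpy as np

def uniform_quantize(chip, k):
    """Set the k largest pixels to 1 and the rest to 0 (Nq = 2)."""
    flat = np.asarray(chip, dtype=float).ravel()
    idx = np.argpartition(flat, -k)[-k:]      # positions of the k largest
    out = np.zeros(flat.size, dtype=int)
    out[idx] = 1
    return out.reshape(np.shape(chip))

chip = np.array([[0.1, 0.9],
                 [0.4, 0.7]])
q = uniform_quantize(chip, k=2)
assert q.sum() == 2
# rank thresholding is invariant to positive rescaling of intensities
assert np.array_equal(q, uniform_quantize(10 * chip, k=2))
```

    The invariance shown in the last line is exactly the reduced sensitivity to pixel-intensity scaling that the abstract attributes to this scheme.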

  14. Index Sets and Vectorization

    SciTech Connect

    Keasler, J A

    2012-03-27

    Vectorization is data parallelism (SIMD, SIMT, etc.) - an extension of the ISA enabling the same instruction to be performed on multiple data items simultaneously. Many/most CPUs support vectorization in some form. Vectorization is difficult to enable, but can yield large efficiency gains. Extra programmer effort is required because: (1) not all algorithms can be vectorized (regular algorithm structure and fine-grain parallelism must be used); (2) most CPUs have data alignment restrictions for load/store operations (obey or risk incorrect code); (3) special directives are often needed to enable vectorization; and (4) vector instructions are architecture-specific. Vectorization is the best way to optimize for power and performance due to reduced clock cycles. When data is organized properly, a vector load instruction (e.g., movaps) can replace 'normal' load instructions (e.g., movsd). Vector operations can potentially have a smaller footprint in the instruction cache when fewer instructions need to be executed. Hybrid index sets insulate users from architecture-specific details. We have applied hybrid index sets to achieve optimal vectorization. We can extend this concept to handle other programming models.
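
    As a language-level analogy for the report's point (its ISA-level example contrasts movaps with movsd), the sketch below contrasts a scalar loop over scattered indices with the same update applied over contiguous index ranges, which is the access pattern hybrid index sets are meant to expose to the vectorizer. All names here are illustrative, not from the report.

```python
import numpy as np

def scalar_update(a, b, indices):
    """Scalar loop over arbitrary (possibly scattered) indices."""
    out = a.copy()
    for i in indices:
        out[i] = a[i] * 2.0 + b[i]
    return out

def vector_update(a, b, index_set):
    """Same update over contiguous ranges; each slice maps onto
    vector (SIMD-style) execution in the underlying library."""
    out = a.copy()
    for start, stop in index_set:
        out[start:stop] = a[start:stop] * 2.0 + b[start:stop]
    return out

a, b = np.arange(8.0), np.ones(8)
ranges = [(0, 4), (6, 8)]                      # a toy hybrid index set
flat = [i for s, t in ranges for i in range(s, t)]
assert np.allclose(scalar_update(a, b, flat), vector_update(a, b, ranges))
```

    The two functions compute identical results; the point is that the range-based form exposes regular, alignment-friendly structure, which is what enables the vector load/store instructions the report describes.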

  15. Constrained Adaptive Sensing

    NASA Astrophysics Data System (ADS)

    Davenport, Mark A.; Massimino, Andrew K.; Needell, Deanna; Woolf, Tina

    2016-10-01

    Suppose that we wish to estimate a vector $\\mathbf{x} \\in \\mathbb{C}^n$ from a small number of noisy linear measurements of the form $\\mathbf{y} = \\mathbf{A x} + \\mathbf{z}$, where $\\mathbf{z}$ represents measurement noise. When the vector $\\mathbf{x}$ is sparse, meaning that it has only $s$ nonzeros with $s \\ll n$, one can obtain a significantly more accurate estimate of $\\mathbf{x}$ by adaptively selecting the rows of $\\mathbf{A}$ based on the previous measurements provided that the signal-to-noise ratio (SNR) is sufficiently large. In this paper we consider the case where we wish to realize the potential of adaptivity but where the rows of $\\mathbf{A}$ are subject to physical constraints. In particular, we examine the case where the rows of $\\mathbf{A}$ are constrained to belong to a finite set of allowable measurement vectors. We demonstrate both the limitations and advantages of adaptive sensing in this constrained setting. We prove that for certain measurement ensembles, the benefits offered by adaptive designs fall far short of the improvements that are possible in the unconstrained adaptive setting. On the other hand, we also provide both theoretical and empirical evidence that in some scenarios adaptivity does still result in substantial improvements even in the constrained setting. To illustrate these potential gains, we propose practical algorithms for constrained adaptive sensing by exploiting connections to the theory of optimal experimental design and show that these algorithms exhibit promising performance in some representative applications.
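
    A minimal two-stage sketch of constrained adaptive sensing, assuming the allowable measurement vectors are the n coordinate directions: survey every coordinate once, then spend the remaining budget re-measuring the most promising coordinates and averaging the repeated looks. This toy illustrates the adaptivity gain at high SNR, not the optimal-experimental-design algorithms proposed in the paper; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

n, s, sigma = 64, 3, 1.0
x = np.zeros(n)
support = rng.choice(n, s, replace=False)
x[support] = 10.0                 # a strong s-sparse signal

# Allowed rows: the n coordinate directions. Stage 1 surveys each once.
stage1 = x + sigma * rng.standard_normal(n)

# Stage 2 adapts: re-measure the k most promising coordinates and
# average the repeated looks, cutting their noise by sqrt(repeats).
k, repeats = 6, 16
top = np.argsort(np.abs(stage1))[-k:]
estimate = np.zeros(n)
for i in top:
    looks = x[i] + sigma * rng.standard_normal(repeats)
    estimate[i] = looks.mean()

# the true support ends up among the largest entries of the estimate
assert set(support.tolist()) <= set(np.argsort(np.abs(estimate))[-s:].tolist())
```

    Concentrating the budget on candidate coordinates is what a nonadaptive design cannot do; the paper's point is that with more restrictive measurement ensembles this gain can shrink substantially.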

  16. Canonical Groups for Quantization on the Two-Dimensional Sphere and One-Dimensional Complex Projective Space

    NASA Astrophysics Data System (ADS)

    A, Sumadi A. H.; H, Zainuddin

    2014-11-01

    Using Isham's group-theoretic quantization scheme, we construct the canonical groups of the systems on the two-dimensional sphere and the one-dimensional complex projective space, which are homeomorphic. In the first case, we take SO(3) as the natural canonical Lie group of rotations of the two-sphere and find all the possible Hamiltonian vector fields, followed by verifying the correspondence of the commutator and Poisson bracket algebras with the Lie algebra of the group. In the second case, the same technique is applied to define the Lie group, in this case SU(2), of CP1. We show that one can simply use a coordinate transformation from S2 to CP1 to obtain all the Hamiltonian vector fields of CP1. We explicitly show that the Lie algebra structures of both canonical groups are locally homomorphic. On the other hand, globally their corresponding canonical groups act on different geometries, the latter of which is almost complex. Thus the canonical group for CP1 is the double-covering group of SO(3), namely SU(2). The relevance of the proposed formalism is to understand the idea of CP1 as the space where the qubit lives, known as the Bloch sphere.

  17. Adaptive mode-dependent scan for H.264/AVC intracoding

    NASA Astrophysics Data System (ADS)

    Wei, Yung-Chiang; Yang, Jar-Ferr

    2010-07-01

    In image/video coding standards, the zigzag scan provides an effective encoding order of the quantized transform coefficients such that the quantized coefficients can be arranged statistically from large to small magnitudes. Generally, the optimal scan should transfer the 2-D transform coefficients into 1-D data in descending order of their average power levels. With the optimal scan order, we can achieve more efficient variable length coding. In H.264 advanced video coding (AVC), the residuals resulting from various intramode predictions have different statistical characteristics. After analyzing the transformed residuals, we propose an adaptive scan order scheme, which optimally matches up with intraprediction mode, to further improve the efficiency of intracoding. Simulation results show that the proposed adaptive scan scheme can improve the context-adaptive variable length coding to achieve better rate-distortion performance for the H.264/AVC video coder without the increase of computation.
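
    Deriving a mode-dependent scan from residual statistics can be sketched as sorting coefficient positions by their average power over training blocks. The synthetic statistics below stand in for real intra-mode residuals; this is an illustration of the ordering principle, not the H.264/AVC scan tables.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 4x4 transformed residual blocks for one intra mode. Here
# the average power decays down the rows, mimicking a directional
# prediction residual; a real codec would gather these statistics from
# training sequences separately for each intra-prediction mode.
blocks = rng.standard_normal((500, 4, 4)) * (2.0 ** -np.arange(4))[:, None]

power = (blocks ** 2).mean(axis=0)          # average power per position
order = np.argsort(power.ravel())[::-1]     # descending-power scan
scan = list(zip(*np.unravel_index(order, power.shape)))

# The adaptive scan visits positions in descending average power, so
# quantized coefficients tend toward the large-to-small arrangement
# that variable-length coding exploits.
powers_along_scan = [power[r, c] for r, c in scan]
assert all(a >= b for a, b in zip(powers_along_scan, powers_along_scan[1:]))
```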

  18. Quantized water transport: ideal desalination through graphyne-4 membrane.

    PubMed

    Zhu, Chongqin; Li, Hui; Zeng, Xiao Cheng; Wang, E G; Meng, Sheng

    2013-11-07

    Graphyne sheet exhibits promising potential for nanoscale desalination to achieve both high water permeability and salt rejection rate. Extensive molecular dynamics simulations on pore-size effects suggest that γ-graphyne-4, with 4 acetylene bonds between two adjacent phenyl rings, has the best performance with 100% salt rejection and an unprecedented water permeability, to our knowledge, of ~13 L/cm(2)/day/MPa, 3 orders of magnitude higher than prevailing commercial membranes based on reverse osmosis, and ~10 times higher than the state-of-the-art nanoporous graphene. Strikingly, water permeability across graphyne exhibits unexpected nonlinear dependence on the pore size. This counter-intuitive behavior is attributed to the quantized nature of water flow at the nanoscale, which has wide implications in controlling nanoscale water transport and designing highly effective membranes.

  19. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition by using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from a standardized FERET database shows that our method is invariant to the general variations of lighting, expression, occlusion and aging. The proposed approach allows about 20% correct recognition accuracy increase compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
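
    A minimal Gabor filter bank of the kind used as a preprocessing front end can be sketched as follows. The parameters (15-pixel kernels, 4 orientations, wavelength 8) are illustrative assumptions, and the local-quantized-pattern coding stage that would follow is only indicated in a comment.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """Real part of a Gabor filter: a Gaussian-windowed cosine wave
    oriented along angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def respond(img, kernel):
    """Dense filter response via sliding windows (valid correlation)."""
    windows = np.lib.stride_tricks.sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

# A bank over 4 orientations; in an LQP pipeline the per-pixel
# responses would next be quantized into discrete local pattern codes.
thetas = np.linspace(0, np.pi, 4, endpoint=False)
bank = [gabor_kernel(15, t, wavelength=8.0, sigma=4.0) for t in thetas]

# Toy image: horizontal stripes of period 8 (varying along y), which
# should excite the theta = pi/2 kernel most strongly.
rows = np.arange(32)
image = np.tile(np.sin(2 * np.pi * rows / 8.0)[:, None], (1, 32))
energies = [(respond(image, k) ** 2).sum() for k in bank]
assert int(np.argmax(energies)) == 2
```

    The orientation selectivity checked in the last line is what makes such a bank robust to lighting: responses track local structure rather than absolute intensity.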

  20. Semiclassical Quantization of Spinning Quasiparticles in Ballistic Josephson Junctions

    NASA Astrophysics Data System (ADS)

    Konschelle, François; Bergeret, F. Sebastián; Tokatly, Ilya V.

    2016-06-01

    A Josephson junction made of a generic magnetic material sandwiched between two conventional superconductors is studied in the ballistic semiclassical limit. The spectrum of Andreev bound states is obtained from the single-valuedness of a particle-hole spinor over closed orbits generated by electron-hole reflections at the interfaces between superconducting and normal materials. The semiclassical quantization condition is shown to depend only on the angle mismatch between initial and final spin directions along such closed trajectories. For the demonstration, an Andreev-Wilson loop in the composite position-particle-hole-spin space is constructed and shown to depend on only two parameters, namely, a magnetic phase shift and a local precession axis for the spin. The details of the Andreev-Wilson loop can be extracted by measuring the spin-resolved density of states. A Josephson junction can thus be viewed as an analog computer of closed-path-ordered exponentials.

  1. Oscillating magnetocaloric effect in size-quantized diamagnetic film

    SciTech Connect

    Alisultanov, Z. Z.

    2014-03-21

    We investigate the oscillating magnetocaloric effect in a size-quantized diamagnetic film in a transverse magnetic field. We obtain an analytical expression for the thermodynamic potential in the case of an arbitrary carrier spectrum. The entropy change is shown to be an oscillating function of the magnetic field and the film thickness. The nature of this effect is the same as that of the de Haas–van Alphen effect. The magnetic part of the entropy has a maximal value at some temperature. Such behavior of the entropy is not observed in magnetically ordered materials. We discuss the nature of this unusual behavior of the magnetic entropy. We compare our results with the data obtained for the 2D and 3D cases.

  2. Theory of the Knight Shift and Flux Quantization in Superconductors

    DOE R&D Accomplishments Database

    Cooper, L. N.; Lee, H. J.; Schwartz, B. B.; Silvert, W.

    1962-05-01

    Consequences of a generalization of the theory of superconductivity that yields a finite Knight shift are presented. In this theory, by introducing an electron-electron interaction that is not spatially invariant, the pairing of electrons with varying total momentum is made possible. An expression for Xs (the spin susceptibility in the superconducting state) is derived. In general Xs is smaller than Xn, but is not necessarily zero. The precise magnitude of Xs will vary from sample to sample and will depend on the nonuniformity of the samples. There should be no marked size dependence and no marked dependence on the strength of the magnetic field; this is in accord with observation. The basic superconducting properties are retained, but there are modifications in the various electromagnetic and thermal properties since the electrons paired are not time-reversed states. The consequences of this generalized theory for flux quantization arguments are presented. (auth)

  3. Quantized Water Transport: Ideal Desalination through Graphyne-4 Membrane

    PubMed Central

    Zhu, Chongqin; Li, Hui; Zeng, Xiao Cheng; Wang, E. G.; Meng, Sheng

    2013-01-01

    Graphyne sheet exhibits promising potential for nanoscale desalination to achieve both high water permeability and salt rejection rate. Extensive molecular dynamics simulations on pore-size effects suggest that γ-graphyne-4, with 4 acetylene bonds between two adjacent phenyl rings, has the best performance with 100% salt rejection and an unprecedented water permeability, to our knowledge, of ~13 L/cm2/day/MPa, 3 orders of magnitude higher than prevailing commercial membranes based on reverse osmosis, and ~10 times higher than the state-of-the-art nanoporous graphene. Strikingly, water permeability across graphyne exhibits unexpected nonlinear dependence on the pore size. This counter-intuitive behavior is attributed to the quantized nature of water flow at the nanoscale, which has wide implications in controlling nanoscale water transport and designing highly effective membranes. PMID:24196437

  4. The Einstein-Brillouin Action Quantization for Dirac Fermions

    NASA Astrophysics Data System (ADS)

    Onorato, P.

    The Einstein-Brillouin-Keller semiclassical quantization and the topological Maslov index are used to compute the electronic structure of carbon-based nanostructures with or without a transverse magnetic field. The calculation is based on the Dirac fermion approach in the limit of strong coupling for the pseudospin. The electronic band structures of carbon nanotubes and graphene nanoribbons are discussed, focusing on the roles of chirality and of the unbonded edge configuration, respectively. The effects of a transverse uniform magnetic field are analyzed; the different kinds of classical trajectories are discussed and related to the corresponding energies. The development is concise and transparent, involves only elementary integral calculus, and provides a conceptual and intuitive introduction to the quantum nature of carbon nanostructures.

  5. The Casimir Effect in Light-Front Quantization

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2015-09-01

    We show that the standard result for the Casimir force between conducting plates at rest in an inertial frame can be computed in light-front quantization. This is not the same as light-front analyses where the plates are at "rest" in an infinite momentum frame. In that case, Lenz and Steinbacher have shown that the result does not agree with the standard result for plates at rest. The two important ingredients in the present analysis are a careful treatment of the boundary conditions, inspired by the work of Almeida et al. on oblique light-front coordinates, and computation of the ordinary energy density, rather than the light-front energy density.

  6. Observing the quantization of zero mass carriers in graphene.

    PubMed

    Miller, David L; Kubista, Kevin D; Rutter, Gregory M; Ruan, Ming; de Heer, Walt A; First, Phillip N; Stroscio, Joseph A

    2009-05-15

    Application of a magnetic field to conductors causes the charge carriers to circulate in cyclotron orbits with quantized energies called Landau levels (LLs). These are equally spaced in normal metals and two-dimensional electron gases. In graphene, however, the charge carrier velocity is independent of their energy (like massless photons). Consequently, the LL energies are not equally spaced and include a characteristic zero-energy state (the n = 0 LL). With the use of scanning tunneling spectroscopy of graphene grown on silicon carbide, we directly observed the discrete, non-equally-spaced energy-level spectrum of LLs, including the hallmark zero-energy state of graphene. We also detected characteristic magneto-oscillations in the tunneling conductance and mapped the electrostatic potential of graphene by measuring spatial variations in the energy of the n = 0 LL.
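
    The unequal spacing follows directly from the graphene Landau-level formula E_n = sgn(n)·v_F·√(2eħB|n|). Plugging in v_F ≈ 10⁶ m/s and B = 5 T (illustrative values, not the paper's data) reproduces the √|n| compression of the gaps and the hallmark zero-energy n = 0 level.

```python
import numpy as np

e_charge = 1.602e-19   # C
hbar = 1.055e-34       # J s
v_f = 1.0e6            # m/s, graphene Fermi velocity (illustrative)
B = 5.0                # T, magnetic field (illustrative)

n = np.arange(-3, 4)
E_meV = (np.sign(n) * v_f * np.sqrt(2 * e_charge * hbar * B * np.abs(n))
         / e_charge * 1e3)

gaps = np.diff(E_meV[3:])      # gaps above the n = 0 level
assert abs(E_meV[3]) < 1e-9    # the zero-energy n = 0 Landau level
assert gaps[0] > gaps[1] > gaps[2] > 0   # sqrt(|n|) spacing, not equal
```

    In a normal metal or 2DEG the same calculation would give equally spaced levels ħω_c(n + 1/2) with no zero-energy state, which is exactly the contrast the experiment probes.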

  7. Cosmological backreaction of a quantized massless scalar field

    SciTech Connect

    Kaya, Ali; Tarman, Merve E-mail: merve.tarman@boun.edu.tr

    2012-01-01

    We consider the backreaction problem of a quantized minimally coupled massless scalar field in cosmology. The adiabatically regularized stress-energy tensor in a general Friedmann-Robertson-Walker background is approximately evaluated by using the fact that subhorizon modes evolve adiabatically and superhorizon modes are frozen. The vacuum energy density is verified to obey a new first order differential equation depending on a dimensionless parameter of order unity, which calibrates subhorizon/superhorizon division. We check the validity of the approximation by calculating the corresponding vacuum energy densities in fixed backgrounds, which are shown to agree with the known results in de Sitter space and space-times undergoing power law expansions. We then apply our findings to slow-roll inflationary models. Although backreaction effects are found to be negligible during the near exponential expansion, the vacuum energy density generated during this period might be important at later stages since it decreases slower than radiation or dust.

  8. Casimir effect for a scalar field via Krein quantization

    SciTech Connect

    Pejhan, H.; Tanhayi, M.R.; Takook, M.V.

    2014-02-15

    In this work, we present a rather simple method to study the Casimir effect on a spherical shell for a massless scalar field with Dirichlet boundary condition by applying the indefinite metric field (Krein) quantization technique. In this technique, the field operators are constructed from both negative and positive norm states. Since negative norm states are unphysical, they are used only as a mathematical tool for renormalizing the theory; one can then get rid of them by imposing proper physical conditions. -- Highlights: • A modification of QFT is considered to address the vacuum energy divergence problem. • Casimir energy of a spherical shell is calculated through this approach. • It is shown that, in this technique, the theory is automatically regularized.

  9. A deformation quantization theory for noncommutative quantum mechanics

    SciTech Connect

    Costa Dias, Nuno; Prata, Joao Nuno; Gosson, Maurice de; Luef, Franz

    2010-07-15

    We show that the deformation quantization of noncommutative quantum mechanics previously considered by Dias and Prata ['Weyl-Wigner formulation of noncommutative quantum mechanics', J. Math. Phys. 49, 072101 (2008)] and Bastos, Dias, and Prata ['Wigner measures in non-commutative quantum mechanics', e-print arXiv:math-ph/0907.4438v1; Commun. Math. Phys. (to appear)] can be expressed as a Weyl calculus on a double phase space. We study the properties of the star-product thus defined and prove a spectral theorem for the star-genvalue equation using an extension of the methods recently initiated by de Gosson and Luef ['A new approach to the *-genvalue equation', Lett. Math. Phys. 85, 173-183 (2008)].

  10. Phase space quantization, noncommutativity, and the gravitational field

    NASA Astrophysics Data System (ADS)

    Chatzistavrakidis, Athanasios

    2014-07-01

    In this paper we study the structure of the phase space in noncommutative geometry in the presence of a nontrivial frame. Our basic assumptions are that the underlying space is a symplectic and parallelizable manifold. Furthermore, we assume the validity of the Leibniz rule and the Jacobi identities. We consider noncommutative spaces due to the quantization of the symplectic structure and determine the momentum operators that guarantee a set of canonical commutation relations, appropriately extended to include the nontrivial frame. We stress the important role of left vs right acting operators and of symplectic duality. This enables us to write down the form of the full phase space algebra on these noncommutative spaces, both in the noncompact and in the compact case. We test our results against the class of four-dimensional and six-dimensional symplectic nilmanifolds, thus presenting a large set of nontrivial examples that realize the general formalism.

  11. Paul Weiss and the genesis of canonical quantization

    NASA Astrophysics Data System (ADS)

    Rickles, Dean; Blum, Alexander

    2015-12-01

    This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantisation and (albeit more indirectly) quantum gravity. A student of Dirac and Born, he was interned in Canada during the second world war as an enemy alien and after his release never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the `parameter formalism' often attributed to Dirac), though later discarded, was employed (and viewed at the time as an extremely important tool) by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.

  12. Semiclassical Quantization of Spinning Quasiparticles in Ballistic Josephson Junctions.

    PubMed

    Konschelle, François; Bergeret, F Sebastián; Tokatly, Ilya V

    2016-06-10

    A Josephson junction made of a generic magnetic material sandwiched between two conventional superconductors is studied in the ballistic semiclassical limit. The spectrum of Andreev bound states is obtained from the single valuedness of a particle-hole spinor over closed orbits generated by electron-hole reflections at the interfaces between superconducting and normal materials. The semiclassical quantization condition is shown to depend only on the angle mismatch between initial and final spin directions along such closed trajectories. For the demonstration, an Andreev-Wilson loop in the composite position-particle-hole-spin space is constructed and shown to depend on only two parameters, namely, a magnetic phase shift and a local precession axis for the spin. The details of the Andreev-Wilson loop can be extracted by measuring the spin-resolved density of states. A Josephson junction can thus be viewed as an analog computer of closed-path-ordered exponentials. PMID:27341251

  13. Dynamical Buildup of a Quantized Hall Response from Nontopological States

    NASA Astrophysics Data System (ADS)

    Hu, Ying; Zoller, Peter; Budich, Jan Carl

    2016-09-01

    We consider a two-dimensional system initialized in a topologically trivial state before its Hamiltonian is ramped through a phase transition into a Chern insulator regime. This scenario is motivated by current experiments with ultracold atomic gases aimed at realizing time-dependent dynamics in topological insulators. Our main findings are twofold. First, considering coherent dynamics, the non-equilibrium Hall response is found to approach a topologically quantized time averaged value in the limit of slow but non-adiabatic parameter ramps, even though the Chern number of the state remains trivial. Second, adding dephasing, the destruction of quantum coherence is found to stabilize this Hall response, while the Chern number generically becomes undefined. We provide a geometric picture of this phenomenology in terms of the time-dependent Berry curvature.

  14. Optical detection of the quantization of collective atomic motion.

    PubMed

    Brahms, Nathan; Botter, Thierry; Schreppler, Sydney; Brooks, Daniel W C; Stamper-Kurn, Dan M

    2012-03-30

    We directly measure the quantized collective motion of a gas of thousands of ultracold atoms, coupled to light in a high-finesse optical cavity. We detect strong asymmetries, as high as 3:1, in the intensity of light scattered into low- and high-energy motional sidebands. Owing to high cavity-atom cooperativity, the optical output of the cavity contains a spectroscopic record of the energy exchanged between light and motion, directly quantifying the heat deposited by a quantum position measurement's backaction. Such backaction selectively causes the phonon occupation of the observed collective modes to increase with the measurement rate. These results, in addition to providing a method for calibrating the motion of low-occupation mechanical systems, offer new possibilities for investigating collective modes of degenerate gases and for diagnosing optomechanical measurement backaction.

  15. Retroviral vector production.

    PubMed

    Miller, A Dusty

    2014-01-01

    In this unit, the basic protocol generates stable cell lines that produce retroviral vectors that carry selectable markers. Also included are an alternate protocol that applies when the retroviral vector does not carry a selectable marker, and another alternate protocol for rapidly generating retroviral vector preparations by transient transfection. A support protocol describes construction of the retroviral vectors. The methods for generating virus from retroviral vector plasmids rely on the use of packaging cells that synthesize all of the retroviral proteins but do not produce replication-competent virus. Additional protocols detail plasmid transfection, virus titration, assay for replication-competent virus, and histochemical staining to detect transfer of a vector encoding alkaline phosphatase.

  16. Vectorization of a Treecode

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1990-03-01

    Vectorized algorithms for the force calculation and tree construction in the Barnes-Hut tree algorithm are described. The basic idea for the vectorization of the force calculation is to vectorize the tree traversal across particles, so that all particles in the system traverse the tree simultaneously. The tree construction algorithm also makes use of the fact that particles can be treated in parallel. Thus these algorithms take advantage of the internal parallelism in the N-body system and the tree algorithm most effectively. As a natural result, these algorithms can be used on a wide range of vector/parallel architectures, including current supercomputers and highly parallel architectures such as the Connection Machine. The vectorized code runs about five times faster than the non-vector code on a Cyber 205 for an N-body system with N = 8192.
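    The core idea above, traversing the tree across particles rather than particle by particle, can be illustrated with a minimal sketch (the array layout, the names, and the opening criterion s/d < theta are our assumptions, not the paper's code):

```python
import numpy as np

# Every particle tests the SAME tree node at once: the Barnes-Hut opening
# criterion s/d < theta becomes one boolean array operation instead of a
# per-particle recursive call.

def accept_node(pos, node_center, node_size, theta=0.5):
    """Boolean mask over particles: True where the node is far enough away
    that its multipole may be used without opening it."""
    d = np.linalg.norm(pos - node_center, axis=1)  # distances, all particles
    return node_size / np.maximum(d, 1e-12) < theta

# one far particle and one particle sitting inside the node
pos = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
mask = accept_node(pos, node_center=np.array([10.0, 0.0, 0.0]), node_size=1.0)
```

    Particles where the mask is True would accumulate this node's multipole contribution in a single array operation; the remainder descend to the children together, which is what lets all particles walk the tree simultaneously.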

  17. Support vector tracking.

    PubMed

    Avidan, Shai

    2004-08-01

    Support Vector Tracking (SVT) integrates the Support Vector Machine (SVM) classifier into an optic-flow-based tracker. Instead of minimizing an intensity difference function between successive frames, SVT maximizes the SVM classification score. To account for large motions between successive frames, we build pyramids from the support vectors and use a coarse-to-fine approach in the classification stage. We show results of using SVT for vehicle tracking in image sequences.
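    The idea can be sketched with a crude stand-in for the paper's method (all names are ours, and a brute-force coarse-to-fine translation search replaces the optic-flow update and support-vector pyramids): the tracker picks the displacement that maximizes a linear SVM score w . patch + b rather than minimizing an intensity difference.

```python
import numpy as np

def svm_score(patch, w, b):
    # linear SVM classification score for one candidate patch
    return float(np.dot(w, patch.ravel()) + b)

def track(image, start, size, w, b, radius=4, step=2):
    best, best_score = start, -np.inf
    while step >= 1:                        # coarse-to-fine refinement
        y0, x0 = best
        for dy in range(-radius, radius + 1, step):
            for dx in range(-radius, radius + 1, step):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0:
                    continue
                patch = image[y:y + size, x:x + size]
                if patch.shape != (size, size):
                    continue
                s = svm_score(patch, w, b)
                if s > best_score:
                    best, best_score = (y, x), s
        radius, step = step, step // 2      # shrink the search window
    return best

# toy frame: a bright 3x3 "object" at (6, 6); an SVM with positive weights
img = np.zeros((12, 12))
img[6:9, 6:9] = 1.0
found = track(img, start=(4, 4), size=3, w=np.ones(9), b=0.0)
```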

  18. Vectorized Monte Carlo

    SciTech Connect

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
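    Vectorized discrete sampling of the kind the abstract mentions can be sketched as follows (a hypothetical illustration, not the paper's method): build the cumulative distribution over event types once, then place every particle's uniform random number into it with one vectorized binary search instead of a per-particle if/else chain.

```python
import numpy as np

def sample_events(probs, u):
    """probs: event probabilities (summing to 1); u: one uniform per particle.
    Returns an event index for every particle in one vector operation."""
    cdf = np.cumsum(probs)
    return np.searchsorted(cdf, u, side='right')

probs = np.array([0.2, 0.5, 0.3])      # e.g. capture / scatter / fission
u = np.array([0.1, 0.25, 0.65, 0.95])  # one random draw per particle
events = sample_events(probs, u)
```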

  19. Comment on ``Symplectic quantization, inequivalent quantum theories, and Heisenberg's principle of uncertainty''

    NASA Astrophysics Data System (ADS)

    Latimer, D. C.

    2007-06-01

    In Phys. Rev. A 70, 032104 (2004), M. Montesinos and G. F. Torres del Castillo consider various symplectic structures on the classical phase-space of the two-dimensional isotropic harmonic oscillator. Using Dirac’s quantization condition, the authors investigate how these alternative symplectic forms affect this system’s quantization. They claim that these symplectic structures result in mutually inequivalent quantum theories. In fact, we show here that there exists a unitary map between the two representation spaces so that the various quantizations are equivalent.

  20. Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation

    PubMed Central

    Suh, Young Soo; Ro, Young Sik; Kang, Hee Jun

    2010-01-01

    This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example. PMID:22163557
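    The constraint structure can be made concrete with a minimal sketch (all numbers are illustrative, and the selection rule is a stand-in for the paper's estimation-performance criterion, which we do not model): each choice of sampling period T and bit length b consumes b/T bits per second, and the network admits at most r_max.

```python
def feasible(periods, bits, r_max):
    # all (period, bit-length) pairs whose bit rate fits on the network
    return [(T, b) for T in periods for b in bits if b / T <= r_max]

def pick(periods, bits, r_max):
    # stand-in preference: shortest period first, then the most bits
    candidates = feasible(periods, bits, r_max)
    return min(candidates, key=lambda tb: (tb[0], -tb[1])) if candidates else None

choice = pick(periods=[0.01, 0.02, 0.05], bits=[8, 12, 16], r_max=1000.0)
```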

  1. Conformally covariant quantization of the Maxwell field in de Sitter space

    NASA Astrophysics Data System (ADS)

    Faci, S.; Huguet, E.; Queva, J.; Renaud, J.

    2009-12-01

    In this article, we quantize the Maxwell (“massless spin one”) de Sitter field in a conformally invariant gauge. This quantization is invariant under the SO0(2,4) group and consequently under the de Sitter group. We obtain a new de Sitter-invariant two-point function which is very simple. Our method relies on the one hand on a geometrical point of view which uses the realization of Minkowski, de Sitter and anti-de Sitter spaces as intersections of the null cone in R6 and a moving plane, and on the other hand on a canonical quantization scheme of the Gupta-Bleuler type.

  2. Conformally covariant quantization of the Maxwell field in de Sitter space

    SciTech Connect

    Faci, S.; Huguet, E.; Queva, J.; Renaud, J.

    2009-12-15

    In this article, we quantize the Maxwell ('massless spin one') de Sitter field in a conformally invariant gauge. This quantization is invariant under the SO_0(2,4) group and consequently under the de Sitter group. We obtain a new de Sitter-invariant two-point function which is very simple. Our method relies on the one hand on a geometrical point of view which uses the realization of Minkowski, de Sitter and anti-de Sitter spaces as intersections of the null cone in R^6 and a moving plane, and on the other hand on a canonical quantization scheme of the Gupta-Bleuler type.

  3. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weak classical field can be expressed in terms of the Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  4. Gammaretroviral vectors: biology, technology and application.

    PubMed

    Maetzig, Tobias; Galla, Melanie; Baum, Christopher; Schambach, Axel

    2011-06-01

    Retroviruses are evolutionary optimized gene carriers that have naturally adapted to their hosts to efficiently deliver their nucleic acids into the target cell chromatin, thereby overcoming natural cellular barriers. Here we will review, starting with a deeper look into retroviral biology, how Murine Leukemia Virus (MLV), a simple gammaretrovirus, can be converted into an efficient vehicle of genetic therapeutics. Furthermore, we will describe how more rational vector backbones can be designed and how these so-called self-inactivating vectors can be pseudotyped and produced. Finally, we will provide an overview on existing clinical trials and how biosafety can be improved. PMID:21994751

  5. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  6. Vector processing unit

    SciTech Connect

    Garcia, L.C.; Tjon-Pian-Gi, D.C.; Tucker, S.G.; Zajac, M.W.

    1988-12-13

    This patent describes a data processing system comprising: memory means for storing instruction words of operands; a central processing unit (CPU) connected to the memory means for fetching and decoding instructions and controlling execution of instructions, including transfer of operands to and from the memory means, the control of execution of instructions is effected by a CPU clock and microprogram control means connected to the CPU clock for generating periodic execution control signals in synchronism with the CPU clock; vector processing means tightly coupled to the CPU for effecting data processing on vector data; and interconnection means, connecting the CPU and the vector processing means, including operand transfer lines for transfer of vector data between the CPU and the vector processing means, control lines, status lines for signalling conditions of the vector processor means to the CPU, and a vector timing signal line connected to one of the execution control signals from the microprogram control means, whereby the vector processing means receives periodic execution control signals at the clock rate and is synchronized with the CPU clock on a clock pulse by clock pulse basis during execution of instructions.

  7. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.

  8. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.

  9. Vector theories in cosmology

    SciTech Connect

    Esposito-Farese, Gilles; Pitrou, Cyril; Uzan, Jean-Philippe

    2010-03-15

    This article provides a general study of the Hamiltonian stability and the hyperbolicity of vector field models involving both a general function of the Faraday tensor and its dual, f(F^2, F·F̃), as well as a Proca potential for the vector field, V(A^2). In particular it is demonstrated that theories involving only f(F^2) do not satisfy the hyperbolicity conditions. It is then shown that in this class of models, the cosmological dynamics always dilutes the vector field. In the case of a nonminimal coupling to gravity, it is established that theories involving Rf(A^2) or Rf(F^2) are generically pathologic. To finish, we exhibit a model where the vector field is not diluted during the cosmological evolution, because of a nonminimal vector field-curvature coupling which maintains second-order field equations. The relevance of such models for cosmology is discussed.

  10. Vector generator scan converter

    SciTech Connect

    Moore, J.M.; Leighton, J.F.

    1990-04-17

    This patent describes high printing speeds for graphics data that are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.

  11. Analysis of sampling and quantization effects on the performance of PN code tracking loops

    NASA Technical Reports Server (NTRS)

    Quirk, K. J.; Srinivasan, M.

    2002-01-01

    Pseudonoise (PN) code tracking loops in direct-sequence spread-spectrum systems are often implemented using digital hardware. Performance degradation due to quantization and sampling effects is not adequately characterized by the traditional analog system feedback loop analysis.

  12. Event-triggered H∞ filter design for delayed neural network with quantization.

    PubMed

    Liu, Jinliang; Tang, Jia; Fei, Shumin

    2016-10-01

    This paper is concerned with H∞ filter design for a class of neural network systems with an event-triggered communication scheme and quantization. Firstly, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcasted and transmitted to the quantizer, which can save limited communication resources. Secondly, a logarithmic quantizer is used to quantize the sampled data, which can reduce the data transmission rate in the network. Thirdly, considering the influence of the constrained network resource, we investigate the problem of H∞ filter design for a class of event-triggered neural network systems with quantization. By using Lyapunov functional and linear matrix inequality (LMI) techniques, some delay-dependent stability conditions for the existence of the desired filter are obtained. Furthermore, the explicit expression is given for the designed filter parameters in terms of LMIs. Finally, a numerical example is given to show the usefulness of the obtained theoretical results.
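    The two ingredients named above can be sketched in a hedged, simplified form (the paper's exact conditions may differ; sigma, rho and q0 are design parameters of our illustration): the trigger releases a sample only when it deviates enough from the last transmitted one, and the logarithmic quantizer snaps a value to the nearest level q0 * rho**k, giving fine resolution near zero and coarse resolution for large magnitudes.

```python
import math

def should_transmit(x, x_last, sigma=0.1):
    # event-triggered condition: (x - x_last)^2 > sigma * x^2
    return (x - x_last) ** 2 > sigma * x ** 2

def log_quantize(x, rho=0.8, q0=1.0):
    if x == 0:
        return 0.0
    k = round(math.log(abs(x) / q0, rho))   # nearest exponent on the log grid
    return math.copysign(q0 * rho ** k, x)
```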

  13. Fractional quantization of the topological charge pumping in a one-dimensional superlattice

    NASA Astrophysics Data System (ADS)

    Marra, Pasquale; Citro, Roberta; Ortix, Carmine

    2015-03-01

    A one-dimensional quantum charge pump transfers a quantized charge in each pumping cycle. This quantization is topologically robust, being analogous to the quantum Hall effect. The charge transferred in a fraction of the pumping period is instead generally unquantized. We show, however, that with specific symmetries in parameter space the charge transferred at well-defined fractions of the pumping period is quantized as integer fractions of the Chern number. We illustrate this in a one-dimensional Harper-Hofstadter model and show that the fractional quantization of the topological charge pumping is independent of the specific boundary conditions taken into account. We further discuss the relevance of this phenomenon for cold atomic gases in optical superlattices.

  14. A POCS-based restoration algorithm for restoring halftoned color-quantized images.

    PubMed

    Fung, Yik-Hing; Chan, Yuk-Hee

    2006-07-01

    This paper studies the restoration of images which are color-quantized with error diffusion. Though many algorithms have been proposed for restoring noisy blurred color images and for inverse halftoning, restoration of color-quantized images is rarely addressed in the literature, especially when the images are color-quantized with halftoning. Direct application of existing restoration techniques is generally inadequate to deal with this problem. In this paper, a restoration algorithm based on projection onto convex sets is proposed. This algorithm makes use of the available color palette and the mechanism of a halftoning process to derive useful a priori information for restoration. Simulation results showed that it could improve the quality of a halftoned color-quantized image remarkably in terms of both SNR and the CIELAB color difference metric.
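    The projection-onto-convex-sets (POCS) iteration itself can be shown on a toy pair of sets (not the paper's palette/halftoning sets): alternately project onto a box constraint and a hyperplane until the iterate approximately lies in their intersection. The paper's restoration runs the same kind of iteration with sets derived from the color palette and the halftoning mechanism.

```python
def project_box(x, lo=0.0, hi=1.0):
    # nearest point with every coordinate clipped to [lo, hi]
    return [min(max(v, lo), hi) for v in x]

def project_hyperplane(x, target_sum=1.0):
    # orthogonal projection onto {x : sum(x) = target_sum}
    shift = (target_sum - sum(x)) / len(x)
    return [v + shift for v in x]

def pocs(x, iterations=50):
    for _ in range(iterations):
        x = project_hyperplane(project_box(x))
    return x

restored = pocs([2.0, -1.0, 0.3])  # converges toward the sets' intersection
```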

  15. Anatomy of a deformed symmetry: Field quantization on curved momentum space

    SciTech Connect

    Arzano, Michele

    2011-01-15

    In certain scenarios of deformed relativistic symmetries relevant for noncommutative field theories, particles exhibit a momentum space described by a non-Abelian group manifold. Starting with a formulation of phase space for such particles which allows for a generalization to include group-valued momenta, we discuss quantization of the corresponding field theory. Focusing on the particular case of κ-deformed phase space, we construct the one-particle Hilbert space and show how curvature in momentum space leads to an ambiguity in the quantization procedure reminiscent of the ambiguities one finds when quantizing fields in curved space-times. The tools gathered in the discussion on quantization allow for a clear definition of the basic deformed field mode operators and two-point function for κ-quantum fields.

  16. Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.

    ERIC Educational Resources Information Center

    Sindhu, R. S.

    2002-01-01

    Describes the quantized step-up model developed for the evaluation purposes of internship in teaching which is an analogous model of the atomic structure. Assesses prospective teachers' abilities in lesson delivery. (YDS)

  17. Impact Analysis of Baseband Quantizer on Coding Efficiency for HDR Video

    NASA Astrophysics Data System (ADS)

    Wong, Chau-Wai; Su, Guan-Ming; Wu, Min

    2016-10-01

    Digitally acquired high dynamic range (HDR) video baseband signal can take 10 to 12 bits per color channel. It is economically important to be able to reuse the legacy 8- or 10-bit video codecs to efficiently compress the HDR video. Linear or nonlinear mapping on the intensity can be applied to the baseband signal to reduce the dynamic range before the signal is sent to the codec, and we refer to this range reduction step as a baseband quantization. We show analytically and verify using test sequences that the use of the baseband quantizer lowers the coding efficiency. Experiments show that as the baseband quantizer is strengthened by 1.6 bits, the drop of PSNR at a high bitrate is up to 1.60 dB. Our result suggests that in order to achieve high coding efficiency, information reduction of videos in terms of quantization error should be introduced in the video codec instead of on the baseband signal.
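    For the linear case, the baseband quantization step described above can be sketched as follows (a minimal illustration, not the paper's pipeline): a 12-bit sample is scaled into 8 bits before entering the codec and expanded back afterwards, and this round trip alone already discards information, before any codec loss.

```python
def reduce_range(v, in_bits=12, out_bits=8):
    # map a code value from the wide range onto the narrow range
    return round(v * ((1 << out_bits) - 1) / ((1 << in_bits) - 1))

def expand_range(v, in_bits=12, out_bits=8):
    # inverse mapping back to the wide range
    return round(v * ((1 << in_bits) - 1) / ((1 << out_bits) - 1))

errors = [abs(v - expand_range(reduce_range(v))) for v in range(4096)]
worst = max(errors)   # worst-case round-trip error, in 12-bit code values
```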

  18. Effect of trapping in a degenerate plasma in the presence of a quantizing magnetic field

    NASA Astrophysics Data System (ADS)

    Shah, H. A.; Iqbal, M. J.; Tsintsadze, N.; Masood, W.; Qureshi, M. N. S.

    2012-09-01

    The effect of trapping as a microscopic phenomenon in a degenerate plasma is investigated in the presence of a quantizing magnetic field. The plasma comprises degenerate electrons and non-degenerate ions. The presence of the quantizing magnetic field is discussed briefly, and the effect of trapping is investigated by using the Fermi-Dirac distribution function. The linear dispersion relation for the ion acoustic wave is derived in the presence of the quantizing magnetic field and its influence on the propagation characteristics of the linear ion acoustic wave is discussed. Subsequently, fully nonlinear equations for ion acoustic waves are used to obtain the Sagdeev potential and to investigate solitary structures. The formation of solitary structures is studied both for fully and partially degenerate plasmas in the presence of a quantizing magnetic field. Both compressive and rarefactive solitons are obtained for different conditions of temperature and magnetic field.

  19. Gauge symmetries in spin-foam gravity: the case for "cellular quantization".

    PubMed

    Bonzom, Valentin; Smerlak, Matteo

    2012-06-15

    The spin-foam approach to quantum gravity rests on a quantization of BF theory using 2-complexes and group representations. We explain why, in dimension three and higher, this spin-foam quantization must be amended to be made consistent with the gauge symmetries of discrete BF theory. We discuss a suitable generalization, called "cellular quantization," which (1) is finite, (2) produces a topological invariant, (3) matches with the properties of the continuum BF theory, and (4) corresponds to its loop quantization. These results significantly clarify the foundations--and limitations--of the spin-foam formalism and open the path to understanding, in a discrete setting, the symmetry-breaking which reduces BF theory to gravity.

  20. Line Integral of a Vector.

    ERIC Educational Resources Information Center

    Balabanian, Norman

    This programed booklet is designed for the engineering student who understands and can use vector and unit vector notation, components of a vector, parallel law of vector addition, and the dot product of two vectors. Content begins with work done by a force in moving a body a certain distance along some path. For each of the examples and problem…

  1. Threshold expansion of the three-particle quantization condition

    NASA Astrophysics Data System (ADS)

    Hansen, Maxwell T.; Sharpe, Stephen R.

    2016-05-01

    We recently derived a quantization condition for the energy of three relativistic particles in a cubic box [M. T. Hansen and S. R. Sharpe, Phys. Rev. D 90, 116003 (2014); M. T. Hansen and S. R. Sharpe, Phys. Rev. D 92, 114509 (2015)]. Here we use this condition to study the energy level closest to the three-particle threshold when the total three-momentum vanishes. We expand this energy in powers of 1/L, where L is the linear extent of the finite volume. The expansion begins at O(1/L^3), and we determine the coefficients of the terms through O(1/L^6). As is also the case for the two-particle threshold energy, the 1/L^3, 1/L^4 and 1/L^5 coefficients depend only on the two-particle scattering length a. These can be compared to previous results obtained using nonrelativistic quantum mechanics [K. Huang and C. N. Yang, Phys. Rev. 105, 767 (1957); S. R. Beane, W. Detmold, and M. J. Savage, Phys. Rev. D 76, 074507 (2007); S. Tan, Phys. Rev. A 78, 013636 (2008)], and we find complete agreement. The 1/L^6 coefficients depend additionally on the two-particle effective range r (just as in the two-particle case) and on a suitably defined threshold three-particle scattering amplitude (a new feature for three particles). A second new feature in the three-particle case is that logarithmic dependence on L appears at O(1/L^6). Relativistic effects enter at this order, and the only comparison possible with the nonrelativistic result is for the coefficient of the logarithm, where we again find agreement. For a more thorough check of the 1/L^6 result, and thus of the quantization condition, we also compare to a perturbative calculation of the threshold energy in relativistic λϕ^4 theory, which we have recently presented in [M. T. Hansen and S. R. Sharpe, Phys. Rev. D 93, 014506 (2016)]. Here, all terms can be compared, and we find full agreement.
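
As a hedged sketch (not from the paper itself): the leading 1/L^3 behavior can be anticipated by pair counting from the nonrelativistic two-particle threshold shift of Huang and Yang cited above, whose standard expansion is

```latex
\Delta E_2 = \frac{4\pi a}{m L^3}\left[1 - c_1\,\frac{a}{\pi L} + c_2\left(\frac{a}{\pi L}\right)^{2} + \mathcal{O}(L^{-3})\right],
\qquad c_1 \approx 2.837297,\quad c_2 \approx 6.375183,
```

so that three identical bosons, with three interacting pairs, acquire a leading shift of roughly 3 × 4πa/(mL^3) = 12πa/(mL^3). The effective-range, three-particle-amplitude, and logarithmic terms at O(1/L^6) discussed in the abstract lie beyond this simple counting.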

  2. Quantized Vortex State in hcp Solid 4He

    NASA Astrophysics Data System (ADS)

    Kubota, Minoru

    2012-11-01

    The quantized vortex state appearing in the recently discovered new states of hcp 4He (Kim and Chan, Nature, 427:225-227, 2004; Science, 305:1941, 2004) is discussed. Special attention is given to evidence for the vortex state as the vortex fluid (VF) state (Anderson, Nat. Phys., 3:160-162, 2007; Phys. Rev. Lett., 100:215301, 2008; Penzev et al., Phys. Rev. Lett., 101:065301, 2008; Nemirovskii et al., arXiv:0907.0330, 2009) and its transition into the supersolid (SS) state (Shimizu et al., arXiv:0903.1326, 2009; Kubota et al., J. Low Temp. Phys., 158:572-577, 2010; J. Low Temp. Phys., 162:483-491, 2011). Its features are described. The historical explanations (Reatto and Chester, Phys. Rev., 155(1):88-100, 1967; Chester, Phys. Rev. A, 2(1):256-258, 1970; Andreev and Lifshitz, JETP Lett., 29:1107-1113, 1969; Leggett, Phys. Rev. Lett., 25(22), 1543-1546, 1970; Matsuda and Tsuneto, Prog. Theor. Phys., 46:411-436, 1970) for the SS state in quantum solids such as solid 4He were based on the idea of Bose-Einstein condensation (BEC) of imperfections such as vacancies, interstitials and other possible excitations in the quantum solids, which are expected because of the large zero-point motions. The SS state was proposed as a new state of matter in which real-space ordering of the lattice structure of the solid coexists with the momentum-space ordering of superfluidity. A new type of superconductors, discovered since the cuprate high-T_c superconductors, HTSCs (Bednorz and Mueller, Z. Phys., 64:189, 1986), has been shown to share a feature with the vortex state, involving the VF and vortex solid states. The high T_c's of these materials are discussed in connection with the large fluctuations associated with other phase transitions, such as the antiferromagnetic transition, in addition to the low dimensionality. The supersolidity in the hcp solid 4He, in contrast to the new superconductors, which have multiple degrees of freedom of

  3. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment.

    PubMed

    Fonseca, I C; Bakke, K

    2016-01-01

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  4. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    SciTech Connect

    Matassa, Marco E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  5. Fermi surface determination from wavevector quantization in LaSrCuO films

    NASA Astrophysics Data System (ADS)

    Ariosa, D.; Cancellieri, C.; Lin, P. H.; Pavuna, D.

    2008-03-01

    We have observed wavevector quantization in LaSrCuO films thinner than 12 unit cells grown on SrTiO3 substrates. Low energy dispersions were determined in situ for different photon energies by angle-resolved photoemission spectroscopy. From the wavevector quantization, we extract three-dimensional dispersions within a tight-binding model and obtain the Fermi surface topology, without resorting to the nearly free-electron approximation. Such a method can be extended to similar confined-electron nanostructures.

  6. The method of Ostrogradsky, quantization, and a move toward a ghost-free future

    SciTech Connect

    Nucci, M C; Leach, P G L

    2009-11-15

    The method of Ostrogradsky has been used to construct a first-order Lagrangian, hence Hamiltonian, for the fourth-order field-theoretical model of Pais-Uhlenbeck with unfortunate results when quantization is undertaken since states with negative norm, commonly called ''ghosts,'' appear. We propose an alternative route based on the preservation of symmetry and this leads to a ghost-free quantization.

  7. Quantizing the Discrete Painlevé VI Equation: The Lax Formalism

    NASA Astrophysics Data System (ADS)

    Hasegawa, Koji

    2013-08-01

    A discretization of Painlevé VI equation was obtained by Jimbo and Sakai (Lett Math Phys 38:145-154, 1996). There are two ways to quantize it: (1) use the affine Weyl group symmetry (of D_5^{(1)}) (Hasegawa in Adv Stud Pure Math 61:275-288, 2011), (2) Lax formalism, i.e. the monodromy-preserving point of view. It turns out that the second approach is also successful and gives the same quantization as the first approach.

  8. Quantized states in superconducting quantum wells biased by an external field

    NASA Astrophysics Data System (ADS)

    Shafranjuk, Serhii; Ketterson, John

    2004-03-01

    Interest in quantized states in superconducting quantum wells (SQWs) is stimulated by the rapid development of qubit devices. An SQW may be formed in different ways. In this report we consider an SQW at a minimum of the superconducting order parameter, which occurs, e.g., at the normal core of an Abrikosov vortex or in SINIS junctions (S: superconducting banks, I: insulating barrier, N: thin normal metal layer). Andreev reflection (in which an incident electron is reflected as a hole, and vice versa) at the opposite SN and NS interfaces (or at SIN and NIS interfaces of intermediate transparency) creates quantized states, which are observed in experiments. The quantization condition depends on the sample purity and the quantum well size, which should be comparable to the superconducting coherence length. However, the quantization condition may also be changed when a bias field is applied across the quantum well, and the phase of the superfluid condensate wave function becomes time-dependent. If the time dependence is arbitrary, and the energy is a bad quantum number, then in accordance with general quantum mechanical rules no quantized states can arise. However, if the behavior is time-homogeneous (e.g., under the influence of a dc field, or of an ac field of constant amplitude), the energy is a good quantum number, and quantized states may exist. In this work we consider the formation of quantized states in a SINIS junction biased by dc and ac voltages. The calculations are made using the boundary conditions in the quasiclassical approximation. The quantization conditions are analyzed versus the quantum well size, the electron mean free path, and the external bias field magnitude.
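
For context (a textbook short-junction result, not taken from this abstract): in a junction of transparency T between superconducting banks with gap Δ, the Andreev bound-state energies depend on the phase difference φ across the well, and a dc bias V makes φ time-dependent through the Josephson relation:

```latex
E_{\pm}(\varphi) = \pm\,\Delta\sqrt{1 - T\sin^{2}(\varphi/2)},
\qquad
\frac{d\varphi}{dt} = \frac{2eV}{\hbar}.
```

With a dc or constant-amplitude ac bias the drive is time-homogeneous, consistent with the abstract's condition for the energy to remain a good quantum number.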

  9. Can Dirac quantization of constrained systems be fulfilled within the intrinsic geometry?

    SciTech Connect

    Xun, D.M.; Liu, Q.H.

    2014-02-15

    For particles constrained on a curved surface, how to perform quantization within Dirac’s canonical quantization scheme is a long-standing problem. On one hand, Dirac stressed that the Cartesian coordinate system has fundamental importance in passing from the classical Hamiltonian to its quantum mechanical form while preserving the classical algebraic structure between positions, momenta and Hamiltonian to the extent possible. On the other, on the curved surface, we have no exact Cartesian coordinate system within intrinsic geometry. These two facts imply that the three-dimensional Euclidean space in which the curved surface is embedded must be invoked otherwise no proper canonical quantization is attainable. In this paper, we take a minimum surface, helicoid, on which the motion is constrained, to explore whether the intrinsic geometry offers a proper framework in which the quantum theory can be established in a self-consistent way. Results show that not only an inconsistency within Dirac theory occurs, but also an incompatibility with Schrödinger theory happens. In contrast, in three-dimensional Euclidean space, the Dirac quantization turns out to be satisfactory all around, and the resultant geometric momentum and potential are then in agreement with those given by the Schrödinger theory. -- Highlights: • Quantum motion on a minimum surface, helicoid, is examined within canonical quantization. • Both geometric momentum and geometric potential are embedding quantities. • No canonical quantization can be fulfilled within the intrinsic geometry.

  10. High-Resolution Group Quantization Phase Processing Method in Radio Frequency Measurement Range.

    PubMed

    Du, Baoqing; Feng, Dazheng; Tang, Yaohua; Geng, Xin; Zhang, Duo; Cai, Chaofeng; Wan, Maoquan; Yang, Zhigang

    2016-01-01

    To address the complex frequency translation, long response time, and limited measurement precision of traditional phase processing, a high-resolution group quantization phase processing method, with resolution better than the 100 fs level, is proposed for the radio frequency measurement range. First, the phase quantization is used as a step value to quantize every phase difference in a group, exploiting the fixed phase relationships between signals of different frequencies; the group quantization is formed from the quantized phase differences. Because frequency drift is caused mainly by the phase noise of the measurement device, a regular phase shift of the group quantization is produced, which results in phase coincidences of the two compared signals and thus yields high-resolution measurement. Second, to obtain the best coincidence pulses, a subtle delay is deliberately introduced to reduce the width of the coincidence fuzzy area, according to the transmission characteristics of the coincidences in the specific medium. Third, a series of characteristic coincidence pulses in the fuzzy area can be captured by a logic gate to extract the best phase coincidence information and improve the measurement precision. The method provides a novel way to perform precise time and frequency measurement. PMID:27388587

  11. Finite-temperature effective boundary theory of the quantized thermal Hall effect

    NASA Astrophysics Data System (ADS)

    Nakai, Ryota; Ryu, Shinsei; Nomura, Kentaro

    2016-02-01

    A finite-temperature effective free energy of the boundary of a quantized thermal Hall system is derived microscopically from the bulk two-dimensional Dirac fermion coupled with a gravitational field. In two spatial dimensions, the thermal Hall conductivity of fully gapped insulators and superconductors is quantized and given by the bulk Chern number, in analogy to the quantized electric Hall conductivity in quantum Hall systems. From the perspective of effective action functionals, two distinct types of field theory have been proposed to describe the quantized thermal Hall effect. One of these, known as the gravitational Chern-Simons action, is a kind of topological field theory, and the other is a phenomenological theory relevant to the Středa formula. To resolve this ambiguity, we derive microscopically an effective theory that accounts for the quantized thermal Hall effect. In this paper, the two-dimensional Dirac fermion under a static background gravitational field is considered in equilibrium at a finite temperature, from which an effective boundary free energy functional of the gravitational field is derived. This boundary theory is shown to explain the quantized thermal Hall conductivity and thermal Hall current in the bulk by assuming the Lorentz symmetry. The bulk effective theory is consistently determined via the boundary effective theory.
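
For reference (a standard result, not specific to this paper): the quantized thermal Hall conductivity for Chern number C is set by the thermal conductance quantum, kappa_xy = C · π² k_B² T / (3h). A minimal sketch using the exact SI values of k_B and h:

```python
from math import pi

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact, SI)
h = 6.62607015e-34   # Planck constant, J s (exact, SI)

def kappa_xy(chern, T):
    """Quantized thermal Hall conductance for Chern number `chern` at temperature T (K)."""
    return chern * pi**2 * k_B**2 * T / (3 * h)

# The conductance quantum per edge mode, kappa_0/T, is about 9.46e-13 W/K^2.
print(f"kappa_0/T = {kappa_xy(1, 1.0):.3e} W/K^2")
```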

  12. High-Resolution Group Quantization Phase Processing Method in Radio Frequency Measurement Range

    PubMed Central

    Du, Baoqing; Feng, Dazheng; Tang, Yaohua; Geng, Xin; Zhang, Duo; Cai, Chaofeng; Wan, Maoquan; Yang, Zhigang

    2016-01-01

    To address the complex frequency translation, long response time, and limited measurement precision of traditional phase processing, a high-resolution group quantization phase processing method, with resolution better than the 100 fs level, is proposed for the radio frequency measurement range. First, the phase quantization is used as a step value to quantize every phase difference in a group, exploiting the fixed phase relationships between signals of different frequencies; the group quantization is formed from the quantized phase differences. Because frequency drift is caused mainly by the phase noise of the measurement device, a regular phase shift of the group quantization is produced, which results in phase coincidences of the two compared signals and thus yields high-resolution measurement. Second, to obtain the best coincidence pulses, a subtle delay is deliberately introduced to reduce the width of the coincidence fuzzy area, according to the transmission characteristics of the coincidences in the specific medium. Third, a series of characteristic coincidence pulses in the fuzzy area can be captured by a logic gate to extract the best phase coincidence information and improve the measurement precision. The method provides a novel way to perform precise time and frequency measurement. PMID:27388587

  13. High-Resolution Group Quantization Phase Processing Method in Radio Frequency Measurement Range.

    PubMed

    Du, Baoqing; Feng, Dazheng; Tang, Yaohua; Geng, Xin; Zhang, Duo; Cai, Chaofeng; Wan, Maoquan; Yang, Zhigang

    2016-07-08

    To address the complex frequency translation, long response time, and limited measurement precision of traditional phase processing, a high-resolution group quantization phase processing method, with resolution better than the 100 fs level, is proposed for the radio frequency measurement range. First, the phase quantization is used as a step value to quantize every phase difference in a group, exploiting the fixed phase relationships between signals of different frequencies; the group quantization is formed from the quantized phase differences. Because frequency drift is caused mainly by the phase noise of the measurement device, a regular phase shift of the group quantization is produced, which results in phase coincidences of the two compared signals and thus yields high-resolution measurement. Second, to obtain the best coincidence pulses, a subtle delay is deliberately introduced to reduce the width of the coincidence fuzzy area, according to the transmission characteristics of the coincidences in the specific medium. Third, a series of characteristic coincidence pulses in the fuzzy area can be captured by a logic gate to extract the best phase coincidence information and improve the measurement precision. The method provides a novel way to perform precise time and frequency measurement.

  14. High-Resolution Group Quantization Phase Processing Method in Radio Frequency Measurement Range

    NASA Astrophysics Data System (ADS)

    Du, Baoqing; Feng, Dazheng; Tang, Yaohua; Geng, Xin; Zhang, Duo; Cai, Chaofeng; Wan, Maoquan; Yang, Zhigang

    2016-07-01

    To address the complex frequency translation, long response time, and limited measurement precision of traditional phase processing, a high-resolution group quantization phase processing method, with resolution better than the 100 fs level, is proposed for the radio frequency measurement range. First, the phase quantization is used as a step value to quantize every phase difference in a group, exploiting the fixed phase relationships between signals of different frequencies; the group quantization is formed from the quantized phase differences. Because frequency drift is caused mainly by the phase noise of the measurement device, a regular phase shift of the group quantization is produced, which results in phase coincidences of the two compared signals and thus yields high-resolution measurement. Second, to obtain the best coincidence pulses, a subtle delay is deliberately introduced to reduce the width of the coincidence fuzzy area, according to the transmission characteristics of the coincidences in the specific medium. Third, a series of characteristic coincidence pulses in the fuzzy area can be captured by a logic gate to extract the best phase coincidence information and improve the measurement precision. The method provides a novel way to perform precise time and frequency measurement.

  15. Polycistronic viral vectors.

    PubMed

    de Felipe, P

    2002-09-01

    Traditionally, vectors for gene transfer/therapy experiments were mono- or bicistronic. In the latter case, vectors express the gene of interest coupled with a marker gene. An increasing demand for more complex polycistronic vectors has arisen in recent years to obtain complex gene transfer/therapy effects. In particular, this demand is stimulated by the hope of a more powerful effect from combined gene therapy than from single gene therapy, in a process whose parallels lie in the multi-drug combined therapies for cancer or AIDS. In the 1980s we had only splicing signals and internal promoters to construct such vectors; now a new set of biotechnological tools enables us to design new and more reliable bicistronic and polycistronic vectors. This article focuses on the description and comparison of the strategies for co-expression of two genes in bicistronic vectors, from the oldest to the most recently described: internal promoters, splicing, reinitiation, IRES, self-processing peptides (e.g. foot-and-mouth disease virus 2A), proteolytic cleavable sites (e.g. fusagen) and fusion of genes. I propose a classification of these strategies based upon either the use of multiple transcripts (with transcriptional mechanisms), or single transcripts (using translational/post-translational mechanisms). I also examine the different attempts to utilize these strategies in the construction of polycistronic vectors and the main problems encountered. Several potential uses of these polycistronic vectors, both in basic research and in therapy-focused applications, are discussed. The importance of the study of viral gene expression strategies and the need to transfer this knowledge to vector design is highlighted.

  16. Fractal vector optical fields.

    PubMed

    Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2016-07-15

    We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering optical fields and their focal fields. We propose, design, and create a new family of optical fields--fractal vector optical fields--which builds a bridge between the fractal and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and the hierarchy of the fractal has the "weeding" role. The fractal can be used to engineer the focal field. PMID:27420485

  17. Finite energy quantization on a topology changing spacetime

    NASA Astrophysics Data System (ADS)

    Krasnikov, S.

    2016-08-01

    The "trousers" spacetime is a pair of flat two-dimensional cylinders ("legs") merging into a single one ("trunk"). In spite of its simplicity this spacetime has a few features (including, in particular, a naked singularity in the "crotch") each of which is presumably unphysical, but for none of which a mechanism is known able to prevent its occurrence. Therefore, it is interesting and important to study the behavior of quantum fields in such a space. Anderson and DeWitt were the first to consider the free scalar field in the trousers spacetime. They argued that the crotch singularity produces an infinitely bright flash, which was interpreted as evidence that the topology of space is dynamically preserved. Similar divergences were later discovered by Manogue, Copeland, and Dray, who used a more exotic quantization scheme. Later yet, the same result obtained within a somewhat different approach led Sorkin to the conclusion that the topological transition in question is suppressed in quantum gravity. In this paper I show that the Anderson-DeWitt divergence is an artifact of their choice of the Fock space. By choosing a different one-particle Hilbert space one gets a quantum state in which the components of the stress-energy tensor (SET) are bounded in the frame of a free-falling observer.

  18. Gigahertz quantized charge pumping in graphene quantum dots

    NASA Astrophysics Data System (ADS)

    Connolly, M. R.; Chiu, K. L.; Giblin, S. P.; Kataoka, M.; Fletcher, J. D.; Chua, C.; Griffiths, J. P.; Jones, G. A. C.; Fal'Ko, V. I.; Smith, C. G.; Janssen, T. J. B. M.

    2013-06-01

    Single-electron pumps are set to revolutionize electrical metrology by enabling the ampere to be redefined in terms of the elementary charge of an electron. Pumps based on lithographically fixed tunnel barriers in mesoscopic metallic systems and normal/superconducting hybrid turnstiles can reach very small error rates, but only at megahertz pumping speeds that correspond to small currents of the order of picoamperes. Tunable barrier pumps in semiconductor structures are operated at gigahertz frequencies, but the theoretical treatment of the error rate is more complex and only approximate predictions are available. Here, we present a monolithic, fixed-barrier single-electron pump made entirely from graphene that performs at frequencies up to several gigahertz. Combined with the record-high accuracy of the quantum Hall effect and proximity-induced Josephson junctions, quantized-current generation brings an all-graphene closure of the quantum metrological triangle within reach. Envisaged applications for graphene charge pumps outside quantum metrology include single-photon generation via electron-hole recombination in electrostatically doped bilayer graphene reservoirs, single Dirac fermion emission in relativistic electron quantum optics and read-out of spin-based graphene qubits in quantum information processing.
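
The quantized current itself follows from elementary charge counting: a pump transferring n electrons per cycle at drive frequency f generates I = n e f. A minimal sketch (the values of n and f below are illustrative):

```python
e = 1.602176634e-19  # elementary charge, C (exact, SI)

def pump_current(n, f):
    """Current from a pump transferring n electrons per cycle at frequency f (Hz)."""
    return n * e * f

# One electron per cycle at 1 GHz gives a current of about 0.16 nA.
print(f"I = {pump_current(1, 1e9) * 1e9:.3f} nA")
```

This is why gigahertz operation matters for metrology: megahertz pumps deliver only picoamperes, while gigahertz pumps reach the sub-nanoampere range needed for a practical current standard.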

  19. Generalized Hasimoto Transform, Binormal Flow and Quantized Vortices

    NASA Astrophysics Data System (ADS)

    Strong, Scott A.; Carr, Lincoln D.

    2014-03-01

    A quantized vortex is a topological object central to the study of quantum liquids. Current models of vortex dynamics are motivated by the nonlinear Schrödinger equation and by porting techniques from classical vortices. Self-induction of classical vorticity ideally localized to a space curve asserts that a curved vortex filament propagates at a speed proportional to its curvature, |v| ∝ κ, along the binormal direction b of the Frenet frame. Interestingly, this autonomous dynamic can be mapped into the space of solutions of a cubic focusing nonlinear Schrödinger equation, iψ_t + ψ_ss + (1/2)|ψ|²ψ = 0, where ψ is a plane wave defined by the curvature and torsion of the vortex filament, ψ = κ exp[i∫τ ds]. Using these two results, one can define a vortex configuration, within superfluid helium or a Bose-Einstein condensate, and prescribe a binormal evolution. In general, however, binormal flow depends nonlinearly on local curvature and maps to a class of nonlinear integro-differential Schrödinger equations. In this talk we discuss how system size affects higher-order nonlinearity and filament geometry, which is applicable to theoretical and numerical investigations of vortex-dominated quantum hydrodynamics. Funded by NSF.
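
The filament-to-NLS mapping above can be sanity-checked numerically: for constant curvature κ and torsion τ (the values below are hypothetical), ψ = κ exp[i(τs + ωt)] solves the stated cubic NLS provided ω = κ²/2 − τ². A minimal finite-difference check:

```python
import numpy as np

kappa, tau = 1.3, 0.7            # hypothetical constant curvature and torsion
omega = 0.5 * kappa**2 - tau**2  # dispersion fixed by the cubic NLS

def psi(s, t):
    """Plane-wave filament function psi = kappa * exp(i(tau*s + omega*t))."""
    return kappa * np.exp(1j * (tau * s + omega * t))

# Verify i*psi_t + psi_ss + 0.5*|psi|^2*psi = 0 at a sample point
# using central differences in t and s.
s, t, h = 0.4, 0.9, 1e-5
psi_t = (psi(s, t + h) - psi(s, t - h)) / (2 * h)
psi_ss = (psi(s + h, t) - 2 * psi(s, t) + psi(s - h, t)) / h**2
residual = 1j * psi_t + psi_ss + 0.5 * abs(psi(s, t))**2 * psi(s, t)
print(abs(residual))  # ~0, up to finite-difference error
```

Substituting the plane wave term by term gives (−ω − τ² + κ²/2)ψ, which vanishes exactly at ω = κ²/2 − τ², so the residual is limited only by discretization error.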

  20. On second quantization on noncommutative spaces with twisted symmetries

    NASA Astrophysics Data System (ADS)

    Fiore, Gaetano

    2010-04-01

    By applying the general twist-induced ⋆-deformation procedure we translate second quantization of a system of bosons/fermions on a symmetric spacetime into a noncommutative language. The procedure deforms, in a coordinated way, the spacetime algebra and its symmetries, the wave-mechanical description of a system of n bosons/fermions, the algebra of creation and annihilation operators, and also the commutation relations of the latter with functions of spacetime; our key requirement is the mode-decomposition independence of the quantum field. In a minimalistic view, the use of noncommutative coordinates can be seen just as a way to better express non-local interactions of a special kind. In a non-conservative one, we obtain a closed, covariant framework for quantum field theory (QFT) on the corresponding noncommutative spacetime consistent with quantum mechanical axioms and Bose-Fermi statistics. One distinguishing feature is that the field commutation relations remain of the type 'field (anti)commutator = a distribution'. We illustrate the results by choosing as examples interacting non-relativistic and free relativistic QFT on Moyal space(time)s.