A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), multi-spectral (MS), and digital map data. The only requirement for switching the compressor from one image sensor to another is to change the codebook. There are several approaches to designing codebooks for a vector quantizer. Adaptive vector quantization is a procedure that designs the codebook as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average, so the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to a conventional batch centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques: the quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
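The recursive moving-average update at the heart of this scheme can be sketched as follows (a minimal illustration with my own naming, not the code from the report): each time a vector is encoded, the selected centroid moves by (x - c)/n, which reproduces the batch mean exactly when applied to a fixed set of vectors.

```python
def update_centroid(centroid, count, vector):
    """Recursive moving-average update: after encoding `vector` into the
    cluster whose centroid is `centroid` (seen `count` times so far),
    move the centroid toward the new vector.  Applied to a fixed set of
    vectors, this yields the same result as a batch mean."""
    count += 1
    centroid = [c + (x - c) / count for c, x in zip(centroid, vector)]
    return centroid, count

# Encoding three vectors one at a time...
c, n = [0.0, 0.0], 0
for v in ([2.0, 0.0], [4.0, 2.0], [0.0, 4.0]):
    c, n = update_centroid(c, n, v)

# ...matches the batch mean of the same three vectors.
print(c)  # [2.0, 2.0]
```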
Online Adaptive Vector Quantization with Variable Size Codebook Entries.
ERIC Educational Resources Information Center
Constantinescu, Cornel; Storer, James A.
1994-01-01
Presents a new image compression algorithm that employs some of the most successful approaches to adaptive lossless compression to perform adaptive online (single pass) vector quantization with variable size codebook entries. Results of tests of the algorithm's effectiveness on standard test images are given. (12 references) (KRN)
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. The algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster, so it has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years, vector quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched, and some comments are made on the state of the art and current research efforts.
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
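A minimal sketch of the forward gain-adaptive idea, using a simple RMS gain estimator as a stand-in for the optimized estimators the paper designs (all names here are illustrative):

```python
import math

def encode(x, codebook):
    """Gain-adaptive VQ encoder sketch: estimate the gain (RMS here),
    normalize the input, then pick the nearest gain-normalized codevector."""
    gain = max(math.sqrt(sum(v * v for v in x) / len(x)), 1e-9)
    xn = [v / gain for v in x]
    idx = min(range(len(codebook)),
              key=lambda i: sum((a - b) ** 2 for a, b in zip(xn, codebook[i])))
    return idx, gain

def decode(idx, gain, codebook):
    """Decoder: scale the selected codevector back by the estimated gain."""
    return [gain * v for v in codebook[idx]]

codebook = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0]]
idx, gain = encode([5.0, 4.9], codebook)
xhat = decode(idx, gain, codebook)
```

Normalization shrinks the dynamic range the codebook must cover, which is why a small gain-normalized codebook can serve inputs at very different signal levels.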
Adaptive vector quantization of MR images using online k-means algorithm
NASA Astrophysics Data System (ADS)
Shademan, Azad; Zia, Mohammad A.
2001-12-01
The k-means algorithm is widely used to design image codecs based on vector quantization (VQ). In this paper, we focus on an adaptive approach that implements a VQ technique using the online version of the k-means algorithm, in which the size of the codebook is adapted continuously to the statistical behavior of the image. Based on a statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. Applications of this approach arise in telemedicine, where sequences of highly correlated medical images, e.g., consecutive brain slices, are transmitted over a low bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images, and simulation results on a sample sequence are given. The proposed method is compared to the standard k-means algorithm in terms of PSNR, MSE, and elapsed time.
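The codebook-size adaptation described above might be sketched as follows, with an online k-means step plus a count-based pruning rule standing in for the paper's density thresholds (names and thresholds are mine):

```python
def online_kmeans_step(codebook, counts, x, lr=0.1):
    """One online k-means update: move the nearest codeword toward x."""
    i = min(range(len(codebook)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, codebook[k])))
    counts[i] += 1
    codebook[i] = [c + lr * (a - c) for c, a in zip(codebook[i], x)]
    return i

def prune(codebook, counts, min_count):
    """Adapt the codebook size: drop codewords of low-density clusters."""
    kept = [k for k in range(len(codebook)) if counts[k] >= min_count]
    return [codebook[k] for k in kept], [counts[k] for k in kept]

codebook = [[0.0, 0.0], [10.0, 10.0], [50.0, 50.0]]
counts = [0, 0, 0]
for x in ([1.0, 1.0], [9.0, 9.0], [0.5, 0.2], [11.0, 10.0]):
    online_kmeans_step(codebook, counts, x)
codebook, counts = prune(codebook, counts, min_count=1)
print(len(codebook))  # 2 -- the unused third codeword is removed
```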
Divergence-based vector quantization.
Villmann, Thomas; Haase, Sven
2011-05-01
Supervised and unsupervised vector quantization methods for classification and clustering traditionally use dissimilarities, frequently taken as Euclidean distances. In this article, we investigate the applicability of divergences instead, focusing on online learning. We deduce the mathematical fundamentals for their use in gradient-based online vector quantization algorithms. This relies on the generalized derivatives of the divergences, known as Fréchet derivatives in functional analysis, which reduce in finite-dimensional problems to partial derivatives in a natural way. We demonstrate the application of this methodology to widely used supervised and unsupervised online vector quantization schemes, including self-organizing maps, neural gas, and learning vector quantization. Additionally, principles for hyperparameter optimization and relevance learning for parameterized divergences in the case of supervised vector quantization are given to achieve improved classification accuracy. PMID:21299418
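For a concrete instance, the generalized Kullback-Leibler divergence has Fréchet derivative components dD/dw_j = 1 - x_j/w_j, which plug directly into an online gradient update of a prototype (a sketch of the principle, not the authors' code):

```python
import math

def gkl(x, w):
    """Generalized Kullback-Leibler divergence between nonnegative vectors:
    D(x||w) = sum_j x_j*log(x_j/w_j) - x_j + w_j."""
    return sum(xi * math.log(xi / wi) - xi + wi for xi, wi in zip(x, w))

def update(w, x, eps=0.05):
    # The Frechet derivative reduces to the partials dD/dw_j = 1 - x_j/w_j,
    # so one online gradient step moves the prototype toward the data point.
    return [wj - eps * (1.0 - xj / wj) for wj, xj in zip(w, x)]

x, w = [2.0, 0.5], [1.0, 1.0]
d0 = gkl(x, w)
for _ in range(200):
    w = update(w, x)
d1 = gkl(x, w)  # strictly smaller: the prototype has converged toward x
```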
VLSI Processor For Vector Quantization
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1995-01-01
Pixel intensities in each kernel compared simultaneously with all code vectors. Prototype high-performance, low-power, very-large-scale integrated (VLSI) circuit designed to perform compression of image data by vector-quantization method. Contains relatively simple analog computational cells operating on direct or buffered outputs of photodetectors grouped into blocks in imaging array, yielding vector-quantization code word for each such block in sequence. Scheme exploits parallel-processing nature of vector-quantization architecture, with consequent increase in speed.
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom very-large-scale-integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better performance in the presence of channel bit errors than methods that use variable-length codes.
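Frequency-sensitive competitive learning can be sketched as follows: scaling each codeword's distance by its win count keeps rarely used codewords competitive, so the whole codebook gets utilized (a minimal one-dimensional illustration; parameter names are mine):

```python
def fscl_step(codebook, wins, x, lr=0.2):
    """Frequency-sensitive competitive learning: each codeword's distance
    is scaled by its win count, so rarely used codewords stay competitive.
    The winner is then nudged toward the input."""
    i = min(range(len(codebook)),
            key=lambda k: wins[k] * sum((a - b) ** 2
                                        for a, b in zip(x, codebook[k])))
    wins[i] += 1
    codebook[i] = [w + lr * (a - w) for w, a in zip(codebook[i], x)]
    return i

# Two 1-D codewords, data alternating between two clusters at 0 and 5:
codebook, wins = [[0.0], [0.1]], [1, 1]
for _ in range(10):
    fscl_step(codebook, wins, [0.0])
    fscl_step(codebook, wins, [5.0])
# Both codewords win equally often, and the second migrates toward 5.
```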
Combining Vector Quantization and Histogram Equalization.
ERIC Educational Resources Information Center
Cosman, Pamela C.; And Others
1992-01-01
Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…
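Histogram equalization itself is standard: each gray level is remapped through the normalized cumulative histogram, spreading intensities over the full range. A minimal sketch:

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: remap each gray level through the
    normalized cumulative histogram so intensities span the full range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

# A low-contrast strip of gray levels 100..103 is stretched out to 64..255.
out = equalize([100, 100, 101, 101, 102, 102, 103, 103])
```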
Adaptive image segmentation by quantization
NASA Astrophysics Data System (ADS)
Liu, Hui; Yun, David Y.
1992-12-01
Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper, an adaptive image segmentation method based on vector quantization is presented. It segments images automatically, without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ) implemented by a competitive-learning neural network in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer. NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated from similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images. The results show that our algorithm achieves better performance.
Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding
NASA Astrophysics Data System (ADS)
Yang, Shuping; Lin, Xinggang; Wang, Guijin
2003-05-01
The selective enhancement mechanism of Fine-Granular-Scalability (FGS) in MPEG-4 can enhance specific objects under bandwidth variation. A novel technique for self-adaptive enhancement of regions of interest based on the motion vectors (MVs) of the base layer is proposed. It is suitable for video sequences with a still background in which only the moving objects are of interest, such as news broadcasting, video surveillance, and Internet education. Motion vectors generated during base-layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation; the MVs of these macroblocks are set to zero to prevent them from being enhanced. A region-growing segmentation algorithm based on MV values is used to separate foreground from background, and post-processing reduces the influence of burst noise so that only the moving regions of interest remain. Applying the result to selective enhancement during enhancement-layer encoding significantly improves the visual quality of the regions of interest in video transmitted at different bit rates in our experiments.
Image compression using address-vector quantization
NASA Astrophysics Data System (ADS)
Nasrabadi, Nasser M.; Feng, Yushu
1990-12-01
A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding, the active region is checked, and if the required address combination exists, its index is transmitted to the receiver; otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.
Han, Hao; Li, Lihong; Han, Fangfang; Song, Bowen; Moore, William; Liang, Zhengrong
2014-01-01
Computer-aided detection (CADe) of pulmonary nodules is critical to assisting radiologists in early identification of lung cancer from computed tomography (CT) scans. This paper proposes a novel CADe system based on a hierarchical vector quantization (VQ) scheme. Compared with the commonly used simple thresholding approach, high-level VQ yields a more accurate segmentation of the lungs from the chest volume. In identifying initial nodule candidates (INCs) within the lungs, low-level VQ proves to be effective for INC detection and segmentation, as well as computationally efficient compared to existing approaches. False-positive (FP) reduction is conducted via rule-based filtering operations in combination with a feature-based support vector machine classifier. The proposed system was validated on 205 patient cases from the publicly available online LIDC (Lung Image Database Consortium) database, with each case having at least one juxta-pleural nodule annotation. Experimental results demonstrated that our CADe system obtained an overall sensitivity of 82.7% at a specificity of 4 FPs/scan, and 89.2% sensitivity at 4.14 FPs/scan for the classification of juxta-pleural INCs only. Compared with similar CADe systems, the proposed system performs favorably and demonstrates its potential for fast and adaptive detection of pulmonary nodules via CT imaging. PMID:25486657
Systolic architectures for vector quantization
NASA Technical Reports Server (NTRS)
Davidson, Grant A.; Cappello, Peter R.; Gersho, Allen
1988-01-01
A family of architectural techniques are proposed which offer efficient computation of weighted Euclidean distance measures for nearest-neighbor codebook searching. The general approach uses a single metric comparator chip in conjunction with a linear array of inner product processor chips. Very high vector-quantization (VQ) throughput can be achieved for many speech and image-processing applications. Several alternative configurations allow reasonable tradeoffs between speed and VLSI chip area required.
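The inner-product formulation that such arrays exploit follows from expanding the squared distance: since ||x||^2 is common to all candidates, argmin_i ||x - c_i||^2 = argmin_i (||c_i||^2/2 - &lt;x, c_i&gt;). A software sketch of the same search (the hardware mapping is the paper's contribution, not shown here):

```python
def nearest_by_inner_product(x, codebook):
    """Nearest-neighbor codebook search restated with inner products:
    argmin ||x - c||^2 = argmin (||c||^2 / 2 - <x, c>), since ||x||^2 is
    common to all candidates.  The half-energies can be precomputed, so
    each candidate costs one inner product plus one comparison."""
    half_energy = [sum(v * v for v in c) / 2.0 for c in codebook]
    return min(range(len(codebook)),
               key=lambda i: half_energy[i]
                             - sum(a * b for a, b in zip(x, codebook[i])))

idx = nearest_by_inner_product([0.9, 1.2],
                               [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
```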
Vector quantization for volume rendering
NASA Technical Reports Server (NTRS)
Ning, Paul; Hesselink, Lambertus
1992-01-01
Volume rendering techniques typically process volumetric data in raw, uncompressed form. As algorithmic and architectural advances improve rendering speeds, however, larger data sets will be evaluated requiring consideration of data storage and transmission issues. In this paper, we analyze the data compression requirements for volume rendering applications and present a solution based on vector quantization. The proposed system compresses volumetric data and then renders images directly from the new data format. Tests on a fluid flow data set demonstrate that good image quality may be achieved at a compression ratio of 17:1 with only a 5 percent cost in additional rendering time.
Image indexing based on vector quantization
NASA Astrophysics Data System (ADS)
Grana Romay, Manuel; Rebollo, Israel
2000-10-01
We propose the computation of the color palette of each image in isolation, using Vector Quantization methods. The image features are, then, the color palette and the histogram of the color quantization of the image with this color palette. We propose as a measure of similitude the weighted sum of the differences between the color palettes and the corresponding histograms. This approach allows the increase of the database without the recomputation of the image features and without substantial loss of discriminative power.
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression applications. The traditional and widely used Linde-Buzo-Gray (LBG) algorithm tends to generate only a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the vector quantization codebook. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of the LBG and PSO-LBG algorithms. Experimental results show that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
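For reference, one iteration of the baseline LBG (Lloyd) algorithm that these swarm methods seed from and compare against looks like this (an illustrative sketch, not the HBMO variant itself):

```python
def lbg_iteration(codebook, training):
    """One Lloyd/LBG iteration: partition the training set by nearest
    codeword, then replace each codeword by the centroid of its cell.
    Repeated until distortion stops improving, this converges to a
    locally optimal codebook."""
    cells = [[] for _ in codebook]
    for x in training:
        i = min(range(len(codebook)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(x, codebook[k])))
        cells[i].append(x)
    return [[sum(col) / len(cell) for col in zip(*cell)] if cell else c
            for cell, c in zip(cells, codebook)]

training = [[0.0, 0.0], [0.0, 1.0], [9.0, 9.0], [9.0, 10.0]]
codebook = [[1.0, 1.0], [8.0, 8.0]]
for _ in range(5):
    codebook = lbg_iteration(codebook, training)
# The codebook settles on the two cluster centroids.
```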
Segmentation and texture representation with vector quantizers
NASA Astrophysics Data System (ADS)
Yuan, Li; Barba, Joseph
1990-11-01
An algorithm for the segmentation of cell images and the extraction of texture textons based on vector quantization is presented. Initially, a few low-dimensional code vectors are employed in a standard vector quantization algorithm to generate a coarse codebook, a procedure equivalent to histogram sharpening. Representative gray-level values from each coarse code vector are used to construct a larger, fine codebook. Coding the original image with the fine codebook produces a less distorted image and facilitates cell and nuclear extraction. Texture textons are extracted by applying the same algorithm to the cell area with a larger number of initial code vectors and a larger fine codebook. Applications of the algorithm to cytological specimens are presented.
Image coding with uniform and piecewise-uniform vector quantizers.
Jeong, D G; Gibson, J D
1995-01-01
New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs. PMID:18289966
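As a toy illustration of the lattice idea (using the integer lattice Z^n and a mu-law-style compander as simple stand-ins for the truncated/scaled lattices and piecewise-linear multidimensional companding of the paper):

```python
import math

def lattice_quantize(x, scale=1.0):
    """Simplest uniform lattice vector quantizer: quantize to a scaled
    integer (Z^n) lattice by rounding each scaled coordinate.  No search
    is needed, which is the structural advantage of lattice VQ."""
    return [scale * round(v / scale) for v in x]

def compand_then_quantize(x, scale=1.0, mu=4.0):
    """Companding sketch: compress the range coordinate-wise with a
    mu-law-style map, quantize uniformly, then expand.  This mimics a
    nonuniform quantizer while keeping the fast uniform-lattice core."""
    def c(v):   # compressor
        return math.copysign(math.log1p(mu * abs(v)) / math.log1p(mu), v)
    def e(v):   # expander (exact inverse of the compressor)
        return math.copysign(math.expm1(abs(v) * math.log1p(mu)) / mu, v)
    return [e(q) for q in lattice_quantize([c(v) for v in x], scale)]

q = lattice_quantize([1.2, -0.7], 0.5)
r = compand_then_quantize([0.0], 0.5)
```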
Vector quantization of 3-D point clouds
NASA Astrophysics Data System (ADS)
Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk
2005-10-01
A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to a local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes the child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
Quantization noise in adaptive weighting networks
NASA Astrophysics Data System (ADS)
Davis, R. M.; Sher, P. J.-S.
1984-09-01
Adaptive weighting networks can be implemented using in-phase and quadrature, phase-phase, or phase-amplitude modulators. The statistical properties of the quantization error are derived for each modulator and the quantization noise power produced by the modulators are compared at the output of an adaptive antenna. Other relevant characteristics of the three types of modulators are also discussed.
Scalar-vector quantization of medical images.
Mohsenian, N; Shahri, H; Nasrabadi, N M
1996-01-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. The SVQ is a fixed rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation that is typical of coding schemes using variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS). PACS are currently under study for use in an all-digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired. PMID:18285124
Logarithmic Adaptive Quantization Projection for Audio Watermarking
NASA Astrophysics Data System (ADS)
Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang
In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity. Moreover, conventional methods are sensitive to value-metric scaling attacks. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to conventional quantization methods that use a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in embedding and decoding. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attacks.
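The underlying quantization index modulation step can be sketched as follows, with a magnitude-scaled quantization step standing in for the paper's logarithmic adaptation and perceptual model (in practice the decoder must be able to derive the same step; here it is passed explicitly):

```python
def qim_embed(x, bit, step):
    """Quantization index modulation: embed one bit by quantizing x onto
    one of two interleaved uniform lattices offset by step/2."""
    offset = bit * step / 2.0
    return step * round((x - offset) / step) + offset

def qim_detect(y, step):
    """Detect the bit by checking which of the two lattices y lies closer to."""
    d0 = abs(y - qim_embed(y, 0, step))
    d1 = abs(y - qim_embed(y, 1, step))
    return 0 if d0 <= d1 else 1

def adaptive_step(x, base=0.1):
    """Sketch of multiplicative step adaptation (my stand-in for LAQP's
    logarithmic scheme): scale the step with the host magnitude so the
    embedding distortion tracks the signal level."""
    return base * max(abs(x), 1e-6)

step = adaptive_step(12.7)
y = qim_embed(12.7, 1, step)   # embedding moves x by at most step/2
bit = qim_detect(y, step)      # recovers the embedded bit
```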
The decoding method based on wavelet image En vector quantization
NASA Astrophysics Data System (ADS)
Liu, Chun-yang; Li, Hui; Wang, Tao
2013-12-01
With the rapid progress of internet technology, large-scale integrated circuits, and computer technology, digital image processing has developed greatly. Vector quantization plays a very important role in digital image compression. Compared with scalar quantization, it offers a higher compression ratio and a simple image decoding algorithm, and has therefore been widely used in many practical fields. This paper combines the wavelet analysis method with vector quantization encoding and tests the approach on standard images. The experimental results show a considerable PSNR improvement compared with the LBG algorithm.
Image Compression on a VLSI Neural-Based Vector Quantizer.
ERIC Educational Resources Information Center
Chen, Oscal T.-C.; And Others
1992-01-01
Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board vector PCM implementation operating at 7-18 kbits/sec. A real-time adaptive vector predictive coder system using the CPS has also been implemented.
Image Coding By Vector Quantization In A Transformed Domain
NASA Astrophysics Data System (ADS)
Labit, C.; Marescq, J. P.
1986-05-01
Using vector quantization in a transformed domain, TV images are coded. The method exploits the spatial redundancy of small 4x4 blocks of pixels: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy-carrying coefficients are retained, and a codebook is built by vector quantization for the remaining AC part of the transformed blocks. The codewords are referenced by an index. Each block is then coded by specifying its DC coefficient and associated index.
Subband Image Coding Using Entropy-Constrained Residual Vector Quantization.
ERIC Educational Resources Information Center
Kossentini, Faouzi; And Others
1994-01-01
Discusses a flexible, high performance subband coding system. Residual vector quantization is discussed as a basis for coding subbands, and subband decomposition and bit allocation issues are covered. Experimental results showing the quality achievable at low bit rates are presented. (13 references) (KRN)
Improved vector quantization scheme for grayscale image compression
NASA Astrophysics Data System (ADS)
Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.
2012-06-01
This paper proposes an improved image coding scheme based on vector quantization. It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the mean value of the image block is used as an alternative block encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping bit rates low.
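The alternative block encoding rule can be sketched as a per-block decision (the threshold and all names are illustrative, not the paper's values):

```python
def encode_block(block, codebook, threshold=10.0):
    """Alternative block encoding rule sketch: if the block is nearly flat
    (well represented by its mean), send the mean; otherwise send the
    index of the nearest codeword.  This rescues flat regions that a
    small codebook would otherwise reproduce poorly."""
    mean = sum(block) / len(block)
    if sum((v - mean) ** 2 for v in block) <= threshold:
        return ('mean', round(mean))
    i = min(range(len(codebook)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(block, codebook[k])))
    return ('vq', i)

codebook = [[0, 255, 0, 255], [255, 0, 255, 0]]
flat = encode_block([100, 101, 100, 99], codebook)   # coded by its mean
edge = encode_block([250, 5, 250, 5], codebook)      # coded by a VQ index
```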
Mitra, S; Yang, S; Kustov, V
1998-11-01
Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not noticeably degrade image quality, even at compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency, while higher-quality images follow progressively if desired. Such fast, progressive transmission can also be used for downloading large data sets, such as the Visible Human, at a quality chosen by users for research or education. The new adaptive vector quantization uses a neural-network-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images even under high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058
Texture Classification Using Local Pattern Based on Vector Quantization.
Pan, Zhibin; Fan, Hongcheng; Zhang, Li
2015-12-01
Local binary pattern (LBP) is a simple and effective descriptor for texture classification. However, it has two main disadvantages: (1) different structural patterns can map to the same binary code and (2) it is sensitive to noise. To overcome these disadvantages, we propose a new local descriptor named local vector quantization pattern (LVQP). In LVQP, different kinds of texture images are used to train a local-pattern codebook, in which each distinct structural pattern is described by a unique codeword index. Unlike the original LBP and its many variants, LVQP does not quantize each neighborhood pixel separately to 0/1 but quantizes the whole difference vector between the central pixel and its neighborhood pixels. Since LVQP treats the structural pattern as a whole, it has high discriminability and is less sensitive to noise. Our experimental results on four representative texture databases (Outex, UIUC, CUReT, and Brodatz) show that the proposed LVQP method improves classification accuracy significantly and is more robust to noise. PMID:26353370
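The contrast between LBP and the LVQP idea can be sketched as follows: LBP thresholds each neighbor to one bit, while LVQP quantizes the whole difference vector against a codebook (a toy 4-neighbor example with a hand-made codebook, not a trained one):

```python
def lbp_code(center, neighbors):
    """Original LBP: threshold each neighbor against the center (0/1)
    and pack the bits into one code."""
    return sum((1 << k) for k, p in enumerate(neighbors) if p >= center)

def lvqp_index(center, neighbors, codebook):
    """LVQP sketch: quantize the whole difference vector (neighbors minus
    center) to the nearest codeword of a local-pattern codebook, so each
    structural pattern maps to one index."""
    diff = [p - center for p in neighbors]
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(diff, codebook[i])))

# Two patterns with the same neighbor signs: LBP confuses them
# (identical code), while LVQP separates them by difference magnitude.
codebook = [[5, 5, 5, 5], [90, 90, 90, 90]]
a = lbp_code(100, [104, 106, 103, 105])
b = lbp_code(100, [190, 195, 188, 192])
ia = lvqp_index(100, [104, 106, 103, 105], codebook)
ib = lvqp_index(100, [190, 195, 188, 192], codebook)
```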
Finite-state residual vector quantizer for image coding
NASA Astrophysics Data System (ADS)
Huang, Steve S.; Wang, Jia-Shung
1993-10-01
Finite-state vector quantization (FSVQ) has been proven in recent years to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes a primary limitation of FSVQ when implementation is taken into account: a large amount of memory is required to store the master codebook, and much effort is spent maintaining the state codebooks if the master codebook is too large. This problem can be partially solved by the mean/residual technique (MRVQ), in which block means and residual vectors are coded separately. A new hybrid coding scheme, finite-state residual vector quantization (FSRVQ), is proposed in this paper to combine the advantages of FSVQ and MRVQ. The codewords in FSRVQ are designed by removing the block means so as to reduce the codebook size. The block means are predicted from neighboring blocks to reduce the bit rate, and the predicted means are added to the residual vectors so that the state codebooks can be generated entirely. Experimental results indicate that FSRVQ uniformly outperforms both ordinary FSVQ and MRVQ.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
The vector quantization for AVIRIS hyperspectral imagery compression with fixed low bitrate
NASA Astrophysics Data System (ADS)
Zhang, Jing; Li, Yunsong; Wang, Keyan; Liu, Haiying
2012-10-01
Vector quantization is an optimal compression strategy for hyperspectral imagery, but it cannot by itself satisfy fixed-bitrate applications. In this paper, we propose a vector quantization algorithm for AVIRIS hyperspectral imagery compression at a fixed low bitrate. 2D-TCE lossless compression of the codebook image and the index image, codebook reordering, and removal of the water-absorption bands are introduced into classical vector quantization, and bitrate allocation is replaced by an algorithm that chooses an appropriate codebook size. Experimental results show that the proposed vector quantizer performs better than traditional hyperspectral imagery lossy compression at a fixed low bitrate.
Image Compression Using Vector Quantization with Variable Block Size Division
NASA Astrophysics Data System (ADS)
Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori
In this paper, we propose a method for compressing a still image using vector quantization (VQ). The local fractal dimension (LFD) is computed to divide an image into variable-size blocks. The LFD reflects the complexity of local regions of an image, so a region with higher LFD values than other regions is partitioned into small blocks of pixels, while a region with lower LFD values is partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode, which improves the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by the discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then transformed by the inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. Performance of the proposed method is slightly better than that of JPEG. When the learning images used to construct a CB differ from the test images, the compression rate is comparable to those of previously proposed methods, while image quality evaluated by NPIQM (normalized perceptual image quality measure) is almost the highest. The results show that the proposed method is effective for still image compression.
Round Randomized Learning Vector Quantization for Brain Tumor Imaging.
Sheikh Abdullah, Siti Norul Huda; Bohani, Farah Aqilah; Nayef, Baher H; Sahran, Shahnorbanun; Al Akash, Omar; Iqbal Hussain, Rizuana; Ismail, Fuad
2016-01-01
Brain magnetic resonance imaging (MRI) classification into normal and abnormal is a critical and challenging task. Accordingly, several medical imaging classification techniques have been devised, of which Learning Vector Quantization (LVQ) is among the most promising. The main goal of this paper is to enhance the performance of the LVQ technique in order to achieve higher detection accuracy for brain tumors in MRIs. The classical way of selecting the winner code vector in LVQ is to measure the distance between the input vector and the codebook vectors using the Euclidean distance function. To improve the winner selection technique, a round-off function is employed along with the Euclidean distance function. Moreover, in competitive learning classifiers, the fitted model is highly dependent on the class distribution. Therefore, this paper proposes a multiresampling technique through which a better class distribution can be achieved; it is executed by random selection via preclassification. The test data samples used are brain tumor magnetic resonance images collected from Universiti Kebangsaan Malaysia Medical Center and UCI benchmark data sets. Comparative studies showed promising results for the proposed methods against LVQ1, Multipass LVQ, Hierarchical LVQ, Multilayer Perceptron, and Radial Basis Function. PMID:27516807
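One plausible reading of the round-off winner selection combined with the classical LVQ1 update is sketched below; the prototypes, labels, and learning rate are invented for illustration and are not the authors' exact procedure:

```python
# Minimal LVQ1 sketch with a round-off applied to the Euclidean distance
# during winner selection (a simplified reading of the method above).

def distance(x, w, use_round=False):
    d = sum((a - b) ** 2 for a, b in zip(x, w)) ** 0.5
    return round(d) if use_round else d

def lvq1_step(prototypes, labels, x, y, lr=0.1, use_round=True):
    # winner = closest prototype under the (optionally rounded) distance
    win = min(range(len(prototypes)),
              key=lambda i: distance(x, prototypes[i], use_round))
    sign = 1.0 if labels[win] == y else -1.0  # attract if correct, repel if not
    prototypes[win] = [w + sign * lr * (a - w)
                       for a, w in zip(x, prototypes[win])]
    return win

protos = [[0.0, 0.0], [5.0, 5.0]]   # one prototype per class (toy values)
labels = ["normal", "abnormal"]
win = lvq1_step(protos, labels, [4.0, 4.5], "abnormal")
print(win, protos[win])  # winner is the "abnormal" prototype, moved toward x
```

Rounding the distance coarsens the comparison, so nearly tied prototypes are treated as equally distant; whether that helps is exactly what the paper evaluates empirically.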
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable-rate VQ (VRVQ) scheme that takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementation barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.
Fast clustering algorithm for codebook production in image vector quantization
NASA Astrophysics Data System (ADS)
Al-Otum, Hazem M.
2001-04-01
In this paper, a fast clustering algorithm (FCA) is proposed for vector quantization codebook production. The algorithm avoids iterative averaging of vectors and is based on collecting vectors with similar or closely similar characteristics into corresponding clusters. FCA gives an increase in peak signal-to-noise ratio (PSNR) of about 0.3 - 1.1 dB over the LBG algorithm and reduces the computational cost of codebook production by 10% - 60% at different bit rates. Two FCA modifications with limited cluster size are also proposed (FCA-LCS1 and FCA-LCS2). FCA-LCS1 tends to subdivide large clusters into smaller ones, while FCA-LCS2 reduces a predetermined threshold stepwise until the required cluster size is reached. FCA-LCS1 and FCA-LCS2 give an increase in PSNR of about 0.9 - 1.0 and 0.9 - 1.1 dB, respectively, over the FCA algorithm, at the expense of about 15% - 25% and 18% - 28% increases in the output codebook size.
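A one-pass, threshold-driven clustering in the spirit of FCA might look like the following sketch; the representative choice (first vector of each cluster) and the threshold value are assumptions for illustration, not the paper's exact rule:

```python
# Hypothetical sketch of threshold-based one-pass clustering: a vector
# joins an existing cluster whose representative lies within a threshold,
# otherwise it seeds a new cluster. No iterative re-averaging over the
# whole training set (as in LBG) is needed.

def fca_like(vectors, threshold):
    reps, members = [], []
    for v in vectors:
        for i, r in enumerate(reps):
            if sum((a - b) ** 2 for a, b in zip(v, r)) <= threshold ** 2:
                members[i].append(v)
                break
        else:
            reps.append(v)        # first vector of a cluster acts as its representative
            members.append([v])
    return reps, members

reps, members = fca_like([[0, 0], [1, 0], [10, 10], [0, 1], [11, 9]], threshold=3)
print(len(reps))  # prints 2
```

Lowering the threshold in steps, as FCA-LCS2 does, splits overly large clusters into smaller ones at the cost of more codebook entries.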
Evaluation of learning vector quantization to classify cotton trash
NASA Astrophysics Data System (ADS)
Lieberman, Michael A.; Patil, Rajendra B.
1997-03-01
The cotton industry needs a method to identify the type of trash [nonlint material (NLM)] in cotton samples; learning vector quantization (LVQ) is evaluated as that method. LVQ is a classification technique that defines reference vectors (group prototypes) in an N-dimensional feature space (RN). Normalized trash object features extracted from images of compressed cotton samples define RN. An unknown NLM object is given the label of the closest reference vector (as defined by Euclidean distance). Different normalized feature spaces and NLM classifications are evaluated, and accuracies are reported for correctly identifying the NLM type. LVQ is used to partition cotton trash into: (1) bark (B), leaf (L), pepper (P), or stick (S); (2) bark and nonbark (N); or (3) bark, combined leaf and pepper (LP), or stick. Percentage accuracies for correctly identifying 139 pieces of test trash placed on laboratory-prepared samples for the three scenarios are (B:95, L:87, P:100, S:88), (B:100, N:97), and (B:95, LP:99, S:88), respectively. Also, LVQ results are compared with previous work using backpropagating neural networks.
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.
Novel multivariate vector quantization for effective compression of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Xiaohui; Ren, Jinchang; Zhao, Chunhui; Qiao, Tong; Marshall, Stephen
2014-12-01
Although hyperspectral imagery (HSI) has been successfully deployed in a wide range of applications, it suffers from extremely large data volumes for storage and transmission. Consequently, coding and compression are needed for effective data reduction whilst maintaining the image integrity. In this paper, a multivariate vector quantization (MVQ) approach is proposed for the compression of HSI, where each pixel spectrum is modeled as a linear combination of two codewords from the codebook, and the index maps and their corresponding coefficients are separately coded and compressed. A strategy is proposed for effective codebook design, using fuzzy C-means (FCM) clustering to determine the optimal number of clusters of the data and the selected codewords for the codebook. Comprehensive experiments on several real datasets are used for performance assessment, including quantitative evaluations to measure the degree of data reduction and the distortion of reconstructed images. Our results indicate that the proposed MVQ approach outperforms conventional VQ and several typical algorithms for effective compression of HSI, where the image quality measured using mean squared error (MSE) is significantly improved even at the same compressed bitrate.
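The two-codeword linear-combination step at the heart of MVQ can be sketched as an exhaustive pair search with per-pair least squares; the tiny codebook and spectrum below are illustrative assumptions, and a real implementation would prune the pair search:

```python
# Sketch of the MVQ encoding idea: each pixel spectrum x is approximated
# as a*c1 + b*c2 for some codeword pair (c1, c2), with the coefficients
# (a, b) solved in closed form by least squares for each candidate pair.

def lstsq_2(c1, c2, x):
    """Least-squares (a, b) minimizing ||a*c1 + b*c2 - x||^2 (2x2 normal equations)."""
    g11 = sum(u * u for u in c1)
    g22 = sum(u * u for u in c2)
    g12 = sum(u * v for u, v in zip(c1, c2))
    b1 = sum(u * v for u, v in zip(c1, x))
    b2 = sum(u * v for u, v in zip(c2, x))
    det = g11 * g22 - g12 * g12
    return (g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det

def mvq_encode(x, codebook):
    best = None
    for i in range(len(codebook)):
        for j in range(i + 1, len(codebook)):
            a, b = lstsq_2(codebook[i], codebook[j], x)
            recon = [a * u + b * v for u, v in zip(codebook[i], codebook[j])]
            err = sum((r - t) ** 2 for r, t in zip(recon, x))
            if best is None or err < best[0]:
                best = (err, i, j, a, b)
    return best[1:]  # (index1, index2, coeff1, coeff2)

codebook = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
i, j, a, b = mvq_encode([2.0, 3.0, 0.0], codebook)
print(i, j, a, b)  # prints 0 1 2.0 3.0
```

The decoder needs only the two indices and the two coefficients per pixel, which is what makes the separate coding of index maps and coefficient planes described above possible.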
Novel hybrid classified vector quantization using discrete cosine transform for image compression
NASA Astrophysics Data System (ADS)
Al-Fayadh, Ali; Hussain, Abir Jaafar; Lisboa, Paulo; Al-Jumeily, Dhiya
2009-04-01
We present a novel image compression technique using a classified vector quantizer and singular value decomposition for the efficient representation of still images. The proposed method is called hybrid classified vector quantization. It involves a simple but efficient classifier-based gradient method in the spatial domain, which employs only one threshold to determine the class of the input image block, and uses three AC coefficients of the discrete cosine transform to determine the orientation of the block without employing any threshold. The proposed technique is benchmarked against standard vector quantizers generated using the k-means algorithm, standard classified vector quantizer schemes, and JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct good visual quality images with higher peak signal-to-noise ratio than the benchmarked techniques, or be competitive with them.
SAR ATR using a modified learning vector quantization algorithm
NASA Astrophysics Data System (ADS)
Marinelli, Anne Marie P.; Kaplan, Lance M.; Nasrabadi, Nasser M.
1999-08-01
We addressed the problem of classifying 10 target types in imagery formed from synthetic aperture radar (SAR). By executing a group training process, we show how to increase the performance of 10 initial sets of target templates formed by simple averaging. This training process is a modified learning vector quantization (LVQ) algorithm that was previously shown effective with forward-looking infrared (FLIR) imagery. For comparison, we ran the LVQ experiments using coarse, medium, and fine template sets that captured the target pose signature variations over 60 degrees, 40 degrees, and 20 degrees, respectively. Using sequestered test imagery, we evaluated how well the original and post-LVQ template sets classify the 10 target types. We show that after the LVQ training process, the coarse template set outperforms the coarse and medium original sets. And, for a test set that included untrained version variants, we show that classification using coarse template sets nearly matches that of the fine template sets. In a related experiment, we stored 9 initial template sets to classify 9 of the target types and used a threshold to separate the 10th type, previously found to be a 'confusing' type. We used imagery of all 10 targets in the LVQ training process to modify the 9 template sets. Overall classification performance increased slightly and an equalization of the individual target classification rates occurred, as compared to the 10-template experiment. The SAR imagery that we used is publicly available from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program, sponsored by the Defense Advanced Research Projects Agency (DARPA).
Distortion-rate models for entropy-coded lattice vector quantization.
Raffy, P; Antonini, M; Barlaud, M
2000-01-01
The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low-bit-rate domain. In order to minimize the complexity of quantization while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of the rate-distortion tradeoff. In this paper, we focus our attention on modeling the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) for fixed-rate cubic quantizers to lattices under a high-rate assumption. Second, we derive new rate models for ECLVQ that are efficient at low bit rates without any high-rate assumptions. Simulation results confirm the precision of our models. PMID:18262939
Vector Quantization of Harmonic Magnitudes in Speech Coding Applications—A Survey and New Technique
NASA Astrophysics Data System (ADS)
Chu, Wai C.
2004-12-01
A harmonic coder extracts the harmonic components of a signal and represents them efficiently using a few parameters. The principles of harmonic coding have become quite successful and several standardized speech and audio coders are based on it. One of the key issues in harmonic coder design is in the quantization of harmonic magnitudes, where many propositions have appeared in the literature. The objective of this paper is to provide a survey of the various techniques that have appeared in the literature for vector quantization of harmonic magnitudes, with emphasis on those adopted by the major speech coding standards; these include constant magnitude approximation, partial quantization, dimension conversion, and variable-dimension vector quantization (VDVQ). In addition, a refined VDVQ technique is proposed where experimental data are provided to demonstrate its effectiveness.
Synthetic aperture radar signal data compression using block adaptive quantization
NASA Technical Reports Server (NTRS)
Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian
1994-01-01
This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
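A generic BAQ round trip can be sketched as follows; the 2-bit level set and the block statistic are textbook-style assumptions, not ESA's optimized ASAR parameters:

```python
# Simplified sketch of Block Adaptive Quantization (BAQ) for raw SAR
# samples: each block's samples are normalized by a per-block magnitude
# statistic and quantized with a fixed low-bit quantizer; only the scale
# is sent as side information alongside the sample codes.

def baq_encode(block, levels):
    scale = sum(abs(s) for s in block) / len(block)  # per-block statistic
    codes = []
    for s in block:
        # index of the nearest reconstruction level for the normalized sample
        codes.append(min(range(len(levels)),
                         key=lambda i: abs(s / scale - levels[i])))
    return scale, codes

def baq_decode(scale, codes, levels):
    return [scale * levels[c] for c in codes]

levels = [-1.5, -0.5, 0.5, 1.5]          # a 2-bit uniform quantizer
block = [12.0, -3.0, 7.0, -15.0]
scale, codes = baq_encode(block, levels)
recon = baq_decode(scale, codes, levels)
print(codes)  # prints [3, 1, 2, 0]
```

Varying the number of levels per block is one way a scheme like this can trade compression ratio against image quality, as the flexible BAQ described above does.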
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
Binz, Ernst; Pods, Sonja
2006-01-04
In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It induces a natural Weyl deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.
Vector adaptive predictive coder for speech and audio
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)
1990-01-01
A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre
2016-01-01
In this article, we make a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method. PMID:27104132
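The three reported metrics have standard definitions, sketched here for clarity; the paper's exact normalizations may differ slightly:

```python
# Standard definitions of the three objective metrics named above:
#   CF  = original bits / compressed bits
#   PRD = 100 * ||x - x_rec|| / ||x||      (percentage RMS difference)
#   SNR = 10 * log10(signal energy / error energy)

import math

def metrics(x, x_rec, orig_bits, comp_bits):
    err = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    sig = sum(a * a for a in x)
    cf = orig_bits / comp_bits
    prd = 100.0 * math.sqrt(err / sig)
    snr = 10.0 * math.log10(sig / err)
    return cf, prd, snr

x = [1.0, -2.0, 3.0, -1.0]          # toy "original" signal
x_rec = [1.1, -1.9, 2.9, -1.0]      # toy reconstruction
cf, prd, snr = metrics(x, x_rec, orig_bits=4 * 12, comp_bits=16)
print(round(cf, 2), round(prd, 2), round(snr, 2))  # prints 3.0 4.47 26.99
```

Note that PRD and SNR measure the same error energy on different scales, so a lower PRD always corresponds to a higher SNR for a fixed signal.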
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
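The entropy-constrained encoding rule underlying such designs selects the codeword minimizing a Lagrangian cost J = D + λR rather than distortion alone; the codebook, probabilities, and λ values below are illustrative assumptions, not the paper's trained quantizer:

```python
# Hypothetical sketch of entropy-constrained codeword selection: pick the
# codeword minimizing J = distortion + lambda * rate, where the rate is
# approximated by the codeword's ideal variable-length code length.

import math

def ec_select(x, codebook, probs, lam):
    best, best_j = None, float("inf")
    for i, (cw, p) in enumerate(zip(codebook, probs)):
        d = sum((a - b) ** 2 for a, b in zip(x, cw))  # squared-error distortion
        r = -math.log2(p)                             # ideal code length (bits)
        j = d + lam * r
        if j < best_j:
            best, best_j = i, j
    return best

codebook = [[0.0, 0.0], [4.0, 4.0]]
probs = [0.9, 0.1]                    # index 0 is much cheaper to code
x = [2.2, 2.2]
print(ec_select(x, codebook, probs, lam=0.0),   # pure distortion: picks 1
      ec_select(x, codebook, probs, lam=5.0))   # rate-aware: picks 0
```

Sweeping λ traces out the distortion-rate tradeoff; the iterative descent design alternates this encoding rule with centroid and code-length updates until the Lagrangian cost stops decreasing.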
Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh
2004-10-01
We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each code vector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and the difference-coded mean values of the blocks are used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. PMID:15462140
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation is first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method is exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size is implemented. In the novel vector quantization method, the local fractal dimension (LFD) is used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method is employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function is used in the codebook training phase. Finally, vector quantization coding is applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the image visual quality. PMID:23049544
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, each having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
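The tree-searched traversal that gives the O(log N) behavior can be sketched as follows; the tiny depth-2 tree is an illustrative stand-in for the chip set's large codebooks:

```python
# Sketch of tree-structured VQ search: at each level the encoder compares
# the input against two node vectors and descends toward the closer one,
# reaching a leaf codevector index in O(log N) comparisons instead of the
# O(N) of a full codebook search.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def tsvq_search(node, x):
    """node is a leaf index (int) or (left_vec, left_subtree, right_vec, right_subtree)."""
    while not isinstance(node, int):
        lv, lt, rv, rt = node
        node = lt if dist2(x, lv) <= dist2(x, rv) else rt
    return node

# depth-2 tree over 4 leaf codevectors; leaves carry their codebook index
tree = ([0.0, 0.0],
        ([0.0, 0.0], 0, [0.0, 2.0], 1),
        [4.0, 4.0],
        ([4.0, 2.0], 2, [4.0, 6.0], 3))
print(tsvq_search(tree, [3.8, 5.5]))  # prints 3
```

The price of the logarithmic search is that the greedy descent is not guaranteed to reach the globally nearest leaf, which is why the chip set also supports full-searched codebooks.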
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
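The basic quantization-matrix operation the patent adapts can be sketched in a few lines; the coefficient block and matrix entries below are made-up illustrative values, not the visually optimized matrices of the invention:

```python
# Sketch of quantization-matrix coding of DCT coefficients: each
# coefficient is divided by its matrix entry and rounded, so larger
# entries (typically at high frequencies) discard more of the less
# visible components; dequantization multiplies the entries back.

def quantize(coeffs, qmatrix):
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize(codes, qmatrix):
    return [[c * q for c, q in zip(crow, qrow)]
            for crow, qrow in zip(codes, qmatrix)]

coeffs = [[312.0, -41.0], [18.0, 3.0]]     # toy 2x2 block of DCT coefficients
qmatrix = [[16, 24], [24, 40]]             # low-frequency entries quantized finely
codes = quantize(coeffs, qmatrix)
print(codes)                      # prints [[20, -2], [1, 0]]
print(dequantize(codes, qmatrix)) # prints [[320, -48], [24, 0]]
```

Adapting `qmatrix` to the image, as the method above does via luminance, contrast, and error-pooling models, changes which coefficients survive rounding and thus where the bit budget is spent.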
Vector quantization based on a psychovisual lattice for a visual subband coding scheme
NASA Astrophysics Data System (ADS)
Senane, Hakim; Saadane, Abdelhakim; Barba, Dominique
1997-01-01
A vector quantization based on a psychovisual lattice is used in a visual-components image coding scheme to achieve a high compression ratio with excellent visual quality. The vector construction methodology preserves the main properties of the human visual system concerning the perception of quantization impairments and takes into account the masking effect due to interaction between subbands with the same radial frequency but different orientations. The vector components are the local band-limited contrasts Cij, defined as the ratio between the luminance Lij at a point belonging to radial subband i and angular sector j, and the average luminance at this location corresponding to the radial frequencies up to subband i-1. Hence the vector dimension depends on the orientation selectivity of the chosen decomposition. The low-pass subband, which is nondirectional, is scalar quantized. The performance of the coding scheme has been evaluated on a set of images in terms of peak SNR, true bit rates, and visual quality: no impairments are visible at a distance of 4 times the height of a high-quality TV monitor. The SNR is about 6 to 8 dB below that of classical subband image coding schemes producing the same visual quality. Due to the use of the local band-limited contrast, the particularity of this approach lies in the structure of the reconstruction error, which is found to be highly correlated with the structure of the original image.
NASA Astrophysics Data System (ADS)
Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo
2010-01-01
With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a technique that allows large reductions in data storage and computational effort. One recent VQ technique that handles the poor estimation of vector centroids caused by biased, undersampled data is fuzzy declustering-based vector quantization (FDVQ). In this paper, we therefore propose and justify an FDVQ-based hidden Markov model (HMM) and investigate its effectiveness and efficiency in the classification of genotype-image phenotypes. We evaluate and compare the recognition accuracy of the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM). The experimental results show that the performance of FDVQ-HMM and LBG-HMM is very similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique for RNAi genome-wide screening image data.
NASA Astrophysics Data System (ADS)
Dandawate, Yogesh H.; Joshi, Madhuri A.; Umrani, Shrirang
Nowadays all medical imaging equipment produces digital images, and as non-invasive techniques become cheaper, image databases grow larger. These archives increase to significant size, and in telemedicine-based applications, storage and transmission require large memory and bandwidth. There is a need for compression to save memory space and allow fast transmission over the internet and 3G mobile networks, with good decompressed image quality even though the compression is lossy. This paper presents a novel approach for designing an enhanced vector quantizer that uses Kohonen's self-organizing neural network. The vector quantizer (codebook) is designed by training on a carefully designed training image using a selective training approach. Compressing images with this codebook gives better quality. The quality of the decompressed images is evaluated using various quality measures along with the conventionally used PSNR.
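A minimal sketch of competitive (Kohonen-style) codebook training as described: present training vectors, find the best-matching codeword, and pull it toward the input. A real SOM also uses a neighborhood function and a decaying learning rate, omitted here; the data and parameters are toy values.

```python
import random

def train_codebook(vectors, k, epochs=50, lr=0.2, seed=0):
    """Competitive training of a k-entry codebook on 2-D training vectors."""
    rng = random.Random(seed)
    codebook = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(epochs):
        for v in vectors:
            # find the best-matching codeword (squared Euclidean distance)
            w = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))
            # move the winner a fraction lr of the way toward the input
            codebook[w] = [c + lr * (x - c) for c, x in zip(codebook[w], v)]
    return codebook

data = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]]
cb = train_codebook(data, k=2)   # one codeword settles near each cluster
```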
Simple, fast codebook training algorithm by entropy sequence for vector quantization
NASA Astrophysics Data System (ADS)
Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde
2001-09-01
Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. We present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of the region sequence is employed as the stopping condition instead. Compared with the well-known LBG algorithm, it is simple, fast, and easy to understand and control. We test the performance of the algorithm on the typical test images Lena and Barb. The results show that the PSNR difference between this algorithm and LBG is less than 0.1 dB, while its running time is a small fraction of LBG's.
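A toy sketch of the proposed stopping rule: run ordinary LBG iterations (nearest-codeword partition, then centroid update) but stop when the entropy of the region-size distribution stops changing, rather than waiting for the distortion sequence to converge. The initialization and data are made-up simplifications, not the paper's procedure.

```python
import math

def partition_entropy(counts, n):
    """Shannon entropy (bits) of the fraction of vectors in each region."""
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def lbg_entropy_stop(vectors, k, tol=1e-6, max_iter=100):
    codebook = [list(v) for v in vectors[:k]]   # naive init for the sketch
    prev_h = None
    for _ in range(max_iter):
        regions = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(codebook[j], v)))
            regions[i].append(v)
        h = partition_entropy([len(r) for r in regions], len(vectors))
        for i, r in enumerate(regions):            # centroid update (LBG step)
            if r:
                codebook[i] = [sum(x) / len(r) for x in zip(*r)]
        if prev_h is not None and abs(h - prev_h) < tol:
            break                                  # entropy sequence converged
        prev_h = h
    return codebook
```

Comparing region-size entropies is cheaper than evaluating the total distortion at every pass, which is the source of the claimed speedup.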
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
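The level-by-level decomposition above can be read as residual VQ: each level quantizes what the previous levels left behind, and the final residual is coded losslessly. A toy Python sketch under that reading; the per-level full-search VQ is just a nearest-codeword search here, and the codebooks are made-up values.

```python
def nearest(codebook, v):
    """Full-search VQ: the codeword minimizing squared Euclidean distance."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))

def progressive_encode(v, codebooks):
    residual, codes = list(v), []
    for cb in codebooks:                 # one VQ stage per decomposition level
        c = nearest(cb, residual)
        codes.append(c)
        residual = [r - ci for r, ci in zip(residual, c)]
    return codes, residual               # residual would be losslessly coded

def progressive_decode(codes):
    out = [0.0] * len(codes[0])
    for c in codes:                      # quality improves as levels are added
        out = [o + ci for o, ci in zip(out, c)]
    return out

codebooks = [[[0.0, 0.0], [4.0, 4.0]],
             [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]]
codes, residual = progressive_encode([4.6, 5.2], codebooks)
```

Adding the losslessly coded final residual to the decoded sum recovers the input exactly, which is what enables lossless reconstruction at the last level.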
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria corresponding to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimulus. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into five sections. The first will briefly highlight the importance of newer and better compression algorithms; it will also explain the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision when confronted with ocular stimulus. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in section 5. The third section concentrates on the most important data gathered on brightness models, addressing the building of so-called brightness maps (a quantification of human perception of visible-object reflectance) in a bi-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness will be treated, in an applicable discrete form, in the fourth section. As we have learned from previous work [1], the Hermite transform has been shown to be a useful and practical solution for efficiently coding the energy within an image block, deciding which kind of quantization (scalar or vector) is to be used upon it. It will also be
Fuzzy Adaptive Quantized Control for a Class of Stochastic Nonlinear Uncertain Systems.
Liu, Zhi; Wang, Fang; Zhang, Yun; Chen, C L Philip
2016-02-01
In this paper, a fuzzy adaptive approach for stochastic strict-feedback nonlinear systems with a quantized input signal is developed. In contrast to existing research on the quantized-input problem, which focuses on quantized stabilization, this paper considers the quantized tracking problem, which recovers stabilization as a special case. In addition, uncertain nonlinearity and unknown stochastic disturbances are simultaneously considered in the quantized feedback control system. By putting forward a new nonlinear decomposition of the quantized input, the relationship between the control signal and the quantized signal is established; as a result, the major technical difficulty arising from the piecewise quantized input is overcome. Based on the universal approximation capability of fuzzy logic systems, a novel fuzzy adaptive tracking controller is constructed via the backstepping technique. The proposed controller guarantees that the tracking error converges to a neighborhood of the origin in the sense of probability and that all signals in the closed-loop system remain bounded in probability. Finally, an example illustrates the effectiveness of the proposed control approach. PMID:25751885
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s, while requiring only 3 to 4 million multiplications and additions per second. It combines the advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing a systolic array architecture for Vector Quantization (VQ) is disclosed for both full-search and tree-search modes. For tree-searched VQ, the special case of Binary Tree-Search VQ (BTSVQ) is disclosed, with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is also disclosed, which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE.
Light-Front BRST Quantization of the Vector Schwinger Model with a Photon Mass Term
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.; Sharma, Lalit Kumar
2014-12-01
The vector Schwinger model with a mass term for the photon, describing 2D electrodynamics with massless fermions, studied by us recently (UK, Mod. Phys. Lett. A22, 2993 (2007); PoS LC2008, 008 (2008); UK and DSK, Int. J. Mod. Phys. A22, 6183 (2007); UK, Mod. Phys. Lett. A27, 1250157 (2012)), represents a new class of models. This theory becomes gauge-invariant when studied on the light-front, in contrast to the instant-form theory, which is gauge-non-invariant. The light-front Hamiltonian and path integral quantization of this theory was studied recently by one of us (UK, Mod. Phys. Lett. A27 (No. 27), 1250157 (2012)). In the present work we study the light-front Becchi-Rouet-Stora-Tyutin (BRST) quantization of this theory under appropriate light-cone BRST gauge-fixing. Here the BRST (gauge) symmetry of the theory is maintained even under BRST gauge-fixing, in contrast to its Hamiltonian and path integral quantization, where the gauge symmetry of the theory necessarily gets broken under gauge-fixing.
NASA Astrophysics Data System (ADS)
Pan, Zhibin; Kotani, Koji; Ohmi, Tadahiro
The encoding process of finding the best-matched codeword (winner) for a given input vector in image vector quantization (VQ) is computationally very expensive due to the many k-dimensional Euclidean distance computations it requires. In order to speed up VQ encoding, it is beneficial to first estimate how large the Euclidean distance between the input vector and a candidate codeword is, using appropriate low-dimensional features of a vector instead of an immediate Euclidean distance computation. If the estimated Euclidean distance is large enough, the current candidate codeword cannot be the winner, so it can be rejected safely, avoiding the actual Euclidean distance computation. The sum (1-D), L2 norm (1-D), and partial sums (2-D) of a vector are used together as the features in this paper because they are the three simplest features. Four estimations of the Euclidean distance between the input vector and a codeword are then connected to each other by the Cauchy-Schwarz inequality to realize codeword rejection. For typical standard images with very different detail (Lena, F-16, Pepper, and Baboon), the remaining must-do actual Euclidean distance computations are reduced markedly, and the total computational cost including all overhead is also clearly lower than that of the state-of-the-art EEENNS method, while keeping a PSNR equivalent to full search (FS).
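The rejection idea can be illustrated with two of the features named above, the sum and the L2 norm, each of which gives a cheap lower bound on the true squared Euclidean distance (the first via the Cauchy-Schwarz inequality, the second via the reverse triangle inequality). This is a sketch of the safe-rejection test only, not the paper's exact four-estimation, EEENNS-style pipeline.

```python
import math

def full_dist2(x, c):
    """Full k-dimensional squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(x, c))

def lower_bound2(x, c):
    """Cheap lower bounds on full_dist2 from 1-D features:
    (Sx - Sc)^2 / k   <= ||x - c||^2  (Cauchy-Schwarz on the differences)
    (||x|| - ||c||)^2 <= ||x - c||^2  (reverse triangle inequality)."""
    k = len(x)
    s_bound = (sum(x) - sum(c)) ** 2 / k
    n_bound = (math.sqrt(sum(a * a for a in x))
               - math.sqrt(sum(b * b for b in c))) ** 2
    return max(s_bound, n_bound)

def may_reject(x, c, best_dist2):
    """Safe rejection: if even the lower bound exceeds the best distance
    found so far, codeword c cannot be the winner for input x."""
    return lower_bound2(x, c) > best_dist2
```

In practice the sums and norms of all codewords are precomputed once, so each rejection test costs O(1) instead of the O(k) full distance.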
NASA Astrophysics Data System (ADS)
Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong
2016-05-01
The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal micro-laser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require spectral peak counterparts, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
Reconfigurable VLSI implementation for learning vector quantization with on-chip learning circuit
NASA Astrophysics Data System (ADS)
Zhang, Xiangyu; An, Fengwei; Chen, Lei; Jürgen Mattausch, Hans
2016-04-01
As an alternative to conventional single-instruction-multiple-data (SIMD) mode solutions with massive parallelism for self-organizing-map (SOM) neural network models, this paper reports a memory-based proposal for the learning vector quantization (LVQ), which is a variant of SOM. A dual-mode LVQ system, enabling both on-chip learning and classification, is implemented by using a reconfigurable pipeline with parallel p-word input (R-PPPI) architecture. As a consequence of the reuse of R-PPPI for solving the most severe computational demands in both modes, power dissipation and Si-area consumption can be dramatically reduced in comparison to previous LVQ implementations. In addition, the designed LVQ ASIC has high flexibility with respect to feature-vector dimensionality and reference-vector number, allowing the execution of many different machine-learning applications. The fabricated test chip in 180 nm CMOS with parallel 8-word inputs and 102 K-bit on-chip memory achieves low power consumption of 66.38 mW (at 75 MHz and 1.8 V) and high learning speed of (R + 1) × \\lceil d/8 \\rceil + 10 clock cycles per d-dimensional sample vector where R is the reference-vector number.
Combining nonlinear multiresolution system and vector quantization for still image compression
Wong, Y.
1993-12-17
It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear: while these systems are rigorous, nonlinear features in the signal cannot be exploited within a single entity for compression. Linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ that allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level, so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
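A 1-D toy sketch of the edge-preserving pyramid level described above: the coarse signal is a median-filtered, downsampled copy, and the detail signal is the residual needed for exact reconstruction, which stays zero away from edges. The filter width, upsampling rule, and data are illustrative simplifications, not the paper's exact construction.

```python
def median3(x):
    """Median filter of width 3 with edge replication (edge-preserving)."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(x))]

def pyramid_level(x):
    """One pyramid level: median-smooth, downsample by 2, keep the residual."""
    coarse = median3(x)[::2]
    up = [coarse[min(i // 2, len(coarse) - 1)] for i in range(len(x))]
    detail = [a - b for a, b in zip(x, up)]      # small, localized at edges
    return coarse, detail

def reconstruct(coarse, detail):
    """Exact inverse: upsample the coarse signal and add the detail."""
    up = [coarse[min(i // 2, len(coarse) - 1)] for i in range(len(detail))]
    return [u + d for u, d in zip(up, detail)]
```

Because the median preserves step edges, the residual concentrates at edge locations instead of smearing across them as it would with a linear low-pass filter.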
Online learning vector quantization: a harmonic competition approach based on conservation network.
Wang, J H; Sun, W D
1999-01-01
This paper presents a self-creating neural network in which a conservation principle is incorporated into the competitive learning algorithm to harmonize equi-probable and equi-distortion criteria. Each node is associated with a measure of vitality, which is updated after each input presentation. The total amount of vitality in the network at any time is 1, hence the name conservation. Competitive learning based on the vitality conservation principle is near-optimum, in the sense that the problem of trapping in a local minimum is alleviated by adding perturbations to the learning rate during node generation. Combined with a procedure that redistributes the learning-rate variables after the generation and removal of nodes, the competitive conservation strategy provides a novel approach to harmonizing equi-error and equi-probable criteria. The training process is smooth and incremental; it not only achieves the biologically plausible learning property but also facilitates systematic derivation of training parameters. Comparison studies on learning vector quantization involving stationary and nonstationary, structured and nonstructured inputs demonstrate that the proposed network outperforms other competitive networks in terms of quantization error, learning speed, and codeword search efficiency. PMID:18252343
Bradley, J.N.; Brislawn, C.M.
1992-04-11
This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.
Compression of color facial images using feature correction two-stage vector quantization.
Huang, J; Wang, Y
1999-01-01
A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128 x 128 24-b color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly using a first order Huffman coder, this size is further reduced to about 450 bytes. PMID:18262869
NASA Astrophysics Data System (ADS)
An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans
2015-04-01
This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time consuming nearest Euclidean distance search in the LVQ algorithm’s competition layer is efficiently implemented as a pipeline with parallel p-word input. Since neuron number in the competition layer, weight values, input and output number are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × \\lceil d/p \\rceil + R clock cycles, where R is the pipeline depth, and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.
NASA Astrophysics Data System (ADS)
Huang, Bormin; Wei, Shih-Chieh; Huang, Allen H.-L.; Smuga-Otto, Maciek; Knuteson, Robert; Revercomb, Henry E.; Smith, William L., Sr.
2007-09-01
The Geostationary Imaging Fourier Transform Spectrometer (GIFTS), as part of NASA's New Millennium Program, is an advanced instrument to provide high-temporal-resolution measurements of atmospheric temperature and water vapor, which will greatly facilitate the detection of rapid atmospheric changes associated with destructive weather events, including tornadoes, severe thunderstorms, flash floods, and hurricanes. The Committee on Earth Science and Applications from Space under the National Academy of Sciences recommended that NASA and NOAA complete the fabrication, testing, and space qualification of the GIFTS instrument and that they support the international effort to launch GIFTS by 2008. Lossless data compression is critical for the overall success of the GIFTS experiment, or any other very high data rate experiment where the data is to be disseminated to the user community in real-time and archived for scientific studies and climate assessment. In general, lossless data compression is needed for high data rate hyperspectral sounding instruments such as GIFTS for (1) transmitting the data down to the ground within the bandwidth capabilities of the satellite transmitter and ground station receiving system, (2) compressing the data at the ground station for distribution to the user community (as is traditionally performed with GOES data via satellite rebroadcast), and (3) archival of the data without loss of any information content so that it can be used in scientific studies and climate assessment for many years after the date of the measurements. In this paper we study lossless compression of GIFTS data that has been collected as part of the calibration or ground based tests that were conducted in 2006. The predictive partitioned vector quantization (PPVQ) is investigated for higher lossless compression performance. PPVQ consists of linear prediction, channel partitioning and vector quantization. It yields an average compression ratio of 4.65 on the GIFTS test
Speech recognition method based on genetic vector quantization and BP neural network
NASA Astrophysics Data System (ADS)
Gao, Li'ai; Li, Lihua; Zhou, Jian; Zhao, Qiuxia
2009-07-01
Vector quantization is one of the popular codebook design methods for speech recognition at present. In codebook design, the traditional LBG algorithm has the advantage of fast convergence, but it easily falls into local optima and is influenced by the initial codebook. Since the genetic algorithm is capable of finding globally optimal results, this paper proposes a hybrid clustering method, GA-L, based on the genetic algorithm and the LBG algorithm, to improve the codebook, and then uses genetic neural networks for speech recognition, thereby searching for a globally optimal codebook of the training vector space. The experiments show that a neural network identification method based on the genetic algorithm can escape local maxima and the restrictions of initialization; it outperforms the standard genetic algorithm and the BP neural network algorithm on data from various sources, and the genetic BP neural network has a higher recognition rate and unique application advantages over the general BP neural network with the same GA-VQ codebook, achieving a win-win in time and efficiency.
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally deriving its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we briefly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by the off-diagonal decay of the entries of the wavelet stiffness matrix. We propose a new approach that better utilizes the actual decay of the matrix entries.
Optimization of ion-exchange protein separations using a vector quantizing neural network.
Klein, E J; Rivera, S L; Porter, J E
2000-01-01
In this work, a previously proposed methodology for the optimization of analytical scale protein separations using ion-exchange chromatography is subjected to two challenging case studies. The optimization methodology uses a Doehlert shell design for design of experiments and a novel criteria function to rank chromatograms in order of desirability. This chromatographic optimization function (COF) accounts for the separation between neighboring peaks, the total number of peaks eluted, and total analysis time. The COF is penalized when undesirable peak geometries (i.e., skewed and/or shouldered peaks) are present as determined by a vector quantizing neural network. Results of the COF analysis are fit to a quadratic response model, which is optimized with respect to the optimization variables using an advanced Nelder and Mead simplex algorithm. The optimization methodology is tested on two case study sample mixtures, the first of which is composed of equal parts of lysozyme, conalbumin, bovine serum albumin, and transferrin, and the second of which contains equal parts of conalbumin, bovine serum albumin, tranferrin, beta-lactoglobulin, insulin, and alpha -chymotrypsinogen A. Mobile-phase pH and gradient length are optimized to achieve baseline resolution of all solutes for both case studies in acceptably short analysis times, thus demonstrating the usefulness of the empirical optimization methodology. PMID:10835256
Optimization of a vector quantization codebook for objective evaluation of surgical skill.
Kowalewski, Timothy M; Rosen, Jacob; Chang, Lily; Sinanan, Mika N; Hannaford, Blake
2004-01-01
Surgical robotic systems and virtual reality simulators have introduced an unprecedented precision of measurement for both tool-tissue and tool-surgeon interaction, thus holding promise for more objective analyses of surgical skill. Integrative or averaged metrics such as path length, time-to-task, and success/failure percentages have often been employed toward this end, but these fail to address the processes associated with a surgical task as a dynamic phenomenon. Stochastic tools such as Markov modeling using a 'white-box' approach have proven amenable to this type of analysis. While such an approach reveals the internal structure of the surgical task as a process, it requires a task decomposition based on expert knowledge, which may result in a relatively large and complex model. In this work, a 'black-box' approach is developed with generalized cross-procedural applications. The model is characterized by a compact topology, abstract state definitions, and an optimized codebook size. Data sets of isolated tasks were extracted from the Blue DRAGON database, consisting of 30 surgical subjects stratified into six training levels. Vector quantization (VQ) was employed on the entire database, thus synthesizing a lexicon of discrete, task-independent surgical tool/tissue interactions. VQ successfully established a dictionary of 63 surgical code words and displayed non-temporal skill discrimination. VQ allows for a more cross-procedural analysis without relying on a thorough study of the procedure, links the results of the black-box approach to observable phenomena, and reduces the computational cost of the analysis by discretizing a complex, continuous data space. PMID:15544266
Wu, Xiao-hong; Cai, Pei-qiang; Wu, Bin; Sun, Jun; Ji, Gang
2016-03-01
To solve the noise-sensitivity problem of fuzzy learning vector quantization (FLVQ), unsupervised possibilistic fuzzy learning vector quantization (UPFLVQ) is proposed, based on unsupervised possibilistic fuzzy clustering (UPFC). UPFLVQ uses the fuzzy membership values and typicality values of UPFC to update the learning rate of the learning vector quantization network and the cluster centers. UPFLVQ is an unsupervised machine learning algorithm and can classify without learning samples. UPFLVQ was applied to the identification of lettuce varieties by near-infrared spectroscopy (NIS). Short-wave and long-wave near-infrared spectra of three types of lettuce were collected with a FieldSpec@3 portable spectrometer in the wavelength range of 350-2,500 nm. When the near-infrared spectra were compressed by principal component analysis (PCA), the first three principal components explained 97.50% of the total variance. After fuzzy c-means (FCM) clustering was performed, its cluster centers served as the initial cluster centers of UPFLVQ, which could then classify lettuce varieties using the terminal fuzzy membership values and typicality values. The experimental results showed that UPFLVQ together with NIS provides an effective method for identifying lettuce varieties, with advantages such as fast testing, a high accuracy rate, and non-destructive operation. UPFLVQ is a clustering algorithm combining UPFC and FLVQ, and it needs no learning samples for the identification of lettuce varieties by NIS. It is suitable for clustering linearly separable data and provides a novel method for fast and nondestructive identification of lettuce varieties. PMID:27400511
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.
2016-07-01
In this talk, we study the light-front quantization of the vector Schwinger model with photon mass term in Faddeevian Regularization, describing two-dimensional electrodynamics with mass-less fermions but with a mass term for the U(1) gauge field. This theory is gauge-non-invariant (GNI). We construct a gauge-invariant (GI) theory using Stueckelberg mechanism and then recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions (GFC's). We then study LFQ of this new GI theory.
Lossless compression of weight vectors from an adaptive filter
Bredemann, M.V.; Elliott, G.R.; Stearns, S.D.
1994-08-01
Techniques for lossless waveform compression can be applied to the transmission of weight vectors from an orbiting satellite. The vectors, which are a part of a hybrid analog/digital adaptive filter, are a representation of the radio frequency background seen by the satellite. An approach is used which treats each adaptive weight as a time-varying waveform.
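One simple instance of the waveform-compression idea above: treat each weight's trajectory over time as a waveform and code it losslessly by first-order prediction, transmitting only the differences between successive values. This is a generic delta-coding sketch, not the report's specific predictor; integer-valued samples are assumed so the round trip is exact.

```python
def delta_encode(samples):
    """First-order prediction: transmit differences between successive values."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(residuals):
    """Exact inverse: running sum of the transmitted residuals."""
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```

Because adaptive weights change slowly between updates, the residuals are small and cluster near zero, which is what makes subsequent entropy coding effective.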
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Mohammad A.
2005-06-01
We obtain a novel analytical derivation for distortion-related constraints in a neural network- (NN)-based automatic target recognition (ATR) system. We obtain two types of constraints for a realistic ATR system implementation involving 4-f correlator architecture. The first constraint determines the relative size between the input objects and input correlation filters. The second constraint dictates the limits on amount of rotation, translation, and scale of input objects for system implementation. We exploit these constraints in recognition of targets varying in rotation, translation, scale, occlusion, and the combination of all of these distortions using a learning vector quantization (LVQ) NN. We present the simulation verification of the constraints using both the gray-scale images and Defense Advanced Research Projects Agency's (DARPA's) Moving and Stationary Target Recognition (MSTAR) synthetic aperture radar (SAR) images with different depression and pose angles.
An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming
2016-01-01
We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements come from representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using an RVM-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets and achieve very high accuracies of 92.65% and 97.62%, respectively, significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool for future proteomics research. PMID:27314023
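A sketch of the 5-fold cross-validation protocol used to evaluate classifiers such as RVM-LPQ: the dataset is split into five folds, each fold serves once as the test set, and per-fold accuracies are averaged. The classifier here is a deliberate stand-in (a majority-class predictor); the RVM, LPQ, and PCA stages of the paper are not reproduced.

```python
# Sketch of k-fold cross-validation with a stand-in classifier.

def k_fold_indices(n, k=5):
    """Partition range(n) into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, k=5):
    accs = []
    for test_idx in k_fold_indices(len(X), k):
        train_idx = [i for i in range(len(X)) if i not in set(test_idx)]
        train_y = [y[i] for i in train_idx]
        majority = max(set(train_y), key=train_y.count)  # stand-in "model"
        correct = sum(1 for i in test_idx if y[i] == majority)
        accs.append(correct / len(test_idx))
    return sum(accs) / k
```

Replacing the stand-in with any real model (RVM, SVM, ...) leaves the protocol unchanged, which is what makes the reported accuracies comparable across classifiers.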
Recognition of Manual Actions Using Vector Quantization and Dynamic Time Warping
NASA Astrophysics Data System (ADS)
Martin, Marcel; Maycock, Jonathan; Schmidt, Florian Paul; Kramer, Oliver
The recognition of manual actions, i.e., hand movements, hand postures and gestures, plays an important role in human-computer interaction, while belonging to a category of particularly difficult tasks. Using a Vicon system to capture 3D spatial data, we investigate the recognition of manual actions in tasks such as pouring a cup of milk and writing into a book. We propose recognizing sequences in multidimensional time-series by first learning a smooth quantization of the data, and then using a variant of dynamic time warping to recognize short sequences of prototypical motions in a long unknown sequence. An experimental analysis validates our approach. Short manual actions are successfully recognized and the approach is shown to be spatially invariant. We also show that the approach speeds up processing while not decreasing recognition performance.
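A minimal sketch of the two-stage idea above: input frames are first mapped to their nearest prototype (vector quantization), and sequences are then compared with dynamic time warping (DTW), which tolerates differences in execution speed. The 1-D samples and codebook below are simplifying assumptions; the paper works with multidimensional Vicon data and a learned smooth quantization.

```python
# Sketch: vector quantization of samples followed by DTW matching.

def quantize(seq, codebook):
    """Map each sample to the index of the nearest codebook entry."""
    return [min(range(len(codebook)), key=lambda i: abs(x - codebook[i]))
            for x in seq]

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    INF = float("inf")
    d = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    d[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[len(a)][len(b)]
```

Because DTW may repeat or skip samples during alignment, two executions of the same motion at different speeds still receive a small distance.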
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
Adaptive support vector regression for UAV flight control.
Shin, Jongho; Jin Kim, H; Kim, Youdan
2011-01-01
This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model. PMID:20970303
2013-01-01
Background Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems involving imbalanced biomedical data. However, the existing over-sampling methods achieve only slightly better, or sometimes worse, results than the simplest SMOTE. To improve the effectiveness of SMOTE, this paper presents a novel over-sampling method that uses codebooks obtained by learning vector quantization. In general, even when an existing SMOTE variant is applied to a biomedical dataset, the empty feature space remains so large that most classification algorithms cannot estimate the borderlines between classes well. To tackle this problem, our over-sampling method generates synthetic samples that occupy more of the feature space than other SMOTE algorithms. In short, our method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Results Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE with four of five standard classification algorithms. Moreover, the performance of our method increases further when the latest SMOTE variant, MWMOTE, is used in our algorithm. Experiments on datasets for β-turn type prediction reveal some important patterns that have not been seen in previous analyses. Conclusions The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Moreover, it is compatible with basic classification algorithms and the existing over-sampling methods. PMID:24088532
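A sketch of the core SMOTE step that the proposed method builds on: a synthetic minority sample is drawn on the line segment between a minority sample and one of its minority-class nearest neighbours. The codebook-guided placement of the paper's method is not reproduced; the brute-force nearest neighbour below is an illustrative simplification.

```python
# Sketch of SMOTE-style synthetic minority over-sampling.
import random

def smote_point(x, neighbor, rng):
    """Interpolate: x + gap * (neighbor - x) with gap in [0, 1)."""
    gap = rng.random()
    return tuple(a + gap * (b - a) for a, b in zip(x, neighbor))

def smote(minority, n_new, rng=None):
    rng = rng or random.Random(0)   # seeded for reproducibility
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # nearest other minority sample (brute force, squared Euclidean)
        nb = min((m for m in minority if m != x),
                 key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))
        out.append(smote_point(x, nb, rng))
    return out
```

Every synthetic point lies between two real minority samples, so the method densifies the minority region without inventing values outside it.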
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 19.5:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
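A minimal sketch of the vector-quantization step described above: each pixel's vector of channel values is replaced by the index of the nearest codebook vector, and the image is reconstructed from those indices. Codebook training (e.g. LBG/k-means) and the follow-on Huffman stage are omitted; the tiny two-entry codebook is purely illustrative.

```python
# Sketch: VQ encode/decode of multispectral pixel vectors plus RMS error.

def nearest(vec, codebook):
    """Index of the codebook vector closest in squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vec, codebook[i])))

def vq_encode(pixels, codebook):
    return [nearest(p, codebook) for p in pixels]

def vq_decode(indices, codebook):
    return [codebook[i] for i in indices]

def rms_error(orig, recon):
    n = sum(len(p) for p in orig)
    sq = sum((a - b) ** 2 for p, q in zip(orig, recon) for a, b in zip(p, q))
    return (sq / n) ** 0.5
```

The compression ratio follows from replacing one multi-channel vector with a single codebook index, and the RMS error measures what that substitution costs.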
Jamaloo, Fatemeh; Mikaeili, Mohammad
2015-01-01
Common spatial pattern (CSP) is a method commonly used to enhance the effects of event-related desynchronization and event-related synchronization present in multichannel electroencephalogram-based brain-computer interface (BCI) systems. In the present study, a novel CSP sub-band feature selection is proposed based on the discriminative information of the features. In addition, a distinction-sensitive learning vector quantization-based weighting of the selected features is considered. Finally, after classification of the weighted features using a support vector machine classifier, the performance of the suggested method is compared with existing methods based on frequency band selection, on the same BCI competition datasets. The results show that the proposed method yields superior results on the “ay” subject dataset compared against existing approaches such as sub-band CSP, filter bank CSP (FBCSP), discriminative FBCSP, and sliding window discriminative CSP. PMID:26284171
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm, referred to as the online sequential projection vector machine (OSPVM), derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparisons were made on various high-dimensional classification problems for OSPVM against other fast online algorithms, including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results demonstrate the superior generalization performance and efficiency of OSPVM. PMID:27143958
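A sketch of the "adaptive data mean update" ingredient: when data arrive chunk by chunk, the running mean used for data centering can be updated in closed form rather than recomputed from all past samples. This shows only the mean update, not OSPVM's projection-vector or network updates.

```python
# Sketch: exact online update of a per-feature mean over data chunks.

def update_mean(mean, n, chunk):
    """Combine the mean of n past samples with a new chunk of samples.

    mean  : list of per-feature means over the n samples seen so far
    chunk : list of new sample tuples
    Returns the updated mean and the new sample count.
    """
    m = len(chunk)
    total = [n * mu + sum(col) for mu, col in zip(mean, zip(*chunk))]
    return [t / (n + m) for t in total], n + m
```

Starting from a zero mean and `n = 0`, feeding chunks in any order reproduces the batch mean exactly, which is what lets centering, and everything downstream of it, run in a single pass.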
Trypanosoma cruzi: adaptation to its vectors and its hosts
Noireau, François; Diosque, Patricio; Jansen, Ana Maria
2009-01-01
American trypanosomiasis is a parasitic zoonosis that occurs throughout Latin America. The etiological agent, Trypanosoma cruzi, is able to infect almost all tissues of its mammalian hosts and spreads in the environment through multifarious transmission cycles that may or may not be connected. This biological plasticity, which is probably the result of the considerable heterogeneity of the taxon, exemplifies the successful adaptation of a parasite, resulting in distinct outcomes of infection and a complex epidemiological pattern. In the 1990s, most endemic countries strengthened national control programs to interrupt the transmission of this parasite to humans. However, many obstacles remain to the effective control of the disease. Current knowledge of the different components involved in the elaborate system that is American trypanosomiasis (the protozoan parasite T. cruzi, the triatomine vectors and the many reservoirs of infection), as well as the interactions within the system, is still incomplete. The Triatominae probably evolved from predatory reduviids in response to the availability of vertebrate food sources. However, the basic mechanisms of adaptation of some of them to artificial ecotopes remain poorly understood. Nevertheless, these adaptations seem to be associated with behavioral plasticity, a reduction in the genetic repertoire and increasing developmental instability. PMID:19250627
Quantization of Electromagnetic Fields in Cavities
NASA Technical Reports Server (NTRS)
Kakazu, Kiyotaka; Oshiro, Kazunori
1996-01-01
A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
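A hedged sketch of the standard structure such a cavity quantization rests on (the paper's decomposition formula and explicit vector mode functions are not reproduced): for a rectangular cavity with perfect-conductor walls of side lengths $L_x$, $L_y$, $L_z$, the mode frequencies and the Hamiltonian obtained after expanding the field in the vector mode functions take the familiar forms

```latex
% Mode frequencies of a rectangular perfect-conductor cavity
\omega_{\mathbf{n}} = c\pi\sqrt{\left(\frac{n_x}{L_x}\right)^{2}
                              + \left(\frac{n_y}{L_y}\right)^{2}
                              + \left(\frac{n_z}{L_z}\right)^{2}},
\qquad n_x, n_y, n_z = 0, 1, 2, \ldots

% Quantized electromagnetic Hamiltonian (sigma labels polarization)
H = \sum_{\mathbf{n},\sigma} \hbar\,\omega_{\mathbf{n}}
    \left(a^{\dagger}_{\mathbf{n}\sigma}\, a_{\mathbf{n}\sigma}
          + \tfrac{1}{2}\right)
```

where at most one of the indices $n_x, n_y, n_z$ may vanish for a nontrivial mode.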
Caljon, Guy; De Muylder, Géraldine; Durnez, Lies; Jennes, Wim; Vanaerschot, Manu; Dujardin, Jean-Claude
2016-09-01
In the present review, we aim to provide a general introduction to different facets of the arms race between pathogens and their hosts/environment, emphasizing its evolutionary aspects. We focus on vector-borne parasitic protozoa, which have to adapt to both invertebrate and vertebrate hosts. Using Leishmania, Trypanosoma and Plasmodium as main models, we review successively (i) the adaptations and counter-adaptations of parasites and their invertebrate host, (ii) the adaptations and counter-adaptations of parasites and their vertebrate host and (iii) the impact of human interventions (chemotherapy, vaccination, vector control and environmental changes) on these adaptations. We conclude by discussing the practical impact this knowledge can have on translational research and public health. PMID:27400870
NASA Astrophysics Data System (ADS)
Faizal, Mir
2013-12-01
In this Letter we will analyze the creation of the multiverse. We will first calculate the wave function for the multiverse using third quantization. Then we will fourth-quantize this theory. We will show that there is no single vacuum state for this theory. Thus, we can end up with a multiverse, even after starting from a vacuum state. This will be used as a possible explanation for the creation of the multiverse. We also analyze the effect of interactions in this fourth-quantized theory.
NASA Astrophysics Data System (ADS)
Huang, Bormin; Wei, Shih-Chieh; Huang, Hung-Lung; Smith, William L.; Bloom, Hal J.
2008-08-01
As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128x128 array of interferograms for the retrieval of geophysical parameters such as atmospheric temperature, moisture, and wind. Given the massive data volume that would be generated by future advanced satellite sensors such as GIFTS, even state-of-the-art channel coding (e.g., Turbo codes, LDPC) with a low BER might not correct all the errors. Due to the error-sensitive, ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed fast precomputed vector quantization (FPVQ) with arithmetic coding (AC), which produces high compression gain for ground operation. In this paper we pair FPVQ with reversible variable-length coding (RVLC) to provide better resilience against satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of GIFTS data. The experiment shows that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.
Local adaptation to temperature and the implications for vector-borne diseases.
Sternberg, Eleanore D; Thomas, Matthew B
2014-03-01
Vector life-history traits and parasite development respond in strongly nonlinear ways to changes in temperature. These thermal sensitivities create the potential for climate change to have a marked impact on disease transmission. To date, most research considering impacts of climate change on vector-borne diseases assumes that all populations of a given parasite or vector species respond similarly to temperature, regardless of their source population. This may be an inappropriate assumption because spatial variation in selective pressures such as temperature can lead to local adaptation. We examine evidence for local adaptation in disease vectors and present conceptual models for understanding how local adaptation might modulate the effects of both short- and long-term changes in climate. PMID:24513566
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain a target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates so as to search locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores each bit-vector together with the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality, making it an improvement over state-of-the-art approaches for object retrieval.
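A minimal sketch of the binary-quantization idea: a real-valued descriptor is reduced to a bit-vector by thresholding each dimension (here against the descriptor's own median, an illustrative assumption rather than the paper's quantizer), and candidates are compared by Hamming distance. The SIFT extraction and inverted-file indexing are not reproduced.

```python
# Sketch: binarize a descriptor into a bit-vector; compare by Hamming distance.

def binarize(desc):
    """Set bit i when dimension i exceeds the descriptor's median."""
    med = sorted(desc)[len(desc) // 2]
    bits = 0
    for i, v in enumerate(desc):
        if v > med:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits between two bit-vectors."""
    return bin(a ^ b).count("1")
```

Packing the bits into an integer is what makes the representation compact enough to hold an entire inverted file in main memory, and Hamming distance reduces matching to a single XOR plus popcount.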
No evidence for local adaptation of dengue viruses to mosquito vector populations in Thailand.
Fansiri, Thanyalak; Pongsiri, Arissara; Klungthong, Chonticha; Ponlawat, Alongkot; Thaisomboonsuk, Butsaya; Jarman, Richard G; Scott, Thomas W; Lambrechts, Louis
2016-04-01
Despite their epidemiological importance, the evolutionary forces that shape the spatial structure of dengue virus genetic diversity are not fully understood. Fine-scale genetic structure of mosquito vector populations and evidence for genotype × genotype interactions between dengue viruses and their mosquito vectors are consistent with the hypothesis that the geographical distribution of dengue virus genetic diversity may reflect viral adaptation to local mosquito populations. To test this hypothesis, we measured vector competence in all sympatric and allopatric combinations of 14 low-passage dengue virus isolates and two wild-type populations of Aedes aegypti mosquitoes sampled in Bangkok and Kamphaeng Phet, two sites located about 300 km apart in Thailand. Despite significant genotype × genotype interactions, we found no evidence for superior vector competence in sympatric versus allopatric vector-virus combinations. Viral phylogenetic analysis revealed no geographical clustering of the 14 isolates, suggesting that high levels of viral migration (gene flow) in Thailand may counteract spatially heterogeneous natural selection. We conclude that it is unlikely that vector-mediated selection is a major driver of dengue virus adaptive evolution at the regional scale that we examined. Dengue virus local adaptation to mosquito vector populations could happen, however, in places or times that we did not test, or at a different geographical scale. PMID:27099625
NASA Astrophysics Data System (ADS)
Daly, Scott; Golestaneh, S. A.
2015-03-01
The human visual system's luminance nonlinearity ranges continuously from square-root behavior in the very dark, through gamma-like behavior in dim ambient light and cube-root behavior in office lighting, to logarithmic behavior at daylight levels. Early display quantization nonlinearities were developed from bipartite luminance JND data. More advanced approaches considered spatial frequency behavior and used the Barten light-adaptive contrast sensitivity function (CSF), modelled across a range of light adaptation, to determine the luminance nonlinearity (e.g., DICOM, referred to as a GSDF, or grayscale standard display function). A recent approach for a GSDF, also referred to as an electrical-to-optical transfer function (EOTF), improves on this by using that light-adaptive CSF model to track the CSF at its most sensitive spatial frequency, which changes with adaptation level. We explored the cone photoreceptor's contribution to the behavior of this maximum sensitivity of the CSF as a function of light adaptation, despite the CSF's frequency variations and the fact that the cone's nonlinearity is a point process. We found that parameters of a local cone model could fit the maximum sensitivity of the CSF model across all frequencies, and that they lie within the ranges commonly accepted for psychophysically tuned cone models. Thus, a link between the spatial frequency and luminance dimensions has been made for a key neural component. This provides a better theoretical foundation for the recently designed visual signal format using the aforementioned EOTF.
Local Adaptation and Vector-Mediated Population Structure in Plasmodium vivax Malaria
Gonzalez-Ceron, Lilia; Carlton, Jane M.; Gueye, Amy; Fay, Michael; McCutchan, Thomas F.; Su, Xin-zhuan
2008-01-01
Plasmodium vivax in southern Mexico exhibits different infectivities to 2 local mosquito vectors, Anopheles pseudopunctipennis and Anopheles albimanus. Previous work has tied these differences in mosquito infectivity to variation in the central repeat motif of the malaria parasite's circumsporozoite (csp) gene, but subsequent studies have questioned this view. Here we present evidence that P. vivax in southern Mexico comprised 3 genetic populations whose distributions largely mirror those of the 2 mosquito vectors. Additionally, laboratory colony feeding experiments indicate that parasite populations are most compatible with sympatric mosquito species. Our results suggest that reciprocal selection between malaria parasites and mosquito vectors has led to local adaptation of the parasite. Adaptation to local vectors may play an important role in generating population structure in Plasmodium. A better understanding of coevolutionary dynamics between sympatric mosquitoes and parasites will facilitate the identification of molecular mechanisms relevant to disease transmission in nature and provide crucial information for malaria control. PMID:18385220
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
2005-12-01
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
Regularized Estimate of the Weight Vector of an Adaptive Interference Canceller
NASA Astrophysics Data System (ADS)
Ermolayev, V. T.; Sorokin, I. S.; Flaksman, A. G.; Yastrebov, A. V.
2016-05-01
We consider an adaptive multi-channel interference canceller, which ensures the minimum value of the average output power of interference. It is proposed to form the weight vector of such a canceller as the power-vector expansion. It is shown that this approach allows one to obtain an exact analytical solution for the optimal weight vector by using the procedure of the power-vector orthogonalization. In the case of a limited number of the input-process samples, the solution becomes ill-defined and its regularization is required. An effective regularization method, which ensures a high degree of the interference suppression and does not involve the procedure of inversion of the correlation matrix of interference, is proposed, which significantly reduces the computational cost of the weight-vector estimation.
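A sketch of why regularization is needed here: with few input samples the estimated correlation matrix is ill-conditioned, and a common remedy is diagonal loading of the weight solve. Note this is ordinary diagonal loading, not the paper's inversion-free power-vector orthogonalization, which the abstract only summarizes; the two-channel hand-solved case is an illustrative assumption.

```python
# Sketch: diagonally loaded weight solve for a 2-channel canceller.

def weights_diag_loaded(R, p, delta):
    """Solve (R + delta*I) w = p for a 2x2 correlation matrix R.

    R     : 2x2 sample correlation matrix of the interference
    p     : length-2 cross-correlation (steering) vector
    delta : diagonal load; delta > 0 stabilizes an ill-conditioned R
    """
    a, b = R[0][0] + delta, R[0][1]
    c, d = R[1][0], R[1][1] + delta
    det = a * d - b * c          # explicit 2x2 inverse, no library solve
    return [(d * p[0] - b * p[1]) / det,
            (a * p[1] - c * p[0]) / det]
```

As `delta` grows the solution shrinks toward a scaled copy of `p`, trading some interference suppression for robustness to the sample-starved estimate of R.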
Adaptive nonseparable vector lifting scheme for digital holographic data compression.
Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric
2015-01-01
Holographic data play a crucial role in recent three-dimensional imaging and microscopy applications. As a result, huge storage capacities are involved for this kind of data, and it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms show the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction. PMID:25967029
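A sketch of a lifting step, the building block that the paper generalizes to a nonseparable vector lifting scheme: the signal is split into even and odd samples, the odds are predicted from the evens, and the evens are updated from the prediction residual. Lifting guarantees perfect reconstruction by construction. Shown here is the 1-D Haar lifting pair, an illustrative simplification of the paper's 2-D nonseparable transform.

```python
# Sketch: one Haar lifting step (split / predict / update) and its inverse.

def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [d + e for e, d in zip(even, detail)]          # undo predict
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])                                # re-interleave
    return out
```

Because each step is inverted by simply subtracting what was added, reconstruction is exact regardless of the predict and update operators chosen, which is why lifting is a natural basis for lossless or near-lossless hologram coding.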
Energy-saving technology of vector controlled induction motor based on the adaptive neuro-controller
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Karandeev, D.
2015-10-01
The ongoing evolution of the power system towards a Smart Grid implies an important role for intelligent technologies, but poses strict requirements on their control schemes to preserve stability and controllability. This paper presents an adaptive neuro-controller for the vector control of an induction motor within the Smart Grid. The validity and effectiveness of the proposed energy-saving technology for a vector-controlled induction motor based on the adaptive neuro-controller are verified by simulation results at different operating conditions over a wide speed range of the induction motor.
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
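A sketch of the AAPSO idea as described above: instead of random acceleration coefficients, the coefficients are derived from particle fitness, removing one source of randomness from the velocity update. The exact fitness-to-coefficient mapping below (normalized fitness) is an assumption for illustration, not the paper's formula.

```python
# Sketch: one PSO velocity update with fitness-derived acceleration
# coefficients instead of random ones.

def aapso_velocity(v, x, pbest, gbest, fit, fbest, fworst, w=0.7):
    """v, x, pbest, gbest are per-dimension lists; fit is this particle's
    fitness, fbest/fworst the swarm's best/worst (lower = better)."""
    span = (fworst - fbest) or 1.0
    quality = (fworst - fit) / span     # 1 = best particle, 0 = worst
    c1 = 2.0 * (1.0 - quality)          # poor particles trust themselves
    c2 = 2.0 * quality                  # good particles trust the swarm
    return [w * vi + c1 * (p - xi) + c2 * (g - xi)
            for vi, xi, p, g in zip(v, x, pbest, gbest)]
```

With the coefficients tied to fitness, two runs from the same state produce the same velocities, which is exactly the determinism the randomness in standard PSO destroys.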
On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields
Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.
2011-06-27
Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
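The numerical core of integral-curve computation is ordinary ODE integration of the vector field; the AMR-specific difficulties the abstract describes (cell-centered data, level-boundary discontinuities) are omitted in this minimal sketch, which assumes an analytically evaluable field.

```python
def trace_streamline(v, p0, h=0.01, steps=1000):
    """Trace a streamline by 4th-order Runge-Kutta integration of dp/dt = v(p)."""
    p = list(p0)
    path = [tuple(p)]
    for _ in range(steps):
        k1 = v(p)
        k2 = v([p[i] + 0.5 * h * k1[i] for i in range(len(p))])
        k3 = v([p[i] + 0.5 * h * k2[i] for i in range(len(p))])
        k4 = v([p[i] + h * k3[i] for i in range(len(p))])
        p = [p[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(len(p))]
        path.append(tuple(p))
    return path

# Example: the field v(x, y) = (-y, x) has circular streamlines, so a
# trace started at (1, 0) should stay on the unit circle.
circle = trace_streamline(lambda p: (-p[1], p[0]), (1.0, 0.0))
```

In an AMR setting, each evaluation of `v` would additionally require locating the point in the grid hierarchy and reconciling the discontinuous representation across refinement-level boundaries, which is the problem the paper addresses.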
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
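The only fully specified quantity in the abstract is the wavelet spatial frequency relation, which is simple enough to sketch directly (the fitted parameters of the threshold model itself are not reproduced here):

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency in cycles/degree of a wavelet at a given DWT level,
    using the relation f = r * 2**(-level) from the abstract, where r is the
    display visual resolution in pixels/degree."""
    return r * 2.0 ** (-level)

# At 32 pixels/degree, DWT levels 1-4 give 16, 8, 4 and 2 cycles/degree;
# detection thresholds (and hence safe quantization step sizes) vary with
# this frequency.
freqs = [wavelet_spatial_frequency(32, lam) for lam in (1, 2, 3, 4)]
```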
Visibility of Wavelet Quantization Noise
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)
1995-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
NASA Astrophysics Data System (ADS)
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; De Fine Licht, J. C.; Duhem, L.; Elvira, V. D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Sehgal, R.; Shadura, O.; Wenzel, S.
2015-05-01
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; its initial results are presented here.
Seligman, Thomas H.; Prosen, Tomaz
2010-12-23
The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. Here we concentrate on the bosonic case, where the unboundedness of the operators requires the definition of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.
Using unknown input observers for robust adaptive fault detection in vector second-order systems
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.
2005-03-01
The purpose of this manuscript is to construct natural observers for vector second-order systems by utilising unknown input observer (UIO) methods. This observer is subsequently used for a robust fault detection scheme and also as an adaptive detection scheme for a certain class of actuator faults wherein the time instance and characteristics of an incipient actuator fault are detected. Stability of the adaptive scheme is provided by a parameter-dependent Lyapunov function for second-order systems. A numerical example of a mechanical system describing an automobile suspension is used to illustrate the theoretical results.
NASA Astrophysics Data System (ADS)
Ghasemi-Nejhad, Mehrdad N.
2013-04-01
This paper presents the design of smart composite platforms for adaptive thrust vector control (TVC) and an adaptive laser telescope for satellite applications. To eliminate disturbances, the proposed adaptive TVC and telescope systems will be mounted on two analogous smart composite platforms with simultaneous precision positioning (pointing) and vibration suppression (stabilizing), SPPVS, with micro-radian pointing resolution, and then mounted on a satellite in two different locations. The adaptive TVC system provides SPPVS with large tip-tilt to potentially eliminate the gimbal systems. The smart composite telescope will be mounted on a smart composite platform with SPPVS and then mounted on a satellite. The laser communication is intended for geosynchronous orbit. The high degree of directionality increases the security of the laser communication signal (as opposed to a diffused RF signal), but also requires sophisticated subsystems for transmission and acquisition. The shorter wavelength of the optical spectrum increases the data transmission rates, but laser systems require large amounts of power, which increases the mass and complexity of the supporting systems. In addition, laser communication in geosynchronous orbit requires an accurate platform with SPPVS capabilities. Therefore, this work also addresses the design of an active composite platform to be used to simultaneously point and stabilize an intersatellite laser communication telescope with micro-radian pointing resolution. The telescope is a Cassegrain receiver that employs two mirrors, one concave (primary) and the other convex (secondary). The distance, as well as the horizontal and axial alignment of the mirrors, must be precisely maintained or else the optical properties of the system will be severely degraded. The alignment will also have to be maintained during thruster firings, which will require vibration suppression capabilities of the system as well. The innovative platform has been
Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters
NASA Astrophysics Data System (ADS)
Masullo, Alessandro; Theunissen, Raf
2016-03-01
The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance subsequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will be compared subsequently. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
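For reference, the baseline universal outlier detection (normalized median) test that the paper extends can be sketched as below; the fixed 3×3 neighbourhood is exactly the rigidity the proposed adaptive scheme relaxes. The noise level `eps = 0.1` and threshold `2.0` are the values commonly quoted for pixel-unit displacements, not parameters taken from this paper.

```python
import numpy as np

def normalized_median_test(u, eps=0.1, thresh=2.0):
    """Flag outliers in a 2-D velocity-component field with the universal
    (normalized median) test of Westerweel & Scarano (2005).

    Each vector is compared against the median of its 3x3 neighbourhood,
    normalized by the median residual of that neighbourhood; eps absorbs
    background measurement noise.
    """
    rows, cols = u.shape
    flags = np.zeros(u.shape, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            # neighbourhood values, excluding the vector under test
            nb = np.array([u[a, b]
                           for a in range(max(i - 1, 0), min(i + 2, rows))
                           for b in range(max(j - 1, 0), min(j + 2, cols))
                           if (a, b) != (i, j)])
            um = np.median(nb)                       # neighbourhood median
            rm = np.median(np.abs(nb - um))          # median residual
            flags[i, j] = abs(u[i, j] - um) / (rm + eps) > thresh
    return flags
```

A cluster of outliers defeats this fixed neighbourhood because the neighbours of an outlier are then themselves outliers, which is the failure mode motivating the adaptive neighbourhood sizing proposed in the paper.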
Adaptive strategies of African horse sickness virus to facilitate vector transmission
Wilson, Anthony; Mellor, Philip Scott; Szmaragd, Camille; Mertens, Peter Paul Clement
2009-01-01
African horse sickness virus (AHSV) is an orbivirus that is usually transmitted between its equid hosts by adult Culicoides midges. In this article, we review the ways in which AHSV may have adapted to this mode of transmission. The AHSV particle can be modified by the pH or proteolytic enzymes of its immediate environment, altering its ability to infect different cell types. The degree of pathogenesis in the host and vector may also represent adaptations maximising the likelihood of successful vectorial transmission. However, speculation upon several adaptations for vectorial transmission is based upon research on related viruses such as bluetongue virus (BTV), and further direct studies of AHSV are required in order to improve our understanding of this important virus. PMID:19094921
Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels
Sun, Liguo
2014-01-01
The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) suffers deterioration, especially in the presence of large direction of arrival (DOA) error. To improve the robustness of the traditional approach, we propose a new approach to iteratively search the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived in the form of a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer present a decreasing trend. Additionally, other array imperfections also degrade the performance of the beamformer in practice. To cover several kinds of mismatches together, adaptive flat ellipsoid models are introduced in our method, kept as tight as possible. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008
NASA Astrophysics Data System (ADS)
Lasaad, Sbita; Dalila, Zaltni; Naceurq, Abdelkrim Mohamed
This study demonstrates that high-performance speed control can be obtained by using an adaptive sliding mode control method for a direct vector controlled Squirrel Cage Induction Motor (SCIM). In this study a new method of designing a simple and effective adaptive sliding mode rotational speed control law is developed. The design includes an accurate sliding mode flux observation from the measured stator terminals and rotor speed. The performance of the Direct Field-Orientation Control (DFOC) is ensured by online tuning based on a Model Reference Adaptive System (MRAS) rotor time constant estimator. The control strategy is derived in the sense of Lyapunov stability theory so that stable tracking performance can be guaranteed under the occurrence of system uncertainties and external disturbances. The proposed scheme is a solution for robust and high-performance induction motor servo drives. Simulation results are provided to validate the effectiveness and robustness of the developed methodology.
Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms
Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak
2006-01-31
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.
Waleckx, Etienne; Gourbière, Sébastien; Dumonteil, Eric
2015-01-01
Chagas disease prevention remains mostly based on triatomine vector control to reduce or eliminate house infestation with these bugs. The level of adaptation of triatomines to human housing is a key part of vector competence and needs to be precisely evaluated to allow for the design of effective vector control strategies. In this review, we examine how the domiciliation/intrusion level of different triatomine species/populations has been defined and measured and discuss how these concepts may be improved for a better understanding of their ecology and evolution, as well as for the design of more effective control strategies against a large variety of triatomine species. We suggest that a major limitation of current criteria for classifying triatomines into sylvatic, intrusive, domiciliary and domestic species is that these are essentially qualitative and do not rely on quantitative variables measuring population sustainability and fitness in their different habitats. However, such assessments may be derived from further analysis and modelling of field data. Such approaches can shed new light on the domiciliation process of triatomines and may represent a key tool for decision-making and the design of vector control interventions. PMID:25993504
An adaptive online learning approach for Support Vector Regression: Online-SVR-FID
NASA Astrophysics Data System (ADS)
Liu, Jie; Zio, Enrico
2016-08-01
Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
Tsetsarkin, Konstantin A; Weaver, Scott C
2011-12-01
The adaptation of Chikungunya virus (CHIKV) to a new vector, the Aedes albopictus mosquito, is a major factor contributing to its ongoing re-emergence in a series of large-scale epidemics of arthritic disease in many parts of the world since 2004. Although the initial step of CHIKV adaptation to A. albopictus was determined to involve an A226V amino acid substitution in the E1 envelope glycoprotein that first arose in 2005, little attention has been paid to subsequent CHIKV evolution after this adaptive mutation was convergently selected in several geographic locations. To determine whether selection of second-step adaptive mutations in CHIKV or other arthropod-borne viruses occurs in nature, we tested the effect of an additional envelope glycoprotein amino acid change identified in Kerala, India in 2009. This substitution, E2-L210Q, caused a significant increase in the ability of CHIKV to develop a disseminated infection in A. albopictus, but had no effect on CHIKV fitness in the alternative mosquito vector, A. aegypti, or in vertebrate cell lines. Using infectious viruses or virus-like replicon particles expressing the E2-210Q and E2-210L residues, we determined that E2-L210Q acts primarily at the level of infection of A. albopictus midgut epithelial cells. In addition, we observed that the initial adaptive substitution, E1-A226V, had a significantly stronger effect on CHIKV fitness in A. albopictus than E2-L210Q, thus explaining the observed time differences required for selective sweeps of these mutations in nature. These results indicate that the continuous CHIKV circulation in an A. albopictus-human cycle since 2005 has resulted in the selection of an additional, second-step mutation that may facilitate even more efficient virus circulation and persistence in endemic areas, further increasing the risk of more severe and expanded CHIK epidemics. PMID:22174678
Adaptive Developmental Delay in Chagas Disease Vectors: An Evolutionary Ecology Approach
Menu, Frédéric; Ginoux, Marine; Rajon, Etienne; Lazzari, Claudio R.; Rabinovich, Jorge E.
2010-01-01
Background The developmental time of vector insects is important in population dynamics, evolutionary biology, epidemiology and in their responses to global climatic change. In the triatomines (Triatominae, Reduviidae), vectors of Chagas disease, evolutionary ecology concepts, which may allow for a better understanding of their biology, have not been applied. Despite the delay in molting observed in some triatomine individuals, no effort has been made to explain this variability. Methodology We applied four methods: (1) an e-mail survey sent to 30 researchers with experience in triatomines, (2) a statistical description of the developmental time of eleven triatomine species, (3) a relationship between development time pattern and climatic inter-annual variability, (4) a mathematical optimization model of evolution of developmental delay (diapause). Principal Findings 85.6% of responses reported prolonged developmental times in 5th instar nymphs, with 20 species identified with remarkable developmental delays. The developmental time analysis showed some degree of bi-modal pattern in the development time of the 5th instars in nine out of eleven species, but no trend between development time pattern and climatic inter-annual variability was observed. Our optimization model predicts that the developmental delays could be due to an adaptive risk-spreading diapause strategy, only if survival throughout the diapause period and the probability of random occurrence of "bad" environmental conditions are sufficiently high. Conclusions/Significance Developmental delay may not be a simple non-adaptive phenotypic plasticity in development time, and could be a form of adaptive diapause associated with a physiological mechanism related to the postponement of the initiation of reproduction, as an adaptation to environmental stochasticity through a spreading-of-risk (bet-hedging) strategy. We identify a series of parameters that can be measured in the field and laboratory to test
NASA Astrophysics Data System (ADS)
Yan, Su; Ma, Kougen; Ghasemi-Nejhad, Mehrdad N.
2008-03-01
In this paper, a novel application of adaptive composite structures, a University of Hawaii at Manoa (UHM) smart composite platform, is developed for the Thrust Vector Control (TVC) of satellites. The device top plate of the UHM platform is an adaptive circular composite plate (ACCP) that utilizes integrated sensors/actuators and controllers to suppress low frequency vibrations during the thruster firing as well as to potentially isolate dynamic responses from the satellite structure bus. Since the disturbance due to the satellite thruster firing can be estimated, a combined strategy of an adaptive disturbance observer (DOB) and feed-forward control is proposed for vibration suppression of the ACCP with multi-sensors and multi-actuators. Meanwhile, the effects of the DOB cut-off frequency and the relative degree of the low-pass filter on the DOB performance are investigated. Simulations and experimental results show that higher relative degree of the low-pass filter with the required cut-off frequency will enhance the DOB performance for a high-order system control. Further, although the increase of the filter cut-off frequency can guarantee a sufficient stability margin, it may cause an undesirable increase of the control bandwidth. The effectiveness of the proposed adaptive DOB with feed-forward control strategy is verified through simulations and experiments using the ACCP system.
Chen, Xiyuan; Wang, Xiying; Xu, Yuan
2014-01-01
This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has the model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and Adaptive Extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective to reduce the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Comparing with EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Comparing with IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
Completely quantized collapse and consequences
Pearle, Philip
2005-08-15
Promotion of quantum theory from a theory of measurement to a theory of reality requires an unambiguous specification of the ensemble of realizable states (and each state's probability of realization). Although not yet achieved within the framework of standard quantum theory, it has been achieved within the framework of the continuous spontaneous localization (CSL) wave-function collapse model. In CSL, a classical random field w(x,t) interacts with quantum particles. The state vector corresponding to each w(x,t) is a realizable state. In this paper, I consider a previously presented model, which is predictively equivalent to CSL. In this completely quantized collapse (CQC) model, the classical random field is quantized. It is represented by the operator W(x,t), which satisfies [W(x,t), W(x′,t′)] = 0. The ensemble of realizable states is described by a single state vector, the 'ensemble vector'. Each superposed state which comprises the ensemble vector at time t is the direct product of an eigenstate of W(x,t′), for all x and for 0 ≤ t′ ≤ t, and the CSL state corresponding to that eigenvalue. These states never interfere (they satisfy a superselection rule at any time), they only branch, so the ensemble vector may be considered to be, as Schroedinger put it, a 'catalog' of the realizable states. In this context, many different interpretations (e.g., many worlds, environmental decoherence, consistent histories, modal interpretation) may be satisfactorily applied. Using this description, a long-standing problem is resolved: where the energy that the particles gain due to the narrowing of their wave packets by the collapse mechanism comes from. It is shown how to define the energy of the random field and its energy of interaction with particles so that total energy is conserved for the ensemble of realizable states. As a by-product, since the random-field energy spectrum is unbounded, its canonical conjugate, a self-adjoint time operator, can be
Visual data mining for quantized spatial data
NASA Technical Reports Server (NTRS)
Braverman, Amy; Kahn, Brian
2004-01-01
In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Research on efficient target tracking algorithms has become a current focus in intelligent robotics. The main problem of the target tracking process in mobile robots is environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm using visual saliency and an adaptive support vector machine (ASVM). The proposed algorithm is based on the mixture saliency of image features, including color, brightness and motion features. The execution process uses visual saliency features, and these common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination. PMID:24363779
Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine
NASA Astrophysics Data System (ADS)
Cao, Zhikun; Han, Hua; Gu, Bo
2010-01-01
In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable speed open vertical display cabinet was constructed using preprocessing techniques for measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). Using dynamic loss coefficient adjustment, the adaptation of the SVM for use in this application was achieved. From there, the objective function for energy use per unit of display area, total energy consumption (TEC) divided by total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
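The EWMA preprocessing step mentioned in the abstract can be sketched as follows. The outlier rule shown (flagging points that deviate from the running average of the earlier points by more than a fixed threshold) is an illustrative choice, since the paper does not give its exact criterion.

```python
def ewma(xs, alpha=0.2):
    """Exponentially weighted moving average: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = xs[0]
    out = [s]
    for x in xs[1:]:
        s = alpha * x + (1.0 - alpha) * s
        out.append(s)
    return out

def flag_outliers(xs, alpha=0.2, thresh=20.0):
    """Mark measurements that deviate from the EWMA of the *preceding*
    points by more than `thresh` (threshold value is illustrative)."""
    smooth = ewma(xs, alpha)
    prev = [xs[0]] + smooth[:-1]   # EWMA up to, but excluding, each point
    return [abs(x - s) > thresh for x, s in zip(xs, prev)]
```

Comparing each point against the average of the preceding points (rather than one that already includes it) keeps a single large spike from masking itself.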
Quantization of color histograms using GLA
NASA Astrophysics Data System (ADS)
Yang, Christopher C.; Yip, Milo K.
2002-09-01
Color histograms have been used as one of the most important image descriptors in a wide range of content-based image retrieval (CBIR) projects for color image indexing. A color histogram captures the global chromatic distribution of an image. Traditionally, there are two major approaches to quantizing the color space: (1) quantize each dimension of a color coordinate system uniformly to generate a fixed number of bins; and (2) quantize a color coordinate system arbitrarily. The first approach works best on cubical color coordinate systems, such as RGB. For other, non-cubical color coordinate systems, such as CIELAB and CIELUV, some bins may fall outside the gamut (transformed from the RGB cube) of the color space. As a result, it reduces the effectiveness of the color histogram and hence the retrieval performance. The second approach uses arbitrary quantization. The volume of the bins is not necessarily uniform. As a result, it affects the effectiveness of the histogram significantly. In this paper, we propose to develop the color histogram by tessellating the non-cubical color gamut transformed from the RGB cube using a vector quantization (VQ) method, the Generalized Lloyd Algorithm (GLA) [6]. Using such an approach, the problem of empty bins due to the gamut of the color coordinate system can be avoided. Besides, all bins quantized by GLA will occupy the same volume, which guarantees the uniformity of each quantized bin in the histogram. An experiment has been conducted to evaluate the quantitative performance of our approach. The image collection from UC Berkeley's digital library project is used as the test bed. The indexing effectiveness of a histogram space [3] is used as the performance measurement. The experimental results show that using the GLA quantization approach significantly increases the indexing effectiveness.
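A minimal version of the GLA codebook design loop (the standard nearest-neighbour/centroid iteration, here with a simple deterministic initialisation rather than whatever seeding the paper used) might look like:

```python
def gla(points, k, iters=20):
    """Generalized Lloyd Algorithm: alternately (1) assign each training
    vector to its nearest codeword and (2) move each codeword to the
    centroid of its cell, until the codebook settles."""
    # Deterministic initialisation: pick k points spread across the data.
    codebook = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, codebook[c])))
            cells[nearest].append(p)
        for c, cell in enumerate(cells):
            if cell:  # keep the old codeword if its cell went empty
                dim = len(cell[0])
                codebook[c] = tuple(sum(p[d] for p in cell) / len(cell)
                                    for d in range(dim))
    return codebook
```

Run on samples drawn from a non-cubical color gamut, the resulting codewords define histogram bins that all lie inside the gamut, which is how the empty-bin problem described above is avoided.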
Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang
2015-01-01
This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, an integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—was experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450
Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array
NASA Astrophysics Data System (ADS)
Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun
2012-12-01
In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by adaptively selecting the lead orientation vector. A real-valued formulation using forward-backward (FB) smoothing and a real-valued inverse covariance matrix is also proposed, which reduces the computational complexity and can distinguish coherent signals. Simulation results show that the two new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experiments using a MEMS vector hydrophone array in lake trials show the engineering practicability of the two new algorithms.
Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation.
Cheng, Jun; Yang, Wei; Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan
2016-01-01
Content-based image retrieval (CBIR) techniques have currently gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset. PMID:27273091
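The pooling-and-concatenation step of the framework above can be sketched as follows; simple mean pooling stands in for the Fisher-kernel encoding, and the subregion features are invented for illustration:

```python
def pool_and_concat(subregions):
    """Aggregate the local features of each subregion into one vector and
    concatenate the per-subregion vectors into an image-level signature.
    (Mean pooling is used here as a stand-in for Fisher-kernel encoding.)"""
    signature = []
    for patches in subregions:
        dim = len(patches[0])
        signature.extend(sum(p[d] for p in patches) / len(patches)
                         for d in range(dim))
    return signature

# hypothetical 2-D local features from three intensity-order subregions
subregions = [
    [(1.0, 2.0), (3.0, 4.0)],
    [(0.0, 0.0), (2.0, 2.0)],
    [(5.0, 1.0)],
]
print(pool_and_concat(subregions))  # 3 subregions x 2 dims -> length-6 signature
```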
Separable quantizations of Stäckel systems
NASA Astrophysics Data System (ADS)
Błaszak, Maciej; Marciniak, Krzysztof; Domański, Ziemowit
2016-08-01
In this article we prove that many Hamiltonian systems that cannot be separably quantized in the classical approach of Robertson and Eisenhart can be separably quantized if we extend the class of admissible quantizations through a suitable choice of Riemann space adapted to the Poisson geometry of the system. Specifically, we prove that for every Stäckel system quadratic in momenta (defined on a 2n-dimensional Poisson manifold) whose Stäckel matrix consists of monomials in the position coordinates, there exist infinitely many quantizations, parametrized by n arbitrary functions, that turn this system into a quantum separable Stäckel system.
A physically motivated quantization of the electromagnetic field
NASA Astrophysics Data System (ADS)
Bennett, Robert; Barlow, Thomas M.; Beige, Almut
2016-01-01
The notion that the electromagnetic field is quantized is usually inferred from observations such as the photoelectric effect and the black-body spectrum. However, accounts of the quantization of this field are usually mathematically motivated: they begin by introducing a vector potential, followed by the imposition of a gauge that allows the manipulation of the solutions of Maxwell’s equations into a form that is amenable to the machinery of canonical quantization. By contrast, here we quantize the electromagnetic field in a less mathematically and more physically motivated way. Starting from a direct description of what one sees in experiments, we show that the usual expressions for the electric and magnetic field observables follow from Heisenberg’s equation of motion. In our treatment, there is no need to invoke the vector potential in a specific gauge, and we avoid the commonly used notion of a fictitious cavity that applies boundary conditions to the field.
NASA Astrophysics Data System (ADS)
Raeth, Christoph W.; Mueller, Dirk; Boehm, Holger F.; Rummeny, Ernst J.; Link, Thomas M.; Monetti, Roberto
2005-04-01
We extend the recently introduced scaling vector method (SVM) to improve the textural characterization of oriented trabecular bone structures in the context of osteoporosis. Using the concept of scaling vectors, one obtains non-linear structural information from data sets, which can account for global anisotropies. In this work we present a method that allows us to determine the local directionalities in images by using scaling vectors. Thus it becomes possible to better account for local anisotropies and to implement this knowledge in the calculation of the scaling properties of the image. By applying this adaptive technique, a refined quantification of the image structure is possible: we test and evaluate our new method using realistic two-dimensional simulations of bone structures, which model the effect of osteoblasts and osteoclasts on the local change of relative bone density. The partial differential equations involved in the model are solved numerically using cellular automata (CA). Different realizations with slightly varying control parameters are considered. Our results show that even small changes in the trabecular structures, which are induced by variation of a control parameter of the system, become discernible by applying the locally adapted scaling vector method. The results are superior to those obtained by isotropic and/or bulk measures. These findings may be especially important for monitoring the treatment of patients, where the early recognition of (drug-induced) changes in the trabecular structure is crucial.
Chadee, Dave D; Martinez, Raymond
2016-04-01
Within the Latin America and Caribbean (LAC) region the impact of climate change has been associated with the effects of rainfall and temperature on seasonal outbreaks of dengue, but few studies have been conducted on the impacts of climate on the behaviour and ecology of Aedes aegypti mosquitoes. This study was conducted to examine the adaptive behaviours currently being employed by A. aegypti mosquitoes exposed to the force of climate change in LAC countries. The literature on the association between climate and dengue incidence is small and sometimes speculative, and few laboratory and field studies have identified research gaps. Laboratory and field experiments were designed and conducted to better understand the container preferences, climate-associated adaptive behaviour, and ecology of A. aegypti mosquitoes, and the effects of different temperatures and light regimens on their life history. A. aegypti adaptive behaviours and changes in container preferences demonstrate how complex dengue transmission dynamics are in different ecosystems. The use of underground drains and septic tanks represents a major behaviour change identified and compounds an already difficult task of controlling A. aegypti populations. A business-as-usual approach will exacerbate the problem and lead to more frequent outbreaks of dengue and chikungunya in LAC countries unless both area-wide and targeted vector control approaches are adopted. The current evidence and the results from proposed transdisciplinary research on dengue within different ecosystems will help guide the development of new vector control strategies and foster a better understanding of climate change impacts on vector-borne disease transmission. PMID:26796862
Lagrange structure and quantization
NASA Astrophysics Data System (ADS)
Kazinski, Peter O.; Lyakhovich, Simon L.; Sharapov, Alexey A.
2005-07-01
A path-integral quantization method is proposed for dynamical systems whose classical equations of motion do not necessarily follow from the action principle. The key new notion behind this quantization scheme is the Lagrange structure, which is more general than the Lagrangian formalism in the same sense as Poisson geometry is more general than symplectic geometry. The Lagrange structure is shown to admit a natural BRST description, which is used to construct an AKSZ-type topological sigma-model. The dynamics of this sigma-model in d+1 dimensions, being localized on the boundary, are proved to be equivalent to the original theory in d dimensions. As the topological sigma-model has a well-defined action, it is path-integral quantized in the usual way, which results in the quantization of the original (not necessarily Lagrangian) theory. When the original equations of motion come from the action principle, the standard BV path-integral is explicitly deduced from the proposed quantization scheme. The general quantization scheme is exemplified by several models, including ones whose classical dynamics are not variational.
Exact quantization of a paraxial electromagnetic field
Aiello, A.; Woerdman, J. P.
2005-12-15
A nonperturbative quantization of a paraxial electromagnetic field is achieved via a generalized dispersion relation imposed on the longitudinal and the transverse components of the photon wave vector. This theoretical formalism yields a seamless transition between the paraxial- and the Maxwell-equation solutions. This obviates the need to introduce either ad hoc or perturbatively defined field operators. Moreover, our (exact) formalism remains valid beyond the quasimonochromatic paraxial limit.
NASA Astrophysics Data System (ADS)
Wang, Xianmin; Niu, Ruiqing; Wu, Ke
2011-07-01
Remote sensing provides a new idea and an advanced method for lithology identification, but lithology identification by remote sensing is quite difficult because (1) the rules for lithology identification in a given region are often quite different from the experts' experience, and (2) in regions with flourishing vegetation, lithology information is poor, making it very difficult to identify lithologies from remote sensing images. At present, studies on lithology identification by remote sensing are primarily conducted in regions with low vegetation coverage and high rock bareness, and there is no mature method of lithology identification for regions with flourishing vegetation. Traditional methods lack the ability to mine and extract the various complicated lithology information from a remote sensing image, often need much manual intervention, and possess poor intelligence and accuracy. An intelligent method for lithology identification based on a support vector machine (SVM) and adaptive cellular automata (ACA), proposed in this paper, is expected to solve the above problems. The method adopted Landsat-7 ETM+ images and a 1:50000 geological map as data sources. It first derived lithology identification factors of three kinds: (1) spectra, (2) texture, and (3) vegetation cover. Second, it overlaid the remote sensing images with the geological map and established the SVM to obtain the transition rules according to the factor values of the samples. Finally, it established an ACA model to intelligently identify the lithologies according to the transition and neighborhood rules. In this paper an ACA model is proposed and compared with the traditional one. Results of two real-world examples show that (1) the SVM-ACA method obtains good lithology identification results in regions with flourishing vegetation, and (2) it possesses high identification accuracy (with overall accuracies of 92.29% and 85.54%, respectively, in the two examples).
Quantization Effects on Complex Networks.
Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan
2016-01-01
Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049
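The uniform weight quantization studied above can be sketched as follows; the toy graph, the element-wise error measure, and the choice of levels are illustrative assumptions, not the paper's experimental setup (which compares Laplacian spectra of real-world networks):

```python
def laplacian(n, edges):
    """Weighted graph Laplacian L = D - W of an undirected graph given as
    (i, j, weight) triples."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in edges:
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L

def quantize(edges, levels):
    """Round each weight (assumed to lie in [0, 1]) to the nearest of
    `levels` uniformly spaced values."""
    return [(i, j, round(w * (levels - 1)) / (levels - 1)) for i, j, w in edges]

# toy 3-node weighted graph (illustrative, not from the paper's data)
edges = [(0, 1, 0.9), (1, 2, 0.4), (2, 0, 0.65)]
L_true = laplacian(3, edges)
for levels in (2, 3, 5, 9):
    L_q = laplacian(3, quantize(edges, levels))
    err = max(abs(L_true[i][j] - L_q[i][j]) for i in range(3) for j in range(3))
    print(levels, round(err, 3))
```

Note that even in this tiny example the error does not shrink at every step as the level count grows, which is the sort of non-monotone behavior the paper analyzes on the Laplacian spectrum.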
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
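The quantization step the model evaluates, dividing each DCT coefficient by a matrix entry and rounding, can be sketched as follows. The flat quantization matrix and the test block are placeholders; the model described above would instead set each entry near the visibility threshold of the corresponding frequency:

```python
import math

def dct2(block):
    """8x8 forward DCT-II (the transform used by JPEG)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
    return out

def quantize(coeffs, Q):
    """JPEG-style quantization: divide each coefficient by its matrix entry
    and round to the nearest integer."""
    return [[round(c / d) for c, d in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, Q)]

# flat matrix as a placeholder; a perceptual model would set each entry
# near the visibility threshold of that frequency
Q = [[16] * 8 for _ in range(8)]
block = [[128 + 10 * math.sin(x + y) for y in range(8)] for x in range(8)]
q = quantize(dct2(block), Q)
print(q[0][0])  # the DC term dominates for this nearly flat block
```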
NASA Astrophysics Data System (ADS)
Fröhlich, J.; Knowles, A.; Pizzo, A.
2007-03-01
Within the framework of the theory of interacting classical and quantum gases, it is shown that the atomistic constitution of gases can be understood as a consequence of (second) quantization of a continuum theory of gases. In this paper, this is explained in some detail for the theory of non-relativistic interacting Bose gases, which can be viewed as the second quantization of a continuum theory whose dynamics is given by the Hartree equation. Conversely, the Hartree equation emerges from the theory of Bose gases in the mean-field limit. It is shown that, for such systems, the time evolution of 'observables' commutes with their Wick quantization, up to quantum corrections that tend to zero in the mean-field limit. This is an Egorov-type theorem.
NASA Astrophysics Data System (ADS)
He, Xiao-Gang; Ma, Bo-Qiang
We show that black holes can be quantized in an intuitive and elegant way, with results in agreement with conventional knowledge of black holes, by using Bohr's idea of quantizing the motion of an electron inside the atom in quantum mechanics. We find that the properties of black holes can also be derived from an ansatz of quantized entropy, ΔS = 4πk ΔR/ƛ, which was suggested in a previous work to unify the black hole entropy formula and Verlinde's conjecture explaining gravity as an entropic force. This ansatz also explains gravity as an entropic force arising from a quantum effect, suggesting a way to unify gravity with quantum theory. Several interesting and surprising results for black holes are given, from which we predict the existence of primordial black holes ranging from the Planck scale in both size and energy to ones that are large in size but exhibit low-energy behavior.
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong
2015-05-01
A new LIBS quantitative analysis method based on adaptive analytical-line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength, and width at half height. The analytical lines used as input variables of the regression model are determined adaptively from the samples for both training and testing. Second, an LIBS quantitative analysis method based on the RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given as confidence intervals of a probability distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, an artificial neural network, and a standard support vector machine.
DCT quantization matrices visually optimized for individual images
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1993-01-01
This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
On Quantizable Odd Lie Bialgebras
NASA Astrophysics Data System (ADS)
Khoroshkin, Anton; Merkulov, Sergei; Willwacher, Thomas
2016-09-01
Motivated by the obstruction to the deformation quantization of Poisson structures in infinite dimensions, we introduce the notion of a quantizable odd Lie bialgebra. The main result of the paper is a construction of the highly non-trivial minimal resolution of the properad governing such Lie bialgebras, and its link with the theory of so-called quantizable Poisson structures.
ERIC Educational Resources Information Center
DeBuvitz, William
2014-01-01
I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…
Tagawa, Junpei; Inoue, Tetsuyoshi; Naito, Mariko; Sato, Keiko; Kuwahara, Tomomi; Nakayama, Masaaki; Nakayama, Koji; Yamashiro, Takashi; Ohara, Naoya
2014-10-01
We report here the construction of a plasmid vector designed for the efficient electrotransformation of the periodontal pathogen Porphyromonas gingivalis. The novel Escherichia coli-Bacteroides/P. gingivalis shuttle vector, designated pTIO-1, is based on the 11.0-kb E. coli-Bacteroides conjugative shuttle vector, pVAL-1 (a pB8-51 derivative). To construct pTIO-1, the pB8-51 origin of replication and erythromycin resistance determinant of pVAL-1 were cloned into the E. coli cloning vector pBluescript II SK(-) and non-functional regions were deleted. pTIO-1 has an almost complete multiple cloning site from pBluescript II SK(-). The size of pTIO-1 is 4.5kb, which is convenient for routine gene manipulation. pTIO-1 was introduced into P. gingivalis via electroporation, and erythromycin-resistant transformants carrying pTIO-1 were obtained. We characterized the transformation efficiency, copy number, host range, stability, and insert size capacity of pTIO-1. An efficient plasmid electrotransformation of P. gingivalis will facilitate functional analysis and expression of P. gingivalis genes, including the virulence factors of this bacterium. PMID:25102110
Negev, Maya; Paz, Shlomit; Clermont, Alexandra; Pri-Or, Noemie Groag; Shalom, Uri; Yeger, Tamar; Green, Manfred S.
2015-01-01
The Mediterranean region is vulnerable to climatic changes. A warming trend exists in the basin with changes in rainfall patterns. It is expected that vector-borne diseases (VBD) in the region will be influenced by climate change since weather conditions influence their emergence. For some diseases (e.g., West Nile virus) the linkage between emergence and climate change was recently proved; for others (such as dengue) the risk for local transmission is real. Consequently, adaptation and preparation for changing patterns of VBD distribution is crucial in the Mediterranean basin. We analyzed six representative Mediterranean countries and found that they have started to prepare for this threat, but the preparation levels among them differ, and policy mechanisms are limited and basic. Furthermore, cross-border cooperation is not stable and depends on international frameworks. The Mediterranean countries should improve their adaptation plans, and develop more cross-sectoral, multidisciplinary and participatory approaches. In addition, based on experience from existing local networks in advancing national legislation and trans-border cooperation, we outline recommendations for a regional cooperation framework. We suggest that a stable and neutral framework is required, and that it should address the characteristics and needs of African, Asian and European countries around the Mediterranean in order to ensure participation. Such a regional framework is essential to reduce the risk of VBD transmission, since the vectors of infectious diseases know no political borders. PMID:26084000
Crouse, M; Ramchandran, K
1997-01-01
Striving to maximize baseline (Joint Photographic Experts Group, JPEG) image quality without compromising compatibility of current JPEG decoders, we develop an image-adaptive JPEG encoding algorithm that jointly optimizes quantizer selection, coefficient "thresholding", and Huffman coding within a rate-distortion (R-D) framework. Practically speaking, our algorithm unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection and R-D optimal thresholding. Conceptually speaking, our algorithm is a logical consequence of entropy-constrained vector quantization (ECVQ) design principles in the severely constrained instance of JPEG-compatible encoding. We explore both viewpoints: the practical, to concretely derive our algorithm, and the conceptual, to justify the claim that our algorithm approaches the best performance that a JPEG encoder can achieve. This performance includes significant objective peak signal-to-noise ratio (PSNR) improvement over previous work and at high rates gives results comparable to state-of-the-art image coders. For example, coding the Lena image at 1.0 b/pixel, our JPEG encoder achieves a PSNR performance of 39.6 dB that slightly exceeds the quoted PSNR results of Shapiro's wavelet-based zero-tree coder. Using a visually based distortion metric, we can achieve noticeable subjective improvement as well. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intraframe MPEG and subband or wavelet coding. PMID:18282923
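The coefficient "thresholding" idea, zeroing a quantized coefficient when the rate it costs outweighs the distortion it saves, can be sketched with a Lagrangian cost comparison; the rate estimate, λ value, and coefficients below are illustrative assumptions, not the paper's exact R-D machinery:

```python
def rd_threshold(coeffs, Q, lam=1.0):
    """Keep a quantized DCT coefficient only when dropping it would add more
    distortion than lam times the rate it costs (a Lagrangian D + lam*R test)."""
    out = []
    for c, q in zip(coeffs, Q):
        level = round(c / q)
        d_keep = (c - level * q) ** 2   # squared error if the level is kept
        d_drop = c ** 2                 # squared error if it is zeroed
        # crude stand-in for the entropy-coded rate: a JPEG-like size category
        rate = abs(level).bit_length() + 1 if level else 0
        out.append(level if d_drop - d_keep > lam * rate else 0)
    return out

# illustrative coefficients and quantizer steps (not from the paper)
print(rd_threshold([-76.0, 11.0, 3.9, -1.8, 0.6], [16, 11, 10, 16, 24], lam=40.0))
```

Large coefficients survive because zeroing them would be very costly in distortion, while small ones are thresholded away; the actual algorithm performs this trade-off jointly with quantizer selection and Huffman table design.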
Ndiath, Mamadou Ousmane; Mazenot, Catherine; Sokhna, Cheikh; Trape, Jean-François
2014-01-01
Background Insecticide-treated bed nets have been recommended and proven efficient as a measure to protect African populations from the malaria mosquito vectors Anopheles spp. This study evaluates the consequences of bed net use on vector resistance to insecticides, vector feeding behavior, and malaria transmission in Dielmo village, Senegal, where LLINs were offered to all villagers in July 2008. Methods Adult mosquitoes were collected monthly from January 2006 to December 2011 by human landing catches (HLC) and by pyrethroid spray catches (PSC). A randomly selected sub-sample of 15–20% of the An. gambiae s.l. collected each month was used to investigate the molecular forms of the An. gambiae complex, kdr mutations, and the Plasmodium falciparum circumsporozoite (CSP) rate. Malaria prevalence and gametocytaemia in Dielmo villagers were measured quarterly. Results Insecticide-susceptible mosquitoes (wild-type kdr genotype) presented a reduced lifespan after LLIN implementation, but they rapidly adapted their feeding behavior, becoming more exophagous and zoophilic, and biting earlier during the night. Meanwhile, insecticide-resistant specimens (kdr L1014F genotype) increased in frequency in the population, with an unchanged lifespan and feeding behavior. P. falciparum prevalence and the gametocyte rate in villagers decreased dramatically after LLIN deployment. The malaria infection rate tended to zero in susceptible mosquitoes, whereas it increased markedly in kdr homozygote mosquitoes. Conclusion Dramatic changes in vector populations and their behavior occurred after the deployment of LLINs, owing to the extraordinary adaptive skills of An. gambiae s.l. mosquitoes. However, despite the increasing proportion of insecticide-resistant mosquitoes and their almost exclusive responsibility for malaria transmission, the P. falciparum gametocyte reservoir continued to decrease three years after the deployment of LLINs. PMID:24892677
Liu, Kun; Tsujimoto, Hitoshi; Cha, Sung-Jae; Agre, Peter; Rasgon, Jason L.
2011-01-01
Altered patterns of malaria endemicity reflect, in part, changes in feeding behavior and climate adaptation of mosquito vectors. Aquaporin (AQP) water channels are found throughout nature and confer high-capacity water flow through cell membranes. The genome of the major malaria vector mosquito Anopheles gambiae contains at least seven putative AQP sequences. Anticipating that transmembrane water movements are important during the life cycle of A. gambiae, we identified and characterized the A. gambiae aquaporin 1 (AgAQP1) protein that is homologous to AQPs known in humans, Drosophila, and sap-sucking insects. When expressed in Xenopus laevis oocytes, AgAQP1 transports water but not glycerol. Similar to mammalian AQPs, water permeation of AgAQP1 is inhibited by HgCl2 and tetraethylammonium, with Tyr185 conferring tetraethylammonium sensitivity. AgAQP1 is more highly expressed in adult female A. gambiae mosquitoes than in males. Expression is high in gut, ovaries, and Malpighian tubules where immunofluorescence microscopy reveals that AgAQP1 resides in stellate cells but not principal cells. AgAQP1 expression is up-regulated in fat body and ovary by blood feeding but not by sugar feeding, and it is reduced by exposure to a dehydrating environment (42% relative humidity). RNA interference reduces AgAQP1 mRNA and protein levels. In a desiccating environment (<20% relative humidity), mosquitoes with reduced AgAQP1 protein survive significantly longer than controls. These studies support a role for AgAQP1 in water homeostasis during blood feeding and humidity adaptation of A. gambiae, a major mosquito vector of human malaria in sub-Saharan Africa. PMID:21444767
Adaptive entropy coded subband coding of images.
Kim, Y H; Modestino, J W
1992-01-01
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
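The entropy-constrained codeword selection underlying ECVQ can be illustrated in one dimension: the encoder minimizes a Lagrangian cost of distortion plus lam times the ideal code length -log2(p) of each codeword. The function name and toy codebook below are assumptions for illustration, not the paper's 2-D scheme.

```python
import math

def ecvq_encode(x, codebook, probs, lam):
    """Entropy-constrained VQ: pick the codeword index minimizing
    squared distortion plus lam times the ideal code length -log2(p)."""
    costs = [
        (x - c) ** 2 + lam * -math.log2(p)
        for c, p in zip(codebook, probs)
    ]
    return costs.index(min(costs))
```

At lam = 0 this reduces to plain nearest-neighbor VQ; as lam grows, frequent (cheap-to-code) codewords win even at higher distortion, which is what produces the variable-rate outputs described above.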
Francisella–Arthropod Vector Interaction and its Role in Patho-Adaptation to Infect Mammals
Akimana, Christine; Kwaik, Yousef Abu
2011-01-01
Francisella tularensis is a Gram-negative, intracellular, zoonotic bacterium, and is the causative agent of tularemia with a broad host range. Arthropods such as ticks, mosquitoes, and flies maintain F. tularensis in nature by transmitting the bacteria among small mammals. While the tick is largely believed to be a biological vector of F. tularensis, transmission by mosquitoes and flies is thought to be mechanical, via the mouthparts during interrupted feedings. However, the mechanism of infection of the vectors by F. tularensis is not well understood. Since F. tularensis has not been localized in the salivary gland of the primary human-biting ticks, it is thought that bacterial transmission by ticks occurs through mechanical inoculation of tick feces containing F. tularensis into the skin wound. Drosophila melanogaster is an established model organism for the arthropod vectors of tularemia, in which F. tularensis infects hemocytes and is found in hemolymph, as seen in ticks. In addition, phagosome biogenesis and robust intracellular proliferation of F. tularensis in arthropod-derived cells are similar to those in mammalian macrophages. Furthermore, bacterial factors required for infectivity of mammals are often required for infectivity of the fly by F. tularensis. Several host factors that contribute to F. tularensis intracellular pathogenesis in D. melanogaster have been identified, and F. tularensis targets some of the evolutionarily conserved eukaryotic processes to enable intracellular survival and proliferation in evolutionarily distant hosts. PMID:21687425
A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression
Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin
2012-01-01
To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. Analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, compared to other recently proposed robust beamforming techniques.
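For reference, the linear MVDR baseline that the LS-SVR approach generalizes computes its weights in closed form, w = R^{-1}a / (a^H R^{-1}a), from the covariance matrix R and steering vector a. A minimal sketch of that baseline (not the paper's LS-SVR beamformer):

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a): minimize output
    power subject to a distortionless response toward steering vector a."""
    Ri_a = np.linalg.solve(R, a)      # R^{-1} a without an explicit inverse
    return Ri_a / (a.conj() @ Ri_a)   # normalize so that a^H w = 1
```

The distortionless constraint a^H w = 1 holds by construction; the LS-SVR method replaces this hard-constrained quadratic cost with a kernelized squared-loss regression.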
Design of a Two-level Adaptive Multi-Agent System for Malaria Vectors driven by an ontology
Koum, Guillaume; Yekel, Augustin; Ndifon, Bengyella; Etang, Josiane; Simard, Frédéric
2007-01-01
Background The understanding of heterogeneities in disease transmission dynamics, as far as malaria vectors are concerned, is a big challenge. Many studies tackling this problem do not find exact models to explain malaria vector propagation. Methods To solve the problem we define an Adaptive Multi-Agent System (AMAS) which is elastic and organized on two levels. This AMAS is a dynamic system in which the two levels are linked by an ontology, allowing it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents; at the secondary level, it consists of analysis agents. Its entry point, a User Interface Agent, can reproduce itself because it is given a minimum of background knowledge, and it learns appropriate behavior from the user in the presence of ambiguous queries and from other agents of the AMAS in other situations. Results Some of the outputs of our system are tables and diagrams showing factors such as entomological parameters of malaria transmission, percentages of malaria transmission per malaria vector, and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. Conclusion Our approach is an intelligent one which differs from the statistical approaches sometimes used in the field, and it aligns with distributed artificial intelligence. In terms of the fight against malaria, our system offers the opportunity to reduce the effort of field personnel, who are not obliged to cover the entire territory while conducting surveys. Secondly, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in the geographical area. Unlike a statistical technique, the projection of the results to the field can in our case sometimes be more general. PMID:17605778
Uniform quantized electron gas.
Høye, Johan S; Lomba, Enrique
2016-10-19
In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions, where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well-known RPA (random phase approximation). To improve upon it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot occupy the same position. Numerical evaluations are compared with well-known results of a standard parameterization of Monte Carlo correlation energies. PMID:27546166
Consistent quantization of massive chiral electrodynamics in four dimensions
Andrianov, A.; Bassetto, A.; Soldati, R.
1989-10-09
We discuss the quantization of a four-dimensional model in which a massive Abelian vector field interacts with chiral massless fermions. We show that, by introducing extra scalar fields, a renormalizable unitary S matrix can be obtained in a suitably defined Hilbert space of physical states.
Quantization of Constrained Systems
NASA Astrophysics Data System (ADS)
Klauder, John R.
The present article is primarily a review of the projection-operator approach to quantize systems with constraints. We study the quantization of systems with general first- and second-class constraints from the point of view of coherent-state, phase-space path integration, and show that all such cases may be treated, within the original classical phase space, by using suitable path-integral measures for the Lagrange multipliers which ensure that the quantum system satisfies the appropriate quantum constraint conditions. Unlike conventional methods, our procedures involve no delta-functionals of the classical constraints, no need for dynamical gauge fixing of first-class constraints nor any average thereover, no need to eliminate second-class constraints, no potentially ambiguous determinants, as well as no need to add auxiliary dynamical variables expanding the phase space beyond its original classical formulation, including no ghosts. Besides several pedagogical examples, we also study: (i) the quantization procedure for reparameterization-invariant models, (ii) systems for which the original set of Lagrange multipliers are elevated to the status of dynamical variables and used to define an extended dynamical system which is completed with the addition of suitable conjugates and new sets of constraints and their associated Lagrange multipliers, (iii) special examples of alternative but equivalent formulations of given first-class constraints, as well as (iv) a comparison of both regular and irregular constraints.
Tabin, C.J.; Hoffman, J.W.; Goff, S.P.; Weinberg, R.A.
1982-04-01
The authors investigated the feasibility of using retroviruses as vectors for transferring DNA sequences into animal cells. The thymidine kinase (tk) gene of herpes simplex virus was chosen as a convenient model. The internal BamHI fragments of a DNA clone of Moloney leukemia virus (MLV) were replaced with a purified BamHI DNA segment containing the tk gene. Chimeric genomes were created carrying the tk insert in both orientations relative to the MLV sequence. Each was transfected into TK⁻ cells along with MLV helper virus, and TK⁺ colonies were obtained by selection in the presence of hypoxanthine, aminopterin, and thymidine (HAT). Virus collected from TK⁺-transformed, MLV-producer cells passed the TK⁺ phenotype to TK⁻ cells. Nonproducer cells were isolated, and TK⁺ transducing virus was subsequently rescued from them. The chimeric virus showed single-hit kinetics in infections. Virion and cellular RNA and cellular DNA from infected cells were all shown to contain sequences which hybridized to both MLV- and tk-specific probes. The sizes of these sequences were consistent with those predicted for the chimeric virus. In all respects studied, the chimeric MLV-tk virus behaved like known replication-defective retroviruses. These experiments suggest great general applicability of retroviruses as eukaryotic vectors.
NASA Astrophysics Data System (ADS)
Sadat Hashemipour, Maryam; Soleimani, Seyed Ali
2016-01-01
Artificial immune system (AIS) algorithms based on the clonal selection method can be defined as soft computing methods, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts the classification accuracy. In this study, an AIS based on the Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and the feature subset selection without degrading the SVM classification accuracy. Several public datasets from the University of California Irvine (UCI) machine learning repository were employed to measure classification accuracy and evaluate the AISACS approach, which was then compared with a grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1991-01-01
Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
Quantization of Generally Covariant Systems
NASA Astrophysics Data System (ADS)
Sforza, Daniel M.
2000-12-01
Finite dimensional models that mimic the constraint structure of Einstein's General Relativity are quantized in the framework of BRST and Dirac's canonical formalisms. The first system to be studied is one featuring a constraint quadratic in the momenta (the "super-Hamiltonian") and a set of constraints linear in the momenta (the "supermomentum" constraints). The starting point is to realize that the ghost contributions to the supermomentum constraint operators can be read in terms of the natural volume induced by the constraints in the orbits. This volume plays a fundamental role in the construction of the quadratic sector of the nilpotent BRST charge. It is shown that the quantum theory is invariant under scaling of the super-Hamiltonian. As long as the system has an intrinsic time, this property translates in a contribution of the potential to the kinetic term. In this aspect, the results substantially differ from other works where the scaling invariance is forced by introducing a coupling to the curvature. The contribution of the potential, far from being unnatural, is beautifully justified in the light of the Jacobi's principle. Then, it is shown that the obtained results can be extended to systems with extrinsic time. In this case, if the metric has a conformal temporal Killing vector and the potential exhibits a suitable behavior with respect to it, the role played by the potential in the case of intrinsic time is now played by the norm of the Killing vector. Finally, the results for the previous cases are extended to a system featuring two super-Hamiltonian constraints. This step is extremely important due to the fact that General Relativity features an infinite number of such constraints satisfying a non trivial algebra among themselves.
Coherent state quantization of quaternions
Muraleetharan, B.; Thirulogasanthar, K. E-mail: santhar@gmail.com
2015-08-15
Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic version of the harmonic oscillator and Weyl-Heisenberg algebra are also obtained.
Visual optimization of DCT quantization matrices for individual images
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1993-01-01
Many image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). However, these standards do not specify the actual DCT quantization matrix. We have previously provided mathematical formulae to compute a perceptually lossless quantization matrix. Here I show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual 'channel'. For a given quantization matrix, the DCT quantization errors are adjusted by contrast sensitivity, light adaptation, and contrast masking, and are pooled non-linearly over the blocks of the image. This yields an 8x8 'perceptual error matrix'. A second non-linear pooling over the perceptual error matrix yields total perceptual error. With this model we may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
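The non-linear pooling step described above can be sketched as a Minkowski (beta-norm) summation of the per-coefficient perceptual errors; the function name and the default exponent below are illustrative choices, not the values used in the paper.

```python
def pool(errors, beta=4.0):
    """Minkowski (beta-norm) pooling of per-coefficient perceptual
    errors into a single scalar; large beta approaches max-pooling,
    so the worst error dominates the total."""
    return sum(abs(e) ** beta for e in errors) ** (1.0 / beta)
```

With beta = 2 this reduces to the familiar root-sum-square error; applying it once over each block and again over the resulting 8x8 perceptual error matrix yields the two-stage pooling the model describes.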
NASA Astrophysics Data System (ADS)
Kaul, Richard; Adkins, Kenneth; Bibyk, Steven
The hardware and algorithms used to vector quantize (VQ) predicted pixel intensity differences for real-time video compression are described. The hardware is designed for rapid vector quantization performance, which entails the development of application-specific associative memory circuits. A modified DPCM algorithm was first examined to determine how neural circuitry could enhance its operation. It was determined that quantization and encoding could be improved by consolidating these two functions into one, and by increasing the amount of information (i.e., the number of pixels) quantized at a time. The result is a predictive scheme that vector quantizes differential values. Some of the disadvantages of VQ algorithms are overcome using associative memories. The video compression algorithm and the associative memory design are described.
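The core operation the associative-memory hardware accelerates is a full-search nearest-codeword lookup, which in software is a sequential minimum-distance scan. A minimal sketch with toy vectors (the hardware performs this match in parallel; names are illustrative):

```python
def vq_encode(vec, codebook):
    """Full-search VQ encoding: return the index of the codeword with
    minimum squared Euclidean distance to the input vector."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: d2(vec, codebook[i]))
```

Only the winning index is transmitted, which is where VQ's compression comes from; the associative memory removes the O(codebook size) scan from the critical path.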
Tse, Wang-Kong; MacDonald, A H
2012-12-01
We investigate the Casimir effect between two-dimensional electron systems driven to the quantum Hall regime by a strong perpendicular magnetic field. In the large-separation (d) limit where retardation effects are essential, we find (i) that the Casimir force is quantized in units of 3ħcα²/8π²d⁴ and (ii) that the force is repulsive for mirrors with the same type of carrier and attractive for mirrors with opposite types of carrier. The sign of the Casimir force is therefore electrically tunable in ambipolar materials such as graphene. The Casimir force is suppressed when one mirror is a charge-neutral graphene system in a filling factor ν=0 quantum Hall state. PMID:23368242
First quantized electrodynamics
Bennett, A.F.
2014-06-15
The parametrized Dirac wave equation represents position and time as operators, and can be formulated for many particles. It thus provides, unlike field-theoretic Quantum Electrodynamics (QED), an elementary and unrestricted representation of electrons entangled in space or time. The parametrized formalism leads directly and without further conjecture to the Bethe–Salpeter equation for bound states. The formalism also yields the Uehling shift of the hydrogenic spectrum, the anomalous magnetic moment of the electron to leading order in the fine structure constant, the Lamb shift and the axial anomaly of QED. -- Highlights: •First-quantized electrodynamics of the parametrized Dirac equation is developed. •Unrestricted entanglement in time is made explicit. •Bethe and Salpeter’s equation for relativistic bound states is derived without further conjecture. •One-loop scattering corrections and the axial anomaly are derived using a partial summation. •Wide utility of semi-classical Quantum Electrodynamics is argued.
NASA Astrophysics Data System (ADS)
Kiani, Maryam; Pourtakdoust, Seid H.
2014-12-01
A novel algorithm is presented in this study for the estimation of spacecraft attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for new measurements. Subsequently, the CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies the ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and accuracy of the proposed nonlinear estimator.
Quantized beam shifts in graphene
de Melo Kort-Kamp, Wilton Junior; Sinitsyn, Nikolai; Dalvit, Diego Alejandro Roberto
2015-10-08
We predict the existence of quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts for light beams impinging on a graphene-on-substrate system in an external magnetic field. In the quantum Hall regime the Imbert-Fedorov and photonic spin Hall shifts are quantized in integer multiples of the fine structure constant α, while the Goos-Hänchen ones in multiples of α². We investigate the influence on these shifts of magnetic field, temperature, and material dispersion and dissipation. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field.
QED in Krein Space Quantization
NASA Astrophysics Data System (ADS)
Zarei, A.; Forghan, B.; Takook, M. V.
2011-08-01
In this paper we consider QED in Krein space quantization. We show that the theory is automatically regularized. The three primitively divergent integrals of usual QED are considered in Krein QED. The photon self-energy, electron self-energy, and vertex function are calculated in this formalism. We show that these quantities are finite; the infrared and ultraviolet divergences do not appear. We argue that Krein space quantization is similar to Pauli-Villars regularization, so we have called it "Krein regularization".
Escobar, W. A.
2013-01-01
The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that correspond to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 select the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom. PMID:24319436
NASA Astrophysics Data System (ADS)
Alam, Md Jahangir; Gupta, Vishwa; Kenny, Patrick; Dumouchel, Pierre
2015-12-01
The REVERB challenge provides a common framework for the evaluation of feature extraction techniques in the presence of both reverberation and additive background noise. State-of-the-art speech recognition systems perform well in controlled environments, but their performance degrades in realistic acoustical conditions, especially in real as well as simulated reverberant environments. In this contribution, we utilize multiple feature extractors, including the conventional mel-filterbank, multi-taper spectrum estimation-based mel-filterbank, robust mel and compressive gammachirp filterbank, iterative deconvolution-based dereverberated mel-filterbank, and maximum-likelihood inverse filtering-based dereverberated mel-frequency cepstral coefficient features, for speech recognition with multi-condition training data. In order to improve speech recognition performance, we combine their results using ROVER (Recognizer Output Voting Error Reduction). For the two- and eight-channel tasks, to benefit from the multi-channel data, we also use ROVER, instead of a multi-microphone signal processing method, to reduce the word error rate by selecting the best-scoring word at each channel. As in previous work, we also apply i-vector-based speaker adaptation, which was found to be effective. In speech recognition, speaker adaptation tries to reduce the mismatch between the training and test speakers. Speech recognition experiments are conducted on the REVERB challenge 2014 corpora using the Kaldi recognizer. In our experiments, we use both utterance-based batch processing and full batch processing. In the single-channel task, full batch processing reduced the word error rate (WER) from 10.0 to 9.3 % on SimData as compared to utterance-based batch processing. Using full batch processing, we obtained an average WER of 9.0 and 23.4 % on the SimData and RealData, respectively, for the two-channel task, whereas for the eight-channel task on the SimData and RealData, the average WERs found were 8
Dicks, Matthew D J; Guzman, Efrain; Spencer, Alexandra J; Gilbert, Sarah C; Charleston, Bryan; Hill, Adrian V S; Cottingham, Matthew G
2015-02-25
Adenovirus vaccine vectors generated from new viral serotypes are routinely screened in pre-clinical laboratory animal models to identify the most immunogenic and efficacious candidates for further evaluation in clinical human and veterinary settings. Here, we show that studies in a laboratory species do not necessarily predict the hierarchy of vector performance in other mammals. In mice, after intramuscular immunization, HAdV-5 (Human adenovirus C) based vectors elicited cellular and humoral adaptive responses of higher magnitudes compared to the chimpanzee adenovirus vectors ChAdOx1 and AdC68 from species Human adenovirus E. After HAdV-5 vaccination, transgene specific IFN-γ(+) CD8(+) T cell responses reached peak magnitude later than after ChAdOx1 and AdC68 vaccination, and exhibited a slower contraction to a memory phenotype. In cattle, cellular and humoral immune responses were at least equivalent, if not higher, in magnitude after ChAdOx1 vaccination compared to HAdV-5. Though we have not tested protective efficacy in a disease model, these findings have important implications for the selection of candidate vectors for further evaluation. We propose that vaccines based on ChAdOx1 or other Human adenovirus E serotypes could be at least as immunogenic as current licensed bovine vaccines based on HAdV-5. PMID:25629523
Dynamical non-Abelian two-form: BRST quantization
Lahiri, A.
1997-04-01
When an antisymmetric tensor potential is coupled to the field strength of a gauge field via a B∧F coupling and a kinetic term for B is included, the gauge field develops an effective mass. The theory can be made invariant under a non-Abelian vector gauge symmetry by introducing an auxiliary vector field. The covariant quantization of this theory requires ghosts for ghosts. The resultant theory, including gauge-fixing and ghost terms, is BRST invariant by construction, and therefore unitary. The construction of the BRST-invariant action is given for both Abelian and non-Abelian models of mass generation. © 1997 The American Physical Society.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur
2015-09-01
Pan evaporation (Ep) modeling is an important issue in reservoir management, regional water resources planning, and the evaluation of drinking-water supplies. The main purpose of this study is to investigate the accuracy of least square support vector machines (LSSVM), multivariate adaptive regression splines (MARS) and the M5 model tree (M5Tree) in modeling Ep. The first part of the study tested the ability of the LSSVM, MARS and M5Tree models to estimate the Ep data of the Mersin and Antalya stations, located in the Mediterranean Region of Turkey, using cross-validation. The LSSVM models outperformed the MARS and M5Tree models in estimating Ep of the Mersin and Antalya stations with local input and output data. The average root mean square error (RMSE) of the M5Tree and MARS models was decreased by 24-32.1% and 10.8-18.9%, respectively, when LSSVM models were used for the Mersin and Antalya stations. In the second part of the study (cross-station application without local input data), the ability of the three methods was examined in estimating Ep from air temperature, solar radiation, relative humidity and wind speed data of a nearby station. The results showed that the MARS models provided better accuracy than the LSSVM and M5Tree models with respect to the RMSE, mean absolute error (MAE) and determination coefficient (R2) criteria. The average RMSE accuracy of the LSSVM and M5Tree was increased by 3.7% and 16.5% using MARS; without local input data, the average RMSE accuracy of the LSSVM and M5Tree was respectively increased by 11.4% and 18.4% using MARS. In the third part of the study, the ability of the applied models was examined in Ep estimation using the input and output data of a nearby station. The results showed that the MARS models performed better than the other models with respect to the RMSE, MAE and R2 criteria. The average RMSE of the LSSVM and M5Tree was respectively decreased by 54% and 3.4% using MARS. The overall results indicated that
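The error criteria used throughout the comparison (RMSE, MAE and R2) can be sketched in a few lines; the sample series below are illustrative values, not data from the study.

```python
import math

def rmse(obs, est):
    # root mean square error
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def mae(obs, est):
    # mean absolute error
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def r2(obs, est):
    # determination coefficient: 1 - SS_res / SS_tot
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - e) ** 2 for o, e in zip(obs, est))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

observed = [2.1, 3.4, 4.0, 5.2]   # hypothetical daily Ep (mm)
estimated = [2.0, 3.6, 3.9, 5.0]  # hypothetical model output
error = rmse(observed, estimated)  # overall fit of the estimates
```

A "decrease in average RMSE" as reported in the abstract is then simply the relative change of this quantity between two models over the same test series.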
Deformation quantization of cosmological models
NASA Astrophysics Data System (ADS)
Cordero, Rubén; García-Compeán, Hugo; Turrubiates, Francisco J.
2011-06-01
The Weyl-Wigner-Groenewold-Moyal formalism of deformation quantization is applied to cosmological models in the minisuperspace. The quantization procedure is performed explicitly for quantum cosmology in a flat minisuperspace. The de Sitter cosmological model is worked out in detail and the computation of the Wigner functions for the Hartle-Hawking, Vilenkin and Linde wave functions are done numerically. The Wigner function is analytically calculated for the Kantowski-Sachs model in (non)commutative quantum cosmology and for string cosmology with dilaton exponential potential. Finally, baby universes solutions are described in this context and the Wigner function is obtained.
Periodic roads and quantized wheels
NASA Astrophysics Data System (ADS)
de Campos Valadares, Eduardo
2016-08-01
We propose a simple approach to determine all possible wheels that can roll smoothly without slipping on a periodic roadbed, while maintaining the center of mass at a fixed height. We also address the inverse problem that of obtaining the roadbed profile compatible with a specific wheel and all other related "quantized wheels." The role of symmetry is highlighted, which might preclude the center of mass from remaining at a fixed height. A straightforward consequence of such geometric quantization is that the gravitational potential energy and the moment of inertia are discrete, suggesting a parallelism between macroscopic wheels and nano-systems, such as carbon nanotubes.
Fermionic Quantization of Hopf Solitons
NASA Astrophysics Data System (ADS)
Krusch, S.; Speight, J. M.
2006-06-01
In this paper we show how to quantize Hopf solitons using the Finkelstein-Rubinstein approach. Hopf solitons can be quantized as fermions if their Hopf charge is odd. Symmetries of classical minimal energy configurations induce loops in configuration space which give rise to constraints on the wave function. These constraints depend on whether the given loop is contractible. Our method is to exploit the relationship between the configuration spaces of the Faddeev-Hopf and Skyrme models provided by the Hopf fibration. We then use recent results in the Skyrme model to determine whether loops are contractible. We discuss possible quantum ground states up to Hopf charge Q=7.
Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval.
Zhou, Wengang; Yang, Ming; Wang, Xiaoyu; Li, Houqiang; Lin, Yuanqing; Tian, Qi
2016-01-01
In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate the visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored into the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and thus is independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable to index large-scale image database. We evaluate our approach on the public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization. PMID:26656584
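A minimal sketch of the dual-resolution idea described above: each (PCA-transformed) dimension is scalar-quantized coarsely to form the index key and finely to contribute bits to the binary super-vector. Function names, step sizes and bit widths are assumptions for illustration, not the authors' implementation.

```python
def quantize(x, step):
    # uniform scalar quantization: index of the cell containing x
    return int(x // step)

def dual_resolution_code(feature, coarse_step=1.0, fine_bits=2):
    # Coarse codes over all dimensions are cascaded into the index key;
    # fine codes (position of the residual within the coarse cell) are
    # concatenated into a binary "super-vector" used for verification.
    coarse = tuple(quantize(x, coarse_step) for x in feature)
    levels = 2 ** fine_bits
    fine = []
    for x in feature:
        residual = x - quantize(x, coarse_step) * coarse_step
        f = min(int(residual / coarse_step * levels), levels - 1)
        fine.append(format(f, "0{}b".format(fine_bits)))
    return coarse, "".join(fine)

key, supervector = dual_resolution_code([0.3, 1.7, 2.9])
```

Two features match only if their coarse keys collide in the index; the fine-resolution super-vectors are then compared (e.g. by Hamming distance) to verify the candidate.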
The wavelet/scalar quantization compression standard for digital fingerprint images
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
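Schemes of this kind rest on a uniform scalar quantizer with a dead zone around zero applied to each wavelet subband; the sketch below is a generic version with illustrative parameters, not the exact quantizer tables of the FBI standard.

```python
def quantize(coeff, step, dead_zone=1.2):
    # dead-zone uniform scalar quantizer: coefficients near zero map
    # to index 0, larger ones to uniform bins of width `step`
    if abs(coeff) < dead_zone * step / 2:
        return 0
    sign = 1 if coeff > 0 else -1
    return sign * int((abs(coeff) - dead_zone * step / 2) / step + 1)

def dequantize(index, step, dead_zone=1.2):
    # reconstruct at the centre of the selected bin
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return sign * ((abs(index) - 0.5) * step + dead_zone * step / 2)

index = quantize(3.7, 0.5)        # subband coefficient -> integer index
recon = dequantize(index, 0.5)    # decoder-side reconstruction
```

The widened zero bin discards the many near-zero wavelet coefficients cheaply, which is where most of the compression gain comes from; the step size per subband controls the rate/quality trade-off.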
Geometric Quantization and Foliation Reduction
NASA Astrophysics Data System (ADS)
Skerritt, Paul
A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether "quantization commutes with reduction." Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kähler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kähler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds. In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kähler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or "admissible", values of momentum. We first propose a reduction procedure for the prequantum geometric structures that "covers" symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems. We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces. Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Deformation of second and third quantization
NASA Astrophysics Data System (ADS)
Faizal, Mir
2015-03-01
In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
A trellis-searched APC (adaptive predictive coding) speech coder
Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)
1990-01-01
In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for optimizing the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
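Motion compensation of the kind discussed above rests on block matching; the sketch below shows an exhaustive sum-of-absolute-differences search, which the paper's variable-stage search would prune in low-activity regions to save computation. The function names, toy frames and search-window size are illustrative assumptions.

```python
def sad(block_a, block_b):
    # sum of absolute differences between two equal-size blocks
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_motion_vector(prev, cur, top, left, size, search):
    # exhaustive search over a +/- `search` window in the previous frame
    block = [row[left:left + size] for row in cur[top:top + size]]
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > len(prev) or x + size > len(prev[0]):
                continue
            cand = [row[x:x + size] for row in prev[y:y + size]]
            cost = sad(block, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# toy frames: a bright 2x2 patch moves down-right by one pixel
prev = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for y in (1, 2):
    for x in (1, 2):
        prev[y][x] = 9
for y in (2, 3):
    for x in (2, 3):
        cur[y][x] = 9
mv = best_motion_vector(prev, cur, 2, 2, 2, 2)  # displacement back into prev
```

Activity segmentation as described in the abstract would skip this search entirely for blocks classified as static, which is where the reported computational savings arise.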
Separable quantizations of Stäckel systems
NASA Astrophysics Data System (ADS)
Błaszak, Maciej; Marciniak, Krzysztof; Domański, Ziemowit
2016-08-01
In this article we prove that many Hamiltonian systems that cannot be separably quantized in the classical approach of Robertson and Eisenhart can be separably quantized if we extend the class of admissible quantizations through a suitable choice of Riemann space adapted to the Poisson geometry of the system. Specifically, we prove that for every Stäckel system quadratic in momenta (defined on a 2n-dimensional Poisson manifold) whose Stäckel matrix consists of monomials in position coordinates there exist infinitely many quantizations, parametrized by n arbitrary functions, that turn this system into a quantum separable Stäckel system. We prove this conjecture for a very large class of Stäckel systems, generated by separation relations of the form (17), where the Stäckel matrix consists of monomials in position coordinates. For any Stäckel system from this class we construct a family of metrics for which the minimal quantization leads to quantum separability and commutativity of the quantized constants of motion. We want to stress, however, that we do not deal with the spectral theory of the obtained quantum systems, as it requires a separate investigation. The paper is organized as follows. In Section 2 we briefly summarize the results of the Robertson-Eisenhart theory of quantum separability. In Section 3 we present some fundamental facts about classical Stäckel systems. Section 4 contains a presentation of some results derived from our general theory of quantization of Hamiltonian systems on phase space; especially, we demonstrate how to obtain the minimal quantization (4) from our general theory. In Section 5 we relate quantizations of the same Hamiltonian in different metrics g and ḡ (or in different Hilbert spaces L2(Q, ωg) and L2(Q, ωḡ)). Essentially, this construction explains the origin of the quantum correction terms in the classical Hamiltonians introduced in [1] and in [2
Quantized beam shifts in graphene
NASA Astrophysics Data System (ADS)
Kort-Kamp, Wilton; Sinitsyn, Nikolai; Dalvit, Diego
We show that the magneto-optical response of a graphene-on-substrate system in the presence of an external magnetic field strongly affects light beam shifts. In the quantum Hall regime, we predict quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts. The Imbert-Fedorov and photonic spin Hall shifts are given in integer multiples of the fine structure constant α, while the Goos-Hänchen ones come in discrete multiples of α². Due to time-reversal symmetry breaking, the Imbert-Fedorov shifts change sign when the direction of the applied magnetic field is reversed, while the other shifts remain unchanged. We investigate the influence of magnetic field, temperature, and material dispersion and dissipation on these shifts. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field. We acknowledge the LANL LDRD program for financial support.
Third Quantization and Quantum Universes
NASA Astrophysics Data System (ADS)
Kim, Sang Pyo
2014-01-01
We study the third quantization of the Friedmann-Robertson-Walker cosmology with N-minimal massless fields. The third quantized Hamiltonian for the Wheeler-DeWitt equation in the minisuperspace consists of an infinite number of intrinsic time-dependent, decoupled oscillators. The Hamiltonian has a pair of invariant operators for each universe with conserved momenta of the fields that play the role of the annihilation and creation operators and that construct various quantum states for the universe. The closed universe exhibits an interesting feature of transitions from stable states to tachyonic states depending on the conserved momenta of the fields. In the classically forbidden unstable regime, the quantum states have googolplex-growing position and conjugate momentum dispersions, which defy any measurement of the position of the universe.
Quantized Cosmology: A Simple Approach
Weinstein, M
2004-06-03
I discuss the problem of inflation in the context of Friedmann-Robertson-Walker cosmology and show how, after a simple change of variables, to quantize the problem in a way which parallels the classical discussion. The result is that two of the Einstein equations arise as exact equations of motion and one of the usual Einstein equations (suitably quantized) survives as a constraint equation to be imposed on the space of physical states. However, the Friedmann equation, which is also a constraint equation and which is the basis of the Wheeler-DeWitt equation, acquires a welcome quantum correction that becomes significant for small scale factors. To clarify how things work in this formalism I briefly outline the way in which it applies to the exactly solvable case of de Sitter space.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
On the quantization of the linearized gravitational field
NASA Astrophysics Data System (ADS)
Grigore, D. R.
2000-01-01
We present a new point of view on the quantization of the gravitational field, namely we use exclusively the quantum framework of the second quantization. More explicitly, we take as the one-particle Hilbert space H_{graviton} the unitary irreducible representation of the Poincaré group corresponding to a massless particle of helicity 2 and apply the second quantization procedure with Einstein-Bose statistics. The resulting Hilbert space F^+(H_{graviton}) is, by definition, the Hilbert space of the gravitational field. Then we prove that this Hilbert space is canonically isomorphic to a space of the type Ker(Q)/Im(Q), where Q is a supercharge defined in an extension of the Hilbert space F^+(H_{graviton}) by the inclusion of ghosts: some fermion ghosts u_µ, ũ_µ, which are vector fields, and a bosonic ghost Φ, which is a scalar field. This has to be contrasted with the usual approaches where only the fermion ghosts are considered. However, a rigorous proof that this is, indeed, possible seems to be lacking in the literature.
Wavelet/scalar quantization compression standard for fingerprint images
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for the digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
Exact quantization conditions for cluster integrable systems
NASA Astrophysics Data System (ADS)
Franco, Sebastián; Hatsuda, Yasuyuki; Mariño, Marcos
2016-06-01
We propose exact quantization conditions for the quantum integrable systems of Goncharov and Kenyon, based on the enumerative geometry of the corresponding toric Calabi-Yau manifolds. Our conjecture builds upon recent results on the quantization of mirror curves, and generalizes a previous proposal for the quantization of the relativistic Toda lattice. We present explicit tests of our conjecture for the integrable systems associated to the resolved C³/Z₅ and C³/Z₆ orbifolds.
Quantized-"Gray-Scale" Electronic Synapses
NASA Technical Reports Server (NTRS)
Lamb, James L.; Daud, Taher; Thakoor, Anilkumar P.
1990-01-01
Proposed array of programmable synaptic connections for electronic neural network applications offers multiple quantized levels of connection strength using only simple, two-terminal, binary microswitch devices. Subgrids in fine grid of programmable resistive connections connected externally in parallel to form coarser synaptic grid. By selection of pattern of connections in each subgrid, connection strength of synaptic node represented by that subgrid set at quantized "gray level". Device structures promise implementations of quantized-"gray-scale" synaptic arrays with very high density.
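The counting behind such a subgrid can be sketched directly: with n identical binary switches in parallel, the total connection strength is the sum of the closed branches, giving n+1 evenly spaced quantized levels. The unit conductance below is a hypothetical value, not a figure from the proposal.

```python
def gray_levels(n_switches, unit_conductance=1.0):
    # Each binary microswitch either contributes `unit_conductance`
    # (closed) or nothing (open); a parallel connection sums the
    # closed branches, so an n-switch subgrid yields n+1 levels.
    return [k * unit_conductance for k in range(n_switches + 1)]

levels = gray_levels(4, unit_conductance=0.25)
```

Programming a synaptic node then amounts to choosing how many switches in its subgrid to close, which selects one of these discrete "gray" strengths.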
Quantized vortices in interacting gauge theories
NASA Astrophysics Data System (ADS)
Butera, Salvatore; Valiente, Manuel; Öhberg, Patrik
2016-01-01
We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit with a laser configuration such that the single-particle zeroth-order vector potential corresponds to a constant synthetic magnetic field. We find a new exotic type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the rotational properties of this system in the Thomas-Fermi limit, focusing in particular on the physical conditions that make the existence of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. We point out that two different physical interpretations can be given to this new nonlinearity: firstly it can be seen as a local modification of the mean field coupling constant, whose value depends on the angular momentum of the condensate. Secondly, it can be interpreted as a density modulated angular velocity given to the cloud. Looking at the problem from both of these viewpoints, we show that the effect of the new nonlinearity is to induce a rotation to the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud.
Quantized vortices in interacting gauge theories
NASA Astrophysics Data System (ADS)
Butera, Salvatore; Valiente, Manuel; Ohberg, Patrik
2015-05-01
We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit with a laser configuration such that the single-particle vector potential corresponds to a constant synthetic magnetic field. We find a new type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the physical conditions that make the nucleation of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. Two different physical interpretations can be given to this new nonlinearity: firstly, it can be seen as a local modification of the mean field coupling constant, whose value depends on the angular momentum of the condensate. Secondly, it can be interpreted as a density-modulated angular velocity given to the cloud. We analyze the physical conditions that make a single vortex state energetically favourable. In the Thomas-Fermi limit, we show that the effect of the new nonlinearity is to induce a rotation in the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud. The authors acknowledge support from CM-DTC and EPSRC.
On abelian group actions and Galois quantizations
NASA Astrophysics Data System (ADS)
Huru, H. L.; Lychagin, V. V.
2013-08-01
Quantizations of actions of finite abelian groups G are explicitly described by elements in the tensor square of the group algebra of G. Over algebraically closed fields of characteristic 0 these are in one to one correspondence with the second cohomology group of the dual of G. With certain adjustments this result is applied to group actions over any field of characteristic 0. In particular we consider the quantizations of Galois extensions, which are quantized by "deforming" the multiplication. For the splitting fields of products of quadratic polynomials this produces quantized Galois extensions that all are Clifford type algebras.
Liu, Yi-Hung; Wu, Chien-Te; Kao, Yung-Hwa; Chen, Ya-Ting
2013-01-01
Single-trial electroencephalography (EEG)-based emotion recognition enables us to perform fast and direct assessments of human emotional states. However, previous works suggest that a great improvement in the classification accuracy of valence and arousal levels is still needed. To address this, we propose a novel emotional EEG feature extraction method: the kernel Eigen-emotion pattern (KEEP). An adaptive SVM is also proposed to deal with the problem of learning from imbalanced emotional EEG data sets. In this study, a set of pictures from the IAPS is used for emotion induction. Results based on seven participants show that KEEP gives much better classification results than the widely used EEG frequency band power features. Also, the adaptive SVM greatly improves the classification performance of the commonly adopted SVM classifier. Combined use of KEEP and the adaptive SVM can achieve high average valence and arousal classification rates of 73.42% and 73.57%. The highest classification rates for valence and arousal are 80% and 79%, respectively. The results are very promising. PMID:24110685
NASA Astrophysics Data System (ADS)
Fukumoto, Tetsuya; Kato, Yousuke; Kurita, Kazuya; Hayashi, Yoichi
Because of various errors caused by dead time, the temperature variation of resistance, and so on, speed estimation error is inevitable in speed sensor-less vector control methods for the induction motor. In particular, the speed control loop becomes unstable near zero frequency. In order to solve these problems, this paper proposes a novel design of an adaptive observer for speed estimation. By adding a feedback loop of the error between the estimated and reference fluxes, the sensitivity of the current error signals for the speed estimation and the primary resistance identification is improved. The proposed system is analyzed and the appropriate feedback gains are derived. The experimental results showed good performance in the low-speed range.
Quantized ionic conductance in nanopores.
Zwolak, Michael; Lagerqvist, Johan; Di Ventra, Massimiliano
2009-09-18
Ionic transport in nanopores is a fundamentally and technologically important problem in view of its occurrence in biological processes and its impact on novel DNA sequencing applications. Using molecular dynamics simulations we show that ion transport may exhibit strong nonlinearities as a function of the pore radius reminiscent of the conductance quantization steps as a function of the transverse cross section of quantum point contacts. In the present case, however, conductance steps originate from the break up of the hydration layers that form around ions in aqueous solution. We discuss this phenomenon and the conditions under which it should be experimentally observable. PMID:19792463
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
NASA Astrophysics Data System (ADS)
Rajagopal, A. K.; Mochena, Mogus
2000-12-01
The group-theory framework developed by Fukutome for a systematic analysis of the various broken-symmetry types of Hartree-Fock solution exhibiting spin structures is here extended to the general many-body context using spinor Green function formalism for describing magnetic systems. Consequences of this theory are discussed for examining the magnetism of itinerant electrons in nanometric systems of current interest as well as bulk systems where a vector spin-density form is required, by specializing our work to spin-density-functional formalism. We also formulate the linear-response theory for such a system and compare and contrast our results with the recent results obtained for localized electron systems. The various phenomenological treatments of itinerant magnetic systems are here unified in this group-theoretical description. We apply this theory to the one-band Hubbard model to illustrate the usefulness of this approach.
Mesquita, Rafael D; Vionette-Amaral, Raquel J; Lowenberger, Carl; Rivera-Pomar, Rolando; Monteiro, Fernando A; Minx, Patrick; Spieth, John; Carvalho, A Bernardo; Panzera, Francisco; Lawson, Daniel; Torres, André Q; Ribeiro, Jose M C; Sorgine, Marcos H F; Waterhouse, Robert M; Montague, Michael J; Abad-Franch, Fernando; Alves-Bezerra, Michele; Amaral, Laurence R; Araujo, Helena M; Araujo, Ricardo N; Aravind, L; Atella, Georgia C; Azambuja, Patricia; Berni, Mateus; Bittencourt-Cunha, Paula R; Braz, Gloria R C; Calderón-Fernández, Gustavo; Carareto, Claudia M A; Christensen, Mikkel B; Costa, Igor R; Costa, Samara G; Dansa, Marilvia; Daumas-Filho, Carlos R O; De-Paula, Iron F; Dias, Felipe A; Dimopoulos, George; Emrich, Scott J; Esponda-Behrens, Natalia; Fampa, Patricia; Fernandez-Medina, Rita D; da Fonseca, Rodrigo N; Fontenele, Marcio; Fronick, Catrina; Fulton, Lucinda A; Gandara, Ana Caroline; Garcia, Eloi S; Genta, Fernando A; Giraldo-Calderón, Gloria I; Gomes, Bruno; Gondim, Katia C; Granzotto, Adriana; Guarneri, Alessandra A; Guigó, Roderic; Harry, Myriam; Hughes, Daniel S T; Jablonka, Willy; Jacquin-Joly, Emmanuelle; Juárez, M Patricia; Koerich, Leonardo B; Lange, Angela B; Latorre-Estivalis, José Manuel; Lavore, Andrés; Lawrence, Gena G; Lazoski, Cristiano; Lazzari, Claudio R; Lopes, Raphael R; Lorenzo, Marcelo G; Lugon, Magda D; Majerowicz, David; Marcet, Paula L; Mariotti, Marco; Masuda, Hatisaburo; Megy, Karine; Melo, Ana C A; Missirlis, Fanis; Mota, Theo; Noriega, Fernando G; Nouzova, Marcela; Nunes, Rodrigo D; Oliveira, Raquel L L; Oliveira-Silveira, Gilbert; Ons, Sheila; Orchard, Ian; Pagola, Lucia; Paiva-Silva, Gabriela O; Pascual, Agustina; Pavan, Marcio G; Pedrini, Nicolás; Peixoto, Alexandre A; Pereira, Marcos H; Pike, Andrew; Polycarpo, Carla; Prosdocimi, Francisco; Ribeiro-Rodrigues, Rodrigo; Robertson, Hugh M; Salerno, Ana Paula; Salmon, Didier; Santesmasses, Didac; Schama, Renata; Seabra-Junior, Eloy S; Silva-Cardoso, Livia; Silva-Neto, Mario A C; 
Souza-Gomes, Matheus; Sterkel, Marcos; Taracena, Mabel L; Tojo, Marta; Tu, Zhijian Jake; Tubio, Jose M C; Ursic-Bedoya, Raul; Venancio, Thiago M; Walter-Nuno, Ana Beatriz; Wilson, Derek; Warren, Wesley C; Wilson, Richard K; Huebner, Erwin; Dotson, Ellen M; Oliveira, Pedro L
2015-12-01
Rhodnius prolixus not only has served as a model organism for the study of insect physiology, but also is a major vector of Chagas disease, an illness that affects approximately seven million people worldwide. We sequenced the genome of R. prolixus, generated assembled sequences covering 95% of the genome (∼ 702 Mb), including 15,456 putative protein-coding genes, and completed comprehensive genomic analyses of this obligate blood-feeding insect. Although immune-deficiency (IMD)-mediated immune responses were observed, R. prolixus putatively lacks key components of the IMD pathway, suggesting a reorganization of the canonical immune signaling network. Although both Toll and IMD effectors controlled intestinal microbiota, neither affected Trypanosoma cruzi, the causal agent of Chagas disease, implying the existence of evasion or tolerance mechanisms. R. prolixus has experienced an extensive loss of selenoprotein genes, with its repertoire reduced to only two proteins, one of which is a selenocysteine-based glutathione peroxidase, the first found in insects. The genome contained actively transcribed, horizontally transferred genes from Wolbachia sp., which showed evidence of codon use evolution toward the insect use pattern. Comparative protein analyses revealed many lineage-specific expansions and putative gene absences in R. prolixus, including tandem expansions of genes related to chemoreception, feeding, and digestion that possibly contributed to the evolution of a blood-feeding lifestyle. The genome assembly and these associated analyses provide critical information on the physiology and evolution of this important vector species and should be instrumental for the development of innovative disease control methods. PMID:26627243
Breathers on quantized superfluid vortices.
Salman, Hayder
2013-10-18
We consider the propagation of breathers along a quantized superfluid vortex. Using the correspondence between the local induction approximation (LIA) and the nonlinear Schrödinger equation, we identify a set of initial conditions corresponding to breather solutions of vortex motion governed by the LIA. These initial conditions, which give rise to a long-wavelength modulational instability, result in the emergence of large amplitude perturbations that are localized in both space and time. The emergent structures on the vortex filament are analogous to loop solitons but arise from the dual action of bending and twisting of the vortex. Although the breather solutions we study are exact solutions of the LIA equations, we demonstrate through full numerical simulations that their key emergent attributes carry over to vortex dynamics governed by the Biot-Savart law and to quantized vortices described by the Gross-Pitaevskii equation. The breather excitations can lead to self-reconnections, a mechanism that can play an important role within the crossover range of scales in superfluid turbulence. Moreover, the observation of breather solutions on vortices in a field model suggests that these solutions are expected to arise in a wide range of other physical contexts from classical vortices to cosmological strings. PMID:24182275
Perceptual quantization of chromatic components
NASA Astrophysics Data System (ADS)
Saadane, Abdelhakim; Bedat, Laurent; Barba, Dominique
1998-07-01
In order to achieve a color image coding based on human visual system features, we designed a perceptually based quantizer. The cardinal directions Ach, Cr1, and Cr2, identified by Krauskopf from habituation experiments and validated in our laboratory through spatial masking experiments, were used to characterize color images. The achromatic component, already treated in a previous study, is not considered here. The same methodology was applied to the two chromatic components to specify the decision thresholds and the reconstruction levels that ensure the induced degradations remain below their visibility thresholds. Two observers were used for each of the two components. The values obtained for the Cr1 component show that the decision thresholds and reconstruction levels follow a linear law even at higher levels, whereas for the Cr2 component the values follow a monotonically increasing function. To determine whether these behaviors are frequency dependent, further experiments were conducted with stimulus frequencies varying from 1 cy/deg to 4 cy/deg; the measured values show no significant variation. Finally, filtered textures were used instead of sinusoidal stimuli to take the spatio-frequency combination into account. The same laws (linear for Cr1, monotonically increasing for Cr2) were observed, although a variation in the quantization intervals is reported.
Weak associativity and deformation quantization
NASA Astrophysics Data System (ADS)
Kupriyanov, V. G.
2016-09-01
Non-commutativity and non-associativity are quite natural in string theory. For open strings, non-commutativity appears due to the presence of a non-vanishing background two-form on the world volume of the Dirichlet brane, while in closed string theory flux compactifications with a non-vanishing three-form also lead to non-geometric backgrounds. In this paper, working in the framework of deformation quantization, we study the violation of associativity by imposing the condition that the associator of three elements should vanish whenever any two of them are equal. The corresponding star products are called alternative and satisfy properties important for physical applications, such as the Moufang identities, the alternative identities, and Artin's theorem. The condition of alternativity is invariant under gauge transformations, just as in the associative case. The price to pay is a restriction on the non-associative algebras that can be represented by an alternative star product: they must satisfy the Malcev identity. An example of a nontrivial Malcev algebra is the algebra of imaginary octonions, and for this case we construct an explicit expression for the non-associative and alternative star product. We also discuss the quantization of Malcev-Poisson algebras of general form, study their properties, and provide the lowest-order expression for the alternative star product. To conclude, we define integration on the algebra of alternative star products and show that the integrated associator vanishes.
Quantization of higher spin fields
Wagenaar, J. W.; Rijken, T. A.
2009-11-15
In this article we quantize (massive) higher spin fields (1 ≤ j ≤ 2) by means of Dirac's constrained Hamilton procedure, both when they are totally free and when they are coupled to (an) auxiliary field(s). A full constraint analysis and quantization is presented by determining and discussing all constraints and Lagrange multipliers and by giving all equal-time (anti)commutation relations. We also construct the relevant propagators. In the free case we obtain the well-known propagators and show that they are not covariant, which is also well known. In the coupled case we do obtain covariant propagators (in the spin-3/2 case this requires b = 0) and show that they have a smooth massless limit connecting perfectly to the massless case (with auxiliary fields). We note that in our system, in the spin-3/2 and spin-2 cases, the massive propagators coupled to conserved currents only have a smooth limit to the pure massless spin propagator when there are ghosts in the massive case.
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that −log ψ, −log φ are plurisubharmonic, and z ∈ Ω a point at which −log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Covariant Photon Quantization in the SME
NASA Astrophysics Data System (ADS)
Colladay, D.
2014-01-01
The Gupta-Bleuler quantization procedure is applied to the SME photon sector. A direct application of the method to the massless case fails due to an unavoidable incompleteness in the polarization states. A mass term can be included in the photon Lagrangian to rescue the quantization procedure and maintain covariance.
Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon
2016-01-01
Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information. PMID:26805849
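The dimension-reducing attribute described above, the magnitude of the normal position vector, is the distance from the origin to the local tangent plane along the point's surface normal; coplanar points share this scalar, so a single accumulator axis suffices. A minimal sketch of that computation (the normals are assumed to be estimated beforehand; the function name is illustrative, not from the paper):

```python
# Attribute for accumulator-array clustering: for a laser point p with
# estimated unit normal n, the normal position vector is (p . n) * n and
# its magnitude |p . n| is the origin-to-tangent-plane distance.
def normal_position_magnitude(point, normal):
    d = sum(p * n for p, n in zip(point, normal))  # signed plane offset p . n
    return abs(d)

# Two points on the plane z = 5 (unit normal (0, 0, 1)) share the attribute:
print(normal_position_magnitude((1.0, 2.0, 5.0), (0.0, 0.0, 1.0)))   # 5.0
print(normal_position_magnitude((-3.0, 7.0, 5.0), (0.0, 0.0, 1.0)))  # 5.0
```

Points belonging to the same plane thus fall into the same accumulator bin regardless of where on the plane they lie.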
Hybrid quantization of an inflationary model: The flat case
NASA Astrophysics Data System (ADS)
Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier
2013-08-01
We present a complete quantization of an approximately homogeneous and isotropic universe with small scalar perturbations. We consider the case in which the matter content is a minimally coupled scalar field and the spatial sections are flat and compact, with the topology of a three-torus. The quantization is carried out along the lines that were put forward by the authors in a previous work for spherical topology. The action of the system is truncated at second order in perturbations. The local gauge freedom is fixed at the classical level, although different gauges are discussed and shown to lead to equivalent conclusions. Moreover, descriptions in terms of gauge-invariant quantities are considered. The reduced system is proven to admit a symplectic structure, and its dynamical evolution is dictated by a Hamiltonian constraint. Then, the background geometry is polymerically quantized, while a Fock representation is adopted for the inhomogeneities. The latter is selected by uniqueness criteria adapted from quantum field theory in curved spacetimes, which determine a specific scaling of the perturbations. In our hybrid quantization, we promote the Hamiltonian constraint to an operator on the kinematical Hilbert space. If the zero mode of the scalar field is interpreted as a relational time, a suitable ansatz for the dependence of the physical states on the polymeric degrees of freedom leads to a quantum wave equation for the evolution of the perturbations. Alternatively, the solutions to the quantum constraint can be characterized by their initial data on the minimum-volume section of each superselection sector. The physical implications of this model will be addressed in a future work, in order to check whether they are compatible with observations.
Quantized conic sections; quantum gravity
Noyes, H.P.
1993-03-15
Starting from free relativistic particles whose position and velocity can only be measured to a precision ⟨Δr Δv⟩ ≡ ±k/2 m² s⁻¹, we use the relativistic conservation laws to define the relative motion of the coordinate r = r₁ − r₂ of two particles of mass m₁, m₂ and relative velocity v = βc = (k₁ − k₂)/(k₁ + k₂) in terms of the conic section equation v² = Γ[2/r ± 1/a], where "+" corresponds to hyperbolic and "−" to elliptical trajectories. The equation is quantized by expressing Kepler's Second Law as conservation of angular momentum per unit mass in units of k. The principal quantum number is n ≡ j + 1/2, with A²/T² = (n − 1)n k² ≡ ℓ(ℓ + 1)k², where ℓ = n − 1 is the angular momentum quantum number for circular orbits. In a sense, we obtain "spin" from this quantization. Since Γ/a cannot reach c² without predicting either circular or asymptotic velocities equal to the limiting velocity for particulate motion, we can also quantize velocities in terms of the principal quantum number by defining β_n² = v_n²/c² = (1/n²)(Γ/c²a) ≡ (1/nN_Γ)². For Z₁e, Z₂e of the same sign and α ≡ e²/(m_e κc), we find that Γ/c²a = Z₁Z₂α. The characteristic Coulomb parameter η(n) ≡ Z₁Z₂α/β_n = Z₁Z₂ nN_Γ then specifies the penetration factor C²(η) = 2πη/(e^{2πη} − 1). For unlike charges, with η still taken as positive, C²(−η) = 2πη/(1 − e^{−2πη}).
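The Coulomb penetration factors quoted at the end of the abstract can be evaluated directly; a small numeric sketch (the η value used below is illustrative, not taken from the paper):

```python
# Coulomb penetration factors as given in the abstract:
#   like charges:   C^2(eta)  = 2*pi*eta / (exp(2*pi*eta) - 1)
#   unlike charges: C^2(-eta) = 2*pi*eta / (1 - exp(-2*pi*eta))
import math

def penetration_like(eta):
    x = 2 * math.pi * eta
    return x / math.expm1(x)       # strongly suppressed for repulsion

def penetration_unlike(eta):
    x = 2 * math.pi * eta
    return x / -math.expm1(-x)     # enhanced for attraction

# Both factors tend to 1 as eta -> 0; at eta = 1 the asymmetry is large:
print(penetration_like(1.0))   # ~ 0.0118
print(penetration_unlike(1.0))  # ~ 6.29
```

Using `math.expm1` rather than `exp(x) - 1` keeps the small-η limit numerically accurate.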
Variable-rate colour image quantization based on quadtree segmentation
NASA Astrophysics Data System (ADS)
Hu, Y. C.; Li, C. Y.; Chuang, J. C.; Lo, C. C.
2011-09-01
A novel variable-sized block encoding with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality: typically, a higher image quality and a larger storage cost are obtained when a larger palette is used. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold control policy for quadtree segmentation is employed. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality than CIQ with multiple palettes of different sizes.
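The threshold-controlled quadtree idea can be illustrated generically: a block is kept whole while its colour variance stays below a threshold, so smooth regions get large blocks (cheap to store) and busy regions are recursively split. A toy sketch on a single-channel array (the threshold and minimum block size are hypothetical, and this is not the paper's exact algorithm):

```python
# Threshold-controlled quadtree segmentation (toy sketch): a block whose
# variance is <= `thresh` becomes one leaf; otherwise it is split into
# four quadrants, down to a minimum block size.
def quadtree(img, x, y, size, thresh, min_size=2):
    block = [img[r][x:x + size] for r in range(y, y + size)]
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if var <= thresh or size <= min_size:
        return [(x, y, size, mean)]        # one leaf: position, size, mean colour
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, x + dx, y + dy, half, thresh, min_size)
    return leaves

# A uniform image yields one big leaf; a half-and-half image splits once:
flat = [[10] * 8 for _ in range(8)]
split = [[0] * 4 + [100] * 4 for _ in range(8)]
print(len(quadtree(flat, 0, 0, 8, thresh=5.0)))   # 1
print(len(quadtree(split, 0, 0, 8, thresh=5.0)))  # 4
```

The leaf count directly controls the bit rate, which is how a variance threshold trades storage cost against reconstruction quality.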
The Necessity of Quantizing Gravity
NASA Astrophysics Data System (ADS)
Adelman, Jeremy
2016-03-01
The Eppley-Hannah thought experiment is often cited as justification for attempts by theorists to develop a complete, consistent theory of quantum gravity. A modification of the earlier "Heisenberg microscope" argument for the necessity of quantized light, the Eppley-Hannah thought experiment purports to show that purely classical gravitational waves would either not conserve energy or else allow for violations of the uncertainty principle. However, several subsequent papers have cast doubt on the validity of the Eppley-Hannah argument. In this talk, we will show how to resurrect the Eppley-Hannah thought experiment by modifying the original argument in a way that gets around the present criticisms levied against it. With support from the Department of Energy, Grant Number DE-FG02-91ER40674.
Quantized ionic conductance in nanopores
Zwolak, Michael; Lagerqvist, Johan; Di Ventra, Massimiliano
2009-01-01
Ionic transport in nanopores is a fundamentally and technologically important problem in view of its ubiquitous occurrence in biological processes and its impact on DNA sequencing applications. Using microscopic calculations, we show that ion transport may exhibit strong nonlinearities as a function of the pore radius, reminiscent of the conductance quantization steps as a function of the transverse cross section of quantum point contacts. In the present case, however, the conductance steps originate from the break-up of the hydration layers that form around ions in aqueous solution. Once in the pore, the water molecules form wavelike structures due to multiple scattering at the surface of the pore walls and interference with the radial waves around the ion. We discuss these effects as well as the conditions under which the step-like features in the ionic conductance should be experimentally observable.
Cosmology Quantized in Cosmic Time
Weinstein, M
2004-06-03
This paper discusses the problem of inflation in the context of Friedmann-Robertson-Walker cosmology. We show how, after a simple change of variables, to quantize the problem in a way which parallels the classical discussion. The result is that two of the Einstein equations arise as exact equations of motion and one of the usual Einstein equations (suitably quantized) survives as a constraint equation to be imposed on the space of physical states. However, the Friedmann equation, which is also a constraint equation and which is the basis of the Wheeler-DeWitt equation, acquires a welcome quantum correction that becomes significant for small scale factors. We discuss the extension of this result to a full quantum mechanical derivation of the anisotropy (δρ/ρ) in the cosmic microwave background radiation, and the possibility that the extra term in the Friedmann equation could have observable consequences. To clarify the general formalism and explicitly show why we choose to weaken the statement of the Wheeler-DeWitt equation, we apply the general formalism to de Sitter space. After exactly solving the relevant Heisenberg equations of motion, we give a detailed discussion of the subtleties associated with defining physical states and the emergence of the classical theory. This computation provides the striking result that quantum corrections to this long-wavelength limit of gravity eliminate the problem of the big crunch. We also show that the same corrections lead to possibly measurable effects on the CMB radiation. For the sake of completeness, we discuss the special case, Λ = 0, and its relation to Minkowski space. Finally, we suggest interesting ways in which these techniques can be generalized to cast light on the question of chaotic or eternal inflation. In particular, we suggest one can put an experimental lower bound on the distance to a universe with a scale factor very different from our own by looking at its effects on our CMB.
ERIC Educational Resources Information Center
Roche, John
1997-01-01
Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…
Zamora Perea, Elvira; Balta León, Rosario; Palomino Salcedo, Miriam; Brogdon, William G; Devine, Gregor J
2009-01-01
Background: The purpose of this study was to establish whether the "bottle assay", a tool for monitoring insecticide resistance in mosquitoes, can complement and augment the capabilities of the established WHO assay, particularly in resource-poor, logistically challenging environments. Methods: Laboratory-reared Aedes aegypti and field-collected Anopheles darlingi and Anopheles albimanus were used to assess the suitability of locally sourced solvents and formulated insecticides for use with the bottle assay. Using these adapted protocols, the ability of the bottle assay and the WHO assay to discriminate between deltamethrin-resistant Anopheles albimanus populations was compared, and the diagnostic dose of deltamethrin that would identify resistance in currently susceptible populations of An. darlingi and Ae. aegypti was defined. The robustness of the bottle assay during a surveillance exercise in the Amazon was assessed. Results: The bottle assay (using technical or formulated material) and the WHO assay were equally able to differentiate deltamethrin-resistant and susceptible An. albimanus populations. A diagnostic dose of 10 μg a.i./bottle was identified as the most sensitive discriminating dose for characterizing resistance in An. darlingi and Ae. aegypti. Treated bottles, prepared using locally sourced solvents and insecticide formulations, can be stored for > 14 days and used three times. Bottles can be stored and transported under local conditions, and field assays can be completed in a single evening. Conclusion: The flexible and portable nature of the bottle assay and the ready availability of its components make it a potentially robust and useful tool for monitoring insecticide resistance and efficacy in remote areas where low-cost tools are required. PMID:19728871
Quantized vortices around wavefront nodes, 2
NASA Technical Reports Server (NTRS)
Hirschfelder, J. O.; Goebel, C. J.; Bruch, L. W.
1974-01-01
Quantized vortices can occur around nodal points in wavefunctions. The derivation depends only on the wavefunction being single valued, continuous, and having continuous first derivatives. Since the derivation does not depend upon the dynamical equations, the quantized vortices are expected to occur for many types of waves such as electromagnetic and acoustic. Such vortices have appeared in the calculations of the H + H2 molecular collisions and play a role in the chemical kinetics. In a companion paper, it is shown that quantized vortices occur when optical waves are internally reflected from the face of a prism or particle beams are reflected from potential energy barriers.
A note on quantizations of Galois extensions
NASA Astrophysics Data System (ADS)
İlhan, Aslı Güçlükan
2014-12-01
In Huru and Lychagin (2013), it is conjectured that the quantizations of splitting fields of products of quadratic polynomials, which are obtained by deforming the multiplication, are Clifford type algebras. In this paper, we prove this conjecture.
Loop quantization of Schwarzschild interior revisited
NASA Astrophysics Data System (ADS)
Singh, Parampreet; Corichi, Alejandro
2016-03-01
Several studies of different inequivalent loop quantizations have shown that there exists no fully satisfactory quantum theory for the Schwarzschild interior: existing quantizations fail either through dependence on the fiducial structure or through the lack of a classical limit. Here we put forward a novel viewpoint to construct the quantum theory that overcomes all of the known problems of the existing quantizations. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying the vacuum Einstein equations is recovered on the "other side" of the bounce. We argue that such a metric represents the interior region of a white-hole spacetime, but one for which the corresponding "white-hole mass" differs from the original black-hole mass. We compare the differences in physical implications with other quantizations.
Torus quantization of symmetrically excited helium
Mueller, J.; Burgdoerfer, J. (Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6377); Noid, D.
1992-02-01
The recent discovery by Richter and Wintgen (J. Phys. B 23, L197 (1990)) that the classical helium atom is not globally ergodic has stimulated renewed interest in its semiclassical quantization. The Einstein-Brillouin-Keller quantization of Kolmogorov-Arnold-Moser tori around stable periodic orbits becomes locally possible in a selected region of phase space. Using a hyperspherical representation we have found a dynamically confining potential allowing for stable motion near the Wannier ridge. The resulting semiclassical eigenenergies provide a test for full quantum calculations in the limit of very high quantum numbers. The relations to frequently used group-theoretical classifications for doubly excited states and to the periodic-orbit quantization of the chaotic portion of the phase space are discussed. The extrapolation of the semiclassical quantization to low-lying states gives remarkably accurate estimates for the energies of all symmetric {ital L}=0 states of helium.
Towards quantized current arbitrary waveform synthesis
NASA Astrophysics Data System (ADS)
Mirovsky, P.; Fricke, L.; Hohls, F.; Kaestner, B.; Leicht, Ch.; Pierz, K.; Melcher, J.; Schumacher, H. W.
2013-06-01
The generation of ac-modulated quantized current waveforms using a semiconductor non-adiabatic single-electron pump is demonstrated. In standard operation, the single-electron pump generates a quantized output current of I = ef, where e is the charge of the electron and f is the pumping frequency. Suitable frequency modulation of f allows the generation of ac-modulated output currents with different characteristics. With sinusoidal and sawtooth-like modulation of f, correspondingly modulated quantized current waveforms with kHz modulation frequencies and peak currents of up to 100 pA are obtained. Such ac quantized current sources could find applications ranging from precision ac metrology to on-chip signal generation.
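As a quick numerical companion to the relation I = ef described above, the following sketch computes the pump current for a given frequency and a hypothetical sinusoidal modulation; the modulation parameters are illustrative, not values from the paper.

```python
import math

# Sketch of the quantized-current relation I = e*f, plus a hypothetical
# sinusoidal frequency modulation f(t) = f0 + df*sin(2*pi*f_mod*t).
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def pump_current(f_hz):
    """Quantized output current of a single-electron pump, I = e*f."""
    return E_CHARGE * f_hz

def modulated_current(t, f0, df, f_mod):
    """Current when the pump frequency is sinusoidally modulated,
    so the output current follows as I(t) = e*f(t)."""
    return pump_current(f0 + df * math.sin(2 * math.pi * f_mod * t))

# Frequency needed for a 100 pA peak current: f = I/e, roughly 624 MHz.
f_peak = 100e-12 / E_CHARGE
print(f"f for 100 pA: {f_peak / 1e6:.0f} MHz")
```

For instance, `pump_current(1e9)` gives the current of a pump driven at 1 GHz, about 160 pA.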
Topologies on quantum topoi induced by quantization
Nakayama, Kunji
2013-07-15
In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Parmar, Kulwinder Singh
2016-03-01
This study investigates the accuracy of least-squares support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi, on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). Using the MARS model decreased the average root mean square error (RMSE) of the LSSVM and M5Tree models by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models can be successfully used in estimating monthly river water pollution levels using the AMM, TKN and WT parameters as inputs.
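The RMSE criterion used above to rank the models can be stated in a few lines; the sample values below are made up for illustration and are not the Yamuna River COD data.

```python
import math

def rmse(observed, predicted):
    """Root mean square error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

# Hypothetical monthly COD observations vs. model predictions.
observed  = [12.0, 15.5, 14.2, 18.9, 16.1]
predicted = [11.5, 15.0, 14.8, 18.0, 16.5]
print(f"RMSE = {rmse(observed, predicted):.3f}")
```

A lower RMSE indicates predictions closer to the observations, which is the sense in which the MARS and LSSVM models outperform M5Tree in the study.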
NASA Astrophysics Data System (ADS)
Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang
2011-10-01
The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances, and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Regarded as a special small-sample theory, the SVM avoids the issues that appear in artificial neural network methods, such as difficult determination of the network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to realize effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify the image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image-reconstruction quality.
Controlling charge quantization with quantum fluctuations.
Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F
2016-08-01
In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices. PMID:27488797
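The square-root scaling reported near the ballistic limit can be restated numerically: the residual charge-quantization amplitude scales as the square root of the reflection probability (1 - tau), where tau is the channel transmission. The normalization `amp0` below is a hypothetical constant, not a measured value.

```python
import math

def quantization_amplitude(tau, amp0=1.0):
    """Toy restatement of the near-ballistic scaling: the residual
    charge-quantization amplitude is proportional to sqrt(1 - tau),
    the square root of the electron reflection probability."""
    if not 0.0 <= tau <= 1.0:
        raise ValueError("transmission tau must lie in [0, 1]")
    return amp0 * math.sqrt(1.0 - tau)

print(quantization_amplitude(1.0))   # vanishes at a perfect (ballistic) contact: 0.0
print(quantization_amplitude(0.99))  # ~0.1 of full amplitude at 1% reflection
```

The sketch makes the experimental point concrete: even a 1% reflection probability leaves roughly 10% of the quantization amplitude, so charge discreteness dies off slowly until the contact is almost perfectly ballistic.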
Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing
NASA Astrophysics Data System (ADS)
Rhim, Joong Bum; Varshney, Lav R.; Goyal, Vivek K.
2012-09-01
This paper studies the quantization of prior probabilities, drawn from an ensemble, for distributed detection and data fusion. Design and performance equivalences between a team of N agents tied by a fixed fusion rule and a more powerful single agent are obtained. Effects of identical quantization and diverse quantization are compared. Consideration of perceived common risk enables agents using diverse quantizers to collaborate in hypothesis testing, and it is proven that the minimum mean Bayes risk error is achieved by diverse quantization. The comparison shows that optimal diverse quantization with K cells per quantizer performs as well as optimal identical quantization with N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes risk error as the distortion criterion.
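The stated equivalence between diverse and identical quantization is a simple counting relation, sketched here for concreteness; the function name is our own.

```python
# Numerical restatement of the result above: optimal diverse quantization
# with K cells per quantizer performs as well as optimal identical
# quantization with N*(K-1)+1 cells per quantizer, for N agents.
def equivalent_identical_cells(n_agents, k_cells):
    return n_agents * (k_cells - 1) + 1

# E.g. 3 agents with 4-cell diverse quantizers match a 10-cell identical design.
print(equivalent_identical_cells(3, 4))  # 10
```

The relation shows the advantage of diversity growing linearly in the number of agents N: diverse quantizers effectively multiply the usable resolution of the team.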
Smooth big bounce from affine quantization
NASA Astrophysics Data System (ADS)
Bergeron, Hervé; Dapor, Andrea; Gazeau, Jean Pierre; Małkiewicz, Przemysław
2014-04-01
We examine the possibility of dealing with gravitational singularities on a quantum level through the use of coherent state or wavelet quantization instead of canonical quantization. We consider the Robertson-Walker metric coupled to a perfect fluid. It is the simplest model of a gravitational collapse, and the results obtained here may serve as a useful starting point for more complex investigations in the future. We follow a quantization procedure based on affine coherent states or wavelets built from the unitary irreducible representation of the affine group of the real line with positive dilation. The main feature of our approach is the appearance of a quantum centrifugal potential allowing for regularization of the singularity, essential self-adjointness of the Hamiltonian, and unambiguous quantum dynamical evolution.
Magnetic Flux Quantization of the Landau Problem
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Li, Kang; Long, Shuming; Yuan, Yi
2014-08-01
The Landau problem has very important applications in modern physics, among which the two-dimensional electron gas system and the quantum Hall effect are outstanding. In this paper, we first review the solution of the Pauli equation; then, using the single-electron wave function, we calculate the expectation of the moving area of an ideal two-dimensional electron gas system and the degeneracy per unit area of the electron gas system. From these, the magnetic flux of the electron gas system is obtained. It is shown that the magnetic flux of a two-dimensional electron gas system in a magnetic field is quantized, and that this flux quantization results from the quantization of the expectation of the moving area of the electron gas system.
Virtual topological insulators with real quantized physics
NASA Astrophysics Data System (ADS)
Prodan, Emil
2015-06-01
A concrete strategy is presented for generating strong topological insulators in d + d' dimensions which have quantized physics in d dimensions. Here, d counts the physical and d' the virtual dimensions. It consists of seeking d-dimensional representations of operator algebras which are usually defined in d + d' dimensions, where topological elements display strong topological invariants. The invariants are shown, however, to be fully determined by the physical dimensions, in the sense that their measurement can be done at fixed virtual coordinates. We solve the bulk-boundary correspondence and show that the boundary invariants are also fully determined by the physical coordinates. We analyze the virtual Chern insulator in 1 + 1 dimensions realized in Y. E. Kraus et al., Phys. Rev. Lett. 109, 106402 (2012), 10.1103/PhysRevLett.109.106402 and predict quantized forces at the edges. We generate a topological system in (3 + 1) dimensions, which is predicted to have quantized magnetoelectric response.
Experimental realization of quantized anomalous Hall effect
NASA Astrophysics Data System (ADS)
Xue, Qi-Kun
2014-03-01
The anomalous Hall effect was discovered by Edwin Hall in 1880. In this talk, we report the experimental observation of its quantized version, the quantum anomalous Hall effect (QAHE), in thin films of the Cr-doped (Bi,Sb)2Te3 magnetic topological insulator. At zero magnetic field, the gate-tuned anomalous Hall resistance exhibits a quantized value of h/e2 accompanied by a significant drop of the longitudinal resistance. The longitudinal resistance vanishes under a strong magnetic field, whereas the Hall resistance remains at the quantized value. The realization of the QAHE paves the way for developing low-power-consumption electronics. Implications for observing Majorana fermions and other exotic phenomena in magnetic topological insulators will also be discussed. This work was done in collaboration with Ke He, Yayu Wang, Xucun Ma, Xi Chen, Li Lv, Dai Xi, Zhong Fang and Shoucheng Zhang.
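The quantized Hall resistance h/e2 mentioned above has a definite numerical value; the following check evaluates it from the exact SI values of the Planck constant and the elementary charge.

```python
# Numerical check of the quantized Hall resistance h/e^2 (the von
# Klitzing constant), using the exact SI values of h and e.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

r_k = H_PLANCK / E_CHARGE**2
print(f"h/e^2 = {r_k:.1f} ohm")  # ~25812.8 ohm
```

This is the value at which the gate-tuned anomalous Hall resistance plateaus in the QAHE experiment, with the longitudinal resistance dropping toward zero.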
Deformation quantization for contact interactions and dissipation
NASA Astrophysics Data System (ADS)
Belchev, Borislav Stefanov
This thesis studies deformation quantization and its application to contact interactions and systems with dissipation. We consider the subtleties related to quantization when contact interactions and boundaries are present. We exploit the idea that discontinuous potentials are idealizations that should be realized as limits of smooth potentials. The Wigner functions are found for the Morse potential and in the proper limit they reduce to the Wigner functions for the infinite wall, for the most general (Robin) boundary conditions. This is possible for a very limited subset of the values of the parameters --- so-called fine tuning is necessary. It explains why Dirichlet boundary conditions are used predominantly. Secondly, we consider deformation quantization in relation to dissipative phenomena. For the damped harmonic oscillator we study a method using a modified noncommutative star product. Within this framework we resolve the non-reality problem with the Wigner function and correct the classical limit.