Science.gov

Sample records for adaptive vector quantization

  1. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), multi-spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs the codebook as the data is being encoded, or quantized. This is done by computing each centroid as a recursive moving average, so that the centroids move after every vector is encoded. When computed over a fixed set of vectors, this recursive calculation yields a centroid identical to the conventional batch calculation. This method of centroid calculation can be easily combined with VQ encoding techniques: the quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information, by multiplexing bits into the compressed source data.
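The encode-and-update loop described above can be sketched in a few lines. This is a minimal illustration, assuming Euclidean distance and a per-centroid sample count; the function name is ours, and the side-information channel that keeps the decoder's codebook in sync is omitted.

```python
import numpy as np

def avq_encode(vectors, codebook, counts):
    """Adaptive VQ pass (illustrative): each input vector is quantized to its
    nearest centroid, and that centroid is then updated as a recursive moving
    average so the codebook tracks the data while it is being encoded."""
    indices = []
    for x in vectors:
        dists = np.sum((codebook - x) ** 2, axis=1)   # nearest-neighbour search
        k = int(np.argmin(dists))
        indices.append(k)
        counts[k] += 1
        # recursive moving average: c <- c + (x - c) / n
        codebook[k] += (x - codebook[k]) / counts[k]
    return indices

# toy demo: two well-separated clusters pull the centroids onto their means
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                  rng.normal(5.0, 0.1, (20, 2))])
codebook = np.array([[0.5, 0.5], [4.5, 4.5]])
counts = np.zeros(2)
indices = avq_encode(data, codebook, counts)
```

After the pass, each centroid has converged onto the running mean of the vectors it encoded, which is the "moving centroid" behavior the abstract describes.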

  2. Online Adaptive Vector Quantization with Variable Size Codebook Entries.

    ERIC Educational Resources Information Center

    Constantinescu, Cornel; Storer, James A.

    1994-01-01

    Presents a new image compression algorithm that employs some of the most successful approaches to adaptive lossless compression to perform adaptive online (single pass) vector quantization with variable size codebook entries. Results of tests of the algorithm's effectiveness on standard test images are given. (12 references) (KRN)

  3. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed, namely nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster, so it has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  4. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  5. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
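The normalize-then-quantize structure can be sketched as follows. This is an assumption-laden illustration: the RMS gain estimator and the function names are ours, whereas the paper compares several forward and backward estimators and optimizes their design.

```python
import numpy as np

def gain_adaptive_encode(x, codebook, eps=1e-8):
    """Gain-adaptive VQ encoder (illustrative): estimate a gain, normalize the
    input so its dynamic range is reduced, then code the shape with ordinary VQ."""
    gain = np.sqrt(np.mean(x ** 2)) + eps      # simple forward gain estimate
    shape = x / gain                           # gain-normalized vector
    k = int(np.argmin(np.sum((codebook - shape) ** 2, axis=1)))
    return k, gain

def gain_adaptive_decode(k, gain, codebook):
    return codebook[k] * gain                  # decoder rescales by the gain

# codebook of unit-RMS "shapes"; the input is one shape at 10x the level
codebook = np.array([[1.0, 1.0, -1.0, -1.0],
                     [1.0, -1.0, 1.0, -1.0]])
x = 10.0 * codebook[1]
k, gain = gain_adaptive_encode(x, codebook)
reconstruction = gain_adaptive_decode(k, gain, codebook)
```

The point of the normalization is that one gain-normalized codebook serves inputs at any signal level, which is why the paper needs a special design technique for that codebook.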

  6. Adaptive vector quantization of MR images using online k-means algorithm

    NASA Astrophysics Data System (ADS)

    Shademan, Azad; Zia, Mohammad A.

    2001-12-01

    The k-means algorithm is widely used to design image codecs based on vector quantization (VQ). In this paper, we focus on an adaptive approach that implements VQ using the online version of the k-means algorithm, in which the size of the codebook is adapted continuously to the statistical behavior of the image. Based on a statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. An application of this approach is telemedicine, where sequences of highly correlated medical images, e.g. consecutive brain slices, are transmitted over a low bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images, and simulation results on a sample sequence are given. The proposed method is compared to the standard k-means algorithm in terms of PSNR, MSE, and elapsed time.
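A minimal sketch of an online k-means pass with codebook-size adaptation, under stated assumptions: the learning rate and the simple frequency-share threshold below stand in for the paper's statistically derived thresholds, and all names are illustrative.

```python
import numpy as np

def online_kmeans_prune(data, codebook, lr=0.05, min_share=0.05):
    """One online k-means pass (illustrative): each winning codeword moves a
    step toward the input, and codewords that win too small a share of the
    inputs (low-density clusters) are pruned from the codebook afterwards."""
    codebook = codebook.copy()
    counts = np.zeros(len(codebook))
    for x in data:
        k = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
        counts[k] += 1
        codebook[k] += lr * (x - codebook[k])     # online (stochastic) update
    keep = counts / counts.sum() >= min_share     # drop low-density clusters
    return codebook[keep]

# two real clusters plus one stray codeword that never wins and gets pruned
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                  rng.normal(5.0, 0.1, (30, 2))])
init = np.array([[0.3, 0.3], [4.7, 4.7], [50.0, 50.0]])
adapted = online_kmeans_prune(data, init)
```

Pruning the never-winning codeword is what shrinks the codebook, and with it the index length, which is where the bit-rate efficiency mentioned above comes from.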

  7. Divergence-based vector quantization.

    PubMed

    Villmann, Thomas; Haase, Sven

    2011-05-01

    Supervised and unsupervised vector quantization methods for classification and clustering traditionally use dissimilarities, frequently taken as Euclidean distances. In this article, we investigate the applicability of divergences instead, focusing on online learning. We deduce the mathematical fundamentals for their use in gradient-based online vector quantization algorithms. This relies on the generalized derivatives of the divergences, known as Fréchet derivatives in functional analysis, which reduce in finite-dimensional problems to partial derivatives in a natural way. We demonstrate the application of this methodology to widely used supervised and unsupervised online vector quantization schemes, including self-organizing maps, neural gas, and learning vector quantization. Additionally, principles for hyperparameter optimization and relevance learning for parameterized divergences in the case of supervised vector quantization are given to achieve improved classification accuracy. PMID:21299418
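As a hedged illustration (the particular divergence is our choice, not necessarily the article's worked example), replacing the squared Euclidean distance by the generalized Kullback-Leibler divergence changes the online prototype update as follows:

```latex
D_{\mathrm{GKL}}(x \,\|\, w) = \sum_i \left( x_i \log\frac{x_i}{w_i} - x_i + w_i \right),
\qquad
\frac{\partial D_{\mathrm{GKL}}}{\partial w_i} = 1 - \frac{x_i}{w_i},
```

so the gradient step for the winning prototype becomes $w_i \leftarrow w_i - \varepsilon\,(1 - x_i/w_i)$, in place of the Euclidean step $w_i \leftarrow w_i + \varepsilon\,(x_i - w_i)$; in the finite-dimensional case the Fréchet derivative is exactly this vector of partial derivatives.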

  8. VLSI Processor For Vector Quantization

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

    Pixel intensities in each kernel compared simultaneously with all code vectors. Prototype high-performance, low-power, very-large-scale integrated (VLSI) circuit designed to perform compression of image data by vector-quantization method. Contains relatively simple analog computational cells operating on direct or buffered outputs of photodetectors grouped into blocks in imaging array, yielding vector-quantization code word for each such block in sequence. Scheme exploits parallel-processing nature of vector-quantization architecture, with consequent increase in speed.

  9. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.

  10. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  11. Adaptive image segmentation by quantization

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Yun, David Y.

    1992-12-01

    Segmentation of images into texturally homogeneous regions is a fundamental problem in an image understanding system. Most region-oriented segmentation approaches suffer from the problem of selecting different thresholds for different images. In this paper an adaptive image segmentation method based on vector quantization is presented; it segments images automatically, without preset thresholds. The approach contains a feature extraction module and a two-layer hierarchical clustering module, with a vector quantizer (VQ), implemented by a competitive-learning neural network, in the first layer. A near-optimal competitive learning algorithm (NOLA) is employed to train the vector quantizer; NOLA combines the advantages of both the Kohonen self-organizing feature map (KSFM) and the K-means clustering algorithm. After the VQ is trained, the weights of the network and the number of input vectors clustered by each neuron form a 3-D topological feature map with separable hills aggregated from similar vectors. This overcomes the inability of most other clustering algorithms to visualize the geometric properties of data in a high-dimensional space. The second clustering algorithm operates on the feature map instead of the input set itself. Since the number of units in the feature map is much smaller than the number of feature vectors in the feature set, it is easy to check all peaks and find the 'correct' number of clusters, which is also a key problem in current clustering techniques. In the experiments, we compare our algorithm with the K-means clustering method on a variety of images; the results show that our algorithm achieves better performance.

  12. Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding

    NASA Astrophysics Data System (ADS)

    Yang, Shuping; Lin, Xinggang; Wang, Guijin

    2003-05-01

    The selective enhancement mechanism of Fine-Granular-Scalability (FGS) in MPEG-4 can enhance specific objects under bandwidth variation. A novel technique is proposed for self-adaptive enhancement of regions of interest based on the Motion Vectors (MVs) of the base layer; it suits video sequences with a still background in which only the moving objects are of interest, such as news broadcasts, video surveillance, and Internet education. Motion vectors generated during base-layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation; the MVs of these macroblocks are set to zero to prevent them from being enhanced. A region-growth segmentation algorithm based on MV values separates foreground from background, and post-processing reduces the influence of burst noise so that only the moving regions of interest remain. Applying the result to selective enhancement during enhancement-layer encoding significantly improves the visual quality of the regions of interest in such videos transmitted at different bit rates in our experiments.

  13. Image compression using address-vector quantization

    NASA Astrophysics Data System (ADS)

    Nasrabadi, Nasser M.; Feng, Yushu

    1990-12-01

    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver; otherwise, the address of each block is transmitted individually. The SNR of images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  14. Fast and Adaptive Detection of Pulmonary Nodules in Thoracic CT Images Using a Hierarchical Vector Quantization Scheme

    PubMed Central

    Han, Hao; Li, Lihong; Han, Fangfang; Song, Bowen; Moore, William; Liang, Zhengrong

    2014-01-01

    Computer-aided detection (CADe) of pulmonary nodules is critical to assisting radiologists in early identification of lung cancer from computed tomography (CT) scans. This paper proposes a novel CADe system based on a hierarchical vector quantization (VQ) scheme. Compared with the commonly used simple thresholding approach, high-level VQ yields a more accurate segmentation of the lungs from the chest volume. In identifying initial nodule candidates (INCs) within the lungs, low-level VQ proves to be effective for INC detection and segmentation, as well as computationally efficient compared to existing approaches. False-positive (FP) reduction is conducted via rule-based filtering operations in combination with a feature-based support vector machine classifier. The proposed system was validated on 205 patient cases from the publicly available online LIDC (Lung Image Database Consortium) database, with each case having at least one juxta-pleural nodule annotation. Experimental results demonstrated that our CADe system obtained an overall sensitivity of 82.7% at a specificity of 4 FPs/scan, and 89.2% sensitivity at 4.14 FPs/scan for the classification of juxta-pleural INCs only. The proposed system performs better than comparable CADe systems and demonstrates potential for fast and adaptive detection of pulmonary nodules via CT imaging. PMID:25486657

  15. Systolic architectures for vector quantization

    NASA Technical Reports Server (NTRS)

    Davidson, Grant A.; Cappello, Peter R.; Gersho, Allen

    1988-01-01

    A family of architectural techniques is proposed that offers efficient computation of weighted Euclidean distance measures for nearest-neighbor codebook searching. The general approach uses a single metric comparator chip in conjunction with a linear array of inner product processor chips. Very high vector-quantization (VQ) throughput can be achieved for many speech and image-processing applications. Several alternative configurations allow reasonable tradeoffs between speed and the VLSI chip area required.

  16. Vector quantization for volume rendering

    NASA Technical Reports Server (NTRS)

    Ning, Paul; Hesselink, Lambertus

    1992-01-01

    Volume rendering techniques typically process volumetric data in raw, uncompressed form. As algorithmic and architectural advances improve rendering speeds, however, larger data sets will be evaluated requiring consideration of data storage and transmission issues. In this paper, we analyze the data compression requirements for volume rendering applications and present a solution based on vector quantization. The proposed system compresses volumetric data and then renders images directly from the new data format. Tests on a fluid flow data set demonstrate that good image quality may be achieved at a compression ratio of 17:1 with only a 5 percent cost in additional rendering time.

  17. Image indexing based on vector quantization

    NASA Astrophysics Data System (ADS)

    Grana Romay, Manuel; Rebollo, Israel

    2000-10-01

    We propose computing the color palette of each image in isolation, using vector quantization methods. The image features are then the color palette and the histogram of the color quantization of the image with this palette. As a similarity measure we propose the weighted sum of the differences between the color palettes and the corresponding histograms. This approach allows the database to grow without recomputing the image features and without substantial loss of discriminative power.

  18. Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression

    NASA Astrophysics Data System (ADS)

    Horng, Ming-Huwi

    Vector quantization is a powerful technique in digital image compression. The traditional, widely used Linde-Buzo-Gray (LBG) algorithm tends to converge to a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal codebook for vector quantization. In this paper, we apply a new swarm algorithm, honey bee mating optimization, to construct the codebook of the vector quantizer; the proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with the LBG and PSO-LBG algorithms. Experimental results show that the proposed HBMO-LBG algorithm is more reliable and that its reconstructed images have higher quality than those generated by the other two methods.
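Several records in this listing take the Linde-Buzo-Gray (LBG) algorithm as the baseline codebook design, so a minimal sketch of one batch LBG pass may help; this assumes Euclidean distortion and simple fixed-iteration stopping, and all names are illustrative.

```python
import numpy as np

def lbg(data, codebook, iters=20):
    """Minimal LBG / generalized Lloyd loop (illustrative): alternate a
    nearest-neighbour partition of the training set with a centroid update of
    every codeword. It converges only to a local optimum, which is the
    limitation the swarm-based variants described above try to escape."""
    codebook = codebook.copy()
    for _ in range(iters):
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)              # partition step
        for k in range(len(codebook)):
            cell = data[assign == k]
            if len(cell):                          # centroid step
                codebook[k] = cell.mean(axis=0)
    return codebook

# toy training set with two clusters; the codewords settle on the cluster means
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.0, 0.1, (25, 2)),
                  rng.normal(3.0, 0.1, (25, 2))])
trained = lbg(data, np.array([[0.5, 0.5], [2.5, 2.5]]))
```

With a poor initialization, the same loop can park two codewords in one cluster and never recover, which is the local-optimum behavior that motivates PSO-LBG and HBMO-LBG.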

  19. Segmentation and texture representation with vector quantizers

    NASA Astrophysics Data System (ADS)

    Yuan, Li; Barba, Joseph

    1990-11-01

    An algorithm for segmentation of cell images and the extraction of texture textons based on vector quantization is presented. Initially, a few low-dimensional code vectors are employed in a standard vector quantization algorithm to generate a coarse codebook, a procedure equivalent to histogram sharpening. Representative gray-level values from each coarse code vector are used to construct a larger, fine codebook. Coding the original image with the fine codebook produces a less distorted image and facilitates cell and nuclear extraction. Texture textons are extracted by applying the same algorithm to the cell area, using a larger number of initial code vectors and a larger fine codebook. Applications of the algorithm to cytological specimens are presented.

  20. Image coding with uniform and piecewise-uniform vector quantizers.

    PubMed

    Jeong, D G; Gibson, J D

    1995-01-01

    New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs. PMID:18289966

  21. Vector quantization of 3-D point clouds

    NASA Astrophysics Data System (ADS)

    Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed to a local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides faithful rendering quality even at low bitrates.

  22. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set; this network, named the EAM-codebook, establishes a relation between the training set and the LBG codebook and serves as the new codebook in the next stage. Second, vector quantization is performed by the recall stage of the EAM, using the EAM-codebook as the associative memory; this process generates the set of class indices to which each input vector belongs. Compared with the LBG algorithm, the main advantages of the proposed algorithm are its high processing speed and low demand on resources (system memory); results on image compression and quality are presented.

  23. Quantization noise in adaptive weighting networks

    NASA Astrophysics Data System (ADS)

    Davis, R. M.; Sher, P. J.-S.

    1984-09-01

    Adaptive weighting networks can be implemented using in-phase and quadrature, phase-phase, or phase-amplitude modulators. The statistical properties of the quantization error are derived for each modulator and the quantization noise power produced by the modulators are compared at the output of an adaptive antenna. Other relevant characteristics of the three types of modulators are also discussed.

  24. Scalar-vector quantization of medical images.

    PubMed

    Mohsenian, N; Shahri, H; Nasrabadi, N M

    1996-01-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. The SVQ is a fixed rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation that is typical of coding schemes using variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS). PACS are currently under study for use in an all-digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired. PMID:18285124

  25. Logarithmic Adaptive Quantization Projection for Audio Watermarking

    NASA Astrophysics Data System (ADS)

    Zhao, Xuemin; Guo, Yuhong; Liu, Jian; Yan, Yonghong; Fu, Qiang

    In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity; moreover, conventional methods are sensitive to value-metric scaling attacks. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison with conventional quantization methods using a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in embedding and decoding. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attacks.

  26. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of Internet, large-scale integrated circuit, and computer technology, digital image processing has developed greatly. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, including a higher compression ratio and a simple image-decoding algorithm, and has therefore been widely used in many practical fields. This paper efficiently combines the wavelet analysis method with vector quantization encoding and tests the approach on standard images. The experimental results show a considerable PSNR improvement over the LBG algorithm.

  27. Image Compression on a VLSI Neural-Based Vector Quantizer.

    ERIC Educational Resources Information Center

    Chen, Oscal T.-C.; And Others

    1992-01-01

    Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…

  28. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding based on a first-generation VLSI chip that efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  29. Image Coding By Vector Quantization In A Transformed Domain

    NASA Astrophysics Data System (ADS)

    Labit, C.; Marescq, J. P.

    1986-05-01

    TV images are coded using vector quantization in a transformed domain. The method exploits the spatial redundancy of small 4x4 pixel blocks: first, a DCT (or Hadamard) transform is performed on these blocks. A classification algorithm ranks them into classes based on visual and transform properties. For each class, the high-energy-carrying coefficients are retained, and a codebook is built by vector quantization for the remaining AC part of the transformed blocks. The codewords are referenced by an index; each block is then coded by specifying its DC coefficient and the associated index.

  30. Subband Image Coding Using Entropy-Constrained Residual Vector Quantization.

    ERIC Educational Resources Information Center

    Kossentini, Faouzi; And Others

    1994-01-01

    Discusses a flexible, high performance subband coding system. Residual vector quantization is discussed as a basis for coding subbands, and subband decomposition and bit allocation issues are covered. Experimental results showing the quality achievable at low bit rates are presented. (13 references) (KRN)

  31. Improved vector quantization scheme for grayscale image compression

    NASA Astrophysics Data System (ADS)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.

    2012-06-01

    This paper proposes an improved image coding scheme based on vector quantization. It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the mean value of the image block is used as an alternative block-encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping bit rates low.
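The alternative block-encoding rule can be sketched as a simple per-block decision. This is our illustration of the idea, not the paper's exact rule: the distortion comparison and the threshold are assumptions, and the follow-on linear-prediction and Huffman stages are omitted.

```python
import numpy as np

def encode_block(block, codebook, threshold=1.0):
    """Hybrid block-encoding rule (illustrative): if approximating the block
    by its own mean value beats the best codeword, emit the mean; otherwise
    emit the ordinary VQ codeword index."""
    x = np.asarray(block, dtype=float).ravel()
    dists = np.sum((codebook - x) ** 2, axis=1)
    k = int(np.argmin(dists))
    mean = x.mean()
    if np.sum((x - mean) ** 2) <= threshold * dists[k]:
        return ('mean', mean)        # near-uniform block: mean value suffices
    return ('vq', k)                 # textured block: codeword index

codebook = np.array([[0.0, 1.0, 0.0, 1.0]])      # a single "edge" pattern
flat = encode_block([5.0, 5.0, 5.0, 5.0], codebook)
textured = encode_block([0.0, 1.0, 0.0, 1.0], codebook)
```

The mean branch is what rescues smooth blocks that a small codebook represents poorly, which is the quality problem the abstract targets.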

  32. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  33. Texture Classification Using Local Pattern Based on Vector Quantization.

    PubMed

    Pan, Zhibin; Fan, Hongcheng; Zhang, Li

    2015-12-01

    Local binary pattern (LBP) is a simple and effective descriptor for texture classification. However, it has two main disadvantages: (1) different structural patterns sometimes have the same binary code, and (2) it is sensitive to noise. In order to overcome these disadvantages, we propose a new local descriptor named local vector quantization pattern (LVQP). In LVQP, different kinds of texture images are chosen to train a local pattern codebook, where each distinct structural pattern is described by a unique codeword index. Unlike the original LBP and its many variants, LVQP does not quantize each neighborhood pixel separately to 0/1, but quantizes the whole difference vector between the central pixel and its neighborhood pixels. Since LVQP deals with the structural pattern as a whole, it has high discriminability and is less sensitive to noise. Our experimental results, achieved by using four representative texture databases (Outex, UIUC, CUReT, and Brodatz), show that the proposed LVQP method improves classification accuracy significantly and is more robust to noise. PMID:26353370
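The difference between LBP's per-pixel thresholding and LVQP's whole-vector quantization can be shown in a small sketch; the tiny two-pattern codebook below is an assumption for illustration, whereas the real LVQP codebook is trained from texture images.

```python
import numpy as np

def lvqp_code(center, neighbors, codebook):
    """LVQP-style coding (illustrative): instead of thresholding each
    neighbour-minus-centre difference to 0/1 as LBP does, quantize the whole
    difference vector to the index of its nearest pattern codeword."""
    diff = np.asarray(neighbors, dtype=float) - float(center)
    dists = np.sum((codebook - diff) ** 2, axis=1)
    return int(np.argmin(dists))

# a tiny two-pattern codebook: "all neighbours brighter" vs "all darker"
codebook = np.array([[1.0, 1.0, 1.0, 1.0],
                     [-1.0, -1.0, -1.0, -1.0]])
bright = lvqp_code(10.0, [11.2, 10.9, 11.1, 10.8], codebook)   # diffs ~ +1
dark = lvqp_code(10.0, [9.1, 8.9, 9.0, 9.2], codebook)         # diffs ~ -1
```

Because the whole difference vector is matched against learned patterns, small noise on a single neighbour cannot flip the code the way it flips a single LBP bit.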

  14. Finite-state residual vector quantizer for image coding

    NASA Astrophysics Data System (ADS)

    Huang, Steve S.; Wang, Jia-Shung

    1993-10-01

    Finite state vector quantization (FSVQ) has been proven in recent years to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes a primary limitation of FSVQ when implementation is taken into account: a large amount of memory is required to store the master codebook, and much effort is spent maintaining the state codebook if the master codebook grows too large. This problem can be partially solved by the mean/residual technique (MRVQ), in which the block means and the residual vectors are coded separately. A new hybrid coding scheme called finite state residual vector quantization (FSRVQ) is proposed in this paper to combine the advantages of FSVQ and MRVQ. The codewords in FSRVQ are designed by removing the block means so as to reduce the codebook size. The block means are predicted from the neighboring blocks to reduce the bit rate, and the predicted means are added to the residual vectors so that the state codebooks can be generated entirely. Experimental results indicate that FSRVQ performs uniformly better than both ordinary FSVQ and MRVQ.
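
The mean/residual building block that FSRVQ reuses, coding the block mean separately and vector-quantizing the mean-removed residual, can be sketched roughly as follows (illustrative names and a toy random codebook, not the paper's design):

```python
import numpy as np

def mean_removed_encode(block, residual_codebook):
    """Code the block mean separately and vector-quantize the residual,
    as in mean/residual VQ (the building block FSRVQ reuses)."""
    mean = block.mean()
    residual = (block - mean).ravel()
    dists = np.linalg.norm(residual_codebook - residual, axis=1)
    return mean, int(np.argmin(dists))

def mean_removed_decode(mean, idx, residual_codebook, shape):
    # add the (possibly predicted) mean back onto the residual codeword
    return mean + residual_codebook[idx].reshape(shape)

rng = np.random.default_rng(1)
residual_codebook = rng.normal(size=(8, 16))   # toy codebook, 4x4 blocks
block = rng.normal(loc=100.0, size=(4, 4))
mean, idx = mean_removed_encode(block, residual_codebook)
rec = mean_removed_decode(mean, idx, residual_codebook, block.shape)
```

Removing the mean shrinks the residual codebook; in FSRVQ the mean would additionally be predicted from neighbouring blocks rather than transmitted directly.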

  15. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  16. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
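
The entropy-constrained selection rule underlying designs like EC-RVQ picks the codeword minimizing a Lagrangian cost, distortion plus lambda times rate, rather than distortion alone. A minimal sketch of that rule (toy data; not the paper's multi-stage design):

```python
import numpy as np

def ec_cost(x, codebook, code_lengths, lam):
    """Entropy-constrained selection: minimize distortion + lambda * rate,
    so a slightly worse match with a shorter codeword can win."""
    dists = np.sum((codebook - x) ** 2, axis=1)
    costs = dists + lam * code_lengths
    return int(np.argmin(costs))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
code_lengths = np.array([1.0, 3.0])   # bits; frequent codeword gets a short code
x = np.array([0.6, 0.6])
# with lambda = 0 the nearest codeword (index 1) wins;
# with lambda = 1 the cheaper codeword (index 0) wins
```

Sweeping lambda traces out the rate-distortion tradeoff; the EC-RVQ design algorithm iterates this selection with codebook re-optimization.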

  17. The vector quantization for AVIRIS hyperspectral imagery compression with fixed low bitrate

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Li, Yunsong; Wang, Keyan; Liu, Haiying

    2012-10-01

    Vector quantization is an optimal compression strategy for hyperspectral imagery, but by itself it cannot satisfy fixed-bitrate applications. In this paper, we propose a vector quantization algorithm for AVIRIS hyperspectral imagery compression at a fixed low bitrate. The 2D-TCE lossless compression of the codebook image and index image, codebook reordering, and removal of the water-absorption bands are introduced into classical vector quantization, and bitrate allocation is handled by choosing an appropriate codebook size. Experimental results show that the proposed vector quantization performs better than traditional hyperspectral imagery lossy compression at a fixed low bitrate.

  18. Image Compression Using Vector Quantization with Variable Block Size Division

    NASA Astrophysics Data System (ADS)

    Matsumoto, Hiroki; Kichikawa, Fumito; Sasazaki, Kazuya; Maeda, Junji; Suzuki, Yukinori

    In this paper, we propose a method for compressing a still image using vector quantization (VQ). Local fractal dimension (LFD) is computed to divide an image into blocks of variable size. The LFD measures the complexity of local regions of an image, so that a region showing higher LFD values than other regions is partitioned into small blocks of pixels, while a region showing lower LFD values is partitioned into large blocks. Furthermore, we developed a division and merging algorithm to decrease the number of blocks to encode, which improves the compression rate. We construct code books for the respective block sizes. To encode an image, a block of pixels is transformed by the discrete cosine transform (DCT) and the closest vector is chosen from the code book (CB). In decoding, the code vector corresponding to the index is selected from the CB and then transformed by the inverse DCT to reconstruct a block of pixels. Computational experiments were carried out to show the effectiveness of the proposed method. Performance of the proposed method is slightly better than that of JPEG. When the learning images used to construct a CB differ from the test images, the compression rate is comparable to those of previously proposed methods, while image quality evaluated by NPIQM (normalized perceptual image quality measure) is close to the highest. The results show that the proposed method is effective for still image compression.
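
The complexity-driven variable block division can be sketched with a quadtree split; here block variance stands in for the paper's local fractal dimension, and all names are illustrative:

```python
import numpy as np

def split_blocks(img, min_size, threshold):
    """Recursively split a block when its complexity is high; variance is
    used here as a stand-in for the paper's local fractal dimension."""
    blocks = []
    def recurse(y, x, size):
        region = img[y:y + size, x:x + size]
        if size > min_size and region.var() > threshold:
            h = size // 2
            for dy in (0, h):
                for dx in (0, h):
                    recurse(y + dy, x + dx, h)
        else:
            blocks.append((y, x, size))
    recurse(0, 0, img.shape[0])
    return blocks

rng = np.random.default_rng(2)
img = np.zeros((16, 16))
img[:8, :8] = rng.normal(size=(8, 8))   # one busy quadrant
blocks = split_blocks(img, 4, 0.1)
# the busy quadrant splits into 4x4 blocks; flat quadrants stay 8x8
```

Each resulting block size would then be encoded against its own codebook, as the method describes.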

  20. Round Randomized Learning Vector Quantization for Brain Tumor Imaging.

    PubMed

    Sheikh Abdullah, Siti Norul Huda; Bohani, Farah Aqilah; Nayef, Baher H; Sahran, Shahnorbanun; Al Akash, Omar; Iqbal Hussain, Rizuana; Ismail, Fuad

    2016-01-01

    Brain magnetic resonance imaging (MRI) classification into normal and abnormal is a critical and challenging task. Owing to that, several medical imaging classification techniques have been devised, among which Learning Vector Quantization (LVQ) is one of the most promising. The main goal of this paper is to enhance the performance of the LVQ technique in order to gain higher detection accuracy for brain tumors in MRIs. The classical way of selecting the winner code vector in LVQ is to measure the distance between the input vector and the codebook vectors using the Euclidean distance function. In order to improve the winner selection technique, a round-off function is employed along with the Euclidean distance function. Moreover, in competitive learning classifiers, the fitted model is highly dependent on the class distribution. This paper therefore proposes a multiresampling technique through which a better class distribution can be achieved, executed by random selection via preclassification. The test data samples used are brain tumor magnetic resonance images collected from Universiti Kebangsaan Malaysia Medical Center and UCI benchmark data sets. Comparative studies against LVQ1, Multipass LVQ, Hierarchical LVQ, Multilayer Perceptron, and Radial Basis Function classifiers showed that the proposed methods give promising results. PMID:27516807

  1. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  2. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  3. Fast clustering algorithm for codebook production in image vector quantization

    NASA Astrophysics Data System (ADS)

    Al-Otum, Hazem M.

    2001-04-01

    In this paper, a fast clustering algorithm (FCA) is proposed for vector quantization codebook production. The algorithm avoids iterative averaging of vectors and is based on collecting vectors with similar or closely similar characteristics into corresponding clusters. FCA gives an increase in peak signal-to-noise ratio (PSNR) of about 0.3 - 1.1 dB over the LBG algorithm and reduces the computational cost of codebook production by 10% - 60% at different bit rates. Two FCA modifications are proposed: FCA with limited cluster size 1 and 2 (FCA-LCS1 and FCA-LCS2, respectively). FCA-LCS1 tends to subdivide large clusters into smaller ones, while FCA-LCS2 reduces a predetermined threshold stepwise to reach the required cluster size. FCA-LCS1 and FCA-LCS2 give increases in PSNR of about 0.9 - 1.0 and 0.9 - 1.1 dB, respectively, over the FCA algorithm, at the expense of about a 15% - 25% and 18% - 28% increase in the output codebook size.
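
The single-pass idea behind FCA, assigning each vector to the first cluster whose representative is close enough instead of iteratively re-averaging centroids as LBG does, might look roughly like this (a simplified sketch, not the published algorithm):

```python
import numpy as np

def one_pass_cluster(vectors, threshold):
    """Group vectors by similarity in a single pass, avoiding the
    iterative centroid averaging of LBG (in the spirit of FCA)."""
    clusters = []   # list of (representative, members)
    for v in vectors:
        for rep, members in clusters:
            if np.linalg.norm(v - rep) < threshold:
                members.append(v)
                break
        else:
            # no existing cluster is close enough: start a new one
            clusters.append((v, [v]))
    return clusters

rng = np.random.default_rng(3)
a = rng.normal(loc=0.0, scale=0.1, size=(10, 4))   # tight group near 0
b = rng.normal(loc=5.0, scale=0.1, size=(10, 4))   # tight group near 5
clusters = one_pass_cluster(np.vstack([a, b]), 1.0)
```

The threshold plays the role FCA-LCS2 adjusts stepwise; the cluster representatives become the codebook entries.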

  4. Evaluation of learning vector quantization to classify cotton trash

    NASA Astrophysics Data System (ADS)

    Lieberman, Michael A.; Patil, Rajendra B.

    1997-03-01

    The cotton industry needs a method to identify the type of trash [nonlint material (NLM)] in cotton samples; learning vector quantization (LVQ) is evaluated as that method. LVQ is a classification technique that defines reference vectors (group prototypes) in an N-dimensional feature space (R^N). Normalized trash object features extracted from images of compressed cotton samples define R^N. An unknown NLM object is given the label of the closest reference vector (as defined by Euclidean distance). Different normalized feature spaces and NLM classifications are evaluated and accuracies reported for correctly identifying the NLM type. LVQ is used to partition cotton trash into: (1) bark (B), leaf (L), pepper (P), or stick (S); (2) bark and nonbark (N); or (3) bark, combined leaf and pepper (LP), or stick. Percentage accuracy for correctly identifying 139 pieces of test trash placed on laboratory prepared samples for the three scenarios are (B:95, L:87, P:100, S:88), (B:100, N:97), and (B:95, LP:99, S:88), respectively. Also, LVQ results are compared to previous work using backpropagating neural networks.
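
Classification with LVQ, as used here, labels a sample with the class of its nearest reference vector; training nudges the winning prototype toward or away from the sample. A minimal sketch of one LVQ1 update (toy prototypes; the 'bark'/'leaf' labels are illustrative):

```python
import numpy as np

def lvq1_step(x, label, prototypes, proto_labels, lr=0.1):
    """One LVQ1 update: move the winning prototype toward the sample if
    the labels agree, away from it otherwise."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = int(np.argmin(dists))          # winner = nearest reference vector
    sign = 1.0 if proto_labels[w] == label else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return w

prototypes = np.array([[0.0, 0.0], [10.0, 10.0]])
proto_labels = ["bark", "leaf"]
w = lvq1_step(np.array([1.0, 1.0]), "bark", prototypes, proto_labels)
# winner is prototype 0, which moves toward the sample
```

At classification time only the nearest-prototype rule is applied, exactly the Euclidean-distance labeling the abstract describes.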

  5. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space, identifying significant input segments and subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by a proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.

  6. Novel multivariate vector quantization for effective compression of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Ren, Jinchang; Zhao, Chunhui; Qiao, Tong; Marshall, Stephen

    2014-12-01

    Although hyperspectral imagery (HSI) has been successfully deployed in a wide range of applications, it suffers from extremely large data volumes for storage and transmission. Consequently, coding and compression are needed for effective data reduction whilst maintaining the image integrity. In this paper, a multivariate vector quantization (MVQ) approach is proposed for the compression of HSI, where each pixel spectrum is considered as a linear combination of two codewords from the codebook, and the index maps and their corresponding coefficients are separately coded and compressed. A strategy is proposed for effective codebook design, using fuzzy C-means (FCM) to determine the optimal number of clusters of data and the selected codewords for the codebook. Comprehensive experiments on several real datasets are used for performance assessment, including quantitative evaluations to measure the degree of data reduction and the distortion of reconstructed images. Our results indicate that the proposed MVQ approach outperforms conventional VQ and several typical algorithms for effective compression of HSI, where the image quality measured using mean squared error (MSE) is significantly improved even at the same compressed bitrate.
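
The MVQ encoding step, approximating a pixel spectrum as a linear combination of two codewords, can be sketched by brute-force search over codeword pairs with a least-squares fit of the coefficients (illustrative only; the paper's FCM-based codebook design is not shown):

```python
import numpy as np

def mvq_encode(spectrum, codebook):
    """Approximate a pixel spectrum as a linear combination of two
    codewords, trying every pair (brute force; illustrative only)."""
    best = None
    n = len(codebook)
    for i in range(n):
        for j in range(i + 1, n):
            A = np.stack([codebook[i], codebook[j]], axis=1)
            coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
            err = np.linalg.norm(A @ coef - spectrum)
            if best is None or err < best[0]:
                best = (err, (i, j), coef)
    return best   # (error, codeword index pair, coefficients)

codebook = np.eye(4)   # toy codewords, one per band
err, pair, coef = mvq_encode(np.array([2.0, 0.0, 0.0, 3.0]), codebook)
# best pair is (0, 3) with coefficients (2, 3) and zero error
```

Only the index pair and the two coefficients need to be stored per pixel, which is where the compression comes from.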

  7. Novel hybrid classified vector quantization using discrete cosine transform for image compression

    NASA Astrophysics Data System (ADS)

    Al-Fayadh, Ali; Hussain, Abir Jaafar; Lisboa, Paulo; Al-Jumeily, Dhiya

    2009-04-01

    We present a novel image compression technique using a classified vector quantizer and singular value decomposition for the efficient representation of still images. The proposed method is called hybrid classified vector quantization. It involves a simple but efficient classifier-based gradient method in the spatial domain, which employs only one threshold to determine the class of the input image block, and uses three AC coefficients of the discrete cosine transform to determine the orientation of the block without employing any threshold. The proposed technique is benchmarked against standard vector quantizers generated using the k-means algorithm, standard classified vector quantizer schemes, and JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct good visual quality images with higher peak signal-to-noise ratio than the benchmarked techniques, or is competitive with them.

  8. SAR ATR using a modified learning vector quantization algorithm

    NASA Astrophysics Data System (ADS)

    Marinelli, Anne Marie P.; Kaplan, Lance M.; Nasrabadi, Nasser M.

    1999-08-01

    We addressed the problem of classifying 10 target types in imagery formed from synthetic aperture radar (SAR). By executing a group training process, we show how to increase the performance of 10 initial sets of target templates formed by simple averaging. This training process is a modified learning vector quantization (LVQ) algorithm that was previously shown effective with forward-looking infrared (FLIR) imagery. For comparison, we ran the LVQ experiments using coarse, medium, and fine template sets that captured the target pose signature variations over 60 degrees, 40 degrees, and 20 degrees, respectively. Using sequestered test imagery, we evaluated how well the original and post-LVQ template sets classify the 10 target types. We show that after the LVQ training process, the coarse template set outperforms the coarse and medium original sets. And, for a test set that included untrained version variants, we show that classification using coarse template sets nearly matches that of the fine template sets. In a related experiment, we stored 9 initial template sets to classify 9 of the target types and used a threshold to separate the 10th type, previously found to be a 'confusing' type. We used imagery of all 10 targets in the LVQ training process to modify the 9 template sets. Overall classification performance increased slightly and an equalization of the individual target classification rates occurred, as compared to the 10-template experiment. The SAR imagery that we used is publicly available from the Moving and Stationary Target Acquisition and Recognition (MSTAR) program, sponsored by the Defense Advanced Research Projects Agency (DARPA).

  9. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results confirm the precision of our models. PMID:18262939

  10. Vector Quantization of Harmonic Magnitudes in Speech Coding Applications—A Survey and New Technique

    NASA Astrophysics Data System (ADS)

    Chu, Wai C.

    2004-12-01

    A harmonic coder extracts the harmonic components of a signal and represents them efficiently using a few parameters. The principles of harmonic coding have become quite successful and several standardized speech and audio coders are based on it. One of the key issues in harmonic coder design is in the quantization of harmonic magnitudes, where many propositions have appeared in the literature. The objective of this paper is to provide a survey of the various techniques that have appeared in the literature for vector quantization of harmonic magnitudes, with emphasis on those adopted by the major speech coding standards; these include constant magnitude approximation, partial quantization, dimension conversion, and variable-dimension vector quantization (VDVQ). In addition, a refined VDVQ technique is proposed where experimental data are provided to demonstrate its effectiveness.

  11. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
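
The essence of BAQ, scaling each block of signal samples by its own statistics so that a few bits suffice, can be sketched as follows. This is a simplified illustration (a uniform clipped quantizer), not the optimized ENVISAT implementation:

```python
import numpy as np

def baq_encode(block, bits=2):
    """Block adaptive quantization: normalize each block by its own
    statistics, then quantize with a few bits (a simplified BAQ)."""
    scale = np.std(block) or 1.0           # per-block gain, sent as side info
    levels = 2 ** bits
    q = np.clip(np.round(block / scale) + levels // 2, 0, levels - 1)
    return q.astype(int), scale

def baq_decode(q, scale, bits=2):
    # invert the offset and reapply the per-block gain
    return (q - 2 ** bits // 2) * scale

rng = np.random.default_rng(4)
block = rng.normal(scale=3.0, size=(8, 8))   # toy SAR-like signal block
q, scale = baq_encode(block)
rec = baq_decode(q, scale)
```

Because the scale adapts per block, the same 2-bit quantizer tracks blocks of very different signal power, which is what makes on-board implementation with a single ASIC feasible.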

  12. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    SciTech Connect

    Binz, Ernst; Pods, Sonja

    2006-01-04

    In these notes we associate a natural Heisenberg group bundle H_a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^infinity. A representation of the C*-group algebra of H_X^infinity is a quantization. It causes a natural Weyl deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H_a.

  13. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted index.

  14. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    PubMed

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we make a comparative study of a new transform-based compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, then vector quantization with the DWT. The coding phase uses SPIHT coding (set partitioning in hierarchical trees) combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root mean square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method. PMID:27104132

  15. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.

  16. Design of vector quantizer for image compression using self-organizing feature map and surface fitting.

    PubMed

    Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh

    2004-10-01

    We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each code vector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. PMID:15462140

  17. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the image visual quality. PMID:23049544

  18. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least squared error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
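
The O(log N) tree search the chip set exploits descends a binary tree of node vectors, comparing the input against two candidates per level instead of all N codevectors. A small software sketch (toy depth-2 tree; the structure is illustrative):

```python
import numpy as np

def tree_vq_search(x, tree):
    """Tree-structured codebook search: at each level pick the closer of
    two node vectors, descending in O(log N) comparisons instead of O(N)."""
    node = tree
    while isinstance(node, dict):
        left_d = np.linalg.norm(x - node["left"][0])
        right_d = np.linalg.norm(x - node["right"][0])
        node = node["left"][1] if left_d <= right_d else node["right"][1]
    return node   # leaf: index of the chosen codevector

# toy depth-2 tree over 4 codevectors: each entry is (node vector, child)
tree = {
    "left":  (np.array([0.0, 0.0]),
              {"left": (np.array([-1.0, 0.0]), 0),
               "right": (np.array([1.0, 0.0]), 1)}),
    "right": (np.array([10.0, 10.0]),
              {"left": (np.array([9.0, 10.0]), 2),
               "right": (np.array([11.0, 10.0]), 3)}),
}
idx = tree_vq_search(np.array([10.5, 10.0]), tree)
```

The tradeoff noted in the abstract applies here too: the tree search is fast but may miss the globally nearest codevector that a full search would find.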

  19. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.

  20. Vector quantization based on a psychovisual lattice for a visual subband coding scheme

    NASA Astrophysics Data System (ADS)

    Senane, Hakim; Saadane, Abdelhakim; Barba, Dominique

    1997-01-01

    A vector quantization based on a psychovisual lattice is used in a visual-components image coding scheme to achieve a high compression ratio with excellent visual quality. The vector construction methodology preserves the main properties of the human visual system concerning the perception of quantization impairments and takes into account the masking effect due to interaction between subbands with the same radial frequency but different orientations. The vector components are the local band-limited contrasts Cij, defined as the ratio between the luminance Lij at a point belonging to radial subband i and angular sector j, and the average luminance at this location corresponding to the radial frequencies up to subband i-1. Hence the vector dimension depends on the orientation selectivity of the chosen decomposition. The low-pass subband, which is nondirectional, is scalar quantized. The performance of the coding scheme has been evaluated on a set of images in terms of peak SNR, true bit rates, and visual quality. At these rates, no impairments are visible at a distance of 4 times the height of a high-quality TV monitor. The SNRs are about 6 to 8 dB below those of classical subband image coding schemes producing the same visual quality. Due to the use of the local band-limited contrast, the particularity of this approach lies in the structure of the reconstruction error, which is found to be highly correlated with the structure of the original image.

  1. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a technique that allows a large reduction of data storage and computational effort. One of the most recent VQ techniques that handles the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we therefore propose a justification of FDVQ-based hidden Markov models (HMM) by investigating their effectiveness and efficiency in the classification of genotype-image phenotypes. The recognition accuracy of the proposed FDVQ-based HMM (FDVQ-HMM) is evaluated and compared with that of the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM). The experimental results show that the performances of FDVQ-HMM and LBG-HMM are very similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  2. Compression of Medical Images Using Enhanced Vector Quantizer Designed with Self Organizing Feature Maps

    NASA Astrophysics Data System (ADS)

    Dandawate, Yogesh H.; Joshi, Madhuri A.; Umrani, Shrirang

    Nowadays all medical imaging equipment produces digital images, and as non-invasive techniques become cheaper the database of images grows larger. This archive of images reaches a significant size, and in telemedicine-based applications storage and transmission require large memory and bandwidth, respectively. There is a need for compression to save memory space and allow fast transmission over the internet and 3G mobile networks with good-quality decompressed images, even though the compression is lossy. This paper presents a novel approach for designing an enhanced vector quantizer that uses Kohonen's Self-Organizing neural network. The vector quantizer (codebook) is designed by training with a carefully designed training image and a selective training approach. Compressing images with it gives better quality. The quality of the decompressed images is evaluated using various quality measures along with the conventionally used PSNR.

  3. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

    Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the termination condition. We present a novel training algorithm for vector quantization in this paper, which instead uses the convergence of the entropy sequence of each region sequence as the termination condition. Compared with the well-known LBG algorithm, it is simple, fast, and easy to comprehend and control. We test the performance of the algorithm on the typical test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is only a fraction of LBG's.
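
The entropy-based stopping rule can be sketched as a small variant of the usual LBG/k-means loop: instead of watching the distortion sequence, watch the entropy of the region occupancy fractions and stop when it converges. This is only an illustrative sketch under assumed details (deterministic initialization, squared-error distortion), not the paper's exact algorithm.

```python
import numpy as np

def lbg_entropy_stop(data, k, tol=1e-4, max_iter=100):
    """LBG-style codebook training that terminates when the entropy of the
    partition (fraction of training vectors in each region) converges,
    instead of the usual distortion-convergence test."""
    codebook = data[:k].astype(float).copy()       # simple deterministic init
    prev_h = None
    for _ in range(max_iter):
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)                      # nearest-codeword regions
        p = np.bincount(labels, minlength=k) / len(data)
        h = -(p[p > 0] * np.log2(p[p > 0])).sum()  # entropy of the regions
        for j in range(k):                         # centroid (codeword) update
            if (labels == j).any():
                codebook[j] = data[labels == j].mean(0)
        if prev_h is not None and abs(h - prev_h) < tol:
            break                                  # entropy has converged
        prev_h = h
    return codebook, labels

# Two well-separated clusters; the first two samples seed the codebook.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, (50, 2))
b = rng.normal(5.0, 0.1, (50, 2))
data = np.vstack([a[:1], b[:1], a[1:], b[1:]])
cb, labels = lbg_entropy_stop(data, 2)
```

On this data the entropy settles at one bit (two equally occupied regions) after the first pass, so the loop exits early with one codeword near each cluster mean.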

  4. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
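
The level structure described above can be sketched as residual quantization: each level quantizes what the previous level left behind, and transmitting the final residual losslessly makes exact reconstruction possible. The codebooks below are random stand-ins; the real system trains them with full-search VQ.

```python
import numpy as np

def progressive_vq_encode(data, codebooks):
    """Quantize data in levels; each level encodes the residual of the
    previous one. Returns per-level indices plus the final residual,
    which would be losslessly coded for exact reconstruction."""
    indices, residual = [], data.astype(float)
    for cb in codebooks:
        d2 = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)                # full-search nearest codeword
        indices.append(idx)
        residual = residual - cb[idx]     # pass the error to the next level
    return indices, residual

def progressive_vq_decode(indices, residual, codebooks):
    """Sum the selected codewords of every level plus the final residual."""
    out = residual.copy()
    for idx, cb in zip(indices, codebooks):
        out = out + cb[idx]
    return out

rng = np.random.default_rng(0)
data = rng.normal(0, 1, (100, 4))
codebooks = [rng.normal(0, 1, (16, 4)) for _ in range(3)]
idx, res = progressive_vq_encode(data, codebooks)
recon = progressive_vq_decode(idx, res, codebooks)
```

Decoding with only the first few levels gives a progressively refined approximation; including the losslessly coded residual reproduces the input exactly.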

  5. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria corresponding to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into four sections. The first one briefly highlights the importance of newer and better compression algorithms; it also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision under ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in section 5. The third section concentrates the most important data gathered on brightness models, addressing the building of so-called brightness maps (quantification of the human perception of visible object reflectance) in a bi-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous works [1], the Hermite transform has been shown to be a useful and practical solution to efficiently code the energy within an image block, deciding which kind of quantization (scalar or vector) is to be used upon it.
It will also be

  6. Fuzzy Adaptive Quantized Control for a Class of Stochastic Nonlinear Uncertain Systems.

    PubMed

    Liu, Zhi; Wang, Fang; Zhang, Yun; Chen, C L Philip

    2016-02-01

    In this paper, a fuzzy adaptive approach for stochastic strict-feedback nonlinear systems with quantized input signal is developed. In contrast to existing research on the quantized input problem, which focuses on quantized stabilization, this paper considers the quantized tracking problem, which recovers stabilization as a special case. In addition, uncertain nonlinearity and unknown stochastic disturbances are simultaneously considered in the quantized feedback control systems. By putting forward a new nonlinear decomposition of the quantized input, the relationship between the control signal and the quantized signal is established; as a result, the major technical difficulty arising from the piecewise quantized input is overcome. Based on the universal approximation capability of fuzzy logic systems, a novel fuzzy adaptive tracking controller is constructed via the backstepping technique. The proposed controller guarantees that the tracking error converges to a neighborhood of the origin in the sense of probability and that all signals in the closed-loop system remain bounded in probability. Finally, an example illustrates the effectiveness of the proposed control approach. PMID:25751885

  7. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. It requires 3 to 4 million multiplications and additions per second. It combines the advantages of adaptive/predictive coding and code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  8. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For tree-searched VQ, the special case of a Binary Tree-Searched VQ (BTSVQ) is disclosed, with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with the codebook memory assignment shifted one PE past the faulty PE.

  9. Light-Front BRST Quantization of the Vector Schwinger Model with a Photon Mass Term

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.; Sharma, Lalit Kumar

    2014-12-01

    The vector Schwinger model with a mass term for the photon, describing 2D electrodynamics with massless fermions, studied by us recently (UK, Mod. Phys. Lett. A22, 2993 (2007); PoS LC2008, 008 (2008); UK and DSK, Int. J. Mod. Phys. A22, 6183 (2007); UK, Mod. Phys. Lett. A27, 1250157 (2012)), represents a new class of models. This theory becomes gauge-invariant when studied on the light-front, in contrast to the instant-form theory, which is gauge-non-invariant. The light-front Hamiltonian and path integral quantization of this theory has been studied recently by one of us (UK, Mod. Phys. Lett. A27 (No. 27) 1250157 (2012)). In the present work we study the light-front Becchi-Rouet-Stora-Tyutin (BRST) quantization of this theory under appropriate light-cone BRST gauge-fixing. Here the BRST (gauge) symmetry of the theory is maintained even under BRST gauge-fixing, in contrast to its Hamiltonian and path integral quantization, where the gauge symmetry necessarily gets broken under gauge-fixing.

  10. Fast Encoding Method for Image Vector Quantization Based on Multiple Appropriate Features to Estimate Euclidean Distance

    NASA Astrophysics Data System (ADS)

    Pan, Zhibin; Kotani, Koji; Ohmi, Tadahiro

    The encoding process of finding the best-matched codeword (winner) for a certain input vector in image vector quantization (VQ) is computationally very expensive due to the many k-dimensional Euclidean distance computations. In order to speed up the VQ encoding process, it is beneficial to first estimate how large the Euclidean distance between the input vector and a candidate codeword is, using appropriate low-dimensional features of a vector instead of an immediate Euclidean distance computation. If the estimated Euclidean distance is large enough, the current candidate codeword cannot be a winner, so it can be rejected safely, avoiding the actual Euclidean distance computation. The sum (1-D), L2 norm (1-D), and partial sums (2-D) of a vector are used together as the appropriate features in this paper because they are the three simplest features. Four estimations of the Euclidean distance between the input vector and a codeword are then connected to each other by the Cauchy-Schwarz inequality to realize codeword rejection. For typical standard images with very different details (Lena, F-16, Pepper, and Baboon), the remaining must-do actual Euclidean distance computations are clearly reduced, and the total computational cost including all overhead is also clearly reduced compared to the state-of-the-art EEENNS method, while keeping a PSNR equivalent to full search (FS).
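
The simplest of the features named in the abstract, the vector sum, already gives a Cauchy-Schwarz rejection bound: for k-dimensional vectors, d²(x, c) ≥ (sum(x) − sum(c))²/k. The sketch below uses only this one bound (the paper combines it with the L2 norm and partial sums), with illustrative random data; the result is guaranteed identical to a full search.

```python
import numpy as np

def fast_nearest(x, codebook):
    """Full-search-equivalent nearest-codeword search that rejects
    candidates via the sum feature: by the Cauchy-Schwarz inequality,
    d2(x, c) >= (sum(x) - sum(c))**2 / k, so once this lower bound
    exceeds the best distance found, the candidate (and, in sum-sorted
    order, all remaining candidates) can be skipped."""
    k = len(x)
    sums = codebook.sum(1)
    sx = x.sum()
    order = np.argsort(np.abs(sums - sx))   # most promising codewords first
    best_i, best_d2, full = -1, np.inf, 0
    for i in order:
        if (sx - sums[i]) ** 2 / k >= best_d2:
            break                           # bound only grows along `order`
        d2 = ((x - codebook[i]) ** 2).sum() # actual k-dim distance
        full += 1
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i, best_d2, full

rng = np.random.default_rng(2)
codebook = rng.normal(0, 1, (256, 16))
x = rng.normal(0, 1, 16)
i_fast, d2_fast, n_full = fast_nearest(x, codebook)
```

Sorting by the sum feature makes the bound monotone along the visit order, so a single `break` safely discards all remaining candidates at once.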

  11. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require spectral peak counterparts, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.

  12. Reconfigurable VLSI implementation for learning vector quantization with on-chip learning circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyu; An, Fengwei; Chen, Lei; Jürgen Mattausch, Hans

    2016-04-01

    As an alternative to conventional single-instruction-multiple-data (SIMD) mode solutions with massive parallelism for self-organizing-map (SOM) neural network models, this paper reports a memory-based proposal for learning vector quantization (LVQ), which is a variant of SOM. A dual-mode LVQ system, enabling both on-chip learning and classification, is implemented by using a reconfigurable pipeline with parallel p-word input (R-PPPI) architecture. As a consequence of the reuse of R-PPPI for solving the most severe computational demands in both modes, power dissipation and Si-area consumption can be dramatically reduced in comparison to previous LVQ implementations. In addition, the designed LVQ ASIC has high flexibility with respect to feature-vector dimensionality and reference-vector number, allowing the execution of many different machine-learning applications. The fabricated test chip in 180 nm CMOS with parallel 8-word inputs and 102 Kbit on-chip memory achieves low power consumption of 66.38 mW (at 75 MHz and 1.8 V) and a high learning speed of (R + 1) × ⌈d/8⌉ + 10 clock cycles per d-dimensional sample vector, where R is the reference-vector number.

  13. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur edges; thus, the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
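
The median-filter pyramid idea can be sketched in a few lines: replace the linear low-pass filter of the Laplacian pyramid with a 3x3 median filter and keep the differences as detail images. The sketch below omits the decimation step of the real pyramid so that summing the levels reconstructs the input exactly; everything here is an illustrative simplification, not the paper's PCVQ coder.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication, NumPy only."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def median_pyramid(img, levels):
    """Laplacian-style decomposition with an edge-preserving median filter:
    each level stores the detail (current minus median-smoothed image),
    and the last smoothed image is the coarse level."""
    details, current = [], img.astype(float)
    for _ in range(levels):
        smooth = median_filter3(current)
        details.append(current - smooth)   # detail concentrates at edges
        current = smooth
    return details, current

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16)).astype(float)
details, coarse = median_pyramid(img, 3)
recon = coarse + sum(details)              # telescoping sum: exact
```

Because the median filter preserves edges, the detail images stay small away from edges, which is what makes them cheap to vector quantize.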

  14. Online learning vector quantization: a harmonic competition approach based on conservation network.

    PubMed

    Wang, J H; Sun, W D

    1999-01-01

    This paper presents a self-creating neural network in which a conservation principle is incorporated with the competitive learning algorithm to harmonize equi-probable and equi-distortion criteria. Each node is associated with a measure of vitality which is updated after each input presentation. The total amount of vitality in the network at any time is 1, hence the name conservation. Competitive learning based on the vitality conservation principle is near-optimum, in the sense that the problem of trapping in a local minimum is alleviated by adding perturbations to the learning rate during node generation. Combined with a procedure that redistributes the learning-rate variables after generation and removal of nodes, the competitive conservation strategy provides a novel approach to harmonizing equi-error and equi-probable criteria. The training process is smooth and incremental; it not only achieves the biologically plausible learning property but also facilitates systematic derivation of training parameters. Comparison studies on learning vector quantization involving stationary and nonstationary, structured and nonstructured inputs demonstrate that the proposed network outperforms other competitive networks in terms of quantization error, learning speed, and codeword search efficiency. PMID:18252343
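
The conservation idea can be sketched as a competitive-learning step in which each node holds a vitality share, the shares always sum to 1, and the winner's effective learning rate is damped by its own vitality (so over-used nodes stop attracting inputs, pushing the network toward equi-probable use). The update rule below is an illustrative stand-in, not the paper's exact formulation.

```python
import numpy as np

def conservation_step(x, codebook, vitality, base_rate=0.1):
    """One competitive step under a conservation principle: the winner is
    moved toward the input with a rate damped by its own vitality, then
    vitality is redistributed so the total remains exactly 1."""
    d2 = ((codebook - x) ** 2).sum(1)
    w = int(d2.argmin())                     # winning node
    rate = base_rate * (1.0 - vitality[w])   # frequent winners learn less
    codebook[w] += rate * (x - codebook[w])
    vitality *= (1.0 - base_rate)            # decay every share...
    vitality[w] += base_rate                 # ...credit the winner; sum stays 1
    return w

rng = np.random.default_rng(4)
codebook = rng.normal(0, 1, (4, 2))
vitality = np.full(4, 0.25)
for _ in range(200):
    conservation_step(rng.normal(0, 1, 2), codebook, vitality)
```

The update `v <- (1 - r)v` for all nodes plus `+r` for the winner maps a total of 1 back to 1, which is the conservation invariant the abstract describes.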

  15. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  16. Compression of color facial images using feature correction two-stage vector quantization.

    PubMed

    Huang, J; Wang, Y

    1999-01-01

    A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128 x 128 24-b color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly using a first order Huffman coder, this size is further reduced to about 450 bytes. PMID:18262869

  17. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time-consuming nearest Euclidean distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the neuron number in the competition layer, the weight values, and the input and output numbers are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.

  18. Lossless compression of the geostationary imaging Fourier transform spectrometer (GIFTS) data via predictive partitioned vector quantization

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Allen H.-L.; Smuga-Otto, Maciek; Knuteson, Robert; Revercomb, Henry E.; Smith, William L., Sr.

    2007-09-01

    The Geostationary Imaging Fourier Transform Spectrometer (GIFTS), as part of NASA's New Millennium Program, is an advanced instrument to provide high-temporal-resolution measurements of atmospheric temperature and water vapor, which will greatly facilitate the detection of rapid atmospheric changes associated with destructive weather events, including tornadoes, severe thunderstorms, flash floods, and hurricanes. The Committee on Earth Science and Applications from Space under the National Academy of Sciences recommended that NASA and NOAA complete the fabrication, testing, and space qualification of the GIFTS instrument and that they support the international effort to launch GIFTS by 2008. Lossless data compression is critical for the overall success of the GIFTS experiment, or any other very high data rate experiment where the data is to be disseminated to the user community in real-time and archived for scientific studies and climate assessment. In general, lossless data compression is needed for high data rate hyperspectral sounding instruments such as GIFTS for (1) transmitting the data down to the ground within the bandwidth capabilities of the satellite transmitter and ground station receiving system, (2) compressing the data at the ground station for distribution to the user community (as is traditionally performed with GOES data via satellite rebroadcast), and (3) archival of the data without loss of any information content so that it can be used in scientific studies and climate assessment for many years after the date of the measurements. In this paper we study lossless compression of GIFTS data that has been collected as part of the calibration or ground based tests that were conducted in 2006. The predictive partitioned vector quantization (PPVQ) is investigated for higher lossless compression performance. PPVQ consists of linear prediction, channel partitioning and vector quantization. 
It yields an average compression ratio of 4.65 on the GIFTS test

  19. Speech recognition method based on genetic vector quantization and BP neural network

    NASA Astrophysics Data System (ADS)

    Gao, Li'ai; Li, Lihua; Zhou, Jian; Zhao, Qiuxia

    2009-07-01

    Vector quantization is one of the popular codebook design methods for speech recognition at present. In codebook design, the traditional LBG algorithm has the advantage of fast convergence, but it easily falls into local optima and is influenced by the initial codebook. Since the genetic algorithm is capable of finding a global optimum, this paper proposes a hybrid clustering method, GA-L, based on the genetic algorithm and the LBG algorithm to improve the codebook, and then uses genetic neural networks for speech recognition, consequently searching for a globally optimal codebook of the training vector space. The experiments show that the neural network identification method based on the genetic algorithm can escape local maxima and the restrictions of the initial codebook; it is superior to the standard genetic algorithm and the BP neural network algorithm on various sources, and with the same GA-VQ codebook the genetic BP neural network has a higher recognition rate and unique application advantages over the general BP neural network, achieving a win-win in time and efficiency.

  20. Adaptive wavelet methods - Matrix-vector multiplication

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2012-12-01

    The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we shortly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by the off-diagonal decay of the entries of the wavelet stiffness matrix. We propose here a new approach which better utilizes the actual decay of the matrix entries.
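
Why off-diagonal decay enables cheap approximate matrix-vector products can be shown with a toy compression: drop all entries below a threshold and multiply with what remains. This is a generic illustration with an assumed model matrix, not the adaptive scheme of Cohen, Dahmen and DeVore.

```python
import numpy as np

def compressed_matvec(A, x, eps):
    """Approximate matrix-vector product keeping only entries of A with
    magnitude >= eps. With off-diagonal decay few entries survive, so the
    product is cheap while the error stays controlled."""
    A_c = np.where(np.abs(A) >= eps, A, 0.0)
    return A_c @ x, int(np.count_nonzero(A_c))

# Model matrix with off-diagonal decay |a_ij| = (1 + |i-j|)^-3,
# mimicking the decay of a wavelet stiffness matrix.
n = 64
i, j = np.indices((n, n))
A = 1.0 / (1.0 + np.abs(i - j)) ** 3
x = np.ones(n)
y_exact = A @ x
y_approx, kept = compressed_matvec(A, x, 1e-3)
```

Here only a narrow band of diagonals survives the threshold, yet the dropped tail contributes a per-entry error on the order of the tail sum of the decay, which is small.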

  1. Optimization of ion-exchange protein separations using a vector quantizing neural network.

    PubMed

    Klein, E J; Rivera, S L; Porter, J E

    2000-01-01

    In this work, a previously proposed methodology for the optimization of analytical-scale protein separations using ion-exchange chromatography is subjected to two challenging case studies. The optimization methodology uses a Doehlert shell design for design of experiments and a novel criteria function to rank chromatograms in order of desirability. This chromatographic optimization function (COF) accounts for the separation between neighboring peaks, the total number of peaks eluted, and total analysis time. The COF is penalized when undesirable peak geometries (i.e., skewed and/or shouldered peaks) are present, as determined by a vector quantizing neural network. Results of the COF analysis are fit to a quadratic response model, which is optimized with respect to the optimization variables using an advanced Nelder and Mead simplex algorithm. The optimization methodology is tested on two case-study sample mixtures, the first composed of equal parts of lysozyme, conalbumin, bovine serum albumin, and transferrin, and the second containing equal parts of conalbumin, bovine serum albumin, transferrin, beta-lactoglobulin, insulin, and alpha-chymotrypsinogen A. Mobile-phase pH and gradient length are optimized to achieve baseline resolution of all solutes for both case studies in acceptably short analysis times, thus demonstrating the usefulness of the empirical optimization methodology. PMID:10835256

  2. Optimization of a vector quantization codebook for objective evaluation of surgical skill.

    PubMed

    Kowalewski, Timothy M; Rosen, Jacob; Chang, Lily; Sinanan, Mika N; Hannaford, Blake

    2004-01-01

    Surgical robotic systems and virtual reality simulators have introduced an unprecedented precision of measurement for both tool-tissue and tool-surgeon interaction, thus holding promise for more objective analyses of surgical skill. Integrative or averaged metrics such as path length, time-to-task, and success/failure percentages have often been employed towards this end, but these fail to address the processes associated with a surgical task as a dynamic phenomenon. Stochastic tools such as Markov modeling using a 'white-box' approach have proven amenable to this type of analysis. While such an approach reveals the internal structure of the surgical task as a process, it requires a task decomposition based on expert knowledge, which may result in a relatively large and complex model. In this work, a 'black-box' approach with generalized cross-procedural applications is developed; the model is characterized by a compact topology, abstract state definitions, and an optimized codebook size. Data sets of isolated tasks were extracted from the Blue DRAGON database consisting of 30 surgical subjects stratified into six training levels. Vector quantization (VQ) was employed on the entire database, thus synthesizing a lexicon of discrete, task-independent surgical tool/tissue interactions. VQ successfully established a dictionary of 63 surgical code words and displayed non-temporal skill discrimination. VQ allows for a more cross-procedural analysis without relying on a thorough study of the procedure, links the results of the black-box approach to observable phenomena, and reduces the computational cost of the analysis by discretizing a complex, continuous data space. PMID:15544266

  3. [The Identification of Lettuce Varieties by Using Unsupervised Possibilistic Fuzzy Learning Vector Quantization and Near Infrared Spectroscopy].

    PubMed

    Wu, Xiao-hong; Cai, Pei-qiang; Wu, Bin; Sun, Jun; Ji, Gang

    2016-03-01

    To solve the noise sensitivity problem of fuzzy learning vector quantization (FLVQ), unsupervised possibilistic fuzzy learning vector quantization (UPFLVQ) was proposed based on unsupervised possibilistic fuzzy clustering (UPFC). UPFLVQ aims to use the fuzzy membership values and typicality values of UPFC to update the learning rate of the learning vector quantization network and the cluster centers. UPFLVQ is an unsupervised machine learning algorithm and can classify without learning samples. UPFLVQ was used in the identification of lettuce varieties by near infrared spectroscopy (NIS). Short wave and long wave near infrared spectra of three types of lettuces were collected by a FieldSpec 3 portable spectrometer in the wavelength range of 350-2500 nm. When the near infrared spectra were compressed by principal component analysis (PCA), the first three principal components explained 97.50% of the total variance in the near infrared spectra. After fuzzy c-means (FCM) clustering was performed to provide the initial cluster centers of UPFLVQ, UPFLVQ could classify lettuce varieties with the terminal fuzzy membership values and typicality values. The experimental results showed that UPFLVQ together with NIS provides an effective method for the identification of lettuce varieties, with advantages such as fast testing, a high accuracy rate and non-destructive characteristics. UPFLVQ is a clustering algorithm combining UPFC and FLVQ, and it requires no learning samples for the identification of lettuce varieties by NIS. UPFLVQ is suitable for linearly separable data clustering and provides a novel method for fast and nondestructive identification of lettuce varieties. PMID:27400511
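The PCA compression step, measuring how much total variance the leading components explain, can be sketched directly from the singular values of the centered data matrix. The synthetic "spectra" below are an assumption for illustration; they stand in for the NIR measurements.

```python
import numpy as np

def pca_explained_variance(X, n_components):
    """Fraction of total variance captured by the leading principal components."""
    Xc = X - X.mean(axis=0)                  # center each wavelength channel
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values of centered data
    var = s ** 2                             # proportional to component variances
    return var[:n_components].sum() / var.sum()

# synthetic "spectra": 30 samples, 100 wavelengths, driven by 2 latent factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(30, 2))
loadings = rng.normal(size=(2, 100))
X = latent @ loadings + 0.01 * rng.normal(size=(30, 100))
frac = pca_explained_variance(X, 3)
```

With strongly correlated channels, as in real spectra, a handful of components capture nearly all the variance, which is why three components sufficed here.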

  4. Light-Front Quantization of the Vector Schwinger Model with a Photon Mass Term in Faddeevian Regularization

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James P.

    2016-07-01

    In this talk, we study the light-front quantization (LFQ) of the vector Schwinger model with a photon mass term in Faddeevian Regularization, describing two-dimensional electrodynamics with massless fermions but with a mass term for the U(1) gauge field. This theory is gauge-non-invariant (GNI). We construct a gauge-invariant (GI) theory using the Stueckelberg mechanism and then recover the physical content of the original GNI theory from the newly constructed GI theory under some special gauge-fixing conditions (GFC's). We then study the LFQ of this new GI theory.

  5. Lossless compression of weight vectors from an adaptive filter

    SciTech Connect

    Bredemann, M.V.; Elliott, G.R.; Stearns, S.D.

    1994-08-01

    Techniques for lossless waveform compression can be applied to the transmission of weight vectors from an orbiting satellite. The vectors, which are a part of a hybrid analog/digital adaptive filter, are a representation of the radio frequency background seen by the satellite. An approach is used which treats each adaptive weight as a time-varying waveform.

  6. Analytical derivation of distortion constraints and their verification in a learning vector quantization-based target recognition system

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Razzaque, Mohammad A.

    2005-06-01

    We obtain a novel analytical derivation for distortion-related constraints in a neural network- (NN)-based automatic target recognition (ATR) system. We obtain two types of constraints for a realistic ATR system implementation involving a 4-f correlator architecture. The first constraint determines the relative size between the input objects and input correlation filters. The second constraint dictates the limits on the amount of rotation, translation, and scale of input objects for system implementation. We exploit these constraints in recognition of targets varying in rotation, translation, scale, occlusion, and the combination of all of these distortions using a learning vector quantization (LVQ) NN. We present simulation verification of the constraints using both gray-scale images and the Defense Advanced Research Projects Agency's (DARPA's) Moving and Stationary Target Recognition (MSTAR) synthetic aperture radar (SAR) images with different depression and pose angles.

  7. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research. PMID:27314023

  9. Recognition of Manual Actions Using Vector Quantization and Dynamic Time Warping

    NASA Astrophysics Data System (ADS)

    Martin, Marcel; Maycock, Jonathan; Schmidt, Florian Paul; Kramer, Oliver

    The recognition of manual actions, i.e., hand movements, hand postures and gestures, plays an important role in human-computer interaction, yet belongs to a category of particularly difficult recognition tasks. Using a Vicon system to capture 3D spatial data, we investigate the recognition of manual actions in tasks such as pouring a cup of milk and writing into a book. We propose recognizing sequences in multidimensional time-series by first learning a smooth quantization of the data, and then using a variant of dynamic time warping to recognize short sequences of prototypical motions in a long unknown sequence. An experimental analysis validates our approach. Short manual actions are successfully recognized and the approach is shown to be spatially invariant. We also show that the approach speeds up processing while not decreasing recognition performance.
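The matching step described above rests on dynamic time warping, which scores two sequences while allowing them to stretch in time. Below is a minimal textbook sketch of the classic DTW recurrence on 1-D sequences; the paper's variant operates on quantized multidimensional motion data, so the toy sequences here are assumptions for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of diagonal match, insertion, and deletion moves
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

slow = [0, 1, 2, 3, 2, 1, 0]
fast = [0, 2, 3, 1, 0]            # same motion shape, compressed in time
other = [3, 3, 3, 3, 3, 3, 3]     # unrelated motion
d_same = dtw_distance(slow, fast)
d_diff = dtw_distance(slow, other)
```

Because the warping path absorbs tempo differences, the time-compressed version of the same motion scores far closer than an unrelated one.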

  10. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  11. Adaptive support vector regression for UAV flight control.

    PubMed

    Shin, Jongho; Jin Kim, H; Kim, Youdan

    2011-01-01

    This paper explores an application of support vector regression for adaptive control of an unmanned aerial vehicle (UAV). Unlike neural networks, support vector regression (SVR) generates global solutions, because SVR basically solves quadratic programming (QP) problems. With this advantage, the input-output feedback-linearized inverse dynamic model and the compensation term for the inversion error are identified off-line, which we call I-SVR (inversion SVR) and C-SVR (compensation SVR), respectively. In order to compensate for the inversion error and the unexpected uncertainty, an online adaptation algorithm for the C-SVR is proposed. Then, the stability of the overall error dynamics is analyzed by the uniformly ultimately bounded property in the nonlinear system theory. In order to validate the effectiveness of the proposed adaptive controller, numerical simulations are performed on the UAV model. PMID:20970303

  12. LVQ-SMOTE – Learning Vector Quantization based Synthetic Minority Over–sampling Technique for biomedical data

    PubMed Central

    2013-01-01

    Background Over-sampling methods based on the Synthetic Minority Over-sampling Technique (SMOTE) have been proposed for classification problems of imbalanced biomedical data. However, the existing over-sampling methods achieve slightly better or sometimes worse results than the simplest SMOTE. In order to improve the effectiveness of SMOTE, this paper presents a novel over-sampling method using codebooks obtained by learning vector quantization. In general, even when an existing SMOTE is applied to a biomedical dataset, its empty feature space is still so huge that most classification algorithms would not perform well on estimating borderlines between classes. To tackle this problem, our over-sampling method generates synthetic samples which occupy more feature space than the other SMOTE algorithms. In short, our over-sampling method generates useful synthetic samples by referring to actual samples taken from real-world datasets. Results Experiments on eight real-world imbalanced datasets demonstrate that our proposed over-sampling method performs better than the simplest SMOTE on four of five standard classification algorithms. Moreover, the performance of our method increases if the latest SMOTE, called MWMOTE, is used in our algorithm. Experiments on datasets for β-turn types prediction show some important patterns that have not been seen in previous analyses. Conclusions The proposed over-sampling method generates useful synthetic samples for the classification of imbalanced biomedical data. Besides, the proposed over-sampling method is basically compatible with basic classification algorithms and the existing over-sampling methods. PMID:24088532
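For context, the baseline that LVQ-SMOTE improves upon generates synthetic minority samples by interpolating between a minority point and one of its minority-class neighbours. The sketch below is plain SMOTE, not the paper's LVQ-codebook variant; the neighbour count `k` and toy data are assumptions.

```python
import numpy as np

def smote(minority, n_new, k=3, seed=0):
    """Basic SMOTE: each synthetic sample lies on the segment between a
    minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # nearest neighbours, excluding self
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation weight in [0, 1)
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

rng = np.random.default_rng(42)
minority = rng.normal(0, 1, (10, 2))
synthetic = smote(minority, n_new=5)
```

The paper's contribution is to steer this interpolation with LVQ codebooks so that the synthetic samples occupy more of the empty feature space than segment interpolation alone.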

  13. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.041 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
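The compression ratios and bits-per-pixel figures above are related by a simple identity, sketched here under the assumption of 8-bit source samples (the abstract does not state the source bit depth):

```python
def bits_per_pixel(source_bits, ratio):
    """Average coded bits per pixel at a given compression ratio."""
    return source_bits / ratio

vq_huffman = bits_per_pixel(8, 195)   # high-ratio VQ + Huffman setting
vq_mild = bits_per_pixel(8, 18)       # lower-ratio, lower-error setting
```

Under that assumption, 195:1 corresponds to about 0.041 bpp and 18:1 to about 0.44 bpp, consistent with the reported figures.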

  14. Discriminative Common Spatial Pattern Sub-bands Weighting Based on Distinction Sensitive Learning Vector Quantization Method in Motor Imagery Based Brain-computer Interface

    PubMed Central

    Jamaloo, Fatemeh; Mikaeili, Mohammad

    2015-01-01

    Common spatial pattern (CSP) is a method commonly used to enhance the effects of event-related desynchronization and event-related synchronization present in multichannel electroencephalogram-based brain-computer interface (BCI) systems. In the present study, a novel CSP sub-band feature selection has been proposed based on the discriminative information of the features. Besides, a distinction sensitive learning vector quantization based weighting of the selected features has been considered. Finally, after the classification of the weighted features using a support vector machine classifier, the performance of the suggested method has been compared with the existing methods based on frequency band selection, on the same BCI competition datasets. The results show that the proposed method yields superior results on the “ay” subject dataset compared against existing approaches such as sub-band CSP, filter bank CSP (FBCSP), discriminative FBCSP, and sliding window discriminative CSP. PMID:26284171

  15. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM. PMID:27143958

  17. Trypanosoma cruzi: adaptation to its vectors and its hosts

    PubMed Central

    Noireau, François; Diosque, Patricio; Jansen, Ana Maria

    2009-01-01

    American trypanosomiasis is a parasitic zoonosis that occurs throughout Latin America. The etiological agent, Trypanosoma cruzi, is able to infect almost all tissues of its mammalian hosts and spreads in the environment in multifarious transmission cycles that may or may not be connected. This biological plasticity, which is probably the result of the considerable heterogeneity of the taxon, exemplifies a successful adaptation of a parasite resulting in distinct outcomes of infection and a complex epidemiological pattern. In the 1990s, most endemic countries strengthened national control programs to interrupt the transmission of this parasite to humans. However, many obstacles remain to the effective control of the disease. Current knowledge of the different components involved in the elaborate system that is American trypanosomiasis (the protozoan parasite T. cruzi, the triatomine vectors and the many reservoirs of infection), as well as the interactions existing within the system, is still incomplete. The Triatominae probably evolved from predatory reduviids in response to the availability of vertebrate food sources. However, the basic mechanisms of adaptation of some of them to artificial ecotopes remain poorly understood. Nevertheless, these adaptations seem to be associated with behavioral plasticity, a reduction in the genetic repertoire and increasing developmental instability. PMID:19250627

  18. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
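The endpoint of such a cavity quantization procedure is the standard multimode form; the following is a textbook sketch with generic mode labels $\mathbf{k}$ and polarization index $\lambda$, not the paper's specific decomposition formula:

```latex
\hat{H} \;=\; \sum_{\mathbf{k},\lambda} \hbar \omega_{\mathbf{k}}
  \left( \hat{a}^{\dagger}_{\mathbf{k},\lambda}\hat{a}_{\mathbf{k},\lambda} + \tfrac{1}{2} \right),
\qquad
\hat{\mathbf{E}}(\mathbf{r}) \;=\; \sum_{\mathbf{k},\lambda}
  \sqrt{\frac{\hbar \omega_{\mathbf{k}}}{2\varepsilon_0 V}}
  \left( \hat{a}_{\mathbf{k},\lambda} + \hat{a}^{\dagger}_{\mathbf{k},\lambda} \right)
  \mathbf{u}_{\mathbf{k},\lambda}(\mathbf{r}),
```

where the $\mathbf{u}_{\mathbf{k},\lambda}$ are the vector mode functions satisfying the perfect-conductor boundary conditions on the cavity walls, and $V$ is the cavity volume.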

  19. Alice in microbes' land: adaptations and counter-adaptations of vector-borne parasitic protozoa and their hosts.

    PubMed

    Caljon, Guy; De Muylder, Géraldine; Durnez, Lies; Jennes, Wim; Vanaerschot, Manu; Dujardin, Jean-Claude

    2016-09-01

    In the present review, we aim to provide a general introduction to different facets of the arms race between pathogens and their hosts/environment, emphasizing its evolutionary aspects. We focus on vector-borne parasitic protozoa, which have to adapt to both invertebrate and vertebrate hosts. Using Leishmania, Trypanosoma and Plasmodium as main models, we review successively (i) the adaptations and counter-adaptations of parasites and their invertebrate host, (ii) the adaptations and counter-adaptations of parasites and their vertebrate host and (iii) the impact of human interventions (chemotherapy, vaccination, vector control and environmental changes) on these adaptations. We conclude by discussing the practical impact this knowledge can have on translational research and public health. PMID:27400870

  20. Fourth quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2013-12-01

    In this Letter we will analyze the creation of the multiverse. We will first calculate the wave function for the multiverse using third quantization. Then we will fourth-quantize this theory. We will show that there is no single vacuum state for this theory. Thus, we can end up with a multiverse, even after starting from a vacuum state. This will be used as a possible explanation for the creation of the multiverse. We also analyze the effect of interactions in this fourth-quantized theory.

  1. Vector quantization with self-resynchronizing coding for lossless compression and rebroadcast of the NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Hung-Lung; Smith, William L.; Bloom, Hal J.

    2008-08-01

    As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128x128 array of interferograms for the retrieval of such geophysical parameters as atmospheric temperature, moisture, and wind. Given the massive data volumes that would be generated by future advanced satellite sensors such as GIFTS, even state-of-the-art channel coding (e.g., Turbo codes, LDPC) with low BER might not correct all the errors. Due to the error-sensitive, ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed fast precomputed vector quantization (FPVQ) with arithmetic coding (AC), which can produce high compression gain for ground operation. In this paper we adopt FPVQ with reversible variable-length coding (RVLC) to provide better resilience against satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of the GIFTS data. The experiment shows that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.
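The error resilience of RVLC comes from codewords that are decodable in both directions, so a decoder can resynchronize from either end of a corrupted packet. The sketch below demonstrates the idea with a tiny code that is both prefix-free and suffix-free; the codebook is an illustrative assumption, not the codes used for GIFTS.

```python
# A toy RVLC: {'00', '11', '010', '101'} is both prefix-free and suffix-free.
codebook = {'a': '00', 'b': '11', 'c': '010', 'd': '101'}

def decode(bits, book):
    """Greedy decoding of a concatenated bitstream with a prefix-free code."""
    inv = {v: k for k, v in book.items()}
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ''
    return out

msg = ['a', 'c', 'b', 'd']
bits = ''.join(codebook[s] for s in msg)
forward = decode(bits, codebook)

# decoding the reversed stream with reversed codewords recovers the
# message back-to-front -- the property that enables resynchronization
rev_codebook = {k: v[::-1] for k, v in codebook.items()}
backward = decode(bits[::-1], rev_codebook)
```

If a burst error hits the middle of a packet, a forward pass and a backward pass together recover the symbols on both sides of the damage.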

  2. Local adaptation to temperature and the implications for vector-borne diseases.

    PubMed

    Sternberg, Eleanore D; Thomas, Matthew B

    2014-03-01

    Vector life-history traits and parasite development respond in strongly nonlinear ways to changes in temperature. These thermal sensitivities create the potential for climate change to have a marked impact on disease transmission. To date, most research considering impacts of climate change on vector-borne diseases assumes that all populations of a given parasite or vector species respond similarly to temperature, regardless of their source population. This may be an inappropriate assumption because spatial variation in selective pressures such as temperature can lead to local adaptation. We examine evidence for local adaptation in disease vectors and present conceptual models for understanding how local adaptation might modulate the effects of both short- and long-term changes in climate. PMID:24513566

  3. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which makes it well suited to the classic inverted file structure for box indexing. The inverted file, which stores each bit-vector together with the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
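The indexing idea can be sketched as follows: threshold each descriptor dimension to a bit, pack the bits into an integer code, and use that code as the key of an inverted file mapping to (image, box) pairs. The 4-D descriptors and fixed thresholds below are toy assumptions; real SIFT descriptors are 128-D and the paper's quantizer is learned, not a fixed threshold.

```python
from collections import defaultdict

def binarize(desc, thresholds):
    """Quantize a descriptor into an integer bit-code, one bit per dimension."""
    code = 0
    for value, t in zip(desc, thresholds):
        code = (code << 1) | int(value > t)
    return code

# inverted file: bit-code -> list of (image_id, box_id) postings
inverted = defaultdict(list)
descriptors = [
    (0, 0, [0.9, 0.1, 0.8, 0.2]),
    (0, 1, [0.1, 0.9, 0.2, 0.8]),
    (1, 0, [0.85, 0.15, 0.75, 0.25]),
]
thresholds = [0.5, 0.5, 0.5, 0.5]
for img, box, d in descriptors:
    inverted[binarize(d, thresholds)].append((img, box))

# a query descriptor close to the first and third entries hashes to their code
hits = inverted[binarize([0.8, 0.2, 0.9, 0.3], thresholds)]
```

Because lookup is a single integer hash, candidate boxes are retrieved without scanning the database, which is what makes the scheme memory-resident and fast.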

  4. No evidence for local adaptation of dengue viruses to mosquito vector populations in Thailand.

    PubMed

    Fansiri, Thanyalak; Pongsiri, Arissara; Klungthong, Chonticha; Ponlawat, Alongkot; Thaisomboonsuk, Butsaya; Jarman, Richard G; Scott, Thomas W; Lambrechts, Louis

    2016-04-01

    Despite their epidemiological importance, the evolutionary forces that shape the spatial structure of dengue virus genetic diversity are not fully understood. Fine-scale genetic structure of mosquito vector populations and evidence for genotype × genotype interactions between dengue viruses and their mosquito vectors are consistent with the hypothesis that the geographical distribution of dengue virus genetic diversity may reflect viral adaptation to local mosquito populations. To test this hypothesis, we measured vector competence in all sympatric and allopatric combinations of 14 low-passage dengue virus isolates and two wild-type populations of Aedes aegypti mosquitoes sampled in Bangkok and Kamphaeng Phet, two sites located about 300 km apart in Thailand. Despite significant genotype × genotype interactions, we found no evidence for superior vector competence in sympatric versus allopatric vector-virus combinations. Viral phylogenetic analysis revealed no geographical clustering of the 14 isolates, suggesting that high levels of viral migration (gene flow) in Thailand may counteract spatially heterogeneous natural selection. We conclude that it is unlikely that vector-mediated selection is a major driver of dengue virus adaptive evolution at the regional scale that we examined. Dengue virus local adaptation to mosquito vector populations could happen, however, in places or times that we did not test, or at a different geographical scale. PMID:27099625

  5. Use of a local cone model to predict essential CSF light adaptation behavior used in the design of luminance quantization nonlinearities

    NASA Astrophysics Data System (ADS)

    Daly, Scott; Golestaneh, S. A.

    2015-03-01

    The human visual system's luminance nonlinearity ranges continuously from square root behavior in the very dark, gamma-like behavior in dim ambient, cube-root in office lighting, and logarithmic for daylight ranges. Early display quantization nonlinearities were developed based on luminance bipartite JND data. More advanced approaches considered spatial frequency behavior, and used the Barten light-adaptive Contrast Sensitivity Function (CSF) modelled across a range of light adaptation to determine the luminance nonlinearity (e.g., DICOM, referred to as a GSDF {grayscale display function}). A recent approach for a GSDF, also referred to as an electrical-to-optical transfer function (EOTF), improves on this by using that light-adaptive CSF model to track the CSF at its most sensitive spatial frequency, which changes with adaptation level. We explored the cone photoreceptor's contribution to the behavior of this maximum sensitivity of the CSF as a function of light adaptation, despite the CSF's frequency variations and the fact that the cone's nonlinearity is a point-process. We found that parameters of a local cone model could fit the maximum sensitivity of the CSF model, across all frequencies, and are within the ranges of parameters commonly accepted for psychophysically tuned cone models. Thus, a linking of the spatial frequency and luminance dimensions has been made for a key neural component. This provides a better theoretical foundation for the recently designed visual signal format using the aforementioned EOTF.

  6. Local Adaptation and Vector-Mediated Population Structure in Plasmodium vivax Malaria

    PubMed Central

    Gonzalez-Ceron, Lilia; Carlton, Jane M.; Gueye, Amy; Fay, Michael; McCutchan, Thomas F.; Su, Xin-zhuan

    2008-01-01

    Plasmodium vivax in southern Mexico exhibits different infectivities to 2 local mosquito vectors, Anopheles pseudopunctipennis and Anopheles albimanus. Previous work has tied these differences in mosquito infectivity to variation in the central repeat motif of the malaria parasite's circumsporozoite (csp) gene, but subsequent studies have questioned this view. Here we present evidence that P. vivax in southern Mexico comprised 3 genetic populations whose distributions largely mirror those of the 2 mosquito vectors. Additionally, laboratory colony feeding experiments indicate that parasite populations are most compatible with sympatric mosquito species. Our results suggest that reciprocal selection between malaria parasites and mosquito vectors has led to local adaptation of the parasite. Adaptation to local vectors may play an important role in generating population structure in Plasmodium. A better understanding of coevolutionary dynamics between sympatric mosquitoes and parasites will facilitate the identification of molecular mechanisms relevant to disease transmission in nature and provide crucial information for malaria control. PMID:18385220

  7. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements :a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  8. Regularized Estimate of the Weight Vector of an Adaptive Interference Canceller

    NASA Astrophysics Data System (ADS)

    Ermolayev, V. T.; Sorokin, I. S.; Flaksman, A. G.; Yastrebov, A. V.

    2016-05-01

    We consider an adaptive multi-channel interference canceller, which ensures the minimum value of the average output power of interference. It is proposed to form the weight vector of such a canceller as the power-vector expansion. It is shown that this approach allows one to obtain an exact analytical solution for the optimal weight vector by using the procedure of the power-vector orthogonalization. In the case of a limited number of the input-process samples, the solution becomes ill-defined and its regularization is required. An effective regularization method, which ensures a high degree of the interference suppression and does not involve the procedure of inversion of the correlation matrix of interference, is proposed, which significantly reduces the computational cost of the weight-vector estimation.

  10. Adaptive nonseparable vector lifting scheme for digital holographic data compression.

    PubMed

    Xing, Yafei; Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Dufaux, Frédéric

    2015-01-01

    Holographic data play a crucial role in recent three-dimensional imaging as well as microscopic applications. As a result, huge amounts of storage capacity will be involved for this kind of data. Therefore, it becomes necessary to develop efficient hologram compression schemes for storage and transmission purposes. In this paper, we focus on the shifted distance information, obtained by the phase-shifting algorithm, where two sets of difference data need to be encoded. More precisely, a nonseparable vector lifting scheme is investigated in order to exploit the two-dimensional characteristics of the holographic contents. Simulations performed on different digital holograms have shown the effectiveness of the proposed method in terms of bitrate saving and quality of object reconstruction. PMID:25967029

  11. Energy-saving technology of vector controlled induction motor based on the adaptive neuro-controller

    NASA Astrophysics Data System (ADS)

    Engel, E.; Kovalev, I. V.; Karandeev, D.

    2015-10-01

    The ongoing evolution of the power system towards a Smart Grid implies an important role for intelligent technologies, but poses strict requirements on their control schemes to preserve stability and controllability. This paper presents an adaptive neuro-controller for the vector control of an induction motor within the Smart Grid. The validity and effectiveness of the proposed energy-saving technology for a vector controlled induction motor based on an adaptive neuro-controller are verified by simulation results at different operating conditions over a wide speed range of the induction motor.

  12. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
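
    The core idea, replacing the random acceleration coefficients with fitness-derived ones, can be sketched as follows. The objective is a simple quadratic stand-in for the SVM cross-validation error over its parameters (e.g. C and gamma), and the fitness-to-coefficient mapping is illustrative only, not the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Stand-in objective: imagine the SVM cross-validation error as a
    # function of (C, gamma); a quadratic bowl keeps the sketch self-contained.
    return np.sum((x - 3.0) ** 2, axis=-1)

P, D, iters = 20, 2, 100
pos = rng.uniform(-10, 10, (P, D))
vel = np.zeros((P, D))
pbest, pbest_f = pos.copy(), fitness(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    f = fitness(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
    # AAPSO idea: acceleration coefficients computed from particle fitness
    # instead of random numbers (the mapping below is an illustrative guess).
    norm = (f - f.min()) / (np.ptp(f) + 1e-12)
    c1 = (1.0 + norm)[:, None]   # poorer particles trust their own memory more
    c2 = (2.0 - norm)[:, None]   # fitter particles are drawn to the global best
    vel = 0.7 * vel + c1 * (pbest - pos) + c2 * (gbest - pos)
    pos = pos + vel
```

    In the AAPSO-SVM pipeline described above, `fitness` would train and score an SVM for each particle's parameter vector; everything else stays the same.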

  13. Support vector machine based on adaptive acceleration particle swarm optimization.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  14. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
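
    Away from AMR level boundaries, integral-curve computation reduces to standard numerical integration of the velocity field. A minimal RK4 streamline tracer over an analytic field is sketched below; the AMR-specific cell-centered interpolation and level-boundary handling discussed in the abstract are deliberately not modeled.

```python
import numpy as np

def rk4_streamline(v, seed, h=0.05, steps=200):
    """Integrate a streamline through a steady vector field with classic RK4.
    v: callable position -> velocity. On AMR data, v would interpolate the
    cell-centered field and handle level boundaries; here it is analytic."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        x = pts[-1]
        k1 = v(x)
        k2 = v(x + 0.5 * h * k1)
        k3 = v(x + 0.5 * h * k2)
        k4 = v(x + h * k3)
        pts.append(x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

# Circular test field: streamlines are circles around the origin, so the
# distance of the curve from the origin should stay (numerically) constant.
circ = lambda p: np.array([-p[1], p[0]])
curve = rk4_streamline(circ, [1.0, 0.0])
```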

  15. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
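
    The described model, a threshold that is parabolic in log spatial frequency, with the frequency halving at each wavelet level, can be sketched as below. The parameter values are placeholders chosen for illustration, not the fitted constants from the paper.

```python
import math

# Illustrative threshold model: log threshold is a parabola in log spatial
# frequency (a, k, f0 are placeholder values, not the paper's fits).
a, k, f0 = 0.05, 0.5, 3.0   # min threshold, steepness, peak-sensitivity frequency

def threshold(level, orientation_gain, r=32.0):
    """DWT noise detection threshold at a given wavelet level.

    r: display visual resolution in pixels/degree, so the band's spatial
    frequency is f = r * 2**-level; orientation_gain > 1 models the lower
    sensitivity to diagonal bands."""
    f = r * 2.0 ** (-level)
    return a * orientation_gain * 10.0 ** (k * (math.log10(f) - math.log10(f0)) ** 2)

# A quantization step at (or below) twice the threshold keeps the peak error
# of uniform quantization under the visual threshold for that band.
qmatrix = {lvl: 2.0 * threshold(lvl, 1.0) for lvl in range(1, 6)}
```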

  16. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  17. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    NASA Astrophysics Data System (ADS)

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; De Fine Licht, J. C.; Duhem, L.; Elvira, V. D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Sehgal, R.; Shadura, O.; Wenzel, S.

    2015-05-01

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting; we present here its initial results.

  18. Third quantization

    SciTech Connect

    Seligman, Thomas H.; Prosen, Tomaz

    2010-12-23

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. Here we concentrate on the bosonic case, where the unboundedness of the operators requires the definition of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  19. Using unknown input observers for robust adaptive fault detection in vector second-order systems

    NASA Astrophysics Data System (ADS)

    Demetriou, Michael A.

    2005-03-01

    The purpose of this manuscript is to construct natural observers for vector second-order systems by utilising unknown input observer (UIO) methods. This observer is subsequently used for a robust fault detection scheme and also as an adaptive detection scheme for a certain class of actuator faults, wherein the time instance and characteristics of an incipient actuator fault are detected. Stability of the adaptive scheme is provided by a parameter-dependent Lyapunov function for second-order systems. A numerical example of a mechanical system describing an automobile suspension is used to illustrate the theoretical results.

  20. Design of smart composite platforms for adaptive thrust vector control and adaptive laser telescope for satellite applications

    NASA Astrophysics Data System (ADS)

    Ghasemi-Nejhad, Mehrdad N.

    2013-04-01

    This paper presents the design of smart composite platforms for adaptive thrust vector control (TVC) and an adaptive laser telescope for satellite applications. To eliminate disturbances, the proposed adaptive TVC and telescope systems will be mounted on two analogous smart composite platforms with simultaneous precision positioning (pointing) and vibration suppression (stabilizing), SPPVS, with micro-radian pointing resolution, and then mounted on a satellite in two different locations. The adaptive TVC system provides SPPVS with large tip-tilt to potentially eliminate the gimbal systems. The smart composite telescope will be mounted on a smart composite platform with SPPVS and then mounted on a satellite. The laser communication is intended for the Geosynchronous orbit. The high degree of directionality increases the security of the laser communication signal (as opposed to a diffused RF signal), but also requires sophisticated subsystems for transmission and acquisition. The shorter wavelength of the optical spectrum increases the data transmission rates, but laser systems require large amounts of power, which increases the mass and complexity of the supporting systems. In addition, laser communication on the Geosynchronous orbit requires an accurate platform with SPPVS capabilities. Therefore, this work also addresses the design of an active composite platform to be used to simultaneously point and stabilize an intersatellite laser communication telescope with micro-radian pointing resolution. The telescope is a Cassegrain receiver that employs two mirrors, one concave (primary) and the other convex (secondary). The distance, as well as the horizontal and axial alignment of the mirrors, must be precisely maintained or else the optical properties of the system will be severely degraded. The alignment will also have to be maintained during thruster firings, which will require vibration suppression capabilities of the system as well.
The innovative platform has been

  1. Adaptive vector validation in image velocimetry to minimise the influence of outlier clusters

    NASA Astrophysics Data System (ADS)

    Masullo, Alessandro; Theunissen, Raf

    2016-03-01

    The universal outlier detection scheme (Westerweel and Scarano in Exp Fluids 39:1096-1100, 2005) and the distance-weighted universal outlier detection scheme for unstructured data (Duncan et al. in Meas Sci Technol 21:057002, 2010) are the most common PIV data validation routines. However, such techniques rely on a spatial comparison of each vector with those in a fixed-size neighbourhood, and their performance subsequently suffers in the presence of clusters of outliers. This paper proposes an advancement to render outlier detection more robust while reducing the probability of mistakenly invalidating correct vectors. Velocity fields undergo a preliminary evaluation in terms of local coherency, which parametrises the extent of the neighbourhood with which each vector will be compared subsequently. Such adaptivity is shown to reduce the number of undetected outliers, even when implemented in the aforementioned validation schemes. In addition, the authors present an alternative residual definition considering vector magnitude and angle, adopting a modified Gaussian-weighted distance-based averaging median. This procedure is able to adapt the degree of acceptable background fluctuations in velocity to the local displacement magnitude. The traditional, extended and recommended validation methods are numerically assessed on the basis of flow fields from an isolated vortex, a turbulent channel flow and a DNS simulation of forced isotropic turbulence. The resulting validation method is adaptive, requires no user-defined parameters and is demonstrated to yield the best performance in terms of outlier under- and over-detection. Finally, the novel validation routine is applied to the PIV analysis of experimental studies focused on the near wake behind a porous disc and on a supersonic jet, illustrating the potential gains in spatial resolution and accuracy.
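
    For reference, the fixed-neighbourhood universal outlier detection scheme that this paper extends can be sketched as follows (structured grid, one velocity component, 3x3 neighbourhood, with the standard epsilon = 0.1 and threshold = 2 values of Westerweel and Scarano):

```python
import numpy as np

def normalized_median_test(u, eps=0.1, thresh=2.0):
    """Universal outlier detection (Westerweel & Scarano 2005) on one velocity
    component of a structured PIV field; returns a boolean outlier mask.
    The neighbourhood size is fixed at 3x3 -- the quantity this abstract's
    local-coherency evaluation makes adaptive."""
    ny, nx = u.shape
    mask = np.zeros((ny, nx), dtype=bool)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = np.delete(u[j-1:j+2, i-1:i+2].ravel(), 4)  # 8 neighbours
            um = np.median(nb)
            rm = np.median(np.abs(nb - um))                  # neighbour residuals
            mask[j, i] = abs(u[j, i] - um) / (rm + eps) > thresh
    return mask

# Synthetic field with a single spurious vector
field = np.ones((8, 8))
field[4, 4] = 15.0
```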

  2. Adaptive strategies of African horse sickness virus to facilitate vector transmission

    PubMed Central

    Wilson, Anthony; Mellor, Philip Scott; Szmaragd, Camille; Mertens, Peter Paul Clement

    2009-01-01

    African horse sickness virus (AHSV) is an orbivirus that is usually transmitted between its equid hosts by adult Culicoides midges. In this article, we review the ways in which AHSV may have adapted to this mode of transmission. The AHSV particle can be modified by the pH or proteolytic enzymes of its immediate environment, altering its ability to infect different cell types. The degree of pathogenesis in the host and vector may also represent adaptations maximising the likelihood of successful vectorial transmission. However, speculation upon several adaptations for vectorial transmission is based upon research on related viruses such as bluetongue virus (BTV), and further direct studies of AHSV are required in order to improve our understanding of this important virus. PMID:19094921

  3. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) suffers deterioration, especially in the presence of large direction of arrival (DOA) error. To improve the robustness of the traditional approach, we propose a new approach to iteratively search for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived in the form of a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer present a decreasing trend. Additionally, other array imperfections also degrade the performance of the beamformer in practice. To cover several kinds of mismatch together, adaptive flat ellipsoid models, kept as tight as possible, are introduced in our method. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008

  4. Adaptative Variable Structure Control for an Online Tuning Direct Vector Controlled Induction Motor Drives

    NASA Astrophysics Data System (ADS)

    Lasaad, Sbita; Dalila, Zaltni; Naceurq, Abdelkrim Mohamed

    This study demonstrates that high performance speed control can be obtained by using an adaptive sliding mode control method for a direct vector controlled Squirrel Cage Induction Motor (SCIM). In this study a new method of designing a simple and effective adaptive sliding mode rotational speed control law is developed. The design includes an accurate sliding mode flux observation from the measured stator terminals and rotor speed. The performance of the Direct Field-Orientation Control (DFOC) is ensured by online tuning based on a Model Reference Adaptive System (MRAS) rotor time constant estimator. The control strategy is derived in the sense of Lyapunov stability theory so that stable tracking performance can be guaranteed under the occurrence of system uncertainties and external disturbances. The proposed scheme is a solution for robust and high performance induction motor servo drives. Simulation results are provided to validate the effectiveness and robustness of the developed methodology.

  5. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  6. Intrusive versus domiciliated triatomines and the challenge of adapting vector control practices against Chagas disease

    PubMed Central

    Waleckx, Etienne; Gourbière, Sébastien; Dumonteil, Eric

    2015-01-01

    Chagas disease prevention remains mostly based on triatomine vector control to reduce or eliminate house infestation with these bugs. The level of adaptation of triatomines to human housing is a key part of vector competence and needs to be precisely evaluated to allow for the design of effective vector control strategies. In this review, we examine how the domiciliation/intrusion level of different triatomine species/populations has been defined and measured and discuss how these concepts may be improved for a better understanding of their ecology and evolution, as well as for the design of more effective control strategies against a large variety of triatomine species. We suggest that a major limitation of current criteria for classifying triatomines into sylvatic, intrusive, domiciliary and domestic species is that these are essentially qualitative and do not rely on quantitative variables measuring population sustainability and fitness in their different habitats. However, such assessments may be derived from further analysis and modelling of field data. Such approaches can shed new light on the domiciliation process of triatomines and may represent a key tool for decision-making and the design of vector control interventions. PMID:25993504

  7. An adaptive online learning approach for Support Vector Regression: Online-SVR-FID

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zio, Enrico

    2016-08-01

    Support Vector Regression (SVR) is a popular supervised data-driven approach for building empirical models from available data. Like all data-driven methods, under non-stationary environmental and operational conditions it needs to be provided with adaptive learning capabilities, which might become computationally burdensome with large datasets accumulating dynamically. In this paper, a cost-efficient online adaptive learning approach is proposed for SVR by combining Feature Vector Selection (FVS) and Incremental and Decremental Learning. The proposed approach adaptively modifies the model only when different pattern drifts are detected according to proposed criteria. Two tolerance parameters are introduced in the approach to control the computational complexity, reduce the influence of the intrinsic noise in the data and avoid the overfitting problem of SVR. Comparisons of the prediction results are made with other online learning approaches, e.g. NORMA, SOGA, KRLS and Incremental Learning, on several artificial datasets and a real case study concerning time series prediction based on data recorded on a component of a nuclear power generation system. The performance indicators MSE and MARE computed on the test dataset demonstrate the efficiency of the proposed online learning method.
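
    The drift-gated update logic described above can be sketched generically. Here kernel ridge regression stands in for SVR, and neither the FVS criteria nor the incremental/decremental SVR updates of the paper are reproduced; the sketch only shows the cost-saving pattern of refitting when the prediction error on a new sample exceeds a tolerance.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    # Gaussian kernel between row-vector sets a (n,d) and b (m,d) -> (n,m)
    return np.exp(-gamma * np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1))

class OnlineKernelModel:
    """Drift-gated online learner: the model is refit only when the error on
    an incoming sample exceeds `tol` (an illustrative stand-in for the
    pattern-drift criteria of the paper)."""

    def __init__(self, tol=0.2, lam=1e-3):
        self.X, self.y, self.tol, self.lam = None, None, tol, lam

    def _fit(self):
        K = rbf(self.X, self.X)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(K)), self.y)

    def predict(self, x):
        return rbf(x, self.X) @ self.alpha

    def update(self, x, y):
        if self.X is None:
            self.X, self.y = x.copy(), y.copy()
            self._fit()
            return True
        if abs(self.predict(x)[0] - y[0]) > self.tol:  # drift detected -> adapt
            self.X = np.vstack([self.X, x])
            self.y = np.concatenate([self.y, y])
            self._fit()
            return True
        return False                                    # model kept (cost saving)
```

    Feeding a slowly varying signal through `update` adds only the samples the current model cannot explain, which is the computational saving the abstract refers to.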

  8. Sequential adaptive mutations enhance efficient vector switching by Chikungunya virus and its epidemic emergence.

    PubMed

    Tsetsarkin, Konstantin A; Weaver, Scott C

    2011-12-01

    The adaptation of Chikungunya virus (CHIKV) to a new vector, the Aedes albopictus mosquito, is a major factor contributing to its ongoing re-emergence in a series of large-scale epidemics of arthritic disease in many parts of the world since 2004. Although the initial step of CHIKV adaptation to A. albopictus was determined to involve an A226V amino acid substitution in the E1 envelope glycoprotein that first arose in 2005, little attention has been paid to subsequent CHIKV evolution after this adaptive mutation was convergently selected in several geographic locations. To determine whether selection of second-step adaptive mutations in CHIKV or other arthropod-borne viruses occurs in nature, we tested the effect of an additional envelope glycoprotein amino acid change identified in Kerala, India in 2009. This substitution, E2-L210Q, caused a significant increase in the ability of CHIKV to develop a disseminated infection in A. albopictus, but had no effect on CHIKV fitness in the alternative mosquito vector, A. aegypti, or in vertebrate cell lines. Using infectious viruses or virus-like replicon particles expressing the E2-210Q and E2-210L residues, we determined that E2-L210Q acts primarily at the level of infection of A. albopictus midgut epithelial cells. In addition, we observed that the initial adaptive substitution, E1-A226V, had a significantly stronger effect on CHIKV fitness in A. albopictus than E2-L210Q, thus explaining the observed time differences required for selective sweeps of these mutations in nature. These results indicate that the continuous CHIKV circulation in an A. albopictus-human cycle since 2005 has resulted in the selection of an additional, second-step mutation that may facilitate even more efficient virus circulation and persistence in endemic areas, further increasing the risk of more severe and expanded CHIK epidemics. PMID:22174678

  9. Adaptive Developmental Delay in Chagas Disease Vectors: An Evolutionary Ecology Approach

    PubMed Central

    Menu, Frédéric; Ginoux, Marine; Rajon, Etienne; Lazzari, Claudio R.; Rabinovich, Jorge E.

    2010-01-01

    Background The developmental time of vector insects is important in population dynamics, evolutionary biology, epidemiology and in their responses to global climatic change. In the triatomines (Triatominae, Reduviidae), vectors of Chagas disease, evolutionary ecology concepts, which may allow for a better understanding of their biology, have not been applied. Despite delay in the molting in some individuals observed in triatomines, no effort was made to explain this variability. Methodology We applied four methods: (1) an e-mail survey sent to 30 researchers with experience in triatomines, (2) a statistical description of the developmental time of eleven triatomine species, (3) a relationship between development time pattern and climatic inter-annual variability, (4) a mathematical optimization model of evolution of developmental delay (diapause). Principal Findings 85.6% of responses informed on prolonged developmental times in 5th instar nymphs, with 20 species identified with remarkable developmental delays. The developmental time analysis showed some degree of bi-modal pattern of the development time of the 5th instars in nine out of eleven species but no trend between development time pattern and climatic inter-annual variability was observed. Our optimization model predicts that the developmental delays could be due to an adaptive risk-spreading diapause strategy, only if survival throughout the diapause period and the probability of random occurrence of “bad” environmental conditions are sufficiently high. Conclusions/Significance Developmental delay may not be a simple non-adaptive phenotypic plasticity in development time, and could be a form of adaptive diapause associated to a physiological mechanism related to the postponement of the initiation of reproduction, as an adaptation to environmental stochasticity through a spreading of risk (bet-hedging) strategy. We identify a series of parameters that can be measured in the field and laboratory to test

  10. Robust vibration suppression of an adaptive circular composite plate for satellite thrust vector control

    NASA Astrophysics Data System (ADS)

    Yan, Su; Ma, Kougen; Ghasemi-Nejhad, Mehrdad N.

    2008-03-01

    In this paper, a novel application of adaptive composite structures, a University of Hawaii at Manoa (UHM) smart composite platform, is developed for the Thrust Vector Control (TVC) of satellites. The device top plate of the UHM platform is an adaptive circular composite plate (ACCP) that utilizes integrated sensors/actuators and controllers to suppress low frequency vibrations during the thruster firing as well as to potentially isolate dynamic responses from the satellite structure bus. Since the disturbance due to the satellite thruster firing can be estimated, a combined strategy of an adaptive disturbance observer (DOB) and feed-forward control is proposed for vibration suppression of the ACCP with multi-sensors and multi-actuators. Meanwhile, the effects of the DOB cut-off frequency and the relative degree of the low-pass filter on the DOB performance are investigated. Simulations and experimental results show that higher relative degree of the low-pass filter with the required cut-off frequency will enhance the DOB performance for a high-order system control. Further, although the increase of the filter cut-off frequency can guarantee a sufficient stability margin, it may cause an undesirable increase of the control bandwidth. The effectiveness of the proposed adaptive DOB with feed-forward control strategy is verified through simulations and experiments using the ACCP system.

  11. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has the model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and Adaptive Extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective to reduce the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124
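
    Adaptive filtering of this general flavour, estimating a noise statistic online from the innovation sequence, can be illustrated with a linear Kalman filter on a toy constant-velocity model. This is a generic sketch of innovation-based noise adaptation, not the AIEKF of the paper; all model and window sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D constant-velocity tracking; the measurement-noise variance R is
# estimated online from a window of squared innovations (E[nu^2] = H P H' + R).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)

x = np.zeros(2)              # state estimate [position, velocity]
P = np.eye(2)
R = 1.0                      # initial guess; the true value below is 0.25
innov2 = []

truth = np.array([0.0, 0.5]) # true position and velocity
for _ in range(300):
    truth = F @ truth
    z = truth[0] + rng.normal(scale=0.5)          # measurement, true R = 0.25
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # adapt R from the innovation statistics
    nu = z - H @ x
    S = (H @ P @ H.T)[0, 0] + R
    innov2.append(nu[0] ** 2 - (H @ P @ H.T)[0, 0])
    if len(innov2) >= 30:
        R = max(np.mean(innov2[-30:]), 1e-6)      # windowed estimate, kept positive
    # update
    K = P @ H.T / S
    x = x + (K * nu).ravel()
    P = (np.eye(2) - K @ H) @ P
```

    After the run, R should sit near the true measurement-noise variance and the state estimate near the true trajectory, despite the deliberately wrong initial R.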

  12. Completely quantized collapse and consequences

    SciTech Connect

    Pearle, Philip

    2005-08-15

    Promotion of quantum theory from a theory of measurement to a theory of reality requires an unambiguous specification of the ensemble of realizable states (and each state's probability of realization). Although not yet achieved within the framework of standard quantum theory, it has been achieved within the framework of the continuous spontaneous localization (CSL) wave-function collapse model. In CSL, a classical random field w(x,t) interacts with quantum particles. The state vector corresponding to each w(x,t) is a realizable state. In this paper, I consider a previously presented model, which is predictively equivalent to CSL. In this completely quantized collapse (CQC) model, the classical random field is quantized. It is represented by the operator W(x,t), which satisfies [W(x,t), W(x′,t′)] = 0. The ensemble of realizable states is described by a single state vector, the 'ensemble vector'. Each superposed state which comprises the ensemble vector at time t is the direct product of an eigenstate of W(x,t′), for all x and for 0 ≤ t′ ≤ t, and the CSL state corresponding to that eigenvalue. These states never interfere (they satisfy a superselection rule at any time), they only branch, so the ensemble vector may be considered to be, as Schroedinger put it, a 'catalog' of the realizable states. In this context, many different interpretations (e.g., many worlds, environmental decoherence, consistent histories, modal interpretation) may be satisfactorily applied. Using this description, a long-standing problem is resolved: where the energy that particles gain from the narrowing of their wave packets by the collapse mechanism comes from. It is shown how to define the energy of the random field and its energy of interaction with particles so that total energy is conserved for the ensemble of realizable states. As a by-product, since the random-field energy spectrum is unbounded, its canonical conjugate, a self-adjoint time operator, can be

  13. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.

  14. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a focus of research in intelligent robotics. Target tracking with a mobile robot faces environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve target tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture saliency of image features, including color, brightness, and motion features, whose common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination. PMID:24363779

  15. Modeling of variable speed refrigerated display cabinets based on adaptive support vector machine

    NASA Astrophysics Data System (ADS)

    Cao, Zhikun; Han, Hua; Gu, Bo

    2010-01-01

    In this paper the adaptive support vector machine (ASVM) method is introduced to the field of intelligent modeling of refrigerated display cabinets and used to construct a highly precise mathematical model of their performance. A model for a variable-speed open vertical display cabinet was constructed using preprocessing techniques for measured data, including the elimination of outlying data points by the use of an exponentially weighted moving average (EWMA). Using dynamic loss coefficient adjustment, the adaptation of the SVM for use in this application was achieved. From there, the objective function for energy use per unit of display area, total energy consumption (TEC) divided by total display area (TDA), was constructed and solved using the ASVM method. When compared to the results achieved using a back-propagation neural network (BPNN) model, the ASVM model for the refrigerated display cabinet was characterized by its simple structure, fast convergence speed and high prediction accuracy. The ASVM model also has better noise rejection properties than the original SVM model. The theoretical analysis and experimental results presented in this paper show that it is feasible to model the display cabinet using the ASVM method.
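    The EWMA-based elimination of outlying data points mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the threshold (3 exponentially weighted standard deviations) and the substitute-with-the-running-mean policy are assumptions:

    ```python
    import numpy as np

    def ewma_clean(x, alpha=0.2, n_sigma=3.0):
        """Replace points deviating more than n_sigma exponentially weighted
        standard deviations from the running EWMA with the EWMA value itself."""
        x = np.asarray(x, dtype=float)
        out = x.copy()
        m, v = x[0], 0.0                        # running EW mean and variance
        for i in range(1, len(x)):
            sd = np.sqrt(v)
            if sd > 0 and abs(x[i] - m) > n_sigma * sd:
                out[i] = m                      # reject outlier, substitute EWMA
            diff = out[i] - m
            incr = alpha * diff
            m += incr                           # update EW mean
            v = (1 - alpha) * (v + diff * incr) # update EW variance
        return out
    ```

    A spike injected into an otherwise steady measurement series is replaced by the local running mean, while in-band noise passes through.
    
    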

  16. Quantization of color histograms using GLA

    NASA Astrophysics Data System (ADS)

    Yang, Christopher C.; Yip, Milo K.

    2002-09-01

    The color histogram has been used as one of the most important image descriptors in a wide range of content-based image retrieval (CBIR) projects for color image indexing. It captures the global chromatic distribution of an image. Traditionally, there are two major approaches to quantizing the color space: (1) quantize each dimension of a color coordinate system uniformly to generate a fixed number of bins; and (2) quantize a color coordinate system arbitrarily. The first approach works best on cubical color coordinate systems, such as RGB. For other, non-cubical color coordinate systems, such as CIELAB and CIELUV, some bins may fall outside the gamut (transformed from the RGB cube) of the color space. As a result, the effectiveness of the color histogram is reduced, and hence so is retrieval performance. The second approach uses arbitrary quantization, so the volume of the bins is not necessarily uniform, which also affects the effectiveness of the histogram significantly. In this paper, we propose to develop the color histogram by tessellating the non-cubical color gamut transformed from the RGB cube using a vector quantization (VQ) method, the Generalized Lloyd Algorithm (GLA) [6]. With this approach, the problem of empty bins due to the gamut of the color coordinate system is avoided. In addition, all bins quantized by GLA occupy the same volume, which guarantees the uniformity of the quantized bins in the histogram. An experiment has been conducted to evaluate the quantitative performance of our approach. The image collection from UC Berkeley's digital library project is used as the test bed. The indexing effectiveness of a histogram space [3] is used as the measurement of performance. The experimental results show that the GLA quantization approach significantly increases indexing effectiveness.
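    The Generalized Lloyd Algorithm alternates two optimality conditions: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. A minimal LBG-style sketch (generic codebook design on arbitrary vectors, not the paper's gamut-tessellation code; all names are illustrative):

    ```python
    import numpy as np

    def gla(data, k, n_iter=50, seed=0):
        """Generalized Lloyd Algorithm: design a k-entry VQ codebook by
        alternating nearest-neighbour partition and centroid update."""
        data = np.asarray(data, dtype=float)
        rng = np.random.default_rng(seed)
        codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
        for _ in range(n_iter):
            # Squared distances from every vector to every codeword
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
            assign = d2.argmin(axis=1)                 # nearest-neighbour condition
            for j in range(k):
                members = data[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)  # centroid condition
        return codebook, assign
    ```

    On two well-separated clusters, the two codewords converge to the cluster centers, giving cells that partition the data.
    
    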

  17. An Adaptive Supervisory Sliding Fuzzy Cerebellar Model Articulation Controller for Sensorless Vector-Controlled Induction Motor Drive Systems

    PubMed Central

    Wang, Shun-Yuan; Tseng, Chwan-Lu; Lin, Shou-Chuang; Chiu, Chun-Jung; Chou, Jen-Hsiang

    2015-01-01

    This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, an integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. The implementation of three intelligent control schemes—the adaptive supervisory sliding FCMAC, adaptive sliding FCMAC, and adaptive sliding CMAC—were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes. PMID:25815450

  18. Self-adapting root-MUSIC algorithm and its real-valued formulation for acoustic vector sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhang, Guo-jun; Xue, Chen-yang; Zhang, Wen-dong; Xiong, Ji-jun

    2012-12-01

    In this paper, based on the root-MUSIC algorithm for acoustic pressure sensor arrays, a new self-adapting root-MUSIC algorithm for acoustic vector sensor arrays is proposed by self-adaptively selecting the lead orientation vector. Its real-valued formulation, obtained by forward-backward (FB) smoothing and a real-valued inverse covariance matrix, is also proposed; it reduces the computational complexity and can distinguish coherent signals. Simulation results show that the two new algorithms outperform the traditional MUSIC algorithm in direction-of-arrival (DOA) estimation at low signal-to-noise ratio (SNR), and experiments using a MEMS vector hydrophone array in lake trials show the engineering practicability of the two new algorithms.
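    For reference, standard root-MUSIC for a scalar-sensor uniform linear array replaces the MUSIC spectral search with polynomial rooting of the noise-subspace quadratic form. The sketch below shows only that baseline; the paper's vector-sensor lead-orientation selection and the FB-smoothed real-valued formulation are omitted, and all names are illustrative:

    ```python
    import numpy as np

    def root_music(X, n_sources, spacing=0.5):
        """Root-MUSIC DOA estimation for a uniform linear array.
        X: (M, N) complex snapshot matrix; spacing: element spacing in
        wavelengths. Returns estimated arrival angles in degrees."""
        M, N = X.shape
        R = X @ X.conj().T / N                      # sample covariance matrix
        _, V = np.linalg.eigh(R)                    # eigenvalues ascending
        En = V[:, : M - n_sources]                  # noise-subspace eigenvectors
        C = En @ En.conj().T
        # Polynomial coefficients are the sums along the diagonals of C
        coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
        roots = np.roots(coeffs)
        roots = roots[np.abs(roots) < 1.0]          # keep roots inside unit circle
        roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]
        sin_theta = np.angle(roots) / (2.0 * np.pi * spacing)
        return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
    ```

    With two uncorrelated sources and modest noise, the two roots closest to the unit circle recover the arrival angles directly, with no grid search.
    
    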

  19. Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation

    PubMed Central

    Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2016-01-01

    Content-based image retrieval (CBIR) techniques have currently gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset. PMID:27273091

  20. Retrieval of Brain Tumors by Adaptive Spatial Pooling and Fisher Vector Representation.

    PubMed

    Cheng, Jun; Yang, Wei; Huang, Meiyan; Huang, Wei; Jiang, Jun; Zhou, Yujia; Yang, Ru; Zhao, Jie; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2016-01-01

    Content-based image retrieval (CBIR) techniques have currently gained increasing popularity in the medical field because they can use numerous and valuable archived images to support clinical decisions. In this paper, we concentrate on developing a CBIR system for retrieving brain tumors in T1-weighted contrast-enhanced MRI images. Specifically, when the user roughly outlines the tumor region of a query image, brain tumor images in the database of the same pathological type are expected to be returned. We propose a novel feature extraction framework to improve the retrieval performance. The proposed framework consists of three steps. First, we augment the tumor region and use the augmented tumor region as the region of interest to incorporate informative contextual information. Second, the augmented tumor region is split into subregions by an adaptive spatial division method based on intensity orders; within each subregion, we extract raw image patches as local features. Third, we apply the Fisher kernel framework to aggregate the local features of each subregion into a respective single vector representation and concatenate these per-subregion vector representations to obtain an image-level signature. After feature extraction, a closed-form metric learning algorithm is applied to measure the similarity between the query image and database images. Extensive experiments are conducted on a large dataset of 3604 images with three types of brain tumors, namely, meningiomas, gliomas, and pituitary tumors. The mean average precision can reach 94.68%. Experimental results demonstrate the power of the proposed algorithm against some related state-of-the-art methods on the same dataset. PMID:27273091

  1. Separable quantizations of Stäckel systems

    NASA Astrophysics Data System (ADS)

    Błaszak, Maciej; Marciniak, Krzysztof; Domański, Ziemowit

    2016-08-01

    In this article we prove that many Hamiltonian systems that cannot be separably quantized in the classical approach of Robertson and Eisenhart can be separably quantized if we extend the class of admissible quantizations through a suitable choice of Riemann space adapted to the Poisson geometry of the system. Specifically, we prove that for every Stäckel system quadratic in momenta (defined on a 2n-dimensional Poisson manifold) whose Stäckel matrix consists of monomials in the position coordinates, there exist infinitely many quantizations, parametrized by n arbitrary functions, that turn the system into a quantum separable Stäckel system.

  2. A physically motivated quantization of the electromagnetic field

    NASA Astrophysics Data System (ADS)

    Bennett, Robert; Barlow, Thomas M.; Beige, Almut

    2016-01-01

    The notion that the electromagnetic field is quantized is usually inferred from observations such as the photoelectric effect and the black-body spectrum. However, accounts of the quantization of this field are usually mathematically motivated and begin by introducing a vector potential, followed by the imposition of a gauge that allows the manipulation of the solutions of Maxwell’s equations into a form that is amenable for the machinery of canonical quantization. By contrast, here we quantize the electromagnetic field in a less mathematically and more physically motivated way. Starting from a direct description of what one sees in experiments, we show that the usual expressions of the electric and magnetic field observables follow from Heisenberg’s equation of motion. In our treatment, there is no need to invoke the vector potential in a specific gauge and we avoid the commonly used notion of a fictitious cavity that applies boundary conditions to the field.

  3. Improving the textural characterization of trabecular bone structure to quantify its changes: the locally adapted scaling vector method

    NASA Astrophysics Data System (ADS)

    Raeth, Christoph W.; Mueller, Dirk; Boehm, Holger F.; Rummeny, Ernst J.; Link, Thomas M.; Monetti, Roberto

    2005-04-01

    We extend the recently introduced scaling vector method (SVM) to improve the textural characterization of oriented trabecular bone structures in the context of osteoporosis. Using the concept of scaling vectors one obtains non-linear structural information from data sets, which can account for global anisotropies. In this work we present a method which allows us to determine the local directionalities in images by using scaling vectors. Thus it becomes possible to better account for local anisotropies and to implement this knowledge in the calculation of the scaling properties of the image. By applying this adaptive technique, a refined quantification of the image structure is possible: we test and evaluate our new method using realistic two-dimensional simulations of bone structures, which model the effect of osteoblasts and osteoclasts on the local change of relative bone density. The partial differential equations involved in the model are solved numerically using cellular automata (CA). Different realizations with slightly varying control parameters are considered. Our results show that even small changes in the trabecular structures, which are induced by variation of a control parameter of the system, become discernible by applying the locally adapted scaling vector method. The results are superior to those obtained by isotropic and/or bulk measures. These findings may be especially important for monitoring the treatment of patients, where the early recognition of (drug-induced) changes in the trabecular structure is crucial.

  4. Aedes aegypti (L.) in Latin American and Caribbean region: With growing evidence for vector adaptation to climate change?

    PubMed

    Chadee, Dave D; Martinez, Raymond

    2016-04-01

    Within the Latin American and Caribbean (LAC) region, the impact of climate change has been associated with the effects of rainfall and temperature on seasonal outbreaks of dengue, but few studies have been conducted on the impacts of climate on the behaviour and ecology of Aedes aegypti mosquitoes. This study was conducted to examine the adaptive behaviours currently being employed by A. aegypti mosquitoes exposed to the force of climate change in LAC countries. The literature on the association between climate and dengue incidence is small and sometimes speculative, and few laboratory and field studies have identified the research gaps. Laboratory and field experiments were designed and conducted to better understand the container preferences, climate-associated adaptive behaviour, ecology and the effects of different temperatures and light regimens on the life history of A. aegypti mosquitoes. A. aegypti adaptive behaviours and changes in container preferences demonstrate how complex dengue transmission dynamics are in different ecosystems. The use of underground drains and septic tanks represents a major behaviour change identified, and compounds an already difficult task of controlling A. aegypti populations. A business-as-usual approach will exacerbate the problem and lead to more frequent outbreaks of dengue and chikungunya in LAC countries unless both area-wide and targeted vector control approaches are adopted. The current evidence and the results from proposed transdisciplinary research on dengue within different ecosystems will help guide the development of new vector control strategies and foster a better understanding of climate change impacts on vector-borne disease transmission. PMID:26796862

  5. Lagrange structure and quantization

    NASA Astrophysics Data System (ADS)

    Kazinski, Peter O.; Lyakhovich, Simon L.; Sharapov, Alexey A.

    2005-07-01

    A path-integral quantization method is proposed for dynamical systems whose classical equations of motion do not necessarily follow from the action principle. The key new notion behind this quantization scheme is the Lagrange structure which is more general than the lagrangian formalism in the same sense as Poisson geometry is more general than the symplectic one. The Lagrange structure is shown to admit a natural BRST description which is used to construct an AKSZ-type topological sigma-model. The dynamics of this sigma-model in d+1 dimensions, being localized on the boundary, are proved to be equivalent to the original theory in d dimensions. As the topological sigma-model has a well defined action, it is path-integral quantized in the usual way that results in quantization of the original (not necessarily lagrangian) theory. When the original equations of motion come from the action principle, the standard BV path-integral is explicitly deduced from the proposed quantization scheme. The general quantization scheme is exemplified by several models including the ones whose classical dynamics are not variational.

  6. Exact quantization of a paraxial electromagnetic field

    SciTech Connect

    Aiello, A.; Woerdman, J. P.

    2005-12-15

    A nonperturbative quantization of a paraxial electromagnetic field is achieved via a generalized dispersion relation imposed on the longitudinal and the transverse components of the photon wave vector. This theoretical formalism yields a seamless transition between the paraxial- and the Maxwell-equation solutions. This obviates the need to introduce either ad hoc or perturbatively defined field operators. Moreover, our (exact) formalism remains valid beyond the quasimonochromatic paraxial limit.

  7. Lithology intelligent identification using support vector machine and adaptive cellular automata in multispectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Xianmin; Niu, Ruiqing; Wu, Ke

    2011-07-01

    Remote sensing provides a new idea and an advanced method for lithology identification, but lithology identification by remote sensing is quite difficult because (1) the rules for lithology identification in a given region are often quite different from the experts' experience, and (2) in regions with flourishing vegetation, lithology information is poor, so it is very difficult to identify the lithologies from remote sensing images. At present, studies on lithology identification by remote sensing are primarily conducted in regions with low vegetation coverage and high rock bareness, and there is no mature method of lithology identification for regions with flourishing vegetation. Traditional methods, lacking the ability to mine and extract the various kinds of complicated lithology information from a remote sensing image, often need much manual intervention and possess poor intelligence and accuracy. An intelligent method proposed in this paper for lithology identification based on a support vector machine (SVM) and adaptive cellular automata (ACA) is expected to solve the above problems. The method adopted Landsat-7 ETM+ images and a 1:50000 geological map as the data sources. It first derived the lithology identification factors on three aspects: (1) spectra, (2) texture and (3) vegetation cover. Second, it overlaid the remote sensing images with the geological map and established the SVM to obtain the transition rules according to the factor values of the samples. Finally, it established an ACA model to intelligently identify the lithologies according to the transition and neighborhood rules. In this paper an ACA model is proposed and compared with the traditional one. Results of 2 real-world examples show that: (1) the SVM-ACA method obtains a good result of lithology identification in the regions with flourishing vegetation; (2) it possesses high accuracies of lithology identification (with the overall accuracies of 92.29% and 85.54%, respectively, in the two

  8. Quantization Effects on Complex Networks.

    PubMed

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049

  9. Quantization Effects on Complex Networks

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-05-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws.

  10. Quantization Effects on Complex Networks

    PubMed Central

    Wang, Ying; Wang, Lin; Yang, Wen; Wang, Xiaofan

    2016-01-01

    Weights of edges in many complex networks we constructed are quantized values of the real weights. To what extent does the quantization affect the properties of a network? In this work, quantization effects on network properties are investigated based on the spectrum of the corresponding Laplacian. In contrast to the intuition that larger quantization level always implies a better approximation of the quantized network to the original one, we find a ubiquitous periodic jumping phenomenon with peak-value decreasing in a power-law relationship in all the real-world weighted networks that we investigated. We supply theoretical analysis on the critical quantization level and the power laws. PMID:27226049

  11. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
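    The divide-and-round quantization step whose errors the model evaluates can be sketched with an orthonormal 8×8 DCT. This is a generic JPEG-style illustration of the coefficient quantization itself; the visual-threshold scaling and perceptual pooling described above are not included:

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis matrix (rows are basis functions)."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2.0 / n)

    def quantize_block(block, Q):
        """2-D DCT of an 8x8 block, divided by quantization matrix Q, rounded."""
        C = dct_matrix(8)
        return np.round((C @ block @ C.T) / Q)

    def dequantize_block(q, Q):
        """Rescale the quantized coefficients and invert the DCT."""
        C = dct_matrix(8)
        return C.T @ (q * Q) @ C
    ```

    Because the transform is orthonormal, the per-coefficient rounding error (at most half a quantization step) bounds the pixel-domain reconstruction error; larger entries of Q trade more distortion for fewer bits.
    
    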

  12. Atomism and quantization

    NASA Astrophysics Data System (ADS)

    Fröhlich, J.; Knowles, A.; Pizzo, A.

    2007-03-01

    Within the framework of the theory of interacting classical and quantum gases, it is shown that the atomistic constitution of gases can be understood as a consequence of (second) quantization of a continuum theory of gases. In this paper, this is explained in some detail for the theory of non-relativistic interacting Bose gases, which can be viewed as the second quantization of a continuum theory whose dynamics is given by the Hartree equation. Conversely, the Hartree equation emerges from the theory of Bose gases in the mean-field limit. It is shown that, for such systems, the time evolution of 'observables' commutes with their Wick quantization, up to quantum corrections that tend to zero in the mean-field limit. This is an Egorov-type theorem.

  13. Quantization of Black Holes

    NASA Astrophysics Data System (ADS)

    He, Xiao-Gang; Ma, Bo-Qiang

    We show that black holes can be quantized in an intuitive and elegant way, with results in agreement with conventional knowledge of black holes, by using Bohr's idea of quantizing the motion of an electron inside the atom in quantum mechanics. We find that properties of black holes can also be derived from an ansatz of quantized entropy, ΔS = 4πk ΔR/ƛ, which was suggested in a previous work to unify the black hole entropy formula and Verlinde's conjecture to explain gravity as an entropic force. Such an ansatz also explains gravity as an entropic force arising from a quantum effect. This suggests a way to unify gravity with quantum theory. Several interesting and surprising results about black holes are given, from which we predict the existence of primordial black holes ranging from the Planck scale in both size and energy to ones large in size but with low-energy behavior.

  14. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively from the samples for both training and testing. Second, an LIBS quantitative analysis method based on the RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieves better prediction accuracy and modeling robustness than methods based on partial least squares regression, an artificial neural network and a standard support vector machine.

  15. DCT quantization matrices visually optimized for individual images

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1993-01-01

    This presentation describes how a vision model incorporating contrast sensitivity, contrast masking, and light adaptation is used to design visually optimal quantization matrices for Discrete Cosine Transform image compression. The Discrete Cosine Transform (DCT) underlies several image compression standards (JPEG, MPEG, H.261). The DCT is applied to 8x8 pixel blocks, and the resulting coefficients are quantized by division and rounding. The 8x8 'quantization matrix' of divisors determines the visual quality of the reconstructed image; the design of this matrix is left to the user. Since each DCT coefficient corresponds to a particular spatial frequency in a particular image region, each quantization error consists of a local increment or decrement in a particular frequency. After adjustments for contrast sensitivity, local light adaptation, and local contrast masking, this coefficient error can be converted to a just-noticeable-difference (jnd). The jnd's for different frequencies and image blocks can be pooled to yield a global perceptual error metric. With this metric, we can compute for each image the quantization matrix that minimizes the bit-rate for a given perceptual error, or perceptual error for a given bit-rate. Implementation of this system demonstrates its advantages over existing techniques. A unique feature of this scheme is that the quantization matrix is optimized for each individual image. This is compatible with the JPEG standard, which requires transmission of the quantization matrix.
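
    The divide-and-round quantization the abstract describes can be shown in a minimal sketch. The coefficient and divisor values below are toy numbers, not JPEG's default tables; the point is that each reconstruction error is bounded by half the corresponding divisor, which is what the perceptual model then weighs.

```python
# Minimal sketch of DCT-domain quantization by a divisor matrix, as in
# JPEG: each coefficient is divided by its matrix entry and rounded, so
# the reconstruction error per coefficient is at most half the divisor.
# The coefficients and divisors below are illustrative, not from JPEG.

def quantize(coeffs, qmatrix):
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize(levels, qmatrix):
    return [[l * q for l, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, qmatrix)]

coeffs  = [[231.0, -14.0], [22.0, 5.0]]   # toy DCT coefficients
qmatrix = [[16, 11], [12, 10]]            # toy quantization divisors

levels = quantize(coeffs, qmatrix)
recon  = dequantize(levels, qmatrix)
errors = [[abs(c - r) for c, r in zip(crow, rrow)]
          for crow, rrow in zip(coeffs, recon)]
print(levels)   # → [[14, -1], [2, 0]]
print(errors)   # each entry is at most half the matching divisor
```

    In the paper's scheme, each such error would be converted to JND units before pooling, and the divisors themselves would be optimized per image.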

  16. On Quantizable Odd Lie Bialgebras

    NASA Astrophysics Data System (ADS)

    Khoroshkin, Anton; Merkulov, Sergei; Willwacher, Thomas

    2016-09-01

    Motivated by the obstruction to the deformation quantization of Poisson structures in infinite dimensions, we introduce the notion of a quantizable odd Lie bialgebra. The main result of the paper is a construction of the highly non-trivial minimal resolution of the properad governing such Lie bialgebras, and its link with the theory of so-called quantizable Poisson structures.

  17. Quantized Algebra I Texts

    ERIC Educational Resources Information Center

    DeBuvitz, William

    2014-01-01

    I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…

  18. Development of a novel plasmid vector pTIO-1 adapted for electrotransformation of Porphyromonas gingivalis.

    PubMed

    Tagawa, Junpei; Inoue, Tetsuyoshi; Naito, Mariko; Sato, Keiko; Kuwahara, Tomomi; Nakayama, Masaaki; Nakayama, Koji; Yamashiro, Takashi; Ohara, Naoya

    2014-10-01

    We report here the construction of a plasmid vector designed for the efficient electrotransformation of the periodontal pathogen Porphyromonas gingivalis. The novel Escherichia coli-Bacteroides/P. gingivalis shuttle vector, designated pTIO-1, is based on the 11.0-kb E. coli-Bacteroides conjugative shuttle vector, pVAL-1 (a pB8-51 derivative). To construct pTIO-1, the pB8-51 origin of replication and erythromycin resistance determinant of pVAL-1 were cloned into the E. coli cloning vector pBluescript II SK(-) and non-functional regions were deleted. pTIO-1 has an almost complete multiple cloning site from pBluescript II SK(-). The size of pTIO-1 is 4.5 kb, which is convenient for routine gene manipulation. pTIO-1 was introduced into P. gingivalis via electroporation, and erythromycin-resistant transformants carrying pTIO-1 were obtained. We characterized the transformation efficiency, copy number, host range, stability, and insert size capacity of pTIO-1. Efficient plasmid electrotransformation of P. gingivalis will facilitate functional analysis and expression of P. gingivalis genes, including the virulence factors of this bacterium. PMID:25102110

  19. Impacts of Climate Change on Vector Borne Diseases in the Mediterranean Basin — Implications for Preparedness and Adaptation Policy

    PubMed Central

    Negev, Maya; Paz, Shlomit; Clermont, Alexandra; Pri-Or, Noemie Groag; Shalom, Uri; Yeger, Tamar; Green, Manfred S.

    2015-01-01

    The Mediterranean region is vulnerable to climatic changes. A warming trend exists in the basin, with changes in rainfall patterns. It is expected that vector-borne diseases (VBD) in the region will be influenced by climate change, since weather conditions influence their emergence. For some diseases (e.g., West Nile virus) the linkage between emergence and climate change has recently been proven; for others (such as dengue) the risk of local transmission is real. Consequently, adaptation and preparation for changing patterns of VBD distribution are crucial in the Mediterranean basin. We analyzed six representative Mediterranean countries and found that they have started to prepare for this threat, but the preparation levels among them differ, and policy mechanisms are limited and basic. Furthermore, cross-border cooperation is not stable and depends on international frameworks. The Mediterranean countries should improve their adaptation plans, and develop more cross-sectoral, multidisciplinary and participatory approaches. In addition, based on experience from existing local networks in advancing national legislation and trans-border cooperation, we outline recommendations for a regional cooperation framework. We suggest that a stable and neutral framework is required, and that it should address the characteristics and needs of African, Asian and European countries around the Mediterranean in order to ensure participation. Such a regional framework is essential to reduce the risk of VBD transmission, since the vectors of infectious diseases know no political borders. PMID:26084000

  20. Impacts of Climate Change on Vector Borne Diseases in the Mediterranean Basin - Implications for Preparedness and Adaptation Policy.

    PubMed

    Negev, Maya; Paz, Shlomit; Clermont, Alexandra; Pri-Or, Noemie Groag; Shalom, Uri; Yeger, Tamar; Green, Manfred S

    2015-06-01

    The Mediterranean region is vulnerable to climatic changes. A warming trend exists in the basin, with changes in rainfall patterns. It is expected that vector-borne diseases (VBD) in the region will be influenced by climate change, since weather conditions influence their emergence. For some diseases (e.g., West Nile virus) the linkage between emergence and climate change has recently been proven; for others (such as dengue) the risk of local transmission is real. Consequently, adaptation and preparation for changing patterns of VBD distribution are crucial in the Mediterranean basin. We analyzed six representative Mediterranean countries and found that they have started to prepare for this threat, but the preparation levels among them differ, and policy mechanisms are limited and basic. Furthermore, cross-border cooperation is not stable and depends on international frameworks. The Mediterranean countries should improve their adaptation plans, and develop more cross-sectoral, multidisciplinary and participatory approaches. In addition, based on experience from existing local networks in advancing national legislation and trans-border cooperation, we outline recommendations for a regional cooperation framework. We suggest that a stable and neutral framework is required, and that it should address the characteristics and needs of African, Asian and European countries around the Mediterranean in order to ensure participation. Such a regional framework is essential to reduce the risk of VBD transmission, since the vectors of infectious diseases know no political borders. PMID:26084000

  1. Joint thresholding and quantizer selection for transform image coding: entropy-constrained analysis and applications to baseline JPEG.

    PubMed

    Crouse, M; Ramchandran, K

    1997-01-01

    Striving to maximize baseline (Joint Photographic Experts Group, JPEG) image quality without compromising compatibility of current JPEG decoders, we develop an image-adaptive JPEG encoding algorithm that jointly optimizes quantizer selection, coefficient "thresholding", and Huffman coding within a rate-distortion (R-D) framework. Practically speaking, our algorithm unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection and R-D optimal thresholding. Conceptually speaking, our algorithm is a logical consequence of entropy-constrained vector quantization (ECVQ) design principles in the severely constrained instance of JPEG-compatible encoding. We explore both viewpoints: the practical, to concretely derive our algorithm, and the conceptual, to justify the claim that our algorithm approaches the best performance that a JPEG encoder can achieve. This performance includes significant objective peak signal-to-noise ratio (PSNR) improvement over previous work and, at high rates, gives results comparable to state-of-the-art image coders. For example, coding the Lena image at 1.0 b/pixel, our JPEG encoder achieves a PSNR of 39.6 dB, slightly exceeding the quoted PSNR results of Shapiro's wavelet-based zero-tree coder. Using a visually based distortion metric, we can achieve noticeable subjective improvement as well. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intraframe MPEG and subband or wavelet coding. PMID:18282923
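
    The core R-D thresholding decision can be sketched as follows: zero a quantized coefficient whenever the distortion it would add is outweighed by the rate it saves, weighted by a Lagrange multiplier. This is a hedged toy, not the paper's algorithm: the bit costs below stand in for real Huffman run-length costs, and the values are invented.

```python
# Hedged sketch of rate-distortion "thresholding": a quantized DCT
# coefficient is dropped (set to zero) whenever the squared-error cost
# of losing it is smaller than lambda times the bits it would have
# consumed. Bit costs here are an illustrative stand-in for Huffman
# run-length costs in a real JPEG encoder.

def threshold(coeffs, bit_cost, lam):
    """Return coefficients with the R-D-unfavourable ones zeroed."""
    out = []
    for c, bits in zip(coeffs, bit_cost):
        delta_d = c * c          # distortion added if c is zeroed
        delta_r = bits           # rate saved if c is zeroed
        out.append(0 if delta_d < lam * delta_r else c)
    return out

coeffs   = [120, -3, 7, 1, 0]    # toy quantized coefficient values
bit_cost = [8, 4, 4, 3, 0]       # assumed bits to encode each one
print(threshold(coeffs, bit_cost, lam=10.0))  # → [120, 0, 7, 0, 0]
```

    Sweeping lambda trades bit-rate against distortion; lambda = 0 keeps every coefficient.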

  2. How the Malaria Vector Anopheles gambiae Adapts to the Use of Insecticide-Treated Nets by African Populations

    PubMed Central

    Ndiath, Mamadou Ousmane; Mazenot, Catherine; Sokhna, Cheikh; Trape, Jean-François

    2014-01-01

    Background Insecticide-treated bed nets have been recommended and proven efficient as a measure to protect African populations from malaria mosquito vectors of Anopheles spp. This study evaluates the consequences of bed net use on vector resistance to insecticides, feeding behavior, and malaria transmission in Dielmo village, Senegal, where long-lasting insecticidal nets (LLINs) were offered to all villagers in July 2008. Methods Adult mosquitoes were collected monthly from January 2006 to December 2011 by human landing catches (HLC) and by pyrethroid spray catches (PCS). A randomly selected sub-sample of 15–20% of An. gambiae s.l. collected each month was used to investigate the molecular forms of the An. gambiae complex, kdr mutations, and the Plasmodium falciparum circumsporozoite (CSP) rate. Malaria prevalence and gametocytaemia in Dielmo villagers were measured quarterly. Results Insecticide-susceptible mosquitoes (wild kdr genotype) presented a reduced lifespan after LLIN implementation but rapidly adapted their feeding behavior, becoming more exophagous and zoophilic, and biting earlier during the night. In the meantime, insecticide-resistant specimens (kdr L1014F genotype) increased in frequency in the population, with an unchanged lifespan and feeding behavior. P. falciparum prevalence and the gametocyte rate in villagers decreased dramatically after LLIN deployment. The malaria infection rate tended to zero in susceptible mosquitoes, whereas it increased markedly in kdr homozygote mosquitoes. Conclusion Dramatic changes in vector populations and their behavior occurred after the deployment of LLINs due to the extraordinary adaptive skills of An. gambiae s.l. mosquitoes. However, despite the increasing proportion of insecticide-resistant mosquitoes and their almost exclusive responsibility for malaria transmission, the P. falciparum gametocyte reservoir continued to decrease three years after the deployment of LLINs. PMID:24892677

  3. Aquaporin water channel AgAQP1 in the malaria vector mosquito Anopheles gambiae during blood feeding and humidity adaptation

    PubMed Central

    Liu, Kun; Tsujimoto, Hitoshi; Cha, Sung-Jae; Agre, Peter; Rasgon, Jason L.

    2011-01-01

    Altered patterns of malaria endemicity reflect, in part, changes in feeding behavior and climate adaptation of mosquito vectors. Aquaporin (AQP) water channels are found throughout nature and confer high-capacity water flow through cell membranes. The genome of the major malaria vector mosquito Anopheles gambiae contains at least seven putative AQP sequences. Anticipating that transmembrane water movements are important during the life cycle of A. gambiae, we identified and characterized the A. gambiae aquaporin 1 (AgAQP1) protein that is homologous to AQPs known in humans, Drosophila, and sap-sucking insects. When expressed in Xenopus laevis oocytes, AgAQP1 transports water but not glycerol. Similar to mammalian AQPs, water permeation of AgAQP1 is inhibited by HgCl2 and tetraethylammonium, with Tyr185 conferring tetraethylammonium sensitivity. AgAQP1 is more highly expressed in adult female A. gambiae mosquitoes than in males. Expression is high in gut, ovaries, and Malpighian tubules where immunofluorescence microscopy reveals that AgAQP1 resides in stellate cells but not principal cells. AgAQP1 expression is up-regulated in fat body and ovary by blood feeding but not by sugar feeding, and it is reduced by exposure to a dehydrating environment (42% relative humidity). RNA interference reduces AgAQP1 mRNA and protein levels. In a desiccating environment (<20% relative humidity), mosquitoes with reduced AgAQP1 protein survive significantly longer than controls. These studies support a role for AgAQP1 in water homeostasis during blood feeding and humidity adaptation of A. gambiae, a major mosquito vector of human malaria in sub-Saharan Africa. PMID:21444767

  4. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive-buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to that of the corresponding ideal 2-D ECSBC system. PMID:18296138
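
    The ECVQ encoding rule that underlies this framework can be sketched in a few lines: the encoder picks the codeword minimizing distortion plus a Lagrangian rate penalty, not just the nearest one. The codebook, code lengths, and test vector below are toy values for illustration only.

```python
# Illustrative sketch of the entropy-constrained VQ (ECVQ) encoding
# rule: instead of the plain nearest codeword, pick the codeword
# minimizing distortion + lambda * codeword_length, so cheap-to-code
# entries can win against slightly closer but expensive ones.
# Codebook and code lengths are toy values, not from the paper.

def ecvq_encode(x, codebook, lengths, lam):
    def cost(i):
        dist = sum((a - b) ** 2 for a, b in zip(x, codebook[i]))
        return dist + lam * lengths[i]
    return min(range(len(codebook)), key=cost)

codebook = [(0.0, 0.0), (1.0, 1.0), (1.2, 0.9)]
lengths  = [1, 2, 6]          # assumed code lengths in bits
x = (1.18, 0.92)

print(ecvq_encode(x, codebook, lengths, lam=0.0))  # plain nearest neighbour → 2
print(ecvq_encode(x, codebook, lengths, lam=0.1))  # entropy-constrained → 1
```

    With lambda = 0 this reduces to ordinary nearest-neighbour VQ; raising lambda biases the encoder toward short codes, lowering the output entropy.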

  5. Francisella–Arthropod Vector Interaction and its Role in Patho-Adaptation to Infect Mammals

    PubMed Central

    Akimana, Christine; Kwaik, Yousef Abu

    2011-01-01

    Francisella tularensis is a Gram-negative, intracellular, zoonotic bacterium, and is the causative agent of tularemia with a broad host range. Arthropods such as ticks, mosquitoes, and flies maintain F. tularensis in nature by transmitting the bacteria among small mammals. While the tick is largely believed to be a biological vector of F. tularensis, transmission by mosquitoes and flies is largely believed to be mechanical, on the mouthparts through interrupted feedings. However, the mechanism of infection of the vectors by F. tularensis is not well understood. Since F. tularensis has not been localized in the salivary gland of the primary human-biting ticks, it is thought that bacterial transmission by ticks occurs through mechanical inoculation of tick feces containing F. tularensis into the skin wound. Drosophila melanogaster is a well-established model for arthropod vectors of tularemia, in which F. tularensis infects hemocytes and is found in hemolymph, as seen in ticks. In addition, phagosome biogenesis and robust intracellular proliferation of F. tularensis in arthropod-derived cells are similar to those in mammalian macrophages. Furthermore, bacterial factors required for infectivity of mammals are often required for infectivity of the fly by F. tularensis. Several host factors that contribute to F. tularensis intracellular pathogenesis in D. melanogaster have been identified, and F. tularensis targets some of the evolutionarily conserved eukaryotic processes to enable intracellular survival and proliferation in evolutionarily distant hosts. PMID:21687425

  6. A Nonlinear Adaptive Beamforming Algorithm Based on Least Squares Support Vector Regression

    PubMed Central

    Wang, Lutao; Jin, Gang; Li, Zhengzhou; Xu, Hongbin

    2012-01-01

    To overcome the performance degradation in the presence of steering vector mismatches, strict restrictions on the number of available snapshots, and numerous interferences, a novel beamforming approach based on a nonlinear least-squares support vector regression machine (LS-SVR) is derived in this paper. In this approach, the conventional linearly constrained minimum variance cost function used by the minimum variance distortionless response (MVDR) beamformer is replaced by a squared-loss function to increase robustness in complex scenarios and provide additional control over the sidelobe level. Gaussian kernels are also used to obtain better generalization capacity. This novel approach has two highlights: one is a recursive regression procedure to estimate the weight vectors in real time; the other is a sparse model with a novelty criterion to reduce the final size of the beamformer. The analysis and simulation tests show that the proposed approach offers better noise suppression capability and achieves a near-optimal signal-to-interference-and-noise ratio (SINR) with a low computational burden, as compared to other recently proposed robust beamforming techniques.
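
    For reference, the classical MVDR solution that the LS-SVR cost function replaces is w = R⁻¹a / (aᴴR⁻¹a), which minimizes output power subject to unit gain toward the steering vector a. The sketch below is a minimal illustration under a strong simplifying assumption (white-noise covariance R = σ²I, so its inverse is elementwise); a real beamformer would invert an estimated sample covariance matrix.

```python
# Hedged sketch of the classical MVDR beamformer weights,
# w = R^{-1} a / (a^H R^{-1} a), for the special case R = noise_var * I.
# A full implementation would estimate and invert a sample covariance.
import cmath

def mvdr_weights(steering, noise_var):
    rinv_a = [a / noise_var for a in steering]          # R^{-1} a for diagonal R
    denom = sum(a.conjugate() * ra for a, ra in zip(steering, rinv_a))
    return [ra / denom for ra in rinv_a]

# Two-element array steered toward a 60-degree inter-element phase delay.
a = [1.0 + 0.0j, cmath.exp(-1j * cmath.pi / 3)]
w = mvdr_weights(a, noise_var=0.5)

# Distortionless constraint: unit gain in the look direction (w^H a = 1).
gain = sum(wi.conjugate() * ai for wi, ai in zip(w, a))
print(round(abs(gain), 6))  # → 1.0
```

    The LS-SVR approach in the paper keeps this distortionless goal but trades the hard constraint for a squared-loss, kernelized formulation.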

  7. Design of a Two-level Adaptive Multi-Agent System for Malaria Vectors driven by an ontology

    PubMed Central

    Koum, Guillaume; Yekel, Augustin; Ndifon, Bengyella; Etang, Josiane; Simard, Frédéric

    2007-01-01

    Background The understanding of heterogeneities in disease transmission dynamics as far as malaria vectors are concerned is a big challenge. Many studies tackling this problem do not find exact models to explain malaria vector propagation. Methods To solve the problem we define an Adaptive Multi-Agent System (AMAS), which is elastic and organized on two levels. This AMAS is a dynamic system whose two levels are linked by an ontology, allowing it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents, and at the secondary level it is constituted of analysis agents. Its entry point, a User Interface Agent, can reproduce itself because it is given a minimum of background knowledge, and it learns appropriate behavior from the user in the presence of ambiguous queries and from other agents of the AMAS in other situations. Results The outputs of our system include tables and diagrams showing factors such as entomological parameters of malaria transmission, percentages of malaria transmission per malaria vector, and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. Conclusion Our approach is an intelligent one, differing from the statistical approaches sometimes used in the field, and aligns itself with distributed artificial intelligence. In terms of the fight against malaria, our system reduces the effort required of field workers, who are no longer obliged to cover the entire territory when conducting surveys. Furthermore, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in a geographical area. Unlike with a statistical technique, the projection of our results to the field can sometimes prove to be more general. PMID:17605778
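
    The two-level idea can be illustrated with a toy delegation sketch: an organization agent (level one) hands a computation to an analysis agent (level two). The entomological inoculation rate, EIR = human biting rate × sporozoite rate, is a standard entomological formula; the class names and numbers below are illustrative only and are not from the paper's system.

```python
# Toy sketch of the two-level AMAS idea: an organization agent
# delegates to an analysis agent that computes a standard
# entomological parameter. EIR = biting rate x sporozoite rate is a
# real formula; class names and data are illustrative.

class AnalysisAgent:
    def entomological_inoculation_rate(self, biting_rate, sporozoite_rate):
        """Infective bites per person per night."""
        return biting_rate * sporozoite_rate

class OrganizationAgent:
    def __init__(self):
        self.analyst = AnalysisAgent()   # the level-2 agent it manages

    def report(self, biting_rate, sporozoite_rate):
        eir = self.analyst.entomological_inoculation_rate(biting_rate, sporozoite_rate)
        return {"biting_rate": biting_rate,
                "sporozoite_rate": sporozoite_rate,
                "EIR": eir}

agent = OrganizationAgent()
print(agent.report(10.0, 0.05))   # 10 bites/person/night, 5% infective
```

    A real AMAS would add the ontology linking the levels and the self-reproducing User Interface Agent described above.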

  8. Uniform quantized electron gas.

    PubMed

    Høye, Johan S; Lomba, Enrique

    2016-10-19

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions, where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well-known RPA (random phase approximation). To improve upon it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well-known results of a standard parameterization of Monte Carlo correlation energies. PMID:27546166

  9. Consistent quantization of massive chiral electrodynamics in four dimensions

    SciTech Connect

    Andrianov, A.; Bassetto, A.; Soldati, R.

    1989-10-09

    We discuss the quantization of a four-dimensional model in which a massive Abelian vector field interacts with chiral massless fermions. We show that, by introducing extra scalar fields, a renormalizable unitary S matrix can be obtained in a suitably defined Hilbert space of physical states.

  10. Quantization of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Klauder, John R.

    The present article is primarily a review of the projection-operator approach to quantize systems with constraints. We study the quantization of systems with general first- and second-class constraints from the point of view of coherent-state, phase-space path integration, and show that all such cases may be treated, within the original classical phase space, by using suitable path-integral measures for the Lagrange multipliers which ensure that the quantum system satisfies the appropriate quantum constraint conditions. Unlike conventional methods, our procedures involve no delta-functionals of the classical constraints, no need for dynamical gauge fixing of first-class constraints nor any average thereover, no need to eliminate second-class constraints, no potentially ambiguous determinants, as well as no need to add auxiliary dynamical variables expanding the phase space beyond its original classical formulation, including no ghosts. Besides several pedagogical examples, we also study: (i) the quantization procedure for reparameterization invariant models, (ii) systems for which the original set of Lagrange multipliers are elevated to the status of dynamical variables and used to define an extended dynamical system which is completed with the addition of suitable conjugates and new sets of constraints and their associated Lagrange multipliers, (iii) special examples of alternative but equivalent formulations of given first-class constraints, as well as (iv) a comparison of both regular and irregular constraints.

  11. Adaptation of a retrovirus as a eucaryotic vector transmitting the herpes simplex virus thymidine kinase gene

    SciTech Connect

    Tabin, C.J.; Hoffman, J.W.; Goff, S.P.; Weinberg, R.A.

    1982-04-01

    The authors investigated the feasibility of using retroviruses as vectors for transferring DNA sequences into animal cells. The thymidine kinase (tk) gene of herpes simplex virus was chosen as a convenient model. The internal BamHI fragments of a DNA clone of Moloney leukemia virus (MLV) were replaced with a purified BamHI DNA segment containing the tk gene. Chimeric genomes were created carrying the tk insert in both orientations relative to the MLV sequence. Each was transfected into TK(-) cells along with MLV helper virus, and TK(+) colonies were obtained by selection in the presence of hypoxanthine, aminopterin, and thymidine (HAT). Virus collected from TK(+)-transformed, MLV producer cells passed the TK(+) phenotype to TK(-) cells. Nonproducer cells were isolated, and TK(+) transducing virus was subsequently rescued from them. The chimeric virus showed single-hit kinetics in infections. Virion and cellular RNA and cellular DNA from infected cells were all shown to contain sequences which hybridized to both MLV- and tk-specific probes. The sizes of these sequences were consistent with those predicted for the chimeric virus. In all respects studied, the chimeric MLV-tk virus behaved like known replication-defective retroviruses. These experiments suggest great general applicability of retroviruses as eucaryotic vectors.

  12. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    NASA Astrophysics Data System (ADS)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

    An artificial immune system (AIS) algorithm based on the clonal selection method can be defined as a soft computing method, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts the classification accuracy. In this study, an AIS based on Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and feature subset selection without degrading the SVM classification accuracy. Several public datasets of the University of California, Irvine (UCI) machine learning repository are employed to calculate the classification accuracy in order to evaluate the AISACS approach, which was then compared with a grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.
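
    The clone-mutate-select loop at the heart of clonal selection can be sketched on a toy problem. To keep the example self-contained, the fitness below is a made-up smooth surrogate for SVM cross-validation accuracy over (C, gamma); a real AISACS run would train and score an actual SVM, and would mutate the feature mask as well.

```python
# Hedged sketch of clonal selection for hyper-parameter search: the
# best candidates are cloned, clones are mutated, and elitist
# selection keeps the fittest of parents and clones. The fitness is a
# toy surrogate, not an SVM evaluation.
import random

def fitness(c, gamma):
    # Toy surrogate peaking at (C, gamma) = (1.0, 0.1); a real run
    # would use SVM cross-validation accuracy here instead.
    return -((c - 1.0) ** 2 + (gamma - 0.1) ** 2)

def clonal_selection(generations=100, pop_size=8, n_best=4, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.01, 10.0), rng.uniform(0.001, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        clones = []
        for c, g in pop[:n_best]:               # clone the best candidates
            for _ in range(2):                  # two mutated clones each
                clones.append((c + rng.gauss(0.0, 0.3),
                               g + rng.gauss(0.0, 0.03)))
        # Elitist selection: keep the fittest of parents and clones.
        pop = sorted(pop + clones, key=lambda p: fitness(*p), reverse=True)[:pop_size]
    return pop[0]

best_c, best_gamma = clonal_selection()
print(round(best_c, 1), round(best_gamma, 2))
```

    The elitist step guarantees the best candidate never gets worse, which is what makes the simple random mutations converge toward the fitness peak.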

  13. A vector-product information retrieval system adapted to heterogeneous, distributed computing environments

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
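
    The vector-product (vector-space) ranking idea can be shown with a minimal TF-IDF cosine-similarity sketch. The toy corpus is illustrative and unrelated to NELS's actual collections; the ranking mechanics are the standard vector-space model.

```python
# Minimal sketch of vector-space retrieval: documents and the query
# are TF-IDF vectors, and results are ranked by cosine similarity,
# mirroring the ranked list a system like NELS returns for a natural
# language query. The corpus is a toy example.
import math
from collections import Counter

docs = ["image compression with vector quantization",
        "malaria vector mosquito adaptation",
        "adaptive vector quantization codebook design"]

def tfidf(tokens, df, n_docs):
    tf = Counter(tokens)
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf}

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

tokenized = [d.split() for d in docs]
df = Counter(t for toks in tokenized for t in set(toks))
vecs = [tfidf(toks, df, len(docs)) for toks in tokenized]

query = tfidf("adaptive vector quantization codebook".split(), df, len(docs))
ranked = sorted(range(len(docs)), key=lambda i: -cosine(vecs[i], query))
print(ranked[0])  # → 2 (the codebook-design document ranks first)
```

    Note that "vector" occurs in every document, so its IDF weight is zero and it contributes nothing to the ranking, which is exactly the discrimination behaviour TF-IDF is designed to provide.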

  14. Quantization of Generally Covariant Systems

    NASA Astrophysics Data System (ADS)

    Sforza, Daniel M.

    2000-12-01

    Finite dimensional models that mimic the constraint structure of Einstein's General Relativity are quantized in the framework of BRST and Dirac's canonical formalisms. The first system to be studied is one featuring a constraint quadratic in the momenta (the "super-Hamiltonian") and a set of constraints linear in the momenta (the "supermomentum" constraints). The starting point is to realize that the ghost contributions to the supermomentum constraint operators can be read in terms of the natural volume induced by the constraints in the orbits. This volume plays a fundamental role in the construction of the quadratic sector of the nilpotent BRST charge. It is shown that the quantum theory is invariant under scaling of the super-Hamiltonian. As long as the system has an intrinsic time, this property translates in a contribution of the potential to the kinetic term. In this aspect, the results substantially differ from other works where the scaling invariance is forced by introducing a coupling to the curvature. The contribution of the potential, far from being unnatural, is beautifully justified in the light of the Jacobi's principle. Then, it is shown that the obtained results can be extended to systems with extrinsic time. In this case, if the metric has a conformal temporal Killing vector and the potential exhibits a suitable behavior with respect to it, the role played by the potential in the case of intrinsic time is now played by the norm of the Killing vector. Finally, the results for the previous cases are extended to a system featuring two super-Hamiltonian constraints. This step is extremely important due to the fact that General Relativity features an infinite number of such constraints satisfying a non trivial algebra among themselves.

  15. Coherent state quantization of quaternions

    SciTech Connect

    Muraleetharan, B. E-mail: santhar@gmail.com; Thirulogasanthar, K. E-mail: santhar@gmail.com

    2015-08-15

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic version of the harmonic oscillator and Weyl-Heisenberg algebra are also obtained.

  16. Coherent state quantization of quaternions

    NASA Astrophysics Data System (ADS)

    Muraleetharan, B.; Thirulogasanthar, K.

    2015-08-01

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic version of the harmonic oscillator and Weyl-Heisenberg algebra are also obtained.

  17. Visual optimization of DCT quantization matrices for individual images

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1993-01-01

    Many image compression standards (JPEG, MPEG, H.261) are based on the Discrete Cosine Transform (DCT). However, these standards do not specify the actual DCT quantization matrix. We have previously provided mathematical formulae to compute a perceptually lossless quantization matrix. Here I show how to compute a matrix that is optimized for a particular image. The method treats each DCT coefficient as an approximation to the local response of a visual 'channel'. For a given quantization matrix, the DCT quantization errors are adjusted by contrast sensitivity, light adaptation, and contrast masking, and are pooled non-linearly over the blocks of the image. This yields an 8x8 'perceptual error matrix'. A second non-linear pooling over the perceptual error matrix yields total perceptual error. With this model we may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
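
    The non-linear pooling step can be sketched with a Minkowski sum over per-coefficient errors in JND units: pooled = (Σ|e|^β)^(1/β). A β of around 4 is used in this line of perceptual-pooling work, but the error values below are illustrative, not from the paper.

```python
# Sketch of non-linear perceptual-error pooling: per-coefficient
# errors in jnd units are combined with a Minkowski sum,
# pooled = (sum |e|^beta) ** (1/beta). Large beta makes the pool
# approach the single worst (most visible) error. Values are toy data.

def minkowski_pool(errors, beta=4.0):
    return sum(abs(e) ** beta for e in errors) ** (1.0 / beta)

jnd_errors = [0.5, 0.5, 0.5, 2.0]   # one clearly visible error dominates
print(round(minkowski_pool(jnd_errors), 3))
# High beta makes pooling approach the maximum error:
print(round(minkowski_pool(jnd_errors, beta=100.0), 3))  # → close to 2.0
```

    In the optimization described above, this pooled value is the quantity held fixed while the bit-rate is minimized, or vice versa.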

  18. An all digital implementation of a modified Hamming net for video compression with prediction and quantization circuits

    NASA Astrophysics Data System (ADS)

    Kaul, Richard; Adkins, Kenneth; Bibyk, Steven

The hardware and algorithms used to vector quantize (VQ) predicted pixel intensity differences for real-time video compression are described. The hardware is designed for rapid vector quantization, which entails the development of application-specific associative memory circuits. A modified DPCM algorithm was first examined to determine how neural circuitry could enhance its operation; it was determined that quantization and encoding could be improved by consolidating these two functions into one and by increasing the amount of information (i.e., the number of pixels) quantized at a time. The result is a predictive scheme that vector quantizes differential values. Some of the disadvantages of VQ algorithms are overcome using associative memories. The video compression algorithm and the associative memory design are described.

  19. Quantized Casimir force.

    PubMed

    Tse, Wang-Kong; MacDonald, A H

    2012-12-01

We investigate the Casimir effect between two-dimensional electron systems driven to the quantum Hall regime by a strong perpendicular magnetic field. In the large-separation (d) limit where retardation effects are essential, we find (i) that the Casimir force is quantized in units of 3ħcα²/(8π²d⁴) and (ii) that the force is repulsive for mirrors with the same type of carrier and attractive for mirrors with opposite types of carrier. The sign of the Casimir force is therefore electrically tunable in ambipolar materials such as graphene. The Casimir force is suppressed when one mirror is a charge-neutral graphene system in a filling factor ν=0 quantum Hall state. PMID:23368242

  20. First quantized electrodynamics

    SciTech Connect

    Bennett, A.F.

    2014-06-15

    The parametrized Dirac wave equation represents position and time as operators, and can be formulated for many particles. It thus provides, unlike field-theoretic Quantum Electrodynamics (QED), an elementary and unrestricted representation of electrons entangled in space or time. The parametrized formalism leads directly and without further conjecture to the Bethe–Salpeter equation for bound states. The formalism also yields the Uehling shift of the hydrogenic spectrum, the anomalous magnetic moment of the electron to leading order in the fine structure constant, the Lamb shift and the axial anomaly of QED. -- Highlights: •First-quantized electrodynamics of the parametrized Dirac equation is developed. •Unrestricted entanglement in time is made explicit. •Bethe and Salpeter’s equation for relativistic bound states is derived without further conjecture. •One-loop scattering corrections and the axial anomaly are derived using a partial summation. •Wide utility of semi-classical Quantum Electrodynamics is argued.

  1. Adaptive Square-Root Cubature-Quadrature Kalman Particle Filter for satellite attitude determination using vector observations

    NASA Astrophysics Data System (ADS)

    Kiani, Maryam; Pourtakdoust, Seid H.

    2014-12-01

A novel algorithm is presented in this study for the estimation of a spacecraft's attitude and angular rates from vector observations. In this regard, a new cubature-quadrature particle filter (CQPF) is initially developed that uses the Square-Root Cubature-Quadrature Kalman Filter (SR-CQKF) to generate the importance proposal distribution. The developed CQPF scheme avoids the basic limitation of the particle filter (PF) with regard to accounting for new measurements. Subsequently, CQPF is enhanced to adjust the sample size at every time step utilizing the idea of confidence intervals, thus improving the efficiency and accuracy of the newly proposed adaptive CQPF (ACQPF). In addition, application of the q-method for filter initialization has intensified the computational burden. The current study also applies ACQPF to the problem of attitude estimation of a low Earth orbit (LEO) satellite. For this purpose, the undertaken satellite is equipped with a three-axis magnetometer (TAM) as well as a sun sensor pack that provide noisy geomagnetic field data and Sun direction measurements, respectively. The results and performance of the proposed filter are investigated and compared with those of the extended Kalman filter (EKF) and the standard particle filter (PF) utilizing a Monte Carlo simulation. The comparison demonstrates the viability and the accuracy of the proposed nonlinear estimator.
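The confidence-interval idea for adjusting the sample size can be illustrated with a generic rule (an assumption-laden sketch, not the paper's ACQPF criterion): pick the next particle count so that an approximate confidence-interval half-width z·σ/√N on the weighted estimate falls below a tolerance.

```python
import numpy as np

def required_particles(weights, particles, tol=0.05, z=1.96,
                       n_min=100, n_max=5000):
    """Pick the next particle count from a confidence interval on the
    current weighted state estimate (illustrative rule, not the paper's).

    The approximate CI half-width for the mean is z * sigma / sqrt(N);
    solving z * sqrt(var) / sqrt(N) <= tol for N gives the count below.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights
    x = np.asarray(particles, dtype=float)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)                 # weighted variance
    n = int(np.ceil(z ** 2 * var / tol ** 2))
    return min(max(n, n_min), n_max)                  # clamp to a sane range
```

A tight particle cloud (small weighted variance) thus shrinks the sample size for the next step, while a dispersed cloud grows it.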

  2. Quantized beam shifts in graphene

    SciTech Connect

    de Melo Kort-Kamp, Wilton Junior; Sinitsyn, Nikolai; Dalvit, Diego Alejandro Roberto

    2015-10-08

We predict the existence of quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts for light beams impinging on a graphene-on-substrate system in an external magnetic field. In the quantum Hall regime the Imbert-Fedorov and photonic spin Hall shifts are quantized in integer multiples of the fine structure constant α, while the Goos-Hänchen ones in multiples of α². We investigate the influence on these shifts of magnetic field, temperature, and material dispersion and dissipation. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field.

  3. QED in Krein Space Quantization

    NASA Astrophysics Data System (ADS)

    Zarei, A.; Forghan, B.; Takook, M. V.

    2011-08-01

In this paper we consider QED in Krein space quantization. We show that the theory is automatically regularized. The three primitively divergent integrals of usual QED are considered in Krein QED. The photon self-energy, electron self-energy and vertex function are calculated in this formalism. We show that these quantities are finite. The infrared and ultraviolet divergences do not appear. We argue that Krein space quantization is similar to Pauli-Villars regularization, so we have called it the "Krein regularization".

  4. Quantized visual awareness

    PubMed Central

    Escobar, W. A.

    2013-01-01

The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, which likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom. PMID:24319436

  5. Speech recognition in reverberant and noisy environments employing multiple feature extractors and i-vector speaker adaptation

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Gupta, Vishwa; Kenny, Patrick; Dumouchel, Pierre

    2015-12-01

The REVERB challenge provides a common framework for the evaluation of feature extraction techniques in the presence of both reverberation and additive background noise. State-of-the-art speech recognition systems perform well in controlled environments, but their performance degrades in realistic acoustical conditions, especially in real as well as simulated reverberant environments. In this contribution, we utilize multiple feature extractors including the conventional mel-filterbank, multi-taper spectrum estimation-based mel-filterbank, robust mel and compressive gammachirp filterbank, iterative deconvolution-based dereverberated mel-filterbank, and maximum likelihood inverse filtering-based dereverberated mel-frequency cepstral coefficient features for speech recognition with multi-condition training data. In order to improve speech recognition performance, we combine their results using ROVER (Recognizer Output Voting Error Reduction). For the two- and eight-channel tasks, to benefit from the multi-channel data, we also use ROVER, instead of a multi-microphone signal processing method, to reduce the word error rate by selecting the best scoring word at each channel. As in previous work, we also apply i-vector-based speaker adaptation, which was found effective. In a speech recognition task, speaker adaptation tries to reduce the mismatch between the training and test speakers. Speech recognition experiments are conducted on the REVERB challenge 2014 corpora using the Kaldi recognizer. In our experiments, we use both utterance-based batch processing and full batch processing. In the single-channel task, full batch processing reduced the word error rate (WER) from 10.0 to 9.3% on SimData as compared to utterance-based batch processing.
Using full batch processing, we obtained an average WER of 9.0 and 23.4 % on the SimData and RealData, respectively, for the two-channel task, whereas for the eight-channel task on the SimData and RealData, the average WERs found were 8

  6. The relative magnitude of transgene-specific adaptive immune responses induced by human and chimpanzee adenovirus vectors differs between laboratory animals and a target species.

    PubMed

    Dicks, Matthew D J; Guzman, Efrain; Spencer, Alexandra J; Gilbert, Sarah C; Charleston, Bryan; Hill, Adrian V S; Cottingham, Matthew G

    2015-02-25

Adenovirus vaccine vectors generated from new viral serotypes are routinely screened in pre-clinical laboratory animal models to identify the most immunogenic and efficacious candidates for further evaluation in clinical human and veterinary settings. Here, we show that studies in a laboratory species do not necessarily predict the hierarchy of vector performance in other mammals. In mice, after intramuscular immunization, HAdV-5 (Human adenovirus C) based vectors elicited cellular and humoral adaptive responses of higher magnitudes compared to the chimpanzee adenovirus vectors ChAdOx1 and AdC68 from species Human adenovirus E. After HAdV-5 vaccination, transgene specific IFN-γ⁺ CD8⁺ T cell responses reached peak magnitude later than after ChAdOx1 and AdC68 vaccination, and exhibited a slower contraction to a memory phenotype. In cattle, cellular and humoral immune responses were at least equivalent, if not higher, in magnitude after ChAdOx1 vaccination compared to HAdV-5. Though we have not tested protective efficacy in a disease model, these findings have important implications for the selection of candidate vectors for further evaluation. We propose that vaccines based on ChAdOx1 or other Human adenovirus E serotypes could be at least as immunogenic as current licensed bovine vaccines based on HAdV-5. PMID:25629523

  7. Dynamical non-Abelian two-form: BRST quantization

    SciTech Connect

    Lahiri, A.

    1997-04-01

When an antisymmetric tensor potential is coupled to the field strength of a gauge field via a B∧F coupling and a kinetic term for B is included, the gauge field develops an effective mass. The theory can be made invariant under a non-Abelian vector gauge symmetry by introducing an auxiliary vector field. The covariant quantization of this theory requires ghosts for ghosts. The resultant theory including gauge fixing and ghost terms is BRST invariant by construction, and therefore unitary. The construction of the BRST-invariant action is given for both Abelian and non-Abelian models of mass generation. © 1997 The American Physical Society.

  8. Pan evaporation modeling using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur

    2015-09-01

Pan evaporation (Ep) modeling is an important issue in reservoir management, regional water resources planning and evaluation of drinking-water supplies. The main purpose of this study is to investigate the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling Ep. The first part of the study focused on testing the ability of the LSSVM, MARS and M5Tree models in estimating the Ep data of the Mersin and Antalya stations located in the Mediterranean Region of Turkey by using the cross-validation method. The LSSVM models outperformed the MARS and M5Tree models in estimating Ep of the Mersin and Antalya stations with local input and output data. The average root mean square error (RMSE) of the M5Tree and MARS models was decreased by 24-32.1% and 10.8-18.9% using LSSVM models for the Mersin and Antalya stations, respectively. In the second part of the study (cross-station application without local input data), the ability of the three methods was examined in estimating Ep using input air temperature, solar radiation, relative humidity and wind speed data from a nearby station. The results showed that the MARS models provided better accuracy than the LSSVM and M5Tree models with respect to RMSE, mean absolute error (MAE) and determination coefficient (R²) criteria. The average RMSE accuracy of the LSSVM and M5Tree was increased by 3.7% and 16.5% using MARS. In the case without local input data, the average RMSE accuracy of the LSSVM and M5Tree was increased by 11.4% and 18.4%, respectively, using MARS. In the third part of the study, the ability of the applied models was examined in Ep estimation using input and output data of a nearby station. The results showed that the MARS models performed better than the other models with respect to RMSE, MAE and R² criteria. The average RMSE of the LSSVM and M5Tree was decreased by 54% and 3.4%, respectively, using MARS.
The overall results indicated that

  9. Deformation quantization of cosmological models

    NASA Astrophysics Data System (ADS)

    Cordero, Rubén; García-Compeán, Hugo; Turrubiates, Francisco J.

    2011-06-01

The Weyl-Wigner-Groenewold-Moyal formalism of deformation quantization is applied to cosmological models in the minisuperspace. The quantization procedure is performed explicitly for quantum cosmology in a flat minisuperspace. The de Sitter cosmological model is worked out in detail and the computation of the Wigner functions for the Hartle-Hawking, Vilenkin and Linde wave functions is done numerically. The Wigner function is analytically calculated for the Kantowski-Sachs model in (non)commutative quantum cosmology and for string cosmology with dilaton exponential potential. Finally, baby universe solutions are described in this context and the Wigner function is obtained.

  10. Periodic roads and quantized wheels

    NASA Astrophysics Data System (ADS)

    de Campos Valadares, Eduardo

    2016-08-01

We propose a simple approach to determine all possible wheels that can roll smoothly without slipping on a periodic roadbed, while maintaining the center of mass at a fixed height. We also address the inverse problem: obtaining the roadbed profile compatible with a specific wheel and all other related "quantized wheels." The role of symmetry is highlighted, which might preclude the center of mass from remaining at a fixed height. A straightforward consequence of such geometric quantization is that the gravitational potential energy and the moment of inertia are discrete, suggesting a parallelism between macroscopic wheels and nano-systems, such as carbon nanotubes.

  11. Fermionic Quantization of Hopf Solitons

    NASA Astrophysics Data System (ADS)

    Krusch, S.; Speight, J. M.

    2006-06-01

    In this paper we show how to quantize Hopf solitons using the Finkelstein-Rubinstein approach. Hopf solitons can be quantized as fermions if their Hopf charge is odd. Symmetries of classical minimal energy configurations induce loops in configuration space which give rise to constraints on the wave function. These constraints depend on whether the given loop is contractible. Our method is to exploit the relationship between the configuration spaces of the Faddeev-Hopf and Skyrme models provided by the Hopf fibration. We then use recent results in the Skyrme model to determine whether loops are contractible. We discuss possible quantum ground states up to Hopf charge Q=7.

  12. Scalable Feature Matching by Dual Cascaded Scalar Quantization for Image Retrieval.

    PubMed

    Zhou, Wengang; Yang, Ming; Wang, Xiaoyu; Li, Houqiang; Lin, Yuanqing; Tian, Qi

    2016-01-01

In this paper, we investigate the problem of scalable visual feature matching in large-scale image search and propose a novel cascaded scalar quantization scheme in dual resolution. We formulate visual feature matching as a range-based neighbor search problem and approach it by identifying hyper-cubes with a dual-resolution scalar quantization strategy. Specifically, for each dimension of the PCA-transformed feature, scalar quantization is performed at both coarse and fine resolutions. The scalar quantization results at the coarse resolution are cascaded over multiple dimensions to index an image database. The scalar quantization results over multiple dimensions at the fine resolution are concatenated into a binary super-vector and stored in the index list for efficient verification. The proposed cascaded scalar quantization (CSQ) method is free of the costly visual codebook training and thus is independent of any image descriptor training set. The index structure of the CSQ is flexible enough to accommodate new image features and scalable enough to index large-scale image databases. We evaluate our approach on public benchmark datasets for large-scale image retrieval. Experimental results demonstrate the competitive retrieval performance of the proposed method compared with several recent retrieval algorithms on feature quantization. PMID:26656584
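A minimal sketch of the dual-resolution idea follows. The bin counts, the assumption that features are pre-scaled to [0, 1), and the helper name `csq_encode` are illustrative, not the paper's settings:

```python
import numpy as np

def csq_encode(x, coarse_bins=4, fine_bits=2):
    """Dual-resolution scalar quantization of one feature vector.

    Each dimension (assumed pre-scaled to [0, 1)) is quantized at a coarse
    resolution, whose codes are cascaded into a single integer index key,
    and at a fine resolution, whose residual codes are concatenated into a
    binary super-vector used for verification.
    """
    x = np.asarray(x, dtype=float)
    coarse = np.minimum((x * coarse_bins).astype(int), coarse_bins - 1)
    fine_levels = coarse_bins * (2 ** fine_bits)
    fine = np.minimum((x * fine_levels).astype(int), fine_levels - 1)
    rel = fine - coarse * (2 ** fine_bits)        # fine code within coarse cell
    # Unpack each residual into fine_bits bits, most significant first
    bits = ((rel[:, None] >> np.arange(fine_bits)[::-1]) & 1).ravel()
    key = 0
    for c in coarse:                              # cascade coarse codes
        key = key * coarse_bins + int(c)
    return key, bits
```

Here `key` selects the index cell for a database, while the bit string would be stored in the index list and compared at query time for verification.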

  13. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
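The two ingredients named here, a wavelet decomposition followed by adaptive uniform scalar quantization, can be sketched as follows. A one-level Haar transform stands in for the standard's subband decomposition, and the dead-zone quantizer's step sizes are placeholders rather than the FBI standard's values:

```python
import numpy as np

def haar_2d(img):
    """One level of a 2-D Haar transform (a stand-in for the standard's DWT)."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0      # vertical average
    hi = (img[0::2, :] - img[1::2, :]) / 2.0      # vertical difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def deadzone_quantize(band, step, zone=1.2):
    """Uniform scalar quantizer with a widened dead zone around zero.

    In a real encoder `step` would be chosen adaptively per subband; the
    values here are placeholders, not the standard's bin widths.
    """
    band = np.asarray(band, dtype=float)
    half_zone = zone * step / 2.0
    q = np.zeros(band.shape, dtype=int)
    pos = band > half_zone
    neg = band < -half_zone
    q[pos] = np.floor((band[pos] - half_zone) / step).astype(int) + 1
    q[neg] = -(np.floor((-band[neg] - half_zone) / step).astype(int) + 1)
    return q
```

Small coefficients in the high-frequency subbands fall into the dead zone and quantize to zero, which is where most of the compression comes from before entropy coding.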

  14. Geometric Quantization and Foliation Reduction

    NASA Astrophysics Data System (ADS)

    Skerritt, Paul

    A standard question in the study of geometric quantization is whether symplectic reduction interacts nicely with the quantized theory, and in particular whether "quantization commutes with reduction." Guillemin and Sternberg first proposed this question, and answered it in the affirmative for the case of a free action of a compact Lie group on a compact Kahler manifold. Subsequent work has focused mainly on extending their proof to non-free actions and non-Kahler manifolds. For realistic physical examples, however, it is desirable to have a proof which also applies to non-compact symplectic manifolds. In this thesis we give a proof of the quantization-reduction problem for general symplectic manifolds. This is accomplished by working in a particular wavefunction representation, associated with a polarization that is in some sense compatible with reduction. While the polarized sections described by Guillemin and Sternberg are nonzero on a dense subset of the Kahler manifold, the ones considered here are distributional, having support only on regions of the phase space associated with certain quantized, or "admissible", values of momentum. We first propose a reduction procedure for the prequantum geometric structures that "covers" symplectic reduction, and demonstrate how both symplectic and prequantum reduction can be viewed as examples of foliation reduction. Consistency of prequantum reduction imposes the above-mentioned admissibility conditions on the quantized momenta, which can be seen as analogues of the Bohr-Wilson-Sommerfeld conditions for completely integrable systems. We then describe our reduction-compatible polarization, and demonstrate a one-to-one correspondence between polarized sections on the unreduced and reduced spaces. Finally, we describe a factorization of the reduced prequantum bundle, suggested by the structure of the underlying reduced symplectic manifold. This in turn induces a factorization of the space of polarized sections that agrees

  15. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  16. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  17. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  18. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
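A variable-stage motion search of the kind described, where stages are skipped once the match is good enough, can be sketched as a three-step search with early termination; the threshold and step schedule are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def variable_stage_search(ref, cur, by, bx, bs=8, step=4, sad_thresh=0):
    """Find the motion vector for the bs x bs block of `cur` at (by, bx)
    by a three-step style search over `ref`, halving the step each stage.
    The search stops early once the error drops to `sad_thresh`, saving
    the remaining stages (the variable-stage idea).
    """
    block = cur[by:by + bs, bx:bx + bs]
    mvy = mvx = 0
    best = sad(ref[by:by + bs, bx:bx + bs], block)
    while step >= 1:
        cy, cx = mvy, mvx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + mvy + dy, bx + mvx + dx
                if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                    cost = sad(ref[y:y + bs, x:x + bs], block)
                    if cost < best:
                        best, cy, cx = cost, mvy + dy, mvx + dx
        mvy, mvx = cy, cx
        if best <= sad_thresh:        # good enough: skip remaining stages
            break
        step //= 2
    return (mvy, mvx), best
```

Activity segmentation would then decide which blocks are searched at all; static blocks are copied from the previous frame without spending any stages.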

  19. Separable quantizations of Stäckel systems

    NASA Astrophysics Data System (ADS)

    Błaszak, Maciej; Marciniak, Krzysztof; Domański, Ziemowit

    2016-08-01

In this article we prove that many Hamiltonian systems that cannot be separably quantized in the classical approach of Robertson and Eisenhart can be separably quantized if we extend the class of admissible quantizations through a suitable choice of Riemann space adapted to the Poisson geometry of the system. Actually, in this article we prove that for every Stäckel system quadratic in momenta (defined on a 2n-dimensional Poisson manifold) for which the Stäckel matrix consists of monomials in position coordinates there exist infinitely many quantizations, parametrized by n arbitrary functions, that turn this system into a quantum separable Stäckel system. In this paper we prove this conjecture for a very large class of Stäckel systems, generated by separation relations of the form (17), where the Stäckel matrix consists of monomials in position coordinates. For any Stäckel system from this class we construct a family of metrics for which the minimal quantization leads to quantum separability and commutativity of the quantized constants of motion. We want to stress, however, that we do not deal with the spectral theory of the obtained quantum systems, as it requires a separate investigation. The paper is organized as follows. In Section 2 we briefly summarize the results of the Robertson-Eisenhart theory of quantum separability. In Section 3 we present some fundamental facts about classical Stäckel systems. Section 4 contains a presentation of some results derived from our general theory of quantization of Hamiltonian systems on phase space; in particular, we demonstrate how to obtain the minimal quantization (4) from our general theory. In Section 5 we relate quantizations of the same Hamiltonian in different metrics g and ḡ (or in different Hilbert spaces L2(Q,ωg) and L2(Q,ωḡ)). Essentially, this construction explains the origin of the quantum correction terms in the classical Hamiltonians introduced in [1] and in [2

  20. Quantized beam shifts in graphene

    NASA Astrophysics Data System (ADS)

    Kort-Kamp, Wilton; Sinitsyn, Nikolai; Dalvit, Diego

We show that the magneto-optical response of a graphene-on-substrate system in the presence of an external magnetic field strongly affects light beam shifts. In the quantum Hall regime, we predict quantized Imbert-Fedorov, Goos-Hänchen, and photonic spin Hall shifts. The Imbert-Fedorov and photonic spin Hall shifts are given in integer multiples of the fine structure constant α, while the Goos-Hänchen ones in discrete multiples of α². Due to time-reversal symmetry breaking, the Imbert-Fedorov shifts change sign when the direction of the applied magnetic field is reversed, while the other shifts remain unchanged. We investigate the influence on these shifts of magnetic field, temperature, and material dispersion and dissipation. An experimental demonstration of quantized beam shifts could be achieved at terahertz frequencies for moderate values of the magnetic field. We acknowledge the LANL LDRD program for financial support.

  1. Third Quantization and Quantum Universes

    NASA Astrophysics Data System (ADS)

    Kim, Sang Pyo

    2014-01-01

We study the third quantization of the Friedmann-Robertson-Walker cosmology with N-minimal massless fields. The third quantized Hamiltonian for the Wheeler-DeWitt equation in the minisuperspace consists of an infinite number of intrinsic-time-dependent, decoupled oscillators. The Hamiltonian has a pair of invariant operators for each universe with conserved momenta of the fields that play the role of the annihilation and creation operators and that construct various quantum states for the universe. The closed universe exhibits an interesting feature of transitions from stable states to tachyonic states depending on the conserved momenta of the fields. In the classically forbidden unstable regime, the quantum states have googolplex-growing position and conjugate momentum dispersions, which defy any measurement of the position of the universe.

  2. Quantized Cosmology: A Simple Approach

    SciTech Connect

    Weinstein, M

    2004-06-03

I discuss the problem of inflation in the context of Friedmann-Robertson-Walker cosmology and show how, after a simple change of variables, to quantize the problem in a way which parallels the classical discussion. The result is that two of the Einstein equations arise as exact equations of motion and one of the usual Einstein equations (suitably quantized) survives as a constraint equation to be imposed on the space of physical states. However, the Friedmann equation, which is also a constraint equation and which is the basis of the Wheeler-DeWitt equation, acquires a welcome quantum correction that becomes significant for small scale factors. To clarify how things work in this formalism I briefly outline the way in which our formalism works for the exactly solvable case of de Sitter space.

  3. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.

  4. On the quantization of the linearized gravitational field

    NASA Astrophysics Data System (ADS)

    Grigore, D. R.

    2000-01-01

    We present a new point of view on the quantization of the gravitational field, namely we use exclusively the quantum framework of second quantization. More explicitly, we take as the one-particle Hilbert space H_{graviton} the unitary irreducible representation of the Poincaré group corresponding to a massless particle of helicity 2, and apply the second quantization procedure with Bose-Einstein statistics. The resulting Hilbert space F^+(H_{graviton}) is, by definition, the Hilbert space of the gravitational field. Then we prove that this Hilbert space is canonically isomorphic to a space of the type Ker(Q)/Im(Q), where Q is a supercharge defined on an extension of the Hilbert space F^+(H_{graviton}) by the inclusion of ghosts: fermionic ghosts u_µ, ũ_µ, which are vector fields, and a bosonic ghost Φ, which is a scalar field. This is to be contrasted with the usual approaches, where only the fermionic ghosts are considered. However, a rigorous proof that this is indeed possible seems to be lacking in the literature.
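
    In standard notation, the two constructions described above read as follows (the nilpotency of the supercharge Q is implicit in the Ker/Im construction):

```latex
% Bosonic (symmetric) Fock space over the one-particle graviton space:
\mathcal{F}^{+}(\mathcal{H}_{\mathrm{graviton}})
  = \bigoplus_{n=0}^{\infty} S^{n}\mathcal{H}_{\mathrm{graviton}},
% and the claimed canonical isomorphism with a BRST-type cohomology,
% where Q is the supercharge on the ghost-extended Hilbert space:
\mathcal{F}^{+}(\mathcal{H}_{\mathrm{graviton}})
  \cong \operatorname{Ker}(Q)/\operatorname{Im}(Q),
  \qquad Q^{2} = 0 .
```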

  5. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
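
    As a concrete illustration of the scalar-quantization step, here is a minimal dead-zone uniform quantizer of the kind applied to wavelet subband coefficients. The bin width Q and dead-zone width Z are illustrative, not the values specified by the FBI standard.

```python
# Dead-zone uniform scalar quantization sketch: coefficients near zero fall
# into an enlarged "dead zone" bin, and the rest are quantized uniformly.
import numpy as np

def quantize(coeffs, Q, Z):
    """Map coefficients to integer bin indices; |c| < Z/2 maps to bin 0."""
    signs = np.sign(coeffs)
    mags = np.abs(coeffs)
    idx = np.where(mags < Z / 2, 0, np.floor((mags - Z / 2) / Q) + 1)
    return (signs * idx).astype(int)

def dequantize(idx, Q, Z):
    """Reconstruct at the centre of each quantization bin."""
    signs = np.sign(idx)
    mags = np.where(idx == 0, 0.0, Z / 2 + (np.abs(idx) - 0.5) * Q)
    return signs * mags

c = np.array([-3.7, -0.2, 0.1, 1.4, 5.9])
print(quantize(c, Q=1.0, Z=1.0))   # -> [-4  0  0  1  6]
```

    The dead zone is what makes the quantizer effective on wavelet subbands, since most high-frequency coefficients are small and collapse to the zero bin.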

  6. Exact quantization conditions for cluster integrable systems

    NASA Astrophysics Data System (ADS)

    Franco, Sebastián; Hatsuda, Yasuyuki; Mariño, Marcos

    2016-06-01

    We propose exact quantization conditions for the quantum integrable systems of Goncharov and Kenyon, based on the enumerative geometry of the corresponding toric Calabi–Yau manifolds. Our conjecture builds upon recent results on the quantization of mirror curves, and generalizes a previous proposal for the quantization of the relativistic Toda lattice. We present explicit tests of our conjecture for the integrable systems associated to the resolved C^3/Z_5 and C^3/Z_6 orbifolds.

  7. Quantized-"Gray-Scale" Electronic Synapses

    NASA Technical Reports Server (NTRS)

    Lamb, James L.; Daud, Taher; Thakoor, Anilkumar P.

    1990-01-01

    Proposed array of programmable synaptic connections for electronic neural network applications offers multiple quantized levels of connection strength using only simple, two-terminal, binary microswitch devices. Subgrids in fine grid of programmable resistive connections connected externally in parallel to form coarser synaptic grid. By selection of pattern of connections in each subgrid, connection strength of synaptic node represented by that subgrid set at quantized "gray level". Device structures promise implementations of quantized-"gray-scale" synaptic arrays with very high density.
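
    A minimal numeric sketch of the gray-level idea, assuming each closed binary switch contributes one unit of conductance and that parallel conductances add (the unit value is illustrative):

```python
# A synaptic node is a subgrid of binary (on/off) resistive connections
# wired in parallel, so its total conductance is quantized in units of one
# switch's conductance g_unit.
def node_conductance(switch_states, g_unit=1.0):
    """Parallel conductances add: N closed switches give N * g_unit."""
    return g_unit * sum(switch_states)

# A 2x2 subgrid with three of four switches closed sits at gray level 3.
print(node_conductance([1, 1, 0, 1]))  # -> 3.0
```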

  8. Quantized vortices in interacting gauge theories

    NASA Astrophysics Data System (ADS)

    Butera, Salvatore; Valiente, Manuel; Öhberg, Patrik

    2016-01-01

    We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit, with a laser configuration such that the single-particle zeroth-order vector potential corresponds to a constant synthetic magnetic field. We find a new exotic type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the rotational properties of this system in the Thomas-Fermi limit, focusing in particular on the physical conditions that make the existence of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. We point out that two different physical interpretations can be given to this new nonlinearity: first, it can be seen as a local modification of the mean-field coupling constant, whose value depends on the angular momentum of the condensate; second, it can be interpreted as a density-modulated angular velocity given to the cloud. Looking at the problem from both of these viewpoints, we show that the effect of the new nonlinearity is to induce a rotation of the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud.

  9. Quantized vortices in interacting gauge theories

    NASA Astrophysics Data System (ADS)

    Butera, Salvatore; Valiente, Manuel; Ohberg, Patrik

    2015-05-01

    We consider a two-dimensional weakly interacting ultracold Bose gas whose constituents are two-level atoms. We study the effects of a synthetic density-dependent gauge field that arises from laser-matter coupling in the adiabatic limit, with a laser configuration such that the single-particle vector potential corresponds to a constant synthetic magnetic field. We find a new type of current nonlinearity in the Gross-Pitaevskii equation which affects the dynamics of the order parameter of the condensate. We investigate the physical conditions that make the nucleation of a quantized vortex in the system energetically favourable with respect to the non-rotating solution. Two different physical interpretations can be given to this new nonlinearity: first, it can be seen as a local modification of the mean-field coupling constant, whose value depends on the angular momentum of the condensate; second, it can be interpreted as a density-modulated angular velocity given to the cloud. In the Thomas-Fermi limit, we show that the effect of the new nonlinearity is to induce a rotation of the condensate, where the transition from non-rotating to rotating states depends on the density of the cloud. The authors acknowledge support from CM-DTC and EPSRC.

  10. On abelian group actions and Galois quantizations

    NASA Astrophysics Data System (ADS)

    Huru, H. L.; Lychagin, V. V.

    2013-08-01

    Quantizations of actions of finite abelian groups G are explicitly described by elements in the tensor square of the group algebra of G. Over algebraically closed fields of characteristic 0 these are in one-to-one correspondence with the second cohomology group of the dual of G. With certain adjustments this result is applied to group actions over any field of characteristic 0. In particular we consider the quantizations of Galois extensions, which are quantized by "deforming" the multiplication. For the splitting fields of products of quadratic polynomials this produces quantized Galois extensions that are all Clifford-type algebras.

  11. Single-trial EEG-based emotion recognition using kernel Eigen-emotion pattern and adaptive support vector machine.

    PubMed

    Liu, Yi-Hung; Wu, Chien-Te; Kao, Yung-Hwa; Chen, Ya-Ting

    2013-01-01

    Single-trial electroencephalography (EEG)-based emotion recognition enables us to perform fast and direct assessments of human emotional states. However, previous works suggest that a great improvement in the classification accuracy of valence and arousal levels is still needed. To address this, we propose a novel emotional EEG feature extraction method: the kernel Eigen-emotion pattern (KEEP). An adaptive support vector machine (SVM) is also proposed to deal with the problem of learning from imbalanced emotional EEG data sets. In this study, a set of pictures from the International Affective Picture System (IAPS) is used for emotion induction. Results based on seven participants show that KEEP gives much better classification results than the widely used EEG frequency band power features. The adaptive SVM also greatly improves the classification performance of the commonly adopted SVM classifier. Combined use of KEEP and the adaptive SVM achieves high average valence and arousal classification rates of 73.42% and 73.57%. The highest classification rates for valence and arousal are 80% and 79%, respectively. The results are very promising. PMID:24110685
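
    The class-weighting idea behind handling imbalanced data sets can be illustrated with a plain linear SVM trained by subgradient descent. This is a generic sketch of up-weighting the minority class, not the authors' adaptive SVM, and all parameter values and data are illustrative.

```python
# Linear SVM via subgradient descent on the hinge loss, with per-class
# weights so that errors on the minority class cost more.
import numpy as np

def weighted_svm(X, y, class_weight, lam=0.01, lr=0.1, epochs=200):
    """y in {-1,+1}; class_weight maps label -> misclassification weight."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            cw = class_weight[yi]
            if yi * (xi @ w + b) < 1:          # inside margin: hinge active
                w += lr * (cw * yi * xi - lam * w)
                b += lr * cw * yi
            else:                              # only regularization applies
                w -= lr * lam * w
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (5, 2))])
y = np.array([-1] * 40 + [1] * 5)              # imbalanced: 40 vs 5 samples
w, b = weighted_svm(X, y, class_weight={-1: 1.0, 1: 8.0})
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)
```

    Without the weight on the rare class, the margin tends to drift toward the minority cluster; the weighting restores a balanced decision boundary.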

  12. Performance Improvement of Induction Motor Speed Sensor-Less Vector Control System Using an Adaptive Observer with an Estimated Flux Feedback in Low Speed Range

    NASA Astrophysics Data System (ADS)

    Fukumoto, Tetsuya; Kato, Yousuke; Kurita, Kazuya; Hayashi, Yoichi

    Because of various errors caused by dead time, temperature variation of resistance, and so on, speed estimation error is inevitable in speed sensor-less vector control methods for induction motors. In particular, the speed control loop becomes unstable near zero frequency. To solve these problems, this paper proposes a novel design of an adaptive observer for speed estimation. By adding a feedback loop on the error between the estimated and reference fluxes, the sensitivity of the current error signals used for speed estimation and primary resistance identification is improved. The proposed system is analyzed and appropriate feedback gains are derived. Experimental results show good performance in the low speed range.

  13. Quantized ionic conductance in nanopores.

    PubMed

    Zwolak, Michael; Lagerqvist, Johan; Di Ventra, Massimiliano

    2009-09-18

    Ionic transport in nanopores is a fundamentally and technologically important problem in view of its occurrence in biological processes and its impact on novel DNA sequencing applications. Using molecular dynamics simulations we show that ion transport may exhibit strong nonlinearities as a function of the pore radius reminiscent of the conductance quantization steps as a function of the transverse cross section of quantum point contacts. In the present case, however, conductance steps originate from the break up of the hydration layers that form around ions in aqueous solution. We discuss this phenomenon and the conditions under which it should be experimentally observable. PMID:19792463

  14. Quantization of general linear electrodynamics

    SciTech Connect

    Rivera, Sergio; Schuller, Frederic P.

    2011-03-15

    General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.

  15. Broken-symmetry-adapted Green function theory of condensed matter systems: Towards a vector spin-density-functional theory

    NASA Astrophysics Data System (ADS)

    Rajagopal, A. K.; Mochena, Mogus

    2000-12-01

    The group-theory framework developed by Fukutome for a systematic analysis of the various broken-symmetry types of Hartree-Fock solutions exhibiting spin structures is here extended to the general many-body context, using a spinor Green function formalism for describing magnetic systems. Consequences of this theory are discussed for examining the magnetism of itinerant electrons in nanometric systems of current interest, as well as bulk systems where a vector spin-density form is required, by specializing our work to the spin-density-functional formalism. We also formulate the linear-response theory for such a system and compare and contrast our results with recent results obtained for localized electron systems. The various phenomenological treatments of itinerant magnetic systems are unified in this group-theoretical description. We apply this theory to the one-band Hubbard model to illustrate the usefulness of this approach.

  16. Genome of Rhodnius prolixus, an insect vector of Chagas disease, reveals unique adaptations to hematophagy and parasite infection.

    PubMed

    Mesquita, Rafael D; Vionette-Amaral, Raquel J; Lowenberger, Carl; Rivera-Pomar, Rolando; Monteiro, Fernando A; Minx, Patrick; Spieth, John; Carvalho, A Bernardo; Panzera, Francisco; Lawson, Daniel; Torres, André Q; Ribeiro, Jose M C; Sorgine, Marcos H F; Waterhouse, Robert M; Montague, Michael J; Abad-Franch, Fernando; Alves-Bezerra, Michele; Amaral, Laurence R; Araujo, Helena M; Araujo, Ricardo N; Aravind, L; Atella, Georgia C; Azambuja, Patricia; Berni, Mateus; Bittencourt-Cunha, Paula R; Braz, Gloria R C; Calderón-Fernández, Gustavo; Carareto, Claudia M A; Christensen, Mikkel B; Costa, Igor R; Costa, Samara G; Dansa, Marilvia; Daumas-Filho, Carlos R O; De-Paula, Iron F; Dias, Felipe A; Dimopoulos, George; Emrich, Scott J; Esponda-Behrens, Natalia; Fampa, Patricia; Fernandez-Medina, Rita D; da Fonseca, Rodrigo N; Fontenele, Marcio; Fronick, Catrina; Fulton, Lucinda A; Gandara, Ana Caroline; Garcia, Eloi S; Genta, Fernando A; Giraldo-Calderón, Gloria I; Gomes, Bruno; Gondim, Katia C; Granzotto, Adriana; Guarneri, Alessandra A; Guigó, Roderic; Harry, Myriam; Hughes, Daniel S T; Jablonka, Willy; Jacquin-Joly, Emmanuelle; Juárez, M Patricia; Koerich, Leonardo B; Lange, Angela B; Latorre-Estivalis, José Manuel; Lavore, Andrés; Lawrence, Gena G; Lazoski, Cristiano; Lazzari, Claudio R; Lopes, Raphael R; Lorenzo, Marcelo G; Lugon, Magda D; Majerowicz, David; Marcet, Paula L; Mariotti, Marco; Masuda, Hatisaburo; Megy, Karine; Melo, Ana C A; Missirlis, Fanis; Mota, Theo; Noriega, Fernando G; Nouzova, Marcela; Nunes, Rodrigo D; Oliveira, Raquel L L; Oliveira-Silveira, Gilbert; Ons, Sheila; Orchard, Ian; Pagola, Lucia; Paiva-Silva, Gabriela O; Pascual, Agustina; Pavan, Marcio G; Pedrini, Nicolás; Peixoto, Alexandre A; Pereira, Marcos H; Pike, Andrew; Polycarpo, Carla; Prosdocimi, Francisco; Ribeiro-Rodrigues, Rodrigo; Robertson, Hugh M; Salerno, Ana Paula; Salmon, Didier; Santesmasses, Didac; Schama, Renata; Seabra-Junior, Eloy S; Silva-Cardoso, Livia; Silva-Neto, Mario 
A C; Souza-Gomes, Matheus; Sterkel, Marcos; Taracena, Mabel L; Tojo, Marta; Tu, Zhijian Jake; Tubio, Jose M C; Ursic-Bedoya, Raul; Venancio, Thiago M; Walter-Nuno, Ana Beatriz; Wilson, Derek; Warren, Wesley C; Wilson, Richard K; Huebner, Erwin; Dotson, Ellen M; Oliveira, Pedro L

    2015-12-01

    Rhodnius prolixus not only has served as a model organism for the study of insect physiology, but also is a major vector of Chagas disease, an illness that affects approximately seven million people worldwide. We sequenced the genome of R. prolixus, generated assembled sequences covering 95% of the genome (∼ 702 Mb), including 15,456 putative protein-coding genes, and completed comprehensive genomic analyses of this obligate blood-feeding insect. Although immune-deficiency (IMD)-mediated immune responses were observed, R. prolixus putatively lacks key components of the IMD pathway, suggesting a reorganization of the canonical immune signaling network. Although both Toll and IMD effectors controlled intestinal microbiota, neither affected Trypanosoma cruzi, the causal agent of Chagas disease, implying the existence of evasion or tolerance mechanisms. R. prolixus has experienced an extensive loss of selenoprotein genes, with its repertoire reduced to only two proteins, one of which is a selenocysteine-based glutathione peroxidase, the first found in insects. The genome contained actively transcribed, horizontally transferred genes from Wolbachia sp., which showed evidence of codon use evolution toward the insect use pattern. Comparative protein analyses revealed many lineage-specific expansions and putative gene absences in R. prolixus, including tandem expansions of genes related to chemoreception, feeding, and digestion that possibly contributed to the evolution of a blood-feeding lifestyle. The genome assembly and these associated analyses provide critical information on the physiology and evolution of this important vector species and should be instrumental for the development of innovative disease control methods. PMID:26627243

  17. Genome of Rhodnius prolixus, an insect vector of Chagas disease, reveals unique adaptations to hematophagy and parasite infection

    PubMed Central

    Mesquita, Rafael D.; Vionette-Amaral, Raquel J.; Lowenberger, Carl; Rivera-Pomar, Rolando; Monteiro, Fernando A.; Minx, Patrick; Spieth, John; Carvalho, A. Bernardo; Panzera, Francisco; Lawson, Daniel; Torres, André Q.; Ribeiro, Jose M. C.; Sorgine, Marcos H. F.; Waterhouse, Robert M.; Abad-Franch, Fernando; Alves-Bezerra, Michele; Amaral, Laurence R.; Araujo, Helena M.; Aravind, L.; Atella, Georgia C.; Azambuja, Patricia; Berni, Mateus; Bittencourt-Cunha, Paula R.; Braz, Gloria R. C.; Calderón-Fernández, Gustavo; Carareto, Claudia M. A.; Christensen, Mikkel B.; Costa, Igor R.; Costa, Samara G.; Dansa, Marilvia; Daumas-Filho, Carlos R. O.; De-Paula, Iron F.; Dias, Felipe A.; Dimopoulos, George; Emrich, Scott J.; Esponda-Behrens, Natalia; Fampa, Patricia; Fernandez-Medina, Rita D.; da Fonseca, Rodrigo N.; Fontenele, Marcio; Fronick, Catrina; Fulton, Lucinda A.; Gandara, Ana Caroline; Garcia, Eloi S.; Genta, Fernando A.; Giraldo-Calderón, Gloria I.; Gomes, Bruno; Gondim, Katia C.; Granzotto, Adriana; Guarneri, Alessandra A.; Guigó, Roderic; Harry, Myriam; Hughes, Daniel S. T.; Jablonka, Willy; Jacquin-Joly, Emmanuelle; Juárez, M. Patricia; Koerich, Leonardo B.; Lange, Angela B.; Latorre-Estivalis, José Manuel; Lavore, Andrés; Lawrence, Gena G.; Lazoski, Cristiano; Lazzari, Claudio R.; Lopes, Raphael R.; Lorenzo, Marcelo G.; Lugon, Magda D.; Marcet, Paula L.; Mariotti, Marco; Masuda, Hatisaburo; Megy, Karine; Missirlis, Fanis; Mota, Theo; Noriega, Fernando G.; Nouzova, Marcela; Nunes, Rodrigo D.; Oliveira, Raquel L. L.; Oliveira-Silveira, Gilbert; Ons, Sheila; Orchard, Ian; Pagola, Lucia; Paiva-Silva, Gabriela O.; Pascual, Agustina; Pavan, Marcio G.; Pedrini, Nicolás; Peixoto, Alexandre A.; Pereira, Marcos H.; Pike, Andrew; Polycarpo, Carla; Prosdocimi, Francisco; Ribeiro-Rodrigues, Rodrigo; Robertson, Hugh M.; Salerno, Ana Paula; Salmon, Didier; Santesmasses, Didac; Schama, Renata; Seabra-Junior, Eloy S.; Silva-Cardoso, Livia; Silva-Neto, Mario A. 
C.; Souza-Gomes, Matheus; Sterkel, Marcos; Taracena, Mabel L.; Tojo, Marta; Tu, Zhijian Jake; Tubio, Jose M. C.; Ursic-Bedoya, Raul; Venancio, Thiago M.; Walter-Nuno, Ana Beatriz; Wilson, Derek; Warren, Wesley C.; Wilson, Richard K.; Huebner, Erwin; Dotson, Ellen M.; Oliveira, Pedro L.

    2015-01-01

    Rhodnius prolixus not only has served as a model organism for the study of insect physiology, but also is a major vector of Chagas disease, an illness that affects approximately seven million people worldwide. We sequenced the genome of R. prolixus, generated assembled sequences covering 95% of the genome (∼702 Mb), including 15,456 putative protein-coding genes, and completed comprehensive genomic analyses of this obligate blood-feeding insect. Although immune-deficiency (IMD)-mediated immune responses were observed, R. prolixus putatively lacks key components of the IMD pathway, suggesting a reorganization of the canonical immune signaling network. Although both Toll and IMD effectors controlled intestinal microbiota, neither affected Trypanosoma cruzi, the causal agent of Chagas disease, implying the existence of evasion or tolerance mechanisms. R. prolixus has experienced an extensive loss of selenoprotein genes, with its repertoire reduced to only two proteins, one of which is a selenocysteine-based glutathione peroxidase, the first found in insects. The genome contained actively transcribed, horizontally transferred genes from Wolbachia sp., which showed evidence of codon use evolution toward the insect use pattern. Comparative protein analyses revealed many lineage-specific expansions and putative gene absences in R. prolixus, including tandem expansions of genes related to chemoreception, feeding, and digestion that possibly contributed to the evolution of a blood-feeding lifestyle. The genome assembly and these associated analyses provide critical information on the physiology and evolution of this important vector species and should be instrumental for the development of innovative disease control methods. PMID:26627243

  18. Breathers on quantized superfluid vortices.

    PubMed

    Salman, Hayder

    2013-10-18

    We consider the propagation of breathers along a quantized superfluid vortex. Using the correspondence between the local induction approximation (LIA) and the nonlinear Schrödinger equation, we identify a set of initial conditions corresponding to breather solutions of vortex motion governed by the LIA. These initial conditions, which give rise to a long-wavelength modulational instability, result in the emergence of large amplitude perturbations that are localized in both space and time. The emergent structures on the vortex filament are analogous to loop solitons but arise from the dual action of bending and twisting of the vortex. Although the breather solutions we study are exact solutions of the LIA equations, we demonstrate through full numerical simulations that their key emergent attributes carry over to vortex dynamics governed by the Biot-Savart law and to quantized vortices described by the Gross-Pitaevskii equation. The breather excitations can lead to self-reconnections, a mechanism that can play an important role within the crossover range of scales in superfluid turbulence. Moreover, the observation of breather solutions on vortices in a field model suggests that these solutions are expected to arise in a wide range of other physical contexts from classical vortices to cosmological strings. PMID:24182275
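
    The LIA-NLS correspondence invoked above is the Hasimoto transformation; in standard notation, with κ the curvature and τ the torsion of the vortex filament, it reads:

```latex
% Hasimoto map: the filament's curvature and torsion combine into a
% complex field along the arclength coordinate s,
\psi(s,t) = \kappa(s,t)\,
  \exp\!\Big( i \int_{0}^{s} \tau(s',t)\, ds' \Big),
% which, for a vortex evolving under the local induction approximation,
% satisfies the focusing cubic nonlinear Schr\"odinger equation
i\,\psi_{t} + \psi_{ss} + \tfrac{1}{2}\,|\psi|^{2}\,\psi = 0 .
```

    Breather solutions of this focusing NLS equation then map back to the localized, time-periodic excitations of the filament discussed in the abstract.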

  19. Perceptual quantization of chromatic components

    NASA Astrophysics Data System (ADS)

    Saadane, Abdelhakim; Bedat, Laurent; Barba, Dominique

    1998-07-01

    In order to achieve color image coding based on human visual system features, we have been interested in the design of a perceptually based quantizer. The cardinal directions Ach, Cr1, and Cr2, defined by Krauskopf from habituation experiments and validated in our lab by spatial masking experiments, have been used to characterize color images. The achromatic component, already considered in a previous study, is not considered here. The same methodology has been applied to the two chromatic components to specify the decision thresholds and the reconstruction levels which ensure that the degradations induced remain below their visibility thresholds. Two observers were used for each of the two components. The values obtained for the Cr1 component show that the decision thresholds and reconstruction levels follow a linear law, even at higher levels. For the Cr2 component, however, the values seem to follow a monotonically increasing function. To determine whether these behaviors are frequency dependent, further experiments were conducted with stimulus frequencies varying from 1 cy/deg to 4 cy/deg. The measured values show no significant variations. Finally, instead of sinusoidal stimuli, filtered textures were used to take into account the spatio-frequential combination. The same laws (linear for Cr1, monotonically increasing for Cr2) were observed, even if a variation in the quantization intervals is reported.
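
    A minimal sketch of a quantizer specified by decision thresholds and reconstruction levels, using the linear law reported for the Cr1-like case; the step size and level count are illustrative, not the measured perceptual values.

```python
# Build a quantizer from decision thresholds (bin edges) and reconstruction
# levels (one value per bin), here with uniformly spaced (linear-law) edges.
import numpy as np

def linear_quantizer(n_levels, step):
    """Uniform decision thresholds centred on zero, mid-bin reconstruction."""
    decisions = step * (np.arange(n_levels + 1) - n_levels / 2)
    recon = (decisions[:-1] + decisions[1:]) / 2
    return decisions, recon

def quantize(x, decisions, recon):
    """Map each sample to the reconstruction level of its bin."""
    idx = np.clip(np.searchsorted(decisions, x) - 1, 0, len(recon) - 1)
    return recon[idx]

dec, rec = linear_quantizer(n_levels=8, step=2.0)
out = quantize(np.array([-7.2, -0.4, 0.4, 3.1]), dec, rec)
print(out)  # -> [-7. -1.  1.  3.]
```

    A Cr2-like quantizer would replace the uniform `decisions` array with monotonically increasing thresholds while keeping the same lookup logic.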

  20. Breathers on Quantized Superfluid Vortices

    NASA Astrophysics Data System (ADS)

    Salman, Hayder

    2013-10-01

    We consider the propagation of breathers along a quantized superfluid vortex. Using the correspondence between the local induction approximation (LIA) and the nonlinear Schrödinger equation, we identify a set of initial conditions corresponding to breather solutions of vortex motion governed by the LIA. These initial conditions, which give rise to a long-wavelength modulational instability, result in the emergence of large amplitude perturbations that are localized in both space and time. The emergent structures on the vortex filament are analogous to loop solitons but arise from the dual action of bending and twisting of the vortex. Although the breather solutions we study are exact solutions of the LIA equations, we demonstrate through full numerical simulations that their key emergent attributes carry over to vortex dynamics governed by the Biot-Savart law and to quantized vortices described by the Gross-Pitaevskii equation. The breather excitations can lead to self-reconnections, a mechanism that can play an important role within the crossover range of scales in superfluid turbulence. Moreover, the observation of breather solutions on vortices in a field model suggests that these solutions are expected to arise in a wide range of other physical contexts from classical vortices to cosmological strings.

  1. Weak associativity and deformation quantization

    NASA Astrophysics Data System (ADS)

    Kupriyanov, V. G.

    2016-09-01

    Non-commutativity and non-associativity are quite natural in string theory. For open strings they appear due to the presence of a non-vanishing background two-form on the world volume of the Dirichlet brane, while in closed string theory flux compactifications with a non-vanishing three-form also lead to non-geometric backgrounds. In this paper, working in the framework of deformation quantization, we study the violation of associativity, imposing the condition that the associator of three elements should vanish whenever any two of them are equal. The corresponding star products are called alternative and satisfy properties important for physical applications, such as the Moufang identities, the alternative identities, Artin's theorem, etc. The condition of alternativity is invariant under gauge transformations, just as in the associative case. The price to pay is a restriction on the non-associative algebras that can be represented by an alternative star product: they must satisfy the Malcev identity. A nontrivial example of a Malcev algebra is the algebra of imaginary octonions. For this case we construct an explicit expression for the non-associative and alternative star product. We also discuss the quantization of Malcev-Poisson algebras of general form, study its properties, and provide the lower-order expression for the alternative star product. To conclude, we define integration on the algebra of the alternative star products and show that the integrated associator vanishes.
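
    In standard notation, the alternativity condition described above can be written via the associator of the star product:

```latex
% Associator of the star product:
A(f,g,h) = (f \star g) \star h \;-\; f \star (g \star h).
% The star product is alternative when the associator vanishes whenever
% two of its arguments coincide:
A(f,f,h) = A(f,g,g) = A(f,g,f) = 0 ,
% which is equivalent to A being totally antisymmetric in its arguments.
```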

  2. Quantization of higher spin fields

    SciTech Connect

    Wagenaar, J. W.; Rijken, T. A

    2009-11-15

    In this article we quantize (massive) higher spin (1≤j≤2) fields by means of Dirac's constrained Hamiltonian procedure, both when they are totally free and when they are coupled to (an) auxiliary field(s). A full constraint analysis and quantization is presented by determining and discussing all constraints and Lagrange multipliers and by giving all equal-time (anti)commutation relations. We also construct the relevant propagators. In the free case we obtain the well-known propagators and show that they are not covariant, which is also well known. In the coupled case we do obtain covariant propagators (in the spin-3/2 case this requires b=0) and show that they have a smooth massless limit connecting perfectly to the massless case (with auxiliary fields). We note that, in our treatment of the spin-3/2 and spin-2 cases, the massive propagators coupled to conserved currents have a smooth limit to the pure massless spin propagator only when there are ghosts in the massive case.

  3. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, let φ, ψ be two positive functions on Ω such that -log ψ and -log φ are plurisubharmonic, and let z∈Ω be a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x, y near z, where φ(x,y) is an almost-analytic extension of φ(x) = φ(x,x), and similarly for ψ. If in addition Ω is of finite type, φ, ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we also obtain an analogous asymptotic expansion for the Berezin transform and give applications to Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on Berezin-Toeplitz quantization.

  4. Covariant Photon Quantization in the SME

    NASA Astrophysics Data System (ADS)

    Colladay, D.

    2014-01-01

    The Gupta-Bleuler quantization procedure is applied to the SME photon sector. A direct application of the method to the massless case fails due to an unavoidable incompleteness in the polarization states. A mass term can be included in the photon Lagrangian to rescue the quantization procedure and maintain covariance.

  5. Segmentation of Planar Surfaces from Laser Scanning Data Using the Magnitude of Normal Position Vector for Adaptive Neighborhoods

    PubMed Central

    Kim, Changjae; Habib, Ayman; Pyeon, Muwook; Kwon, Goo-rak; Jung, Jaehoon; Heo, Joon

    2016-01-01

    Diverse approaches to laser point segmentation have been proposed since the emergence of the laser scanning system. Most of these segmentation techniques, however, suffer from limitations such as sensitivity to the choice of seed points, lack of consideration of the spatial relationships among points, and inefficient performance. In an effort to overcome these drawbacks, this paper proposes a segmentation methodology that: (1) reduces the dimensions of the attribute space; (2) considers the attribute similarity and the proximity of the laser point simultaneously; and (3) works well with both airborne and terrestrial laser scanning data. A neighborhood definition based on the shape of the surface increases the homogeneity of the laser point attributes. The magnitude of the normal position vector is used as an attribute for reducing the dimension of the accumulator array. The experimental results demonstrate, through both qualitative and quantitative evaluations, the outcomes’ high level of reliability. The proposed segmentation algorithm provided 96.89% overall correctness, 95.84% completeness, a 0.25 m overall mean value of centroid difference, and less than 1° of angle difference. The performance of the proposed approach was also verified with a large dataset and compared with other approaches. Additionally, the evaluation of the sensitivity of the thresholds was carried out. In summary, this paper proposes a robust and efficient segmentation methodology for abstraction of an enormous number of laser points into plane information. PMID:26805849
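
    The attribute at the heart of the method can be sketched as follows: fit a local plane to a point's neighborhood (a standard PCA/SVD fit) and take the magnitude of the normal position vector, i.e. the length of the perpendicular from the origin to that plane. This is a generic illustration, not the authors' implementation.

```python
# Per-point attribute: magnitude of the normal position vector of the
# locally fitted plane, used to reduce the accumulator-array dimension.
import numpy as np

def plane_fit(points):
    """Least-squares plane through points: returns unit normal and centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid            # normal = smallest singular vector

def normal_position_magnitude(points):
    normal, centroid = plane_fit(points)
    return abs(normal @ centroid)      # distance from origin to the plane

# Points lying exactly on the plane z = 5.
pts = np.array([[0, 0, 5], [1, 0, 5], [0, 1, 5], [1, 1, 5.0]])
print(round(normal_position_magnitude(pts), 6))  # -> 5.0
```

    Points whose neighborhoods yield similar magnitudes (and similar normals) are then candidates for the same planar segment, which is what lets attribute similarity and spatial proximity be considered together.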

  7. Hybrid quantization of an inflationary model: The flat case

    NASA Astrophysics Data System (ADS)

    Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier

    2013-08-01

    We present a complete quantization of an approximately homogeneous and isotropic universe with small scalar perturbations. We consider the case in which the matter content is a minimally coupled scalar field and the spatial sections are flat and compact, with the topology of a three-torus. The quantization is carried out along the lines that were put forward by the authors in a previous work for spherical topology. The action of the system is truncated at second order in perturbations. The local gauge freedom is fixed at the classical level, although different gauges are discussed and shown to lead to equivalent conclusions. Moreover, descriptions in terms of gauge-invariant quantities are considered. The reduced system is proven to admit a symplectic structure, and its dynamical evolution is dictated by a Hamiltonian constraint. Then, the background geometry is polymerically quantized, while a Fock representation is adopted for the inhomogeneities. The latter is selected by uniqueness criteria adapted from quantum field theory in curved spacetimes, which determine a specific scaling of the perturbations. In our hybrid quantization, we promote the Hamiltonian constraint to an operator on the kinematical Hilbert space. If the zero mode of the scalar field is interpreted as a relational time, a suitable ansatz for the dependence of the physical states on the polymeric degrees of freedom leads to a quantum wave equation for the evolution of the perturbations. Alternatively, the solutions to the quantum constraint can be characterized by their initial data on the minimum-volume section of each superselection sector. The physical implications of this model will be addressed in a future work, in order to check whether they are compatible with observations.

  8. Quantized conic sections; quantum gravity

    SciTech Connect

    Noyes, H.P.

    1993-03-15

    Starting from free relativistic particles whose position and velocity can only be measured to a precision ΔrΔv ≡ ±k/2 m²s⁻¹, we use the relativistic conservation laws to define the relative motion of the coordinate r = r₁ − r₂ of two particles of mass m₁, m₂ and relative velocity v = βc = (k₁ − k₂)/(k₁ + k₂) in terms of the conic-section equation v² = Γ[2/r ± 1/a], where "+" corresponds to hyperbolic and "−" to elliptical trajectories. The equation is quantized by expressing Kepler's second law as conservation of angular momentum per unit mass in units of k. The principal quantum number is n ≡ j + 1/2 with "square" A²/T² = (n − 1)n k² ≡ ℓ(ℓ + 1)k². Here ℓ = n − 1 is the angular-momentum quantum number for circular orbits. In a sense, we obtain "spin" from this quantization. Since Γ/a cannot reach c² without predicting either circular or asymptotic velocities equal to the limiting velocity for particulate motion, we can also quantize velocities in terms of the principal quantum number by defining βₙ² = vₙ²/c² = (1/n²)(Γ/c²a) = (1/nN_Γ)². For charges Z₁e, Z₂e of the same sign and α ≡ e²/mₑκc, we find that Γ/c²a = Z₁Z₂α. The characteristic Coulomb parameter η(n) ≡ Z₁Z₂α/βₙ = Z₁Z₂nN_Γ then specifies the penetration factor C²(η) = 2πη/(e^(2πη) − 1). For unlike charges, with η still taken as positive, C²(−η) = 2πη/(1 − e^(−2πη)).

  9. Variable-rate colour image quantization based on quadtree segmentation

    NASA Astrophysics Data System (ADS)

    Hu, Y. C.; Li, C. Y.; Chuang, J. C.; Lo, C. C.

    2011-09-01

    A novel variable-sized block encoding with threshold control for colour image quantization (CIQ) is presented in this paper. In CIQ, the colour palette used has a great influence on the reconstructed image quality. Typically, a higher image quality and a larger storage cost are obtained when a larger palette is used in CIQ. To cut down the storage cost while preserving the quality of the reconstructed images, a threshold control policy for quadtree segmentation is used in this paper. Experimental results show that the proposed method adaptively provides the desired bit rates while achieving better image quality compared to CIQ with multiple palettes of different sizes.
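    The threshold-controlled quadtree segmentation can be sketched as follows. This is a minimal illustration assuming a variance-based homogeneity test and power-of-two block sizes, not the paper's exact splitting criterion:

```python
import numpy as np

def quadtree_blocks(img, threshold, min_size=2):
    """Recursively split image blocks whose colour variance exceeds
    `threshold`; homogeneous blocks are kept whole, so smooth regions
    cost fewer bits while detailed regions get finer blocks."""
    blocks = []
    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        # Summed per-channel variance as the homogeneity measure
        var = block.reshape(-1, block.shape[-1]).var(axis=0).sum()
        if size > min_size and var > threshold:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(y + dy, x + dx, half)
        else:
            blocks.append((y, x, size))
    split(0, 0, img.shape[0])
    return blocks

# Toy 8x8 RGB image: flat left half, noisy right half
rng = np.random.default_rng(1)
img = np.zeros((8, 8, 3))
img[:, 4:] = rng.uniform(0, 255, size=(8, 4, 3))
coarse = quadtree_blocks(img, threshold=1e6)  # high threshold: one block
fine = quadtree_blocks(img, threshold=10.0)   # low threshold: splits noisy half
```

    Raising the threshold trades reconstruction quality for bit rate, which is the control knob the abstract describes.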

  10. The Necessity of Quantizing Gravity

    NASA Astrophysics Data System (ADS)

    Adelman, Jeremy

    2016-03-01

    The Eppley-Hannah thought experiment is often cited as justification for attempts by theorists to develop a complete, consistent theory of quantum gravity. A modification of the earlier ``Heisenberg microscope'' argument for the necessity of quantized light, the Eppley-Hannah thought experiment purports to show that purely classical gravitational waves would either not conserve energy or else allow for violations of the uncertainty principle. However, several subsequent papers have cast doubt on the validity of the Eppley-Hannah argument. In this talk, we will show how to resurrect the Eppley-Hannah thought experiment by modifying the original argument in a way that gets around the present criticisms levied against it. With support from the Department of Energy, Grant Number DE-FG02-91ER40674.

  11. Quantized ionic conductance in nanopores

    SciTech Connect

    Zwolak, Michael; Lagerqvist, Johan; Di Ventra, Massimilliano

    2009-01-01

    Ionic transport in nanopores is a fundamentally and technologically important problem in view of its ubiquitous occurrence in biological processes and its impact on DNA sequencing applications. Using microscopic calculations, we show that ion transport may exhibit strong non-linearities as a function of the pore radius reminiscent of the conductance quantization steps as a function of the transverse cross section of quantum point contacts. In the present case, however, conductance steps originate from the break-up of the hydration layers that form around ions in aqueous solution. Once in the pore, the water molecules form wavelike structures due to multiple scattering at the surface of the pore walls and interference with the radial waves around the ion. We discuss these effects as well as the conditions under which the step-like features in the ionic conductance should be experimentally observable.

  12. Cosmology Quantized in Cosmic Time

    SciTech Connect

    Weinstein, M

    2004-06-03

    This paper discusses the problem of inflation in the context of Friedmann-Robertson-Walker Cosmology. We show how, after a simple change of variables, to quantize the problem in a way which parallels the classical discussion. The result is that two of the Einstein equations arise as exact equations of motion and one of the usual Einstein equations (suitably quantized) survives as a constraint equation to be imposed on the space of physical states. However, the Friedmann equation, which is also a constraint equation and which is the basis of the Wheeler-DeWitt equation, acquires a welcome quantum correction that becomes significant for small scale factors. We discuss the extension of this result to a full quantum mechanical derivation of the anisotropy (δρ/ρ) in the cosmic microwave background radiation, and the possibility that the extra term in the Friedmann equation could have observable consequences. To clarify the general formalism and explicitly show why we choose to weaken the statement of the Wheeler-DeWitt equation, we apply the general formalism to de Sitter space. After exactly solving the relevant Heisenberg equations of motion we give a detailed discussion of the subtleties associated with defining physical states and the emergence of the classical theory. This computation provides the striking result that quantum corrections to this long wavelength limit of gravity eliminate the problem of the big crunch. We also show that the same corrections lead to possibly measurable effects on the CMB radiation. For the sake of completeness, we discuss the special case, λ = 0, and its relation to Minkowski space. Finally, we suggest interesting ways in which these techniques can be generalized to cast light on the question of chaotic or eternal inflation. In particular, we suggest one can put an experimental lower bound on the distance to a universe with a scale factor very different from our own, by looking at its effects on our CMB.

  13. Introducing Vectors.

    ERIC Educational Resources Information Center

    Roche, John

    1997-01-01

    Suggests an approach to teaching vectors that promotes active learning through challenging questions addressed to the class, as opposed to subtle explanations. Promotes introducing vector graphics with concrete examples, beginning with an explanation of the displacement vector. Also discusses artificial vectors, vector algebra, and unit vectors.…

  14. Adaptation and evaluation of the bottle assay for monitoring insecticide resistance in disease vector mosquitoes in the Peruvian Amazon

    PubMed Central

    Zamora Perea, Elvira; Balta León, Rosario; Palomino Salcedo, Miriam; Brogdon, William G; Devine, Gregor J

    2009-01-01

    Background The purpose of this study was to establish whether the "bottle assay", a tool for monitoring insecticide resistance in mosquitoes, can complement and augment the capabilities of the established WHO assay, particularly in resource-poor, logistically challenging environments. Methods Laboratory reared Aedes aegypti and field collected Anopheles darlingi and Anopheles albimanus were used to assess the suitability of locally sourced solvents and formulated insecticides for use with the bottle assay. Using these adapted protocols, the ability of the bottle assay and the WHO assay to discriminate between deltamethrin-resistant Anopheles albimanus populations was compared. The diagnostic dose of deltamethrin that would identify resistance in currently susceptible populations of An. darlingi and Ae. aegypti was defined. The robustness of the bottle assay during a surveillance exercise in the Amazon was assessed. Results The bottle assay (using technical or formulated material) and the WHO assay were equally able to differentiate deltamethrin-resistant and susceptible An. albimanus populations. A diagnostic dose of 10 μg a.i./bottle was identified as the most sensitive discriminating dose for characterizing resistance in An. darlingi and Ae. aegypti. Treated bottles, prepared using locally sourced solvents and insecticide formulations, can be stored for > 14 days and used three times. Bottles can be stored and transported under local conditions and field-assays can be completed in a single evening. Conclusion The flexible and portable nature of the bottle assay and the ready availability of its components make it a potentially robust and useful tool for monitoring insecticide resistance and efficacy in remote areas that require minimal cost tools. PMID:19728871

  15. Quantized vortices around wavefront nodes, 2

    NASA Technical Reports Server (NTRS)

    Hirschfelder, J. O.; Goebel, C. J.; Bruch, L. W.

    1974-01-01

    Quantized vortices can occur around nodal points in wavefunctions. The derivation depends only on the wavefunction being single valued, continuous, and having continuous first derivatives. Since the derivation does not depend upon the dynamical equations, the quantized vortices are expected to occur for many types of waves such as electromagnetic and acoustic. Such vortices have appeared in the calculations of the H + H2 molecular collisions and play a role in the chemical kinetics. In a companion paper, it is shown that quantized vortices occur when optical waves are internally reflected from the face of a prism or particle beams are reflected from potential energy barriers.

  16. A note on quantizations of Galois extensions

    NASA Astrophysics Data System (ADS)

    İlhan, Aslı Güçlükan

    2014-12-01

    In Huru and Lychagin (2013), it is conjectured that the quantizations of splitting fields of products of quadratic polynomials, which are obtained by deforming the multiplication, are Clifford type algebras. In this paper, we prove this conjecture.

  17. Loop quantization of Schwarzschild interior revisited

    NASA Astrophysics Data System (ADS)

    Singh, Parampreet; Corichi, Alejandro

    2016-03-01

    Several studies of different inequivalent loop quantizations have shown that there exists no fully satisfactory quantum theory for the Schwarzschild interior. Existing quantizations fail either through dependence on the fiducial structure or through the lack of a proper classical limit. Here we put forward a novel viewpoint to construct a quantum theory that overcomes all of the known problems of the existing quantizations. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying vacuum Einstein's equations is recovered on the ``other side'' of the bounce. We argue that such a metric represents the interior region of a white-hole spacetime, but one for which the corresponding ``white-hole mass'' differs from the original black hole mass. We compare the differences in physical implications with other quantizations.

  18. Torus quantization of symmetrically excited helium

    SciTech Connect

    Mueller, J.; Burgdoerfer, J. (Oak Ridge National Laboratory, Oak Ridge, Tennessee); Noid, D.

    1992-02-01

    The recent discovery by Richter and Wintgen (J. Phys. B 23, L197 (1990)) that the classical helium atom is not globally ergodic has stimulated renewed interest in its semiclassical quantization. The Einstein-Brillouin-Keller quantization of Kolmogorov-Arnold-Moser tori around stable periodic orbits becomes locally possible in a selected region of phase space. Using a hyperspherical representation we have found a dynamically confining potential allowing for stable motion near the Wannier ridge. The resulting semiclassical eigenenergies provide a test for full quantum calculations in the limit of very high quantum numbers. The relations to frequently used group-theoretical classifications for doubly excited states and to the periodic-orbit quantization of the chaotic portion of the phase space are discussed. The extrapolation of the semiclassical quantization to low-lying states gives remarkably accurate estimates for the energies of all symmetric L = 0 states of helium.

  19. Towards quantized current arbitrary waveform synthesis

    NASA Astrophysics Data System (ADS)

    Mirovsky, P.; Fricke, L.; Hohls, F.; Kaestner, B.; Leicht, Ch.; Pierz, K.; Melcher, J.; Schumacher, H. W.

    2013-06-01

    The generation of ac-modulated quantized current waveforms using a semiconductor non-adiabatic single-electron pump is demonstrated. In standard operation, the single-electron pump generates a quantized output current of I = ef, where e is the charge of the electron and f is the pumping frequency. Suitable frequency modulation of f allows the generation of ac-modulated output currents with different characteristics. Sinusoidal and sawtooth-like modulation of f yields correspondingly modulated quantized current waveforms with kHz modulation frequencies and peak currents of up to 100 pA. Such ac quantized current sources could find applications ranging from precision ac metrology to on-chip signal generation.
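    The I = ef relation makes the synthesis easy to sketch numerically. The carrier and deviation frequencies below are assumed values, chosen only so that the peak current lands near the reported 100 pA:

```python
import numpy as np

E = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def pumped_current(freq_hz):
    """Quantized single-electron-pump current I = e * f."""
    return E * freq_hz

# Sinusoidal modulation of the pump frequency around f0 (assumed values)
f0, df, fmod = 400e6, 224e6, 1e3           # Hz: carrier, deviation, 1 kHz modulation
t = np.linspace(0, 2e-3, 2000)             # two modulation periods
f_t = f0 + df * np.sin(2 * np.pi * fmod * t)
i_t = pumped_current(f_t)                  # ac-modulated quantized current
peak_pa = i_t.max() * 1e12                 # peak current in picoamperes
```

    Replacing the sine with a sawtooth in f_t gives the other waveform mentioned in the abstract; the current stays quantized because every sample is an exact multiple of e per pump cycle.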

  20. Topologies on quantum topoi induced by quantization

    SciTech Connect

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  1. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) models in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin on the Yamuna River in Delhi, India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy, and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The MARS model decreased the average root mean square error (RMSE) of the LSSVM and M5Tree models by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels from the AMM, TKN and WT parameters as inputs.
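    As an illustration of the LSSVM component, here is a minimal least-squares SVM regressor in the standard Suykens formulation (bias and dual weights from one linear solve), trained on synthetic stand-in data rather than the Yamuna River measurements:

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: solve the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    K = rbf(X, X, sigma)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, X_new, b, alpha, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

# Synthetic stand-in for the water-quality inputs (e.g. AMM, TKN, WT -> COD)
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(80, 3))
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + 0.01 * rng.standard_normal(80)
b, alpha = lssvm_fit(X[:60], y[:60])
pred = lssvm_predict(X[:60], X[60:], b, alpha)
rmse = np.sqrt(np.mean((pred - y[60:]) ** 2))
```

    Unlike the standard SVM, the LS-SVM replaces the inequality constraints with equalities, so training reduces to the single dense linear solve above.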

  2. Image reconstruction for an electrical capacitance tomography system based on a least-squares support vector machine and a self-adaptive particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Xia; Hu, Hong-li; Liu, Fei; Gao, Xiang Xiang

    2011-10-01

    The task of image reconstruction for an electrical capacitance tomography (ECT) system is to determine the permittivity distribution, and hence the phase distribution, in a pipeline by measuring the electrical capacitances between sets of electrodes placed around its periphery. In view of the nonlinear relationship between the permittivity distribution and the capacitances and the limited number of independent capacitance measurements, image reconstruction for ECT is a nonlinear and ill-posed inverse problem. To solve this problem, a new image reconstruction method for ECT based on a least-squares support vector machine (LS-SVM) combined with a self-adaptive particle swarm optimization (PSO) algorithm is presented. Grounded in small-sample learning theory, the SVM avoids the issues arising in artificial neural network methods, such as the difficult determination of a network structure, over-learning and under-learning. However, the SVM performs differently with different parameters. As a relatively new population-based evolutionary optimization technique, PSO is adopted to achieve effective parameter selection, with the advantages of global optimization and rapid convergence. This paper builds a 12-electrode ECT system and a pneumatic conveying platform to verify this image reconstruction algorithm. Experimental results indicate that the algorithm has good generalization ability and high image-reconstruction quality.
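    The parameter-selection step can be illustrated with a bare-bones PSO. The objective below is a hypothetical stand-in for the LS-SVM validation error (its minimum placed at an assumed (gamma, sigma) = (10, 0.5)); a real run would train and score the model inside the objective:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimizer: velocities blend inertia, a pull
    toward each particle's personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Hypothetical validation-error surface with optimum at (10, 0.5)
objective = lambda p: (p[0] - 10.0) ** 2 + (p[1] - 0.5) ** 2
best, err = pso_minimize(objective, bounds=[(0.1, 100.0), (0.01, 5.0)])
```

    The "self-adaptive" variant in the paper additionally adjusts the inertia and acceleration coefficients during the run; the fixed-coefficient version above shows only the core update rule.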

  3. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-01

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices. PMID:27488797

  5. Quantization of Prior Probabilities for Collaborative Distributed Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Rhim, Joong Bum; Varshney, Lav R.; Goyal, Vivek K.

    2012-09-01

    This paper studies the quantization of prior probabilities, drawn from an ensemble, for distributed detection and data fusion. Design and performance equivalences between a team of N agents tied by a fixed fusion rule and a more powerful single agent are obtained. Effects of identical quantization and diverse quantization are compared. Consideration of perceived common risk enables agents using diverse quantizers to collaborate in hypothesis testing, and it is proven that the minimum mean Bayes risk error is achieved by diverse quantization. The comparison shows that optimal diverse quantization with K cells per quantizer performs as well as optimal identical quantization with N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes risk error as the distortion criterion.
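    The cell-count equivalence can be illustrated by counting the joint cells induced by N staggered uniform quantizers of the prior on [0, 1]. This sketch demonstrates only the counting argument behind N(K-1)+1, not the Bayes-risk optimization the paper carries out:

```python
import numpy as np

def joint_cells(n_agents, k_cells):
    """Joint cell count for N staggered uniform K-cell quantizers on [0, 1].
    Each quantizer contributes K-1 interior breakpoints; offsetting the grids
    so no two breakpoints coincide yields N(K-1)+1 distinct joint cells."""
    breakpoints = set()
    for a in range(n_agents):
        offset = a / (n_agents * k_cells)       # stagger each agent's grid
        pts = offset + np.arange(1, k_cells) / k_cells
        breakpoints.update(np.round(pts, 12))
    return len(breakpoints) + 1                 # intervals = points + 1

cells = joint_cells(n_agents=3, k_cells=4)      # 3 * (4 - 1) + 1 = 10
```

    Identical quantizers would stack all their breakpoints on top of each other, leaving only K joint cells, which is why diversity buys the team extra resolution at no extra rate per agent.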

  6. Smooth big bounce from affine quantization

    NASA Astrophysics Data System (ADS)

    Bergeron, Hervé; Dapor, Andrea; Gazeau, Jean Pierre; Małkiewicz, Przemysław

    2014-04-01

    We examine the possibility of dealing with gravitational singularities on a quantum level through the use of coherent state or wavelet quantization instead of canonical quantization. We consider the Robertson-Walker metric coupled to a perfect fluid. It is the simplest model of a gravitational collapse, and the results obtained here may serve as a useful starting point for more complex investigations in the future. We follow a quantization procedure based on affine coherent states or wavelets built from the unitary irreducible representation of the affine group of the real line with positive dilation. The main outcome of our approach is the appearance of a quantum centrifugal potential that allows for regularization of the singularity, essential self-adjointness of the Hamiltonian, and unambiguous quantum dynamical evolution.

  7. Magnetic Flux Quantization of the Landau Problem

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Li, Kang; Long, Shuming; Yuan, Yi

    2014-08-01

    The Landau problem has very important applications in modern physics, among which two-dimensional electron gas systems and the quantum Hall effect are outstanding examples. In this paper we first review the solution of the Pauli equation; then, using the single-electron wave function, we calculate the expectation value of the moving area of an ideal two-dimensional electron gas and the per-unit-area degeneracy of the electron gas system. From these results we show how to calculate the magnetic flux of the electron gas system. It follows that the magnetic flux of a two-dimensional electron gas in a magnetic field is quantized, and that this flux quantization results from the quantization of the moving-area expectation values of the electron gas.
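    The standard numbers behind this argument, the per-unit-area Landau-level degeneracy eB/h and the flux quantum h/e, can be checked with a minimal sketch using exact SI constants (the 1 T / 1 um^2 example is an assumed illustration, not from the paper):

```python
H = 6.62607015e-34   # Planck constant, J*s (exact SI value)
E = 1.602176634e-19  # elementary charge, C (exact SI value)

flux_quantum = H / E  # h/e, the normal-state flux quantum in webers

def landau_states(B_tesla, area_m2):
    """States per Landau level = (degeneracy per unit area, e*B/h) * area,
    i.e. the magnetic flux through the sample measured in units of h/e."""
    return E * B_tesla / H * area_m2

# Example: a 1 T field through a 1 um^2 sample
n = landau_states(1.0, 1e-12)
```

    Each Landau level thus holds exactly one state per flux quantum threading the sample, which is the quantization statement the abstract derives from the moving-area expectation values.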

  8. Virtual topological insulators with real quantized physics

    NASA Astrophysics Data System (ADS)

    Prodan, Emil

    2015-06-01

    A concrete strategy is presented for generating strong topological insulators in d + d′ dimensions which have quantized physics in d dimensions. Here, d counts the physical and d′ the virtual dimensions. It consists of seeking d-dimensional representations of operator algebras which are usually defined in d + d′ dimensions, where topological elements display strong topological invariants. The invariants are shown, however, to be fully determined by the physical dimensions, in the sense that their measurement can be done at fixed virtual coordinates. We solve the bulk-boundary correspondence and show that the boundary invariants are also fully determined by the physical coordinates. We analyze the virtual Chern insulator in 1 + 1 dimensions realized in Y. E. Kraus et al., Phys. Rev. Lett. 109, 106402 (2012), 10.1103/PhysRevLett.109.106402 and predict quantized forces at the edges. We generate a topological system in 3 + 1 dimensions, which is predicted to have a quantized magnetoelectric response.

  9. Experimental realization of quantized anomalous Hall effect

    NASA Astrophysics Data System (ADS)

    Xue, Qi-Kun

    2014-03-01

    The anomalous Hall effect was discovered by Edwin Hall in 1880. In this talk, we report the experimental observation of its quantized version, the quantum anomalous Hall effect (QAHE), in thin films of the Cr-doped (Bi,Sb)2Te3 magnetic topological insulator. At zero magnetic field, the gate-tuned anomalous Hall resistance exhibits a quantized value of h/e² accompanied by a significant drop of the longitudinal resistance. The longitudinal resistance vanishes under a strong magnetic field whereas the Hall resistance remains at the quantized value. The realization of the QAHE paves a way for developing low-power-consumption electronics. Implications for observing Majorana fermions and other exotic phenomena in magnetic topological insulators will also be discussed. This work was done in collaboration with Ke He, Yayu Wang, Xucun Ma, Xi Chen, Li Lv, Dai Xi, Zhong Fang and Shoucheng Zhang.

  10. Deformation quantization for contact interactions and dissipation

    NASA Astrophysics Data System (ADS)

    Belchev, Borislav Stefanov

    This thesis studies deformation quantization and its application to contact interactions and systems with dissipation. We consider the subtleties related to quantization when contact interactions and boundaries are present. We exploit the idea that discontinuous potentials are idealizations that should be realized as limits of smooth potentials. The Wigner functions are found for the Morse potential and in the proper limit they reduce to the Wigner functions for the infinite wall, for the most general (Robin) boundary conditions. This is possible for a very limited subset of the values of the parameters --- so-called fine tuning is necessary. It explains why Dirichlet boundary conditions are used predominantly. Secondly, we consider deformation quantization in relation to dissipative phenomena. For the damped harmonic oscillator we study a method using a modified noncommutative star product. Within this framework we resolve the non-reality problem with the Wigner function and correct the classical limit.

  11. Quantization ambiguities in isotropic quantum geometry

    NASA Astrophysics Data System (ADS)

    Bojowald, Martin

    2002-10-01

    Some typical quantization ambiguities of quantum geometry are studied within isotropic models. Since this allows explicit computations of operators and their spectra, one can investigate the effects of ambiguities in a quantitative manner. It is shown that these ambiguities do not affect the fate of the classical singularity, demonstrating that the absence of a singularity in loop quantum cosmology is a robust implication of the general quantization scheme. The calculations also allow conclusions about modified operators in the full theory. In particular, using holonomies in a non-fundamental representation of SU(2) to quantize connection components turns out to lead to significant corrections to classical behaviour at macroscopic volume for large values of the spin of the chosen representation.

  12. Single Abrikosov vortices as quantized information bits

    NASA Astrophysics Data System (ADS)

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-10-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logic. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for the creation of high-density digital cryoelectronics. In this work we provide a proof of concept for an Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between the 0 and 1 states, a short access time, scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent to such a device due to the quantized nature of the vortex.

  13. Constraints on operator ordering from third quantization

    NASA Astrophysics Data System (ADS)

    Ohkuwa, Yoshiaki; Faizal, Mir; Ezawa, Yasuo

    2016-02-01

    In this paper, we analyse the Wheeler-DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.

  14. Minimal representations, geometric quantization, and unitarity.

    PubMed Central

    Brylinski, R; Kostant, B

    1994-01-01

    In the framework of geometric quantization we explicitly construct, in a uniform fashion, a unitary minimal representation π_o of every simply-connected real Lie group G_o such that the maximal compact subgroup of G_o has finite center and G_o admits some minimal representation. We obtain algebraic and analytic results about π_o. We give several results on the algebraic and symplectic geometry of the minimal nilpotent orbits and then "quantize" these results to obtain the corresponding representations. We assume (Lie G_o)_C is simple. PMID:11607478

  15. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    SciTech Connect

    Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-08-15

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. 
Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then
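    The adaptive clustering step this abstract describes can be illustrated with a bare-bones fuzzy c-means loop. This is only a sketch of the general FCM algorithm, not the authors' optimized, adaptive implementation: the function name, the fixed cluster count, and the toy 1-D intensity data are all invented for illustration.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on 1-D intensity data x (shape: [n_pixels])."""
    rng = np.random.default_rng(seed)
    # random initial memberships, rows normalized to sum to 1
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # centroid update: membership-weighted mean of intensities
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        # membership update from inverse squared distances
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# toy "breast region" intensities: two tissue populations
x = np.concatenate([np.full(500, 0.2), np.full(500, 0.8)])
centers, u = fuzzy_c_means(x, n_clusters=2)
print(np.sort(centers))  # two centroids, near 0.2 and 0.8
```

    In the paper's pipeline a cluster-count selection step and an SVM stage would follow; here the point is only the alternating centroid/membership update that subdivides the intensities into fuzzy clusters.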

  16. Estimation of breast percent density in raw and processed full field digital mammography images via adaptive fuzzy c-means clustering and support vector machine segmentation

    PubMed Central

    Keller, Brad M.; Nathan, Diane L.; Wang, Yan; Zheng, Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina

    2012-01-01

    Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., “FOR PROCESSING”) and vendor postprocessed (i.e., “FOR PRESENTATION”), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, an SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which

  17. Observation of Quantized and Partial Quantized Conductance in Polymer-Suspended Graphene Nanoplatelets

    NASA Astrophysics Data System (ADS)

    Kang, Yuhong; Ruan, Hang; Claus, Richard O.; Heremans, Jean; Orlowski, Marius

    2016-04-01

    Quantized conductance is observed at zero magnetic field and room temperature in metal-insulator-metal structures with graphene submicron-sized nanoplatelets embedded in a 3-hexylthiophene (P3HT) polymer layer. In devices with a medium concentration of graphene platelets, conductance steps at integer multiples of G₀ = 2e²/h (i.e., 1/G₀ = 12.91 kΩ) are observed, and in some devices partially quantized steps, including a series at (n/7) × G₀. Such an organic memory device exhibits reliable memory operation with an on/off ratio of more than 10. We attribute the quantized conductance to the existence of a 1-D electron waveguide along the conductive path. The partial quantized conductance likely results from an imperfect transmission coefficient due to impedance mismatch of the first waveguide modes.
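    The conductance values quoted in this abstract follow from fundamental constants alone; a quick sanity check in plain Python, using the exact SI values of e and h:

```python
e = 1.602176634e-19   # elementary charge, C (exact SI value)
h = 6.62607015e-34    # Planck constant, J*s (exact SI value)

G0 = 2 * e**2 / h                  # conductance quantum, in siemens
print(G0)                          # ~7.748e-05 S
print(1 / G0)                      # ~12906 ohm, the "12.91 kOhm" quoted above

# the partially quantized series reported for some devices: (n/7) * G0
steps = [n / 7 * G0 for n in range(1, 8)]
print(steps[0])                    # ~1.107e-05 S
```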

  18. Observation of Quantized and Partial Quantized Conductance in Polymer-Suspended Graphene Nanoplatelets.

    PubMed

    Kang, Yuhong; Ruan, Hang; Claus, Richard O; Heremans, Jean; Orlowski, Marius

    2016-12-01

    Quantized conductance is observed at zero magnetic field and room temperature in metal-insulator-metal structures with graphene submicron-sized nanoplatelets embedded in a 3-hexylthiophene (P3HT) polymer layer. In devices with a medium concentration of graphene platelets, conductance steps at integer multiples of G₀ = 2e²/h (i.e., 1/G₀ = 12.91 kΩ) are observed, and in some devices partially quantized steps, including a series at (n/7) × G₀. Such an organic memory device exhibits reliable memory operation with an on/off ratio of more than 10. We attribute the quantized conductance to the existence of a 1-D electron waveguide along the conductive path. The partial quantized conductance likely results from an imperfect transmission coefficient due to impedance mismatch of the first waveguide modes. PMID:27044308

  19. Hysteresis in a quantized superfluid 'atomtronic' circuit.

    PubMed

    Eckel, Stephen; Lee, Jeffrey G; Jendrzejewski, Fred; Murray, Noel; Clark, Charles W; Lobb, Christopher J; Phillips, William D; Edwards, Mark; Campbell, Gretchen K

    2014-02-13

    Atomtronics is an emerging interdisciplinary field that seeks to develop new functional methods by creating devices and circuits where ultracold atoms, often superfluids, have a role analogous to that of electrons in electronics. Hysteresis is widely used in electronic circuits; it is routinely observed in superconducting circuits and is essential in radio-frequency superconducting quantum interference devices. Furthermore, it is as fundamental to superfluidity (and superconductivity) as quantized persistent currents, critical velocity and Josephson effects. Nevertheless, despite multiple theoretical predictions, hysteresis has not been previously observed in any superfluid, atomic-gas Bose-Einstein condensate. Here we directly detect hysteresis between quantized circulation states in an atomtronic circuit formed from a ring of superfluid Bose-Einstein condensate obstructed by a rotating weak link (a region of low atomic density). This contrasts with previous experiments on superfluid liquid helium where hysteresis was observed directly in systems in which the quantization of flow could not be observed, and indirectly in systems that showed quantized flow. Our techniques allow us to tune the size of the hysteresis loop and to consider the fundamental excitations that accompany hysteresis. The results suggest that the relevant excitations involved in hysteresis are vortices, and indicate that dissipation has an important role in the dynamics. Controlled hysteresis in atomtronic circuits may prove to be a crucial feature for the development of practical devices, just as it has in electronic circuits such as memories, digital noise filters (for example Schmitt triggers) and magnetometers (for example superconducting quantum interference devices). PMID:24522597

  20. Multiverse in the Third Quantized Formalism

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2014-11-01

    In this paper we will analyze the third quantization of gravity in path integral formalism. We will use the time-dependent version of the Wheeler-DeWitt equation to analyze the multiverse in this formalism. We will propose a mechanism for baryogenesis to occur in the multiverse, without violating the baryon number conservation.

  1. Bolometric Device Based on Fluxoid Quantization

    NASA Technical Reports Server (NTRS)

    Bonetti, Joseph A.; Kenyon, Matthew E.; Leduc, Henry G.; Day, Peter K.

    2010-01-01

    The proposed device is based on the temperature dependence of fluxoid quantization in a superconducting loop. Its sensitivity is expected to surpass that of other superconducting-based bolometric devices, such as superconducting transition-edge sensors and superconducting nanowire devices. Just as important, the proposed device has advantages in sample fabrication.

  2. The Quantization Rule and Maslov Index

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yan

    Within extensions of the new quantization rule approach to arbitrary dimensions, the Maslov indices and energy spectra of some exactly solvable potentials are presented. We find that the Maslov index for the harmonic oscillator in three dimensions agrees well with that obtained by other methods.

  3. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure that allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm that finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
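    The "vector sum" structure behind the efficient search can be sketched as follows: each of the 2^M codevectors is a ±1 combination of M basis vectors, so adjacent codewords differ in only one basis vector. The dimensions and random basis below are invented for illustration, and a real coder exploits the structure with recursive (Gray-code-ordered) updates rather than the brute-force search shown here.

```python
import numpy as np

M, dim = 4, 8                      # 2**M = 16 codevectors of length 8
rng = np.random.default_rng(1)
basis = rng.standard_normal((M, dim))

# codevector i: sum_k theta_ik * basis_k, with theta_ik in {+1, -1}
# taken from the bits of the index i
signs = np.array([[1 if (i >> k) & 1 else -1 for k in range(M)]
                  for i in range(2 ** M)])
codebook = signs @ basis           # shape (16, 8)

# encoding a target excitation vector = picking the nearest codevector
target = rng.standard_normal(dim)
best = int(np.argmin(((codebook - target) ** 2).sum(axis=1)))
print(best, signs[best])
```

    A useful consequence of the structure: complementing all the bits of an index negates the codevector, so only half the codebook ever needs to be stored or searched explicitly.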

  4. Quantization of Two Classical Models by Means of the BRST Quantization Method

    NASA Astrophysics Data System (ADS)

    Bracken, Paul

    2008-12-01

    An elementary gauge-non-invariant model and the bosonized form of the chiral Schwinger model are introduced as classical theories. The constraint structure is then investigated. It is shown that by introducing a new field, these models can be made gauge-invariant. The BRST form of quantization is reviewed and applied to each of these models in turn such that gauge-invariance is not broken. Some consequences of this form of quantization are discussed.

  5. Design and analysis of vector color error diffusion halftoning systems.

    PubMed

    Damera-Venkata, N; Evans, B L

    2001-01-01

    Traditional error diffusion halftoning is a high-quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, especially image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank. PMID:18255498
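    The idea of an error filter with matrix-valued coefficients can be sketched by generalizing scalar Floyd-Steinberg error diffusion: each scalar tap weight is scaled by a 3x3 matrix A that couples the quantization error across color channels. This is a toy illustration, not the authors' optimized filter design; with A set to the identity it reduces to independent per-channel diffusion.

```python
import numpy as np

def vector_error_diffusion(img, A):
    """Binarize an RGB image with a matrix-valued Floyd-Steinberg filter.

    img: float array (H, W, 3) in [0, 1]; A: (3, 3) matrix applied to the
    quantization error before diffusion, coupling the color channels.
    """
    x = img.copy()
    out = np.zeros_like(x)
    h, w, _ = x.shape
    # scalar Floyd-Steinberg tap weights; each gets scaled by the matrix A
    taps = [(0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)]
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i, j] > 0.5).astype(float)   # per-channel quantizer
            err = A @ (x[i, j] - out[i, j])             # matrix-valued error
            for di, dj, wgt in taps:
                if 0 <= i + di < h and 0 <= j + dj < w:
                    x[i + di, j + dj] += wgt * err
    return out

img = np.full((16, 16, 3), 0.5)                 # flat mid-gray test patch
halftone = vector_error_diffusion(img, np.eye(3))
print(halftone.mean(axis=(0, 1)))               # each channel averages near 0.5
```

    An off-diagonal A would diffuse, say, red-channel error into the green and blue planes, which is the cross-channel coupling the paper designs optimally against a visual-system model.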

  6. Quantization of polarization states through scattering mechanisms

    NASA Astrophysics Data System (ADS)

    Stratis, Glafkos

    This dissertation investigates, in a comprehensive and unified effort, three major areas: (a) the quantization of polarization states through various scattering mechanisms and frequencies; (b) scattering multispectra mechanisms, mainly diffraction and reflection, which, combined with the splitting of polarization states, introduce polarization dynamics and create new opportunities and applications in communication systems, detection algorithms, and various other areas; (c) combining the Finite-Difference Time-Domain (FDTD) method and Geometrical Optics (GO) to obtain realistic monostatic-bistatic UWB (Ultra Wide Band) polarimetric capabilities for both high-frequency and low-frequency applications under a single computational engine, whereas the current methods, Physical Optics (PO) and GO, are capable only of high-frequency Radar Cross Section (RCS) applications and are based on separate computational engines. The quantization of polarization states results from various scattering mechanisms when electromagnetic waves of various frequencies are incident on various scatterers. We generate and introduce for the first time the concept of a quantization matrix revealing the unique characteristics of scatterers. This concept is closely related to the quantization of energy states in quantum mechanics. The splitting of polarization states causes the coherency/incoherency of depolarization through the various scattering mechanisms and frequencies. It is shown (in Chapter 3) that increasing the number of frequencies increases the size of the quantization matrix as well, allowing better and higher resolution. Edge diffraction was chosen as one of the scattering mechanisms because it shows strong polarization filtering effects. Furthermore, the filtering of polarization through edges, combined with reflections, links to polarization dynamics in NLOS (non-line-of-sight) applications.
In addition, the fact that wedge scattering is more sensitive to polarization versus reflections

  7. A new video codec based on 3D-DTCWT and vector SPIHT

    NASA Astrophysics Data System (ADS)

    Xu, Ruiping; Li, Huifang; Xie, Sunyun

    2011-10-01

    In this paper, a new video coding system combining the 3-D complex dual-tree discrete wavelet transform (DTCWT) with vector SPIHT and arithmetic coding is proposed and tested on standard video sequences. First, the 3-D DTCWT of each color component is computed for the video sequence. The wavelet coefficients are then grouped into vectors, and successive-refinement vector quantization techniques are used to quantize the groups. Finally, experimental results are given: the proposed video codec provides better performance than the 3D-DTCWT and 3D-SPIHT codecs, and the superior performance of the proposed scheme lies in not performing motion compensation.
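    The group-then-quantize step described above can be reduced to its core: collect transform coefficients into fixed-length vectors and map each to the index of its nearest codeword. The codebook, vector length, and data below are invented for illustration; the actual codec embeds successive-refinement VQ inside the SPIHT significance passes, which this sketch omits.

```python
import numpy as np

def vq_encode(coeffs, codebook):
    """Group coefficients into vectors and replace each with the index
    of its nearest codeword (plain nearest-neighbor VQ)."""
    dim = codebook.shape[1]
    vectors = coeffs.reshape(-1, dim)                 # group into vectors
    # squared distances between every vector and every codeword
    d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)                           # one index per vector

rng = np.random.default_rng(0)
codebook = rng.standard_normal((32, 4))               # 32 codewords, dim 4
coeffs = rng.standard_normal(64)                      # 16 vectors of length 4
idx = vq_encode(coeffs, codebook)
print(idx.shape)  # (16,)
```

    The decoder would simply look up `codebook[idx]` to reconstruct an approximation of the coefficient vectors; successive refinement then quantizes the residual with further codebooks.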

  8. Spin foam model from canonical quantization

    SciTech Connect

    Alexandrov, Sergei

    2008-01-15

    We suggest a modification of the Barrett-Crane spin foam model of four-dimensional Lorentzian general relativity motivated by the canonical quantization. The starting point is Lorentz covariant loop quantum gravity. Its kinematical Hilbert space is found as a space of the so-called projected spin networks. These spin networks are identified with the boundary states of a spin foam model and provide a generalization of the unique Barrett-Crane intertwiner. We propose a way to modify the Barrett-Crane quantization procedure to arrive at this generalization: the B field (bivectors) should be promoted not to generators of the gauge algebra, but to a certain projection of them. The modification is also justified by the canonical analysis of the Plebanski formulation. Finally, we compare our construction with other proposals to modify the Barrett-Crane model.

  9. Eigenvalue spacings for quantized cat maps

    NASA Astrophysics Data System (ADS)

    Gamburd, Alex; Lafferty, John; Rockmore, Dan

    2003-03-01

    According to one of the basic conjectures in quantum chaos, the eigenvalues of a quantized chaotic Hamiltonian behave like the spectrum of the typical member of the appropriate ensemble of random matrices. We study one of the simplest examples of this phenomenon in the context of ergodic actions of groups generated by several linear toral automorphisms - 'cat maps'. Our numerical experiments indicate that for 'generic' choices of cat maps, the unfolded consecutive spacing distribution in the irreducible components of the Nth quantization (given by the N-dimensional Weil representation) approaches the GOE/GSE law of random matrix theory. For certain special 'arithmetic' transformations, related to the Ramanujan graphs of Lubotzky, Phillips and Sarnak, the experiments indicate that the unfolded consecutive spacing distribution follows Poisson statistics; we provide a sharp estimate in that direction.

  10. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

    Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c are derived. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.

  11. Adiabatic Quantization of Andreev Quantum Billiard Levels

    NASA Astrophysics Data System (ADS)

    Silvestrov, P. G.; Goorden, M. C.; Beenakker, C. W.

    2003-03-01

    We identify the time T between Andreev reflections as a classical adiabatic invariant in a ballistic chaotic cavity (Lyapunov exponent λ), coupled to a superconductor by an N-mode constriction. Quantization of the adiabatically invariant torus in phase space gives a discrete set of periods T_n, which in turn generate a ladder of excited states ε_nm = (m + 1/2)πℏ/T_n. The largest quantized period is the Ehrenfest time T_0 = λ^(-1) ln N. Projection of the invariant torus onto the coordinate plane shows that the wave functions inside the cavity are squeezed to a transverse dimension W/√N, much below the width W of the constriction.

  12. Loop quantization of the Schwarzschild black hole.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2013-05-24

    We quantize spherically symmetric vacuum gravity without gauge fixing the diffeomorphism constraint. Through a rescaling, we make the algebra of Hamiltonian constraints Abelian, and therefore the constraint algebra is a true Lie algebra. This allows the completion of the Dirac quantization procedure using loop quantum gravity techniques. We can construct explicitly the exact solutions of the physical Hilbert space annihilated by all constraints. New observables living in the bulk appear at the quantum level (analogous to spin in quantum mechanics) that are not present at the classical level and are associated with the discrete nature of the spin network states of loop quantum gravity. The resulting quantum space-times resolve the singularity present in the classical theory inside black holes. PMID:23745855

  13. Quantization: Towards a comparison between methods

    NASA Astrophysics Data System (ADS)

    Tuynman, G. M.

    1987-12-01

    In this paper it is shown that the procedure of geometric quantization applied to Kähler manifolds gives the following result: the Hilbert space H consists, roughly speaking, of holomorphic functions on the phase space M, and to each classical observable f (i.e., a real function on M) is associated an operator f on H as follows: first multiply by f + (1/4)ℏΔ_dR f (Δ_dR being the Laplace-de Rham operator on the Kähler manifold M) and then take the holomorphic part [see G. M. Tuynman, J. Math. Phys. 27, 573 (1987)]. This result is correct on compact Kähler manifolds and correct modulo a boundary term ∫_M dα on noncompact Kähler manifolds. In this way these results can be compared with the quantization procedure of Berezin [Math. USSR Izv. 8, 1109 (1974); 9, 341 (1975); Commun. Math. Phys. 40, 153 (1975)], which is strongly related to quantization by *-products [e.g., see C. Moreno and P. Ortega-Navarro, Ann. Inst. H. Poincaré Sec. A 38, 215 (1983); Lett. Math. Phys. 7, 181 (1983); C. Moreno, Lett. Math. Phys. 11, 361 (1986); 12, 217 (1986)]. It is shown that on irreducible Hermitian spaces [see S. Helgason, Differential Geometry, Lie Groups and Symmetric Spaces (Academic, Orlando, FL, 1978)] the contravariant symbols (in the sense of Berezin) of the operators f as above are given by the functions f + (1/4)ℏΔ_dR f. The difference with the quantization result of Berezin is discussed and a change in the geometric quantization scheme is proposed.

  14. Getzler symbol calculus and deformation quantization

    NASA Astrophysics Data System (ADS)

    Mesa, Camilo

    2013-11-01

    In this paper we give a construction of Fedosov quantization incorporating the odd variables and an analogous formula to Getzler's pseudodifferential calculus composition formula is obtained. A Fedosov type connection is constructed on the bundle of Weyl tensor Clifford algebras over the cotangent bundle of a Riemannian manifold. The quantum algebra associated with this connection is used to define a deformation of the exterior algebra of Riemannian manifolds.

  15. Positronium in basis light-front quantization

    NASA Astrophysics Data System (ADS)

    Zhao, Xingbo; Wiecki, Paul; Li, Yang; Maris, Pieter; Vary, James

    2014-09-01

    Basis light-front quantization (BLFQ) has been recently developed as a first-principles nonperturbative approach for quantum field theory. Adopting the light-front quantization and Hamiltonian formalism, it solves for the mass eigenstates of quantum field theory as the eigenvalue problem of the associated light-front Hamiltonian. In this work we apply BLFQ to the positronium system in QED and solve for its eigenspectrum in the Fock space with the lowest two Fock sectors included. We explicitly demonstrate our nonperturbative renormalization procedure, in which we infer the various needed renormalization factors through solving a series of parallel single electron problems. We then compare our numerical results for the mass spectrum to the expected Bohr spectrum from nonrelativistic quantum mechanics. Supported by DOE (under Grants DESC0008485 SciDAC/NUCLEI, DE-FG02-87ER40371) and NSF (under Grant 0904782).

  16. Conductance Quantization in Resistive Random Access Memory.

    PubMed

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-12-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performance, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interfaces, and multiple RS mechanisms, including the formation/rupture of a nanoscale to atomic-sized conductive filament (CF) incorporated in the RS layer. The conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to investigate the RS mechanism in depth at the mesoscopic scale. In this review paper, the operating principles of RRAM are introduced first, followed by a summary of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material systems. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges of quantized RRAM devices and our views on the future prospects. PMID:26501832

  17. Single Abrikosov vortices as quantized information bits

    PubMed Central

    Golod, T.; Iovan, A.; Krasnov, V. M.

    2015-01-01

    Superconducting digital devices can be advantageously used in future supercomputers because they can greatly reduce the dissipation power and increase the speed of operation. Non-volatile quantized states are ideal for the realization of classical Boolean logic. A quantized Abrikosov vortex represents the most compact magnetic object in superconductors, which can be utilized for the creation of high-density digital cryoelectronics. In this work we provide a proof of concept for an Abrikosov-vortex-based random access memory cell, in which a single vortex is used as an information bit. We demonstrate high-endurance write operation and two different ways of read-out using a spin valve or a Josephson junction. These memory cells are characterized by an infinite magnetoresistance between the 0 and 1 states, a short access time, scalability to nm sizes and an extremely low write energy. Non-volatility and perfect reproducibility are inherent to such a device due to the quantized nature of the vortex. PMID:26456592

  18. Loop quantization of vacuum Bianchi I cosmology

    SciTech Connect

    Martin-Benito, M.; Mena Marugan, G. A.; Pawlowski, T.

    2008-09-15

    We analyze the loop quantization of the family of vacuum Bianchi I spacetimes, a gravitational system whose classical solutions describe homogeneous anisotropic cosmologies. We rigorously construct the operator that represents the Hamiltonian constraint, showing that the states of zero volume completely decouple from the rest of the quantum states. This fact ensures that the classical cosmological singularity is resolved in the quantum theory. In addition, this allows us to adopt an equivalent quantum description in terms of a well-defined densitized Hamiltonian constraint. This latter constraint can be regarded in a certain sense as a difference evolution equation in an internal time provided by one of the triad components, which is polymerically quantized. Generically, this evolution equation is a relation between the projection of the quantum states in three different sections of constant internal time. Nevertheless, around the initial singularity the equation involves only the two closest sections with the same orientation of the triad. This has a double effect: on the one hand, physical states are determined just by the data on one section; on the other hand, the evolution defined in this way never crosses the singularity, without the need of any special boundary condition. Finally, we determine the inner product and the physical Hilbert space employing group averaging techniques, and we specify a complete algebra of Dirac observables. This completes the quantization program.

  19. Light-Front Quantization of Gauge Theories

    SciTech Connect

    Brodsky, Stanley

    2002-12-01

    Light-front wavefunctions provide a frame-independent representation of hadrons in terms of their physical quark and gluon degrees of freedom. The light-front Hamiltonian formalism provides new nonperturbative methods for obtaining the QCD spectrum and eigensolutions, including resolvent methods, variational techniques, and discretized light-front quantization. A new method for quantizing gauge theories in light-cone gauge using Dirac brackets to implement constraints is presented. In the case of the electroweak theory, this method of light-front quantization leads to a unitary and renormalizable theory of massive gauge particles, automatically incorporating the Lorentz and 't Hooft conditions as well as the Goldstone boson equivalence theorem. Spontaneous symmetry breaking is represented by the appearance of zero modes of the Higgs field leaving the light-front vacuum equal to the perturbative vacuum. I also discuss an ''event amplitude generator'' for automatically computing renormalized amplitudes in perturbation theory. The importance of final-state interactions for the interpretation of diffraction, shadowing, and single-spin asymmetries in inclusive reactions such as deep inelastic lepton-hadron scattering is emphasized.

  20. Conductance Quantization in Resistive Random Access Memory

    NASA Astrophysics Data System (ADS)

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-10-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performance, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, the metal-oxide interface, and multiple RS mechanisms, including the formation/rupture of nanoscale to atomic-sized conductive filaments (CFs) in the RS layer. The conductance quantization effect has been observed in atomic-sized CFs in RRAM, which provides a good opportunity to investigate the RS mechanism at the mesoscopic scale. In this review paper, the operating principles of RRAM are introduced first, followed by a summary of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material systems. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.

  1. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
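    The per-block multiplier scheme described above can be sketched in a few lines; this is a minimal NumPy illustration of the mechanism, not the paper's optimizer (the function name, array shapes, and rounding convention are assumptions):

```python
import numpy as np

def quantize_blocks(dct_blocks, base_qmatrix, multipliers):
    """Quantize each 8x8 DCT block with its own scaled matrix.

    dct_blocks   : (n, 8, 8) DCT coefficients, one block per entry.
    base_qmatrix : (8, 8) quantization matrix for the whole channel.
    multipliers  : (n,) per-block multipliers -- the quantity the paper
                   optimizes for maximally flat perceptual error.
    """
    out = np.empty(dct_blocks.shape, dtype=np.int32)
    for i, (block, m) in enumerate(zip(dct_blocks, multipliers)):
        q = m * base_qmatrix              # block-specific matrix
        out[i] = np.round(block / q)      # uniform quantization
    return out
```

    Choosing a larger multiplier for a block coarsens its quantization (and lowers its bitrate) wherever masking makes the extra error invisible.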

  2. Detection of perturbed quantization class stego images based on possible change modes

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Liu, Fenlin; Yang, Chunfang; Luo, Xiangyang; Song, Xiaofeng

    2015-11-01

    To improve the detection performance for perturbed quantization (PQ) class [PQ, energy-adaptive PQ (PQe), and texture-adaptive PQ (PQt)] stego images, a detection method based on possible change modes is proposed. First, by using the relationship between the changeable coefficients used for carrying secret messages and the second quantization steps, the modes having even second quantization steps are identified as possible change modes. Second, by referencing the existing features, modified features that can accurately capture the embedding changes based on possible change modes are extracted. Next, feature sensitivity analyses based on the modifications performed before and after the embedding are carried out. These analyses show that the modified features are more sensitive than the original features. Experimental results indicate that the detection performance of the modified features is better than that of the corresponding original features for three typical feature models [Cartesian calibrated PEVny (ccPEV), Cartesian calibrated co-occurrence matrix features (CF), and JPEG rich model (JRM)], and the integrated feature consisting of enhanced histogram features (EHF) and the modified JRM outperforms two current state-of-the-art feature models, namely, the phase aware projection model (PHARM) and the Gabor rich model (GRM).
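    The first step, selecting modes with even second quantization steps, can be illustrated with a toy sketch (the function name and the dict-based representation of the second quantization table are assumptions for illustration, not the paper's code):

```python
def possible_change_modes(second_q_steps):
    """Select DCT modes whose second quantization step is even.

    second_q_steps: dict mapping a mode (row, col) in the 8x8 DCT grid
    to its second quantization step. Per the abstract, only modes with
    even second steps can host PQ-class embedding changes.
    """
    return {mode for mode, step in second_q_steps.items() if step % 2 == 0}
```

    Restricting feature extraction to these modes is what makes the modified features more sensitive to the embedding.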

  3. An Enlarged Canonical Quantization Scheme and Quantization of a Free Particle on Two-Dimensional Sphere

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Shuai; Xiao, Shi-Fa; Xun, Da-Mao; Liu, Quan-Hui

    2015-01-01

    For a non-relativistic particle that moves freely on a curved surface, the fundamental commutation relations between positions and momenta are insufficient to uniquely determine the operator form of the momenta. With the introduction of additional commutation relations between positions and the Hamiltonian and between momenta and the Hamiltonian, our recent sequential studies imply that the Cartesian system of coordinates is physically preferable, consistent with Dirac's observation. In the present paper, we study the quantization problem of motion constrained to the two-dimensional sphere and develop a discriminant that can be used to show how quantization within the intrinsic geometry is improper. Two kinds of parameterization of the spherical surface are explicitly invoked to investigate the quantization problem within the intrinsic geometry.

  4. Flux quantization in rings for Hubbard (attractive and repulsive) and t - J -like Hamiltonians

    SciTech Connect

    Ferretti, A.; Kulik, I.O.; Lami, A.

    1992-03-01

    The ground-state energy of rings with 10 or 16 sites and 2 or 4 fermions (holes or electrons) is computed as a function of the flux Φ created by a vector potential constant along the ring (Aharonov-Bohm setup) for three different model Hamiltonians: the attractive (U<0) and repulsive (U>0) Hubbard and a t-J-like Hamiltonian. In all cases the flux is found to be quantized in units hc/2e, showing that the charge carriers are pairs. The possible role of phonon fluctuations in discriminating among the different models is also investigated.

  5. Quantized Nambu-Poisson manifolds and n-Lie algebras

    SciTech Connect

    DeBellis, Joshua; Saemann, Christian; Szabo, Richard J.

    2010-12-15

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of R{sup n} by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  6. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
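    The rule "quantization error at the threshold of visibility" can be sketched schematically; this is a reading of the abstract, not the authors' model (the function name is a placeholder, and the thresholds would in practice come from their display and contrast-sensitivity model):

```python
import numpy as np

def qmatrix_from_thresholds(thresholds):
    """Turn per-coefficient visibility thresholds into quantization steps.

    With uniform quantization, the worst-case error of a coefficient is
    half the step size, so a step of 2*T keeps that error at the
    detection threshold T. thresholds: (8, 8) array of per-coefficient
    detection thresholds (placeholder input here).
    """
    return 2.0 * np.asarray(thresholds, dtype=float)
```

    Because the thresholds depend on luminance, veiling light, pixel size, and viewing distance, the resulting matrix is display-specific rather than universal.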

  7. Quantized Nambu-Poisson manifolds and n-Lie algebras

    NASA Astrophysics Data System (ADS)

    DeBellis, Joshua; Sämann, Christian; Szabo, Richard J.

    2010-12-01

    We investigate the geometric interpretation of quantized Nambu-Poisson structures in terms of noncommutative geometries. We describe an extension of the usual axioms of quantization in which classical Nambu-Poisson structures are translated to n-Lie algebras at quantum level. We demonstrate that this generalized procedure matches an extension of Berezin-Toeplitz quantization yielding quantized spheres, hyperboloids, and superspheres. The extended Berezin quantization of spheres is closely related to a deformation quantization of n-Lie algebras as well as the approach based on harmonic analysis. We find an interpretation of Nambu-Heisenberg n-Lie algebras in terms of foliations of {{R}}^n by fuzzy spheres, fuzzy hyperboloids, and noncommutative hyperplanes. Some applications to the quantum geometry of branes in M-theory are also briefly discussed.

  8. Self-adjointness of the Fourier expansion of quantized interaction field Lagrangians

    PubMed Central

    Paneitz, S. M.; Segal, I. E.

    1983-01-01

    Regularity properties significantly stronger than were previously known are developed for four-dimensional non-linear conformally invariant quantized fields. The Fourier coefficients of the interaction Lagrangian in the interaction representation—i.e., evaluated after substitution of the associated quantized free field—are densely defined operators on the associated free field Hilbert space K. These Fourier coefficients are with respect to a natural basis in the universal cosmos M̃, to which such fields canonically and maximally extend from Minkowski space-time M₀, which is covariantly a submanifold of M̃. However, conformally invariant free fields over M₀ and M̃ are canonically identifiable. The kth Fourier coefficient of the interaction Lagrangian has domain inclusive of all vectors in K to which arbitrary powers of the free Hamiltonian in M̃ are applicable. Its adjoint in the rigorous Hilbert space sense is a_{-k} in the case of a hermitian Lagrangian. In particular (k = 0) the leading term in the perturbative expansion of the S-matrix for a conformally invariant quantized field in M₀ is a self-adjoint operator. Thus, e.g., if ϕ(x) denotes the free massless neutral scalar field in M₀, then ∫_{M₀} :ϕ(x)⁴: d⁴x is a self-adjoint operator. No coupling constant renormalization is involved here. PMID:16593346

  9. Stochastic variational method as quantization scheme: Field quantization of the complex Klein-Gordon equation

    NASA Astrophysics Data System (ADS)

    Koide, T.; Kodama, T.

    2015-09-01

    The stochastic variational method (SVM) is the generalization of the variational approach to systems described by stochastic variables. In this paper, we investigate the applicability of SVM as an alternative field-quantization scheme, by considering the complex Klein-Gordon equation. There, the Euler-Lagrangian equation for the stochastic field variables leads to the functional Schrödinger equation, which can be interpreted as the Euler (ideal fluid) equation in the functional space. The present formulation is a quantization scheme based on commutable variables, so that there appears no ambiguity associated with the ordering of operators, e.g., in the definition of Noether charges.

  10. Canonical quantization of the kink model beyond the static solution

    SciTech Connect

    Kapustnikov, A.A.; Pashnev, A.; Pichugin, A.

    1997-02-01

    A new approach to the quantization of the relativistic kink model around the solitonic solution is developed on the grounds of the collective coordinates method. The corresponding effective action is proved to be the action of the nonminimal d=1+1 point particle with curvature. It is shown that upon canonical quantization this action yields the spectrum of the kink solution obtained first with the help of WKB quantization. © 1997 The American Physical Society

  11. Vector Video

    NASA Astrophysics Data System (ADS)

    Taylor, David P.

    2001-01-01

    Vector addition is an important skill for introductory physics students to master. For years, I have used a fun example to introduce vector addition in my introductory physics classes based on one with which my high school physics teacher piqued my interest many years ago.

  12. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits.

    PubMed

    Fujii, Mikiya; Yamashita, Koichi

    2015-02-21

    We present a semiclassical quantization condition, i.e., quantum-classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller's trace formula to a nonadiabatic form. The quantum-classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics. PMID:25701999

  13. Homotopy of rational maps and the quantization of Skyrmions

    NASA Astrophysics Data System (ADS)

    Krusch, Steffen

    2003-04-01

    The Skyrme model is a classical field theory which models the strong interaction between atomic nuclei. It has to be quantized in order to compare it to nuclear physics. When the Skyrme model is semi-classically quantized it is important to take the Finkelstein-Rubinstein constraints into account. The aim of this paper is to show how to calculate these FR constraints directly from the rational map ansatz using basic homotopy theory. We then apply this construction in order to quantize the Skyrme model in the simplest approximation, the zero mode quantization. This is carried out for up to 22 nucleons and the results are compared to experiment.

  14. Coarse quantization with the fast digital shearlet transform

    NASA Astrophysics Data System (ADS)

    Bodmann, Bernhard G.; Kutyniok, Gitta; Zhuang, Xiaosheng

    2011-09-01

    The fast digital shearlet transform (FDST) was recently introduced as a means to analyze natural images efficiently, owing to the fact that those are typically governed by cartoon-like structures. In this paper, we introduce and discuss a first-order hybrid sigma-delta quantization algorithm for coarsely quantizing the shearlet coefficients generated by the FDST. Radial oversampling in the frequency domain together with our choice for the quantization helps suppress the reconstruction error in a similar way as first-order sigma-delta quantization for finite frames. We provide a theoretical bound for the reconstruction error and confirm numerically that the error is in accordance with this theoretical decay.
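    For intuition, the error-feedback core of a plain first-order sigma-delta quantizer with a 1-bit alphabet is sketched below; the paper's hybrid scheme in the shearlet setting is more refined, so treat this only as the underlying mechanism:

```python
import numpy as np

def sigma_delta_first_order(coeffs, step=1.0):
    """First-order sigma-delta quantization with error feedback.

    Each coefficient is coarsely replaced by +-step/2; the state u
    accumulates the running quantization error, so averages of the
    output track averages of the input (the noise is pushed to high
    frequencies rather than left flat).
    """
    u = 0.0
    out = np.empty(len(coeffs))
    for i, c in enumerate(coeffs):
        out[i] = step / 2 if u + c >= 0 else -step / 2
        u += c - out[i]          # feed the error back into the state
    return out
```

    Even though each output sample carries one bit, the running mean of the output converges to the mean of the input, which is why coarse quantization of redundant (oversampled) coefficients still allows accurate reconstruction.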

  15. Quantization effects in radiation spectroscopy based on digital pulse processing

    SciTech Connect

    Jordanov, V. T.; Jordanova, K. V.

    2011-07-01

    Radiation spectra are inherently quantized data in the form of stacked channels of equal width. The spectrum is an experimental measurement of the discrete probability density function (PDF) of the detector pulse heights. The quantization granularity of the spectra depends on the total number of channels covering the full range of pulse heights. In analog pulse processing the total number of channels is equal to the total number of digital values produced by a spectroscopy analog-to-digital converter (ADC). In digital pulse processing each detector pulse is sampled and quantized by a fast ADC producing a certain number of quantized numerical values. These digital values are linearly processed to obtain a digital quantity representing the peak of the digitally shaped pulse. Using digital pulse processing it is possible to acquire a spectrum with a total number of channels greater than the number of ADC values. Noise and sample averaging are important in the transformation of ADC quantized data into spectral quantized data. Analysis of this transformation is performed using an area sampling model of quantization. Spectrum differential nonlinearity (DNL) is shown to be related to the quantization at low noise levels and small numbers of averaged samples. Theoretical analysis and experimental measurements are used to obtain the condition to minimize the DNL due to quantization. (authors)
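    The transformation from coarse ADC codes to a finer-grained spectrum via sample averaging can be sketched numerically; the function names and the simple flat-top averaging are illustrative assumptions, not the authors' shaping filter:

```python
import numpy as np

def pulse_height(samples, n_avg):
    """Average n_avg quantized ADC samples of a shaped pulse's flat top.

    The mean of integer ADC codes lies on a grid finer than one ADC
    step, which is what lets the spectrum have more channels than the
    ADC has codes (noise and averaging permitting).
    """
    return float(np.mean(samples[:n_avg]))

def build_spectrum(heights, n_channels, h_max):
    """Histogram pulse heights into equal-width spectral channels."""
    spectrum, _ = np.histogram(heights, bins=n_channels, range=(0.0, h_max))
    return spectrum
```

    Without enough noise or averaged samples, the fractional heights cluster on the coarse ADC grid and some spectral channels are systematically over- or under-filled, which is the DNL effect the abstract analyzes.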

  16. Topological Quantization in Units of the Fine Structure Constant

    SciTech Connect

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H.Dennis; Zhang, Shou-Cheng; /Stanford U., Phys. Dept. /Stanford U., Materials Sci. Dept. /SLAC

    2011-11-11

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α = e²/ħc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  17. Semiclassical quantization of nonadiabatic systems with hopping periodic orbits

    SciTech Connect

    Fujii, Mikiya Yamashita, Koichi

    2015-02-21

    We present a semiclassical quantization condition, i.e., quantum–classical correspondence, for steady states of nonadiabatic systems consisting of fast and slow degrees of freedom (DOFs) by extending Gutzwiller’s trace formula to a nonadiabatic form. The quantum–classical correspondence indicates that a set of primitive hopping periodic orbits, which are invariant under time evolution in the phase space of the slow DOF, should be quantized. The semiclassical quantization is then applied to a simple nonadiabatic model and accurately reproduces exact quantum energy levels. In addition to the semiclassical quantization condition, we also discuss chaotic dynamics involved in the classical limit of nonadiabatic dynamics.

  18. Observation of quantized conductance in neutral matter

    NASA Astrophysics Data System (ADS)

    Krinner, Sebastian; Stadler, David; Husmann, Dominik; Brantut, Jean-Philippe; Esslinger, Tilman

    2015-01-01

    In transport experiments, the quantum nature of matter becomes directly evident when changes in conductance occur only in discrete steps, with a size determined solely by Planck's constant h. Observations of quantized steps in electrical conductance have provided important insights into the physics of mesoscopic systems and have allowed the development of quantum electronic devices. Even though quantized conductance should not rely on the presence of electric charges, it has never been observed for neutral, massive particles. In its most fundamental form, it requires a quantum-degenerate Fermi gas, a ballistic and adiabatic transport channel, and a constriction with dimensions comparable to the Fermi wavelength. Here we report the observation of quantized conductance in the transport of neutral atoms driven by a chemical potential bias. The atoms are in an ultraballistic regime, where their mean free path exceeds not only the size of the transport channel, but also the size of the entire system, including the atom reservoirs. We use high-resolution lithography to shape light potentials that realize either a quantum point contact or a quantum wire for atoms. These constrictions are imprinted on a quasi-two-dimensional ballistic channel connecting the reservoirs. By varying either a gate potential or the transverse confinement of the constrictions, we observe distinct plateaux in the atom conductance. The conductance in the first plateau is found to be equal to the universal conductance quantum, 1/h. We use Landauer's formula to model our results and find good agreement for low gate potentials, with all parameters determined a priori. Our experiment lets us investigate quantum conductors with wide control not only over the channel geometry, but also over the reservoir properties, such as interaction strength, size and thermalization rate.

  19. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    SciTech Connect

    Błaszak, Maciej Domański, Ziemowit

    2013-12-15

    This paper presents an invariant quantization procedure of classical mechanics on the phase space over a flat configuration space. The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is then derived. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: • An invariant quantization procedure of classical mechanics on the phase space over a flat configuration space is presented. • The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. • The explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. • The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. • The extension of the presented formalism to the non-flat case and related ambiguities of the quantization process are discussed.

  20. Quantum mechanics, gravity and modified quantization relations.

    PubMed

    Calmet, Xavier

    2015-08-01

    In this paper, we investigate a possible energy scale dependence of the quantization rules and, in particular, from a phenomenological point of view, an energy scale dependence of an effective ℏ (reduced Planck's constant). We set a bound on the deviation of the value of ℏ at the muon scale from its usual value using measurements of the anomalous magnetic moment of the muon. Assuming that inflation has taken place, we can conclude that nature is described by a quantum theory at least up to an energy scale of about 10^16 GeV. PMID:26124253

  1. Quantized gauged massless Rarita-Schwinger fields

    NASA Astrophysics Data System (ADS)

    Adler, Stephen L.

    2015-10-01

    We study the quantization of a minimally gauged massless Rarita-Schwinger field, by both the Dirac bracket and functional integral methods. The Dirac bracket approach in the covariant radiation gauge leads to an anticommutator that has a nonsingular limit as gauge fields approach zero, is manifestly positive semidefinite, and is Lorentz invariant. The constraints also have the form needed to apply the Faddeev-Popov method for deriving a functional integral, using the same constrained Hamiltonian and inverse constraint matrix that appear in the Dirac bracket approach.

  2. Quantization of lumped elements electrical circuits revisited

    NASA Astrophysics Data System (ADS)

    Lalumiere, Kevin; Najafi-Yazdi, Alireza

    In 1995, the ``Les Houches'' seminar of Michel Devoret introduced a method to quantize lumped elements electrical circuits. This method has since been formalized using the matricial formalism, in particular by G. Burkard. Starting from these seminal contributions, we present a new algorithm to quantize electrical circuits. This algorithm unites the features of Devoret's and Burkard's approaches. We minimize the set of assumptions made so that the method can treat directly most electrical circuits. This includes circuits with resistances, mutual inductances, voltage and current sources. We conclude with a discussion about the choice of the basis in which the Hamiltonian operator should be written, an issue which is often overlooked.

  3. Quantization of soluble classical constrained systems

    SciTech Connect

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which requires neither Dirac's formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  4. Phase-space quantization of field theory.

    SciTech Connect

    Curtright, T.; Zachos, C.

    1999-04-20

    In this lecture, a limited introduction of gauge invariance in phase-space is provided, predicated on canonical transformations in quantum phase-space. Exact characteristic trajectories are also specified for the time-propagating Wigner phase-space distribution function: they are especially simple--indeed, classical--for the quantized simple harmonic oscillator. This serves as the underpinning of the field theoretic Wigner functional formulation introduced. Scalar field theory is thus reformulated in terms of distributions in field phase-space. This is a pedagogical selection from work published and reported at the Yukawa Institute Workshop ''Gauge Theory and Integrable Models'', 26-29 January, 1999.

  5. Path integral quantization of generalized quantum electrodynamics

    SciTech Connect

    Bufalo, R.; Pimentel, B. M.; Zambrano, G. E. R.

    2011-02-15

    In this paper, a complete covariant quantization of generalized electrodynamics is shown through the path integral approach. To this goal, we first studied the Hamiltonian structure of the system following Dirac's methodology and, then, we followed the Faddeev-Senjanovic procedure to obtain the transition amplitude. The complete propagators (Schwinger-Dyson-Fradkin equations) of the correct gauge fixation and the generalized Ward-Fradkin-Takahashi identities are also obtained. Afterwards, an explicit calculation of one-loop approximations of all Green's functions and a discussion about the obtained results are presented.

  6. Brief Review on Black Hole Loop Quantization

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier

    2016-06-01

    Here, we review the quantization of spherically-symmetric spacetimes adopting loop quantum gravity techniques. Several models that have been studied so far share similar properties: the resolution of the classical singularity and, in some cases, an intrinsic discretization of the geometry. We also explain the extension to Reissner–Nordström black holes. Besides, we review how quantum test fields on these quantum geometries allow us to study phenomena like the Casimir effect or Hawking radiation. Finally, we briefly describe a recent proposal that incorporates spherically-symmetric matter, discussing its relevance for the understanding of black hole evolution.

  7. The pointwise product in Weyl quantization

    NASA Astrophysics Data System (ADS)

    Dubin, D. A.; Hennings, M. A.

    2004-07-01

    We study the ⊙-product of Bracken [1], which is the Weyl quantized version of the pointwise product of functions in phase space. We prove that it is not compatible with the algebras of finite rank and Hilbert-Schmidt operators. By solving the linearization problem for the special Hermite functions, we are able to express the ⊙-product in terms of the component operators, mediated by the linearization coefficients. This is applied to finite rank operators and their matrices, and operators whose symbols are radial and angular distributions.

  8. Cloning vector

    DOEpatents

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  9. Cloning vector

    DOEpatents

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  10. Equivalent Vectors

    ERIC Educational Resources Information Center

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. Learning about this for the first time while taking Calculus-III, the class was taught that if A×B = A×C, it does not necessarily follow that B = C. This seemed baffling. The…
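    That claim is easy to verify numerically: any C = B + λA yields the same cross product with A, since A×A = 0. The vectors below are arbitrary examples chosen for illustration:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = b + 2.0 * a               # c differs from b along the direction of a

# A x B equals A x C even though B != C, because the component of
# (C - B) parallel to A contributes nothing to the cross product.
assert np.allclose(np.cross(a, b), np.cross(a, c))
assert not np.allclose(b, c)
```

    So the cross product discards the component parallel to its first factor, which is exactly why it cannot be "cancelled" like ordinary multiplication.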

  11. Bohmian quantization of the big rip

    SciTech Connect

    Pinto-Neto, Nelson; Pantoja, Diego Moraes

    2009-10-15

    It is shown in this paper that minisuperspace quantization of homogeneous and isotropic geometries with phantom scalar fields, when examined in the light of the Bohm-de Broglie interpretation of quantum mechanics, does not eliminate, in general, the classical big rip singularity present in the classical model. For some values of the Hamilton-Jacobi separation constant present in a class of quantum state solutions of the Wheeler-De Witt equation, the big rip can be either completely eliminated or may still constitute a future attractor for all expanding solutions. This is contrary to the conclusion presented in [M. P. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D 74, 044022 (2006).], using a different interpretation of the wave function, where the big rip singularity is completely eliminated ('smoothed out') through quantization, independently of such a separation constant and for all members of the above mentioned class of solutions. This is an example of the very peculiar situation where different interpretations of the same quantum state of a system are predicting different physical facts, instead of just giving different descriptions of the same observable facts: in fact, there is nothing more observable than the fate of the whole Universe.

  12. Bohmian quantization of the big rip

    NASA Astrophysics Data System (ADS)

    Pinto-Neto, Nelson; Pantoja, Diego Moraes

    2009-10-01

    It is shown in this paper that minisuperspace quantization of homogeneous and isotropic geometries with phantom scalar fields, when examined in the light of the Bohm-de Broglie interpretation of quantum mechanics, does not eliminate, in general, the classical big rip singularity present in the classical model. For some values of the Hamilton-Jacobi separation constant present in a class of quantum state solutions of the Wheeler-De Witt equation, the big rip can be either completely eliminated or may still constitute a future attractor for all expanding solutions. This is contrary to the conclusion presented in [M. P. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D 74, 044022 (2006), 10.1103/PhysRevD.74.044022], using a different interpretation of the wave function, where the big rip singularity is completely eliminated (“smoothed out”) through quantization, independently of such a separation constant and for all members of the above mentioned class of solutions. This is an example of the very peculiar situation where different interpretations of the same quantum state of a system are predicting different physical facts, instead of just giving different descriptions of the same observable facts: in fact, there is nothing more observable than the fate of the whole Universe.

  13. Light-cone quantization of quantum chromodynamics

    SciTech Connect

    Brodsky, S.J.; Pauli, H.C.

    1991-06-01

    We discuss the light-cone quantization of gauge theories from two perspectives: as a calculational tool for representing hadrons as QCD bound-states of relativistic quarks and gluons, and also as a novel method for simulating quantum field theory on a computer. The light-cone Fock state expansion of wavefunctions at fixed light cone time provides a precise definition of the parton model and a general calculus for hadronic matrix elements. We present several new applications of light-cone Fock methods, including calculations of exclusive weak decays of heavy hadrons, and intrinsic heavy-quark contributions to structure functions. A general nonperturbative method for numerically solving quantum field theories, "discretized light-cone quantization," is outlined and applied to several gauge theories, including QCD in one space and one time dimension, and quantum electrodynamics in physical space-time at large coupling strength. The DLCQ method is invariant under the large class of light-cone Lorentz transformations, and it can be formulated such that ultraviolet regularization is independent of the momentum space discretization. Both the bound-state spectrum and the corresponding relativistic light-cone wavefunctions can be obtained by matrix diagonalization and related techniques. We also discuss the construction of the light-cone Fock basis, the structure of the light-cone vacuum, and outline the renormalization techniques required for solving gauge theories within the light-cone Hamiltonian formalism.

  14. Loop quantization of the Schwarzschild interior revisited

    NASA Astrophysics Data System (ADS)

    Corichi, Alejandro; Singh, Parampreet

    2016-03-01

    The loop quantization of the Schwarzschild interior region, as described by a homogeneous anisotropic Kantowski-Sachs model, is re-examined. As several studies of different—inequivalent—loop quantizations have shown, to date there exists no fully satisfactory quantum theory for this model. This fact poses challenges to the validity of some scenarios to address the black hole information problem. Here we put forward a novel viewpoint to construct the quantum theory that builds from some of the models available in the literature. The final picture is a quantum theory that is both independent of any auxiliary structure and possesses a correct low curvature limit. It represents a subtle but non-trivial modification of the original prescription given by Ashtekar and Bojowald. It is shown that the quantum gravitational constraint is well defined past the singularity and that its effective dynamics possesses a bounce into an expanding regime. The classical singularity is avoided, and a semiclassical spacetime satisfying vacuum Einstein’s equations is recovered on the ‘other side’ of the bounce. We argue that such a metric represents the interior region of a white-hole spacetime, but for which the corresponding ‘white hole mass’ differs from the original black hole mass. Furthermore, we find that the value of the white hole mass is proportional to the third power of the starting black hole mass.

  15. Exciton condensation in microcavities under three-dimensional quantization conditions

    SciTech Connect

    Kochereshko, V. P.; Platonov, A. V.; Savvidis, P.; Kavokin, A. V.; Bleuse, J.; Mariette, H.

    2013-11-15

    The dependence of the spectra of the polarized photoluminescence of excitons in microcavities under conditions of three-dimensional quantization on the optical-excitation intensity is investigated. The cascade relaxation of polaritons between quantized states of a polariton Bose condensate is observed.

  16. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (improves image quality) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax that is a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize WRMSE, thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.

  17. Fast color quantization using weighted sort-means clustering.

    PubMed

    Celebi, M Emre

    2009-11-01

    Color quantization is an important operation with numerous applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, K-means has not received much attention in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, a fast color quantization method based on K-means is presented. The method involves several modifications to the conventional (batch) K-means algorithm, including data reduction, sample weighting, and the use of the triangle inequality to speed up the nearest-neighbor search. Experiments on a diverse set of images demonstrate that, with the proposed modifications, K-means becomes very competitive with state-of-the-art color quantization methods in terms of both effectiveness and efficiency. PMID:19884945
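    As a rough illustration, not the paper's method (which adds data reduction, sample weighting, and triangle-inequality acceleration), a minimal batch K-means color quantizer might look like the sketch below; the toy pixel data and parameter values are invented for the example:

```python
import numpy as np

def kmeans_quantize(pixels, k=4, iters=10, seed=0):
    """Plain (batch) K-means color quantization: pixels -> palette, labels."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to the nearest palette color (exhaustive search;
        # the paper accelerates this step with the triangle inequality)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each palette color to the centroid of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# toy "image": 500 random RGB pixels
pixels = np.random.default_rng(1).integers(0, 256, size=(500, 3))
palette, labels = kmeans_quantize(pixels, k=4)
quantized = palette[labels]  # every pixel replaced by its palette color
print(quantized.shape)       # (500, 3)
```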

  18. First, Second Quantization and Q-Deformed Harmonic Oscillator

    NASA Astrophysics Data System (ADS)

    Van Ngu, Man; Gia Vinh, Ngo; Lan, Nguyen Tri; Thanh, Luu Thi Kim; Viet, Nguyen Ai

    2015-06-01

    Relations between the first and second quantized representations and deformed algebra are investigated. In the case of the harmonic oscillator, the axiom of first quantization (the commutation relation between coordinate and momentum operators) and the axiom of second quantization (the commutation relation between creation and annihilation operators) are equivalent. We show that in the case of the q-deformed harmonic oscillator, a violation of the axiom of second quantization leads to a violation of the axiom of first quantization, and vice versa. Using the coordinate representation, we study fine structures of the vacuum state wave function depending on the deformation parameter q. A comparison with fine structures of the Cooper pair of superconductivity in the coordinate representation is also performed.

  19. Combined optimal quantization and lossless coding of digital holograms of three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-10-01

    Digital holography is an inherently three-dimensional (3D) technique for the capture of real-world objects. Many existing 3D imaging and processing techniques are based on the explicit combination of several 2D perspectives (or light stripes, etc.) through digital image processing. The advantage of recording a hologram is that multiple 2D perspectives can be optically combined in parallel, and in a constant number of steps independent of the hologram size. Although holography and its capabilities have been known for many decades, it is only very recently that digital holography has been practically investigated due to the recent development of megapixel digital sensors with sufficient spatial resolution and dynamic range. The applications of digital holography could include 3D television, virtual reality, and medical imaging. If these applications are realized, compression standards will have to be defined. We outline the techniques that have been proposed to date for the compression of digital hologram data and show that they are comparable to the performance of what in communication theory is known as optimal signal quantization. We adapt the optimal signal quantization technique to complex-valued 2D signals. The technique relies on knowledge of the histograms of real and imaginary values in the digital holograms. Our digital holograms of 3D objects are captured using phase-shift interferometry. We complete the compression procedure by applying lossless techniques to the quantized holographic pixels.
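    The quantization step can be illustrated with a much-simplified stand-in for the paper's histogram-based optimal quantizer: uniform scalar quantization applied independently to the real and imaginary parts of a complex-valued field. The data and bit depth below are invented for the sketch:

```python
import numpy as np

def quantize_part(x, bits):
    """Uniform scalar quantization of a real-valued array to 2**bits levels."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

def quantize_complex(field, bits=3):
    # Quantize real and imaginary parts independently; the paper instead
    # derives nonuniform levels from the histograms of each part.
    return quantize_part(field.real, bits) + 1j * quantize_part(field.imag, bits)

rng = np.random.default_rng(0)
hologram = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
q = quantize_complex(hologram, bits=3)
print(len(np.unique(q.real)))  # at most 2**bits distinct levels per part
```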

  20. Evolutionary genetics and vector adaptation of recombinant viruses of the western equine encephalitis antigenic complex provides new insights into alphavirus diversity and host switching

    PubMed Central

    Allison, Andrew B.; Stallknecht, David E.; Holmes, Edward C.

    2014-01-01

    Western equine encephalitis virus (WEEV), Highlands J virus (HJV), and Fort Morgan virus (FMV) are the sole representatives of the WEE antigenic complex of the genus Alphavirus, family Togaviridae, that are endemic to North America. All three viruses have their ancestry in a recombination event involving eastern equine encephalitis virus (EEEV) and a Sindbis (SIN)-like virus that gave rise to a chimeric alphavirus that subsequently diversified into the present-day WEEV, HJV, and FMV. Here, we present a comparative analysis of the genetic, ecological, and evolutionary relationships among these recombinant-origin viruses, including the description of a nsP4 polymerase mutation in FMV that allows it to circumvent the host range barrier to Asian tiger mosquito cells, a vector species that is normally refractory to infection. Notably, we also provide evidence that the recombination event that gave rise to these three WEEV antigenic complex viruses may have occurred in North America. PMID:25463613

  1. Applications of Basis Light-Front Quantization to QED

    NASA Astrophysics Data System (ADS)

    Vary, James P.; Zhao, Xingbo; Ilderton, Anton; Honkanen, Heli; Maris, Pieter; Brodsky, Stanley J.

    2014-06-01

    Hamiltonian light-front quantum field theory provides a framework for calculating both static and dynamic properties of strongly interacting relativistic systems. Invariant masses, correlated parton amplitudes and time-dependent scattering amplitudes, possibly with strong external time-dependent fields, represent a few of the important applications. By choosing the light-front gauge and adopting an orthonormal basis function representation, we obtain a large, sparse, Hamiltonian matrix eigenvalue problem for mass eigenstates that we solve by adapting ab initio no-core methods of nuclear many-body theory. In the continuum limit, the infinite matrix limit, we recover full covariance. Guided by the symmetries of light-front quantized theory, we adopt a two-dimensional harmonic oscillator basis for transverse modes that corresponds with eigensolutions of the soft-wall anti-de Sitter/quantum chromodynamics (AdS/QCD) model obtained from light-front holography. We outline our approach and present results for non-linear Compton scattering, evaluated non-perturbatively, where a strong and time-dependent laser field accelerates the electron and produces states of higher invariant mass i.e. final states with photon emission.

  2. Chikungunya Virus–Vector Interactions

    PubMed Central

    Coffey, Lark L.; Failloux, Anna-Bella; Weaver, Scott C.

    2014-01-01

    Chikungunya virus (CHIKV) is a mosquito-borne alphavirus that causes chikungunya fever, a severe, debilitating disease that often produces chronic arthralgia. Since 2004, CHIKV has emerged in Africa, Indian Ocean islands, Asia, Europe, and the Americas, causing millions of human infections. Central to understanding CHIKV emergence is knowledge of the natural ecology of transmission and vector infection dynamics. This review presents current understanding of CHIKV infection dynamics in mosquito vectors and its relationship to human disease emergence. The following topics are reviewed: CHIKV infection and vector life history traits including transmission cycles, genetic origins, distribution, emergence and spread, dispersal, vector competence, vector immunity and microbial interactions, and co-infection by CHIKV and other arboviruses. The genetics of vector susceptibility and host range changes, population heterogeneity and selection for the fittest viral genomes, dual host cycling and its impact on CHIKV adaptation, viral bottlenecks and intrahost diversity, and adaptive constraints on CHIKV evolution are also discussed. The potential for CHIKV re-emergence and expansion into new areas and prospects for prevention via vector control are also briefly reviewed. PMID:25421891

  3. Kerr Black Hole Entropy and its Quantization

    NASA Astrophysics Data System (ADS)

    Jiang, Ji-Jian; Li, Chuan-An; Cheng, Xie-Feng

    2016-08-01

    By constructing the four-dimensional phase space based on the observable physical quantities of the Kerr black hole and gauge transformation, the Kerr black hole entropy in the phase space was obtained. Then, considering the corresponding mechanical quantities as operators and quantizing them, the entropy spectrum of the Kerr black hole was obtained. Our results show that the Kerr black hole has an entropy spectrum with equal intervals, which is in agreement with the idea of Bekenstein. In the limit of a large event horizon, the areas of adjacent event horizons of the black hole have equal intervals. The results are consistent with the results based on the loop quantum gravity theory by Dreyer et al.

  4. Quantized spin waves in antiferromagnetic Heisenberg chains.

    PubMed

    Wieser, R; Vedmedenko, E Y; Wiesendanger, R

    2008-10-24

    The quantized stationary spin wave modes in one-dimensional antiferromagnetic spin chains with easy axis on-site anisotropy have been studied by means of Landau-Lifshitz-Gilbert spin dynamics. We demonstrate that the confined antiferromagnetic chains show a unique behavior with no equivalent in either ferromagnetism or acoustics. The discrete energy dispersion is split into two interpenetrating n and n' levels caused by the existence of two sublattices. The oscillations of individual sublattices as well as the standing wave pattern strongly depend on the boundary conditions. In particular, acoustical and optical antiferromagnetic spin waves can be found in chains with boundaries fixed (pinned) on different sublattices, while an asymmetry of oscillations appears if the two pinned ends belong to the same sublattice. PMID:18999780

  5. Optimized regulator for the quantized anharmonic oscillator

    NASA Astrophysics Data System (ADS)

    Kovacs, J.; Nagy, S.; Sailer, K.

    2015-04-01

    The energy gap between the first excited state and the ground state is calculated for the quantized anharmonic oscillator in the framework of the functional renormalization group method. The compactly supported smooth regulator is used, which includes various types of regulators as limiting cases. It was found that the value of the energy gap depends on the regulator parameters. We argue that optimization based on the disappearance of the false, broken-symmetric phase of the model leads to Litim's regulator. The least sensitivity to the regulator parameters leads, however, to an IR regulator somewhat different from Litim's, but one that can be described as a perturbatively improved, or generalized, Litim regulator and that provides analytic evolution equations, too.
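    For reference, Litim's optimized regulator mentioned in the abstract has the standard form

    R_k(p^2) = (k^2 - p^2) \Theta(k^2 - p^2),

    where \Theta is the Heaviside step function: modes with momentum below the scale k acquire a mass-like cutoff, while modes above k are left untouched.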

  6. Nonclassical vibrational states in a quantized trap

    NASA Astrophysics Data System (ADS)

    Zeng, Heping; Lin, Fucheng

    1993-09-01

    The quantized center-of-mass (c.m.) motions of a single two-level atom or ion confined into a one-dimensional harmonic potential and interacting with a single-mode classical traveling-wave laser field are examined. We demonstrate that trap quantum states with remarkable nonclassical properties such as quadrature and amplitude-squared squeezing and sub-Poissonian statistics can be generated in this simple trap model when the c.m. motion is initially in certain coherent trap states. Our analyses also indicate that there exist some time regions where the production of nonclassical vibrational states is possible even if squeezing or sub-Poissonian statistics do not appear.

  7. Flux insertion, entanglement, and quantized responses

    NASA Astrophysics Data System (ADS)

    Zaletel, Michael P.; Mong, Roger S. K.; Pollmann, Frank

    2014-10-01

    There has been much discussion about which aspects of the entanglement spectrum are in fact robust properties of a bulk phase. By making use of a trick for constructing the ground state of a system on a ring given the ground state on an infinite chain, we show why the entanglement spectrum combined with the quantum numbers of the Schmidt states encodes a variety of robust topological observables. We introduce a method that allows us to characterize phases by measuring quantized responses, such as the Hall conductance, using data contained in the entanglement spectrum. As concrete examples, we show how the Berry phase allows us to map out the phase diagram of a spin-1 model and calculate the Hall conductivity of a quantum Hall system.

  8. Adaptation of a support vector machine algorithm for segmentation and visualization of retinal structures in volumetric optical coherence tomography data sets

    PubMed Central

    Zawadzki, Robert J.; Fuller, Alfred R.; Wiley, David F.; Hamann, Bernd; Choi, Stacey S.; Werner, John S.

    2008-01-01

    Recent developments in Fourier-domain optical coherence tomography (Fd-OCT) have increased the acquisition speed of current ophthalmic Fd-OCT instruments sufficiently to allow the acquisition of volumetric data sets of human retinas in a clinical setting. The large size and three-dimensional (3D) nature of these data sets require that intelligent data processing, visualization, and analysis tools are used to take full advantage of the available information. Therefore, we have combined methods from volume visualization and data analysis in support of better visualization and diagnosis of Fd-OCT retinal volumes. Custom-designed 3D visualization and analysis software is used to view retinal volumes reconstructed from registered B-scans. We use a support vector machine (SVM) to perform semiautomatic segmentation of retinal layers and structures for subsequent analysis including a comparison of measured layer thicknesses. We have modified the SVM to gracefully handle OCT speckle noise by treating it as a characteristic of the volumetric data. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases. PMID:17867795

  9. HVS-motivated quantization schemes in wavelet image compression

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj N.

    1996-11-01

    Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system: transform, quantization, and entropy coding, should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE-quantizer, and four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.

  10. Quaternionic quantization principle in general relativity and supergravity

    NASA Astrophysics Data System (ADS)

    Kober, Martin

    2016-01-01

    A generalized quantization principle is considered, which incorporates nontrivial commutation relations of the components of the variables of the quantized theory with the components of the corresponding canonical conjugated momenta referring to other space-time directions. The corresponding commutation relations are formulated by using quaternions. At the beginning, this extended quantization concept is applied to the variables of quantum mechanics. The resulting Dirac equation and the corresponding generalized expression for plane waves are formulated and some consequences for quantum field theory are considered. Later, the quaternionic quantization principle is transferred to canonical quantum gravity. Within quantum geometrodynamics as well as the Ashtekar formalism, the generalized algebraic properties of the operators describing the gravitational observables and the corresponding quantum constraints implied by the generalized representations of these operators are determined. The generalized algebra also induces commutation relations of the several components of the quantized variables with each other. Finally, the quaternionic quantization procedure is also transferred to 𝒩 = 1 supergravity. Accordingly, the quantization principle has to be generalized to be compatible with Dirac brackets, which appear in canonical quantum supergravity.

  11. Quantization table design revisited for image/video coding.

    PubMed

    Yang, En-Hui; Sun, Chang; Meng, Jin

    2014-11-01

    Quantization table design is revisited for image/video coding where soft decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate distortion function of each source. We then show that a quantization table can be optimized in a way that the resulting distortion complies with certain behavior. Guided by this new design principle, we propose an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image coding. When applied to standard JPEG encoding, it provides more than 1.5-dB performance gain in PSNR, with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5-dB gain in PSNR with computational complexity reduced by a factor of more than 2000 when SDQ is OFF, and a 0.2-dB performance gain or more with 85% of the complexity reduced when SDQ is ON. Significant compression performance improvement is also seen when the algorithm is applied to other image coding systems proposed in the literature. PMID:25248184

  12. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  13. On the vector model of angular momentum

    NASA Astrophysics Data System (ADS)

    Saari, Peeter

    2016-09-01

    Instead of (or in addition to) the common vector diagram with cones, we propose to visualize the peculiarities of quantum mechanical angular momentum by a completely quantized 3D model. It spotlights the discrete eigenvalues and noncommutativity of components of angular momentum and corresponds to outcomes of measurements—real or computer-simulated. The latter can be easily realized by an interactive worksheet of a suitable program package of algebraic calculations. The proposed complementary method of visualization helps undergraduate students to better understand the counterintuitive properties of this quantum mechanical observable.
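    The two properties the model spotlights, discrete eigenvalues and noncommutativity of the components, can be checked directly with the standard spin-1 matrix representation of angular momentum; the sketch below (in units of hbar) is one way such a computer-simulated check might look:

```python
import numpy as np

# Standard spin-1 angular momentum matrices (units of hbar)
s = 1 / np.sqrt(2)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Discrete eigenvalues: a measurement of Lz can only yield -1, 0, or +1
print(np.round(np.linalg.eigvalsh(Lz), 6))

# Noncommutativity: [Lx, Ly] = i Lz, so the components cannot be
# simultaneously sharp -- the origin of the cone picture's ambiguity
comm = Lx @ Ly - Ly @ Lx
print(np.allclose(comm, 1j * Lz))  # True
```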

  14. Direct observation of Kelvin waves excited by quantized vortex reconnection.

    PubMed

    Fonda, Enrico; Meichle, David P; Ouellette, Nicholas T; Hormoz, Sahand; Lathrop, Daniel P

    2014-03-25

    Quantized vortices are key features of quantum fluids such as superfluid helium and Bose-Einstein condensates. The reconnection of quantized vortices and subsequent emission of Kelvin waves along the vortices are thought to be central to dissipation in such systems. By visualizing the motion of submicron particles dispersed in superfluid (4)He, we have directly observed the emission of Kelvin waves from quantized vortex reconnection. We characterize one event in detail, using dimensionless similarity coordinates, and compare it with several theories. Finally, we give evidence for other examples of wavelike behavior in our system. PMID:24704878

  15. Modified 8×8 quantization table and Huffman encoding steganography

    NASA Astrophysics Data System (ADS)

    Guo, Yongning; Sun, Shuliang

    2014-10-01

    A new secure steganography, which is based on Huffman encoding and modified quantized discrete cosine transform (DCT) coefficients, is provided in this paper. Firstly, the cover image is segmented into 8×8 blocks and modified DCT transformation is applied on each block. Huffman encoding is applied to code the secret image before embedding. DCT coefficients are quantized by a modified quantization table. Inverse DCT (IDCT) is conducted on each block. All the blocks are combined together and the stego image is finally achieved. The experiments show that the proposed method outperforms the DCT method and Mahender Singh's method in PSNR and capacity.
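    The transform-and-quantize pipeline such schemes build on can be sketched generically; this illustrates quantizing the DCT coefficients of one 8×8 block and reconstructing it, not the paper's modified table or embedding rule (the flat table below is hypothetical):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, so that D @ D.T is the identity."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

Q = np.full((8, 8), 16)  # hypothetical flat table; JPEG tables vary by frequency

D = dct_matrix()
block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)

coeffs = D @ block @ D.T               # forward 2-D DCT of one 8x8 block
quantized = np.round(coeffs / Q)       # the integers a stego scheme would modify
recovered = D.T @ (quantized * Q) @ D  # dequantize + inverse DCT
print(np.abs(recovered - block).max()) # distortion introduced by quantization
```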

  16. Direct observation of Kelvin waves excited by quantized vortex reconnection

    PubMed Central

    Fonda, Enrico; Meichle, David P.; Ouellette, Nicholas T.; Hormoz, Sahand; Lathrop, Daniel P.

    2014-01-01

    Quantized vortices are key features of quantum fluids such as superfluid helium and Bose–Einstein condensates. The reconnection of quantized vortices and subsequent emission of Kelvin waves along the vortices are thought to be central to dissipation in such systems. By visualizing the motion of submicron particles dispersed in superfluid 4He, we have directly observed the emission of Kelvin waves from quantized vortex reconnection. We characterize one event in detail, using dimensionless similarity coordinates, and compare it with several theories. Finally, we give evidence for other examples of wavelike behavior in our system. PMID:24704878

  17. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work concentrates on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).

  18. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  19. Simulation of bit-quantization influence on SAR-images

    NASA Astrophysics Data System (ADS)

    Wolframm, A. P.; Pike, T. K.

    The first European Remote Sensing satellite ERS-1 has two imaging modes, the conventional Synthetic Aperture Radar (SAR) mode and the wave mode. Two quantization schemes, 2-bit and 4-bit, have been proposed for the analogue-to-digital conversion of the video signal of the ERS-1 wave mode. This paper analyzes the influence of these two quantization schemes on ocean-wave spectra. The SAR-images were obtained through simulation using a static ocean-wave radar model and a comprehensive software SAR-system simulation model (SARSIM) on the DFVLR computing system. The results indicate that spectra produced by the 4-bit quantization are not significantly degraded from the optimum, but that the 2-bit quantization requires some gain adjustment for optimal spectral reproduction. The conclusions are supported by images and spectral plots covering the various options simulated.
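    The effect being simulated can be illustrated with a generic uniform quantizer; the test signal and parameters below are invented for the sketch and are not the ERS-1 processing chain:

```python
import numpy as np

def adc_quantize(signal, bits):
    """Mid-rise uniform quantizer, a simple model of n-bit A/D conversion."""
    levels = 2 ** bits
    lo, hi = signal.min(), signal.max()
    step = (hi - lo) / levels
    idx = np.clip(np.floor((signal - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

t = np.linspace(0, 1, 1000)
video = np.sin(2 * np.pi * 5 * t)  # stand-in for the sampled video signal
for bits in (2, 4):
    rms = np.sqrt(np.mean((adc_quantize(video, bits) - video) ** 2))
    print(bits, round(rms, 4))  # the 4-bit RMS error is markedly smaller
```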

  20. Path integral quantization of the relativistic Hopfield model

    NASA Astrophysics Data System (ADS)

    Belgiorno, F.; Cacciatori, S. L.; Dalla Piazza, F.; Doronzo, M.

    2016-03-01

    The path-integral quantization method is applied to a relativistically covariant version of the Hopfield model, which represents a very interesting mesoscopic framework for the description of the interaction between quantum light and dielectric quantum matter, with particular reference to the context of analogue gravity. In order to take into account the constraints occurring in the model, we adopt the Faddeev-Jackiw approach to constrained quantization in the path-integral formalism. In particular, we demonstrate that the propagator obtained with the Faddeev-Jackiw approach is equivalent to the one which, in the framework of Dirac canonical quantization for constrained systems, can be directly computed as the vacuum expectation value of the time-ordered product of the fields. Our analysis also provides an explicit example of quantization of the electromagnetic field in a covariant gauge and coupled with the polarization field, which is a novel contribution to the literature on the Faddeev-Jackiw procedure.

  1. Wigner quantization of some one-dimensional Hamiltonians

    SciTech Connect

    Regniers, G.; Van der Jeugt, J.

    2010-12-15

    Recently, several papers have been dedicated to the Wigner quantization of different Hamiltonians. In these examples, many interesting mathematical and physical properties have been shown. Among those we have the ubiquitous relation with Lie superalgebras and their representations. In this paper, we study two one-dimensional Hamiltonians for which the Wigner quantization is related with the orthosymplectic Lie superalgebra osp(1|2). One of them, the Hamiltonian H = xp, is popular due to its connection with the Riemann zeros, discovered by Berry and Keating on the one hand and Connes on the other. The Hamiltonian of the free particle, H_f = p^2/2, is the second Hamiltonian we will examine. Wigner quantization introduces an extra representation parameter for both of these Hamiltonians. Canonical quantization is recovered by restricting to a specific representation of the Lie superalgebra osp(1|2).

  2. Klauder's quantization in the Almost-Kaehler case

    SciTech Connect

    Maraner, P.; Onofri, E.; Tecchiolli, G.P.

    1992-05-20

    In this paper the authors prove that a regularized projection operator on the physical subspace H_phys ⊂ L_2(Ω) can be defined for a symplectic manifold Ω = T*M equipped with an Almost-Kaehler structure, provided that a suitable counterterm is added to Klauder's definition. The present result extends Klauder's quantization to the case in which geometric quantization requires a real polarization.

  3. Quantum Hamilton Mechanics and the Theory of Quantization Conditions

    NASA Astrophysics Data System (ADS)

    Bracken, Paul

    A formulation of quantum mechanics in terms of complex canonical variables is presented. It is seen that these variables are governed by Hamilton's equations. It is shown that the action variables need to be quantized. By formulating a quantum Hamilton equation for the momentum variable, the energies for two different systems are determined. Quantum canonical transformation theory is introduced and the geometrical significance of a set of generalized quantization conditions which are obtained is discussed.

  4. Quantization of β-Fermi-Pasta-Ulam Lattice with Nearest and Next-nearest Neighbour Interactions

    NASA Astrophysics Data System (ADS)

    Dey, Bishwajyoti

    2015-03-01

    We quantize the β-Fermi-Pasta-Ulam (FPU) model with nearest and next-nearest neighbour (NNN) interactions using a number conserving approximation and a numerically exact diagonalization method. Our numerical mean field bi-phonon spectrum shows excellent agreement with the analytic mean field results of Ivic and Tsironis, except for the wave vector at the midpoint of the Brillouin zone. We then relax the mean field approximation and calculate the eigenvalue spectrum of the full Hamiltonian. We show the existence of multi-phonon bound states and analyze the properties of these states by varying the system parameters. From the calculation of the spatial correlation function we then show that these multi-phonon bound states are particle like states with finite spatial correlation. Accordingly we identify these multi-phonon bound states as the quantum equivalent of the breather solutions of the corresponding classical FPU model. The four-phonon spectrum of the system is then obtained and its properties are studied. We then generalize the study to an extended range interaction and quantize the β-FPU model with NNN interactions. We analyze the effects of the NNN interactions on the eigenvalue spectrum and the correlation functions of the system.

  5. Binned progressive quantization for compressive sensing.

    PubMed

    Wang, Liangjun; Wu, Xiaolin; Shi, Guangming

    2012-06-01

    Compressive sensing (CS) has been recently and enthusiastically promoted as a joint sampling and compression approach. The advantages of CS over conventional signal compression techniques are architectural: the CS encoder is made signal independent and computationally inexpensive by shifting the bulk of system complexity to the decoder. While these properties of CS allow signal acquisition and communication in some severely resource-deprived conditions that render conventional sampling and coding impossible, they are accompanied by rather disappointing rate-distortion performance. In this paper, we propose a novel coding technique that rectifies, to a certain extent, the problem of poor compression performance of CS and, at the same time, maintains the simplicity and universality of the current CS encoder design. The main innovation is a scheme of progressive fixed-rate scalar quantization with binning that enables the CS decoder to exploit hidden correlations between CS measurements, which was overlooked in the existing literature. Experimental results are presented to demonstrate the efficacy of the new CS coding technique. Encouragingly, on some test images, the new CS technique matches or even slightly outperforms JPEG. PMID:22374362
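    The binning idea described above can be illustrated with a minimal sketch (not the paper's implementation): a uniform scalar quantizer whose full index is transmitted modulo a number of bins, so that a decoder holding a reasonable prediction of the measurement can resolve the ambiguity. The function names and the prediction-based decoder are illustrative assumptions.

```python
import numpy as np

def quantize_with_binning(x, step, n_bins):
    """Uniform scalar quantization followed by binning (index mod n_bins).

    The encoder sends only the bin index; a decoder with a good
    prediction of x can recover the full quantizer index by choosing,
    among indices congruent to the bin index, the one closest to its
    prediction.
    """
    q = int(np.round(x / step))          # full quantizer index
    return q % n_bins                    # transmitted bin index

def dequantize_with_prediction(bin_idx, prediction, step, n_bins):
    """Resolve the binned index using the decoder-side prediction."""
    q_pred = int(np.round(prediction / step))
    # candidate full indices congruent to bin_idx near the prediction
    base = q_pred - (q_pred % n_bins) + bin_idx
    candidates = [base - n_bins, base, base + n_bins]
    q = min(candidates, key=lambda c: abs(c - q_pred))
    return q * step
```

    Binning works only when the decoder's prediction error is smaller than half the bin spacing (n_bins * step / 2); in the CS setting the prediction would come from the correlations between measurements.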

  6. Observation of quantized conductance in neutral matter

    NASA Astrophysics Data System (ADS)

    Husmann, Dominik; Krinner, Sebastian; Lebrat, Martin; Grenier, Charles; Nakajima, Shuta; Häusler, Samuel; Brantut, Jean-Philippe; Esslinger, Tilman

    2015-05-01

    In transport experiments, the quantum nature of matter becomes directly evident when changes in conductance occur only in discrete steps, with a size determined solely by Planck's constant h. Here we report the observation of quantized conductance in the transport of neutral atoms driven by a chemical potential bias. We use high-resolution lithography to shape light potentials that realize either a quantum point contact or a quantum wire for atoms. These constrictions are imprinted on a quasi-two-dimensional ballistic channel connecting the reservoirs. By varying either a gate potential or the transverse confinement of the constrictions, we observe distinct plateaux in the atom conductance. The conductance in the first plateau is found to be equal to the universal conductance quantum, 1/h. We use Landauer's formula to model our results and find good agreement for low gate potentials, with all parameters determined a priori. We eventually explore the behavior of a strongly interacting Fermi gas in the same configuration, and the consequences of the emergence of superfluidity.

  7. The Hopfield model revisited: covariance and quantization

    NASA Astrophysics Data System (ADS)

    Belgiorno, F.; Cacciatori, S. L.; Dalla Piazza, F.

    2016-01-01

    There are several possible applications of quantum electrodynamics in dielectric media which require a quantum description for the electromagnetic field interacting with matter fields. The associated quantum models can refer to macroscopic electromagnetic fields or, alternatively, to mesoscopic fields (polarization fields) describing an effective interaction between electromagnetic field and matter fields. We adopt the latter approach, and focus on the Hopfield model for the electromagnetic field in a dielectric dispersive medium in a framework in which space-time dependent mesoscopic parameters occur, like susceptibility, matter resonance frequency, and also coupling between electromagnetic field and polarization field. Our most direct goal is to describe in a phenomenological way a space-time varying dielectric perturbation induced by means of the Kerr effect in nonlinear dielectric media. This extension of the model is implemented by means of a Lorentz-invariant Lagrangian which, for constant microscopic parameters, and in the rest frame, coincides with the standard one. Moreover, we deduce a covariant scalar product and provide a canonical quantization scheme which takes into account the constraints implicit in the model. Examples of viable applications are indicated.

  8. Dynamics of Quantized Vortices Before Reconnection

    NASA Astrophysics Data System (ADS)

    Andryushchenko, V. A.; Kondaurova, L. P.; Nemirovskii, S. K.

    2016-04-01

    The main goal of this paper is to investigate numerically the dynamics of quantized vortex loops just before reconnection at finite temperature, when mutual friction essentially changes the evolution of the lines. Modeling is performed on the basis of the vortex filament method using the full Biot-Savart equation. It was discovered that the initial position of the vortices and the temperature strongly affect the time dependence of the minimum distance δ(t) between the tips of two vortex loops. In particular, in some cases, the shrinking and collapse of vortex loops due to mutual friction occur earlier than the reconnection, thereby canceling the latter. However, this relationship takes a universal square-root form δ(t) = √((κ/2π)(t_* - t)) at distances smaller than those satisfying the Schwarz reconnection criterion, when the nonlocal contribution to the Biot-Savart equation becomes about equal to the local contribution. In the "universal" stage, the nearest parts of the vortices form a pyramid-like structure with angles that depend neither on the initial configuration nor on the temperature.

  9. Light-cone quantization and hadron structure

    SciTech Connect

    Brodsky, S.J.

    1996-04-01

    Quantum chromodynamics provides a fundamental description of hadronic and nuclear structure and dynamics in terms of elementary quark and gluon degrees of freedom. In practice, the direct application of QCD to reactions involving the structure of hadrons is extremely complex because of the interplay of nonperturbative effects such as color confinement and multi-quark coherence. In this talk, the author will discuss light-cone quantization and the light-cone Fock expansion as a tractable and consistent representation of relativistic many-body systems and bound states in quantum field theory. The Fock state representation in QCD includes all quantum fluctuations of the hadron wavefunction, including far off-shell configurations such as intrinsic strangeness and charm and, in the case of nuclei, hidden color. The Fock state components of the hadron with small transverse size, which dominate hard exclusive reactions, have small color dipole moments and thus diminished hadronic interactions. Thus QCD predicts minimal absorptive corrections, i.e., color transparency for quasi-elastic exclusive reactions in nuclear targets at large momentum transfer. In other applications, such as the calculation of the axial, magnetic, and quadrupole moments of light nuclei, the QCD relativistic Fock state description provides new insights which go well beyond the usual assumptions of traditional hadronic and nuclear physics.

  10. Interactions between unidirectional quantized vortex rings

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Evans, M. L.; Brown, R. A.; Walmsley, P. M.; Golov, A. I.

    2016-08-01

    We have used the vortex filament method to numerically investigate the interactions between pairs of quantized vortex rings that are initially traveling in the same direction but with their axes offset by a variable impact parameter. The interaction of two circular rings of comparable radii produces outcomes that can be categorized into four regimes, dependent only on the impact parameter; the two rings can either miss each other on the inside or outside or reconnect leading to final states consisting of either one or two deformed rings. The fraction of energy that went into ring deformations and the transverse component of velocity of the rings are analyzed for each regime. We find that rings of very similar radius only reconnect for a very narrow range of the impact parameter, much smaller than would be expected from the geometrical cross-section alone. In contrast, when the radii of the rings are very different, the range of impact parameters producing a reconnection is close to the geometrical value. A second type of interaction considered is the collision of circular rings with a highly deformed ring. This type of interaction appears to be a productive mechanism for creating small vortex rings. The simulations are discussed in the context of experiments on colliding vortex rings and quantum turbulence in superfluid helium in the zero-temperature limit.

  11. Canonical quantization of Galilean covariant field theories

    NASA Astrophysics Data System (ADS)

    Santos, E. S.; de Montigny, M.; Khanna, F. C.

    2005-11-01

    The Galilean-invariant field theories are quantized by using the canonical method and the five-dimensional Lorentz-like covariant expressions of non-relativistic field equations. This method is motivated by the fact that the extended Galilei group in 3 + 1 dimensions is a subgroup of the inhomogeneous Lorentz group in 4 + 1 dimensions. First, we consider complex scalar fields, where the Schrödinger field follows from a reduction of the Klein-Gordon equation in the extended space. The underlying discrete symmetries are discussed, and we calculate the scattering cross-sections for the Coulomb interaction and for the self-interacting term λΦ4. Then, we turn to the Dirac equation, which, upon dimensional reduction, leads to the Lévy-Leblond equations. Like its relativistic analogue, the model allows for the existence of antiparticles. Scattering amplitudes and cross-sections are calculated for the Coulomb interaction, the electron-electron and the electron-positron scattering. These examples show that the so-called 'non-relativistic' approximations, obtained in low-velocity limits, must be treated with great care to be Galilei-invariant. The non-relativistic Proca field is discussed briefly.

  12. Thickness quantization in a reorientation transition

    NASA Astrophysics Data System (ADS)

    Venus, David; He, Gengming; Winch, Harrison; Belanger, Randy

    The reorientation transition of an ultrathin film from perpendicular to in-plane magnetization is driven by a competition between shape and surface anisotropy. It is accompanied by a "stripe" domain structure that evolves as the reorientation progresses. Often, an n-layer film has stable perpendicular magnetization and an (n+1)-layer film has stable in-plane magnetization. If the domain walls are not pinned, the long-range stripe domain pattern averages over this structure so that the transition occurs at a non-integer layer thickness. We report in situ experimental measurements of the magnetic susceptibility (via MOKE) of the reorientation transition in Fe/2 ML Ni/W(110) films as a function of thickness as they are deposited at room temperature. In addition to a peak at the reorientation transition, we observe a strong precursor due to thickness quantization in atomic layers. This peak is described quantitatively by the response of small 3-layer-thick islands with in-plane anisotropy in a sea of 2-layer Fe with perpendicular anisotropy. The fitted parameters give an estimate of the island size at which the response disappears. This size corresponds to a domain wall thickness, so that the islands become locally in-plane, demonstrating the self-consistency of the model.

  13. An improved adaptive deblocking filter for MPEG video decoder

    NASA Astrophysics Data System (ADS)

    Kwon, Do-Kyoung; Shen, Mei-Yin; Kuo, C.-C. Jay

    2005-03-01

    A highly adaptive deblocking algorithm is proposed for MPEG video in this research. In comparison with previous work in this area, the proposed deblocking filter improves in three aspects. First, the proposed algorithm is adaptive to the change of the quantization parameter (QP). Since blocking artifacts between two blocks encoded with different QPs tend to be more visible due to the quality difference, filters should be able to adapt dynamically to the QP change between blocks. Second, the proposed algorithm classifies the block boundary into three different region modes based on local region characteristics. The three modes are active, smooth and dormant regions. The active region represents a complex region with details and high activities, while the smooth and dormant regions refer to moderately flat and extremely flat regions, respectively. By applying filters of different strengths to each region mode, the proposed algorithm can minimize undesirable blur so that both subjective and objective qualities improve for various types of sequences at a wide range of bitrates. Finally, the proposed algorithm also provides a way to determine the threshold values. The proposed adaptive deblocking algorithm requires several thresholds for determining the proper region modes and filters. Since the quality of image sequences after filtering depends largely on the threshold values, they have to be determined carefully. In the proposed algorithm, thresholds are adapted to the strength of the blocking artifact and, as a result, to various encoding parameters such as the QP, the absolute difference between QPs, the coding type, and motion vectors. It is shown by experimental results that the proposed algorithm can achieve 0.2-0.4 dB gains for I- and P-frames, and 0.1-0.3 dB gains for B-frames when bit streams are encoded using the TM5 rate control algorithm.
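    The three-mode boundary classification described above can be sketched as follows; the activity measure and the QP-scaled thresholds (t_dormant, t_smooth) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def classify_boundary(pixels, qp, t_dormant=2, t_smooth=8):
    """Classify a 1-D strip of pixels straddling a block boundary.

    Thresholds scale with the quantization parameter QP, since
    blocking artifacts are stronger at coarser quantization.
    t_dormant and t_smooth are illustrative, not the paper's values.
    """
    activity = np.sum(np.abs(np.diff(pixels)))   # local gradient activity
    if activity < t_dormant * qp:
        return "dormant"    # extremely flat: strongest smoothing
    elif activity < t_smooth * qp:
        return "smooth"     # moderately flat: medium-strength filter
    return "active"         # detailed region: weak or no filtering
```

    A deblocking pass would then select a filter strength per mode, e.g. a long low-pass kernel for "dormant" boundaries and no filtering for "active" ones, which is how the algorithm avoids blurring genuine detail.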

  14. Polarization of He II films upon the relative motion of the superfluid component and the quantized vortices

    NASA Astrophysics Data System (ADS)

    Adamenko, I. N.; Nemchenko, E. K.

    2016-04-01

    Theoretical study of the electrical activity of the saturated superfluid helium (He II) film upon the relative motion of the normal and superfluid components in the film was performed. The polarization vector due to the dipole moments of the quantized vortex rings in He II in the field of van der Waals forces was calculated taking into account the relative motion of the normal and superfluid components. An explicit analytical expression for the electric potential difference arising upon the relative motion of the normal and superfluid components in a torsional oscillator was derived. The obtained time, temperature and relative velocity dependences of the potential difference were in agreement with the experimental data.

  15. Momentum space orthogonal polynomial projection quantization

    NASA Astrophysics Data System (ADS)

    Handy, C. R.; Vrinceanu, D.; Marth, C. B.; Gupta, R.

    2016-04-01

    The orthogonal polynomial projection quantization (OPPQ) is an algebraic method for solving Schrödinger's equation by representing the wave function as an expansion Ψ(x) = Σ_n Ω_n P_n(x) R(x) in terms of polynomials P_n(x) orthogonal with respect to a suitable reference function R(x), which decays asymptotically not faster than the bound state wave function. The expansion coefficients Ω_n are obtained as linear combinations of power moments μ_p = ∫ x^p Ψ(x) dx. In turn, the μ_p's are generated by a linear recursion relation derived from Schrödinger's equation from an initial set of low order moments. It can be readily argued that for square integrable wave functions representing physical states lim_{n→∞} Ω_n = 0. Rapidly converging discrete energies are obtained by setting Ω coefficients to zero at arbitrarily high order. This paper introduces an extension of OPPQ in momentum space by using the representation Φ(k) = Σ_n Ξ_n Q_n(k) T(k), where Q_n(k) are polynomials orthogonal with respect to a suitable reference function T(k). The advantage of this new representation is that it can help solve problems for which there is no coordinate space moment equation. This is because the power moments in momentum space are the Taylor expansion coefficients, which are recursively calculated via Schrödinger's equation. We show the convergence of this new method for the sextic anharmonic oscillator and give an algebraic treatment of the Gross-Pitaevskii nonlinear equation.

  16. Quantized Concentration Gradient in Picoliter Scale

    NASA Astrophysics Data System (ADS)

    Hong, Jong Wook

    2010-10-01

    Generation of concentration gradients is of paramount importance to the success of reactions in cell biology, molecular biology, biochemistry, drug discovery, chemotaxis, cell culture, biomaterials synthesis, and tissue engineering. In conventional practice, concentration gradients are achieved using pipettes, test tubes, 96-well assay plates, and robotic systems. Conventional methods require milliliter or microliter sample volumes for typical experiments with multiple and sequential reactions, which is a challenge for precious samples that are strictly limited in quantity or costly to obtain. To overcome this challenge, fluidic devices with micrometer-scale channels have been developed. These devices, however, restrict changes of concentration because the gradient is fixed by the fluidic channel geometry [Jambovane, S.; Duin, E. C.; Kim, S-K.; Hong, J. W., Determination of Kinetic Parameters, KM and kcat, with a Single Experiment on a Chip. Analytical Chemistry, 81, (9), 3239-3245, 2009; Jambovane, S.; Hong, J. W., Lorenz-like Chaotic System on a Chip. In The 14th International Conference on Miniaturized Systems for Chemistry and Life Sciences (MicroTAS), The Netherlands, October, 2010]. Here, we present a unique microfluidic system that can generate a quantized concentration gradient by using a series of droplets generated by a mechanical-valve-based injection method [Jambovane, S.; Rho, H.; Hong, J., Fluidic Circuit based Predictive Model of Microdroplet Generation through Mechanical Cutting. In ASME International Mechanical Engineering Congress & Exposition, Lake Buena Vista, Florida, USA, October, 2009; Lee, W.; Jambovane, S.; Kim, D.; Hong, J., Predictive Model on Micro Droplet Generation through Mechanical Cutting. Microfluidics and Nanofluidics, 7, (3), 431-438, 2009].

  17. Perturbation theory in light-cone quantization

    SciTech Connect

    Langnau, A.

    1992-01-01

    A thorough investigation of light-cone properties which are characteristic of higher dimensions is very important. The easiest way of addressing these issues is by first analyzing the perturbative structure of light-cone field theories. Perturbative studies cannot substitute for an analysis of problems related to a nonperturbative approach. However, in order to lay the groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least at second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of the Feynman calculations in higher-order perturbation theory, it is desirable to automate Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we elaborate on here is known as light-cone perturbation theory.

  18. Remote Sensing and Quantization of Analog Sensors

    NASA Technical Reports Server (NTRS)

    Strauss, Karl F.

    2011-01-01

    This method enables sensing and quantization of analog strain gauges. By manufacturing a piezoelectric sensor stack in parallel (physically) with a piezoelectric actuator stack, the capacitance of the sensor stack varies in exact proportion to the exertion applied by the actuator stack. This, in turn, varies the output frequency of the local sensor oscillator. The output, F_out, is fed to a phase detector, which is driven by a stable reference, F_ref. The output of the phase detector is a square waveform, D_out, whose duty cycle, t_W, varies in exact proportion according to whether F_out is higher or lower than F_ref. In this design, should F_out be precisely equal to F_ref, the waveform has an exact 50/50 duty cycle. The waveform D_out is generally of very low frequency, suitable for safe transmission over long distances without corruption. The active portion of the waveform, t_W, gates a remotely located counter, which is driven by a stable oscillator (source) of such frequency as to digitize t_W to the resolution required by the application. The advantage of this scheme is that it removes the need either to send very low-level signals (viz., the direct output of the sensors) over distances greater than about one-half meter or to transmit widely varying higher frequencies over significant distances, thereby eliminating interference caused by ineffective shielding, both in terms of beat-frequency generation and in-situ EMI (electromagnetic interference). It also results in a significant reduction in shielding mass.
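    The duty-cycle scheme above can be sketched numerically. The linear phase-detector model and its 'sensitivity' parameter are illustrative assumptions, not part of the reported design; only the gated-counter digitization follows the record directly.

```python
def duty_cycle_from_frequencies(f_out, f_ref, sensitivity=0.5):
    """Idealized phase-detector model: the duty cycle is 50% when
    f_out == f_ref and shifts in proportion to the fractional
    frequency offset.  'sensitivity' is an illustrative assumption."""
    offset = (f_out - f_ref) / f_ref
    return min(max(0.5 + sensitivity * offset, 0.0), 1.0)

def gated_count(duty, period, f_clock):
    """Remote counter gated by the active portion t_W = duty * period
    and driven by a stable clock f_clock (Hz); the count digitizes
    t_W to a resolution set by the clock frequency."""
    t_w = duty * period
    return round(t_w * f_clock)   # clock edges counted during the gate
```

    Raising f_clock refines the digitization of t_W, which is the record's point about choosing the remote oscillator frequency to match the required resolution.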

  19. Some effects of quantization on a noiseless phase-locked loop. [sampling phase errors

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1979-01-01

    If the VCO of a phase-locked receiver is to be replaced by a digitally programmed synthesizer, the phase error signal must be sampled and quantized. Effects of quantizing after the loop filter (frequency quantization) or before (phase error quantization) are investigated. Constant Doppler or Doppler rate noiseless inputs are assumed. The main result gives the phase jitter due to frequency quantization for a Doppler-rate input. By itself, however, frequency quantization is impractical because it makes the loop dynamic range too small.

  20. Canonical quantization theory of general singular QED system of Fermi field interaction with generally decomposed gauge potential

    SciTech Connect

    Zhang, Zhen-Lu; Huang, Yong-Chang

    2014-03-15

    Quantization theory gives rise to transverse photons for the traditional Coulomb gauge condition and to scalar and longitudinal photons for the Lorentz gauge condition. We describe a new approach to quantizing the general singular QED system by decomposing a general gauge potential into two orthogonal components in general field theory, which preserves scalar and longitudinal photons. Using these two orthogonal components, we obtain an expansion of the gauge-invariant Lagrangian density, from which we deduce the two orthogonal canonical momenta conjugate to the two components of the gauge potential. We then obtain the canonical Hamiltonian in the phase space and deduce the inherent constraints. In terms of the naturally deduced gauge condition, the quantization results are exactly consistent with those in the traditional Coulomb gauge condition and superior to those in the Lorentz gauge condition. Moreover, we find that all the nonvanishing quantum commutators are permanently gauge-invariant. A system can only be measured in physical experiments when it is gauge-invariant. A vanishing longitudinal vector potential would mean that the gauge invariance of the general QED system cannot be retained. This is similar to the nucleon spin crisis dilemma, an example of a physical quantity that cannot be exactly measured experimentally. However, the theory here resolves this dilemma by keeping the gauge invariance of the general QED system. Highlights:
    • We decompose the general gauge potential into two orthogonal parts according to general field theory.
    • We identify a new approach for quantizing the general singular QED system.
    • The results obtained are superior to those for the Lorentz gauge condition.
    • The theory presented solves dilemmas such as the nucleon spin crisis.

  1. Estimation with Wireless Sensor Networks: Censoring and Quantization Perspectives

    NASA Astrophysics Data System (ADS)

    Msechu, Eric James

    In the last decade there has been an increase in application areas for wireless sensor networks (WSNs), which can be attributed to advances in the enabling sensor technology. These advances include integrated circuit miniaturization and mass production of highly reliable hardware for sensing, processing, and data storage at lower cost. In many emerging applications, massive amounts of data are acquired by a large number of low-cost sensing devices. The design of signal processing algorithms for these WSNs, unlike in wireless networks designed for communications, faces a different set of challenges due to the resource constraints sensor nodes must adhere to. These include: (i) limited on-board memory for storage; (ii) a limited energy source, typically based on irreplaceable battery cells; (iii) radios with limited transmission range; and (iv) stringent data rates, either due to a need to save energy or due to the limited radio-frequency bandwidth allocated to sensor networks. This work addresses distributed data reduction at sensor nodes using a combination of measurement censoring and measurement quantization. The WSN is envisioned for decentralized estimation of either a vector of unknown parameters in a maximum likelihood framework or a random signal using Bayesian optimality criteria. Early research in data-reduction methods involved a centralized computation platform directing the selection of the most informative data and focusing computational and communication resources on the selected data only. Robustness against failure of the central computation unit, as well as the need for iterative data selection and data gathering in some applications (e.g., real-time navigation systems), motivates a rethinking of the centralized data-selection approach. Recently, research focus has been on collaborative signal processing in sensor neighborhoods for the data-reduction step. It is in this spirit that investigation of methods

  2. Energy-Constrained Optimal Quantization for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiliang; Giannakis, Georgios B.

    2007-12-01

    As low power, low cost, and longevity of transceivers are major requirements in wireless sensor networks, optimizing their design under energy constraints is of paramount importance. To this end, we develop quantizers under strict energy constraints to effect optimal reconstruction at the fusion center. Propagation, modulation, as well as transmitter and receiver structures are jointly accounted for using a binary symmetric channel model. We first optimize quantization for reconstructing a single sensor's measurement, deriving the optimal number of quantization levels as well as the optimal energy allocation across bits. The constraints take into account not only the transmission energy but also the energy consumed by the transceiver's circuitry. Furthermore, we consider multiple sensors collaborating to estimate a deterministic parameter in noise. Similarly, the optimum energy allocation and optimum number of quantization bits are derived and tested with simulated examples. Finally, we study the effect of channel coding on the reconstruction performance under strict energy constraints and jointly optimize the number of quantization levels as well as the number of channel uses.
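
    The joint optimization described above is specific to the paper, but its core subproblem, placing reconstruction levels to minimize distortion, can be illustrated with a generic Lloyd (Lloyd-Max) iteration. The sketch below is not the authors' algorithm and ignores the energy and channel model entirely; it only shows how quantization levels for a scalar measurement can be computed numerically.

```python
import numpy as np

def lloyd_max(samples, n_levels, iters=50):
    """Design a scalar quantizer by alternating nearest-level
    assignment and centroid updates (generic Lloyd iteration)."""
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        # Assign each sample to its nearest reconstruction level.
        idx = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
        # Move each level to the centroid of its assigned samples.
        for k in range(n_levels):
            if np.any(idx == k):
                levels[k] = samples[idx == k].mean()
    return np.sort(levels)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
levels = lloyd_max(x, n_levels=4)
xq = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]
mse = np.mean((x - xq) ** 2)
```

    The energy-aware design in the paper would additionally weigh each extra quantization bit against its transmission and circuit-energy cost; here only the distortion side of that trade-off is shown.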

  3. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  4. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
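
    To make the idea of a customized table concrete, here is a minimal sketch of DCT-domain quantization of an 8x8 block. The quantization table below is made up for illustration (fine steps at low spatial frequencies, coarse at high ones) and is not one of the paper's optimized tables.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows are frequencies).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, qtable):
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T            # forward 2-D DCT
    q = np.round(coeffs / qtable)       # coarser steps -> more zeros
    return C.T @ (q * qtable) @ C       # dequantize and inverse DCT

# Made-up table: fine steps at low frequencies, coarse at high ones.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
qtable = 4.0 + 4.0 * (u + v)

block = 10.0 * np.outer(np.arange(8), np.ones(8))   # smooth ramp block
rec = quantize_block(block, qtable)
```

    A customized table in the paper's sense would be tuned to the statistics of the specific image or stream rather than to this fixed frequency ramp.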

  5. Complex cross correlators with three-level quantization: Design tolerances

    NASA Astrophysics Data System (ADS)

    D'Addario, L. R.; Thompson, A. R.; Schwab, F. R.; Granlund, J.

    1984-06-01

    Digital correlators have been analyzed in considerable detail for the case in which the components behave ideally. Consideration is given here to the effects of the major deviations from ideal behavior that are encountered in practice. Although attention is restricted to three-level quantization, the fairly general case of the complex cross correlator is investigated. Among the errors analyzed are quantization threshold errors, quantizer indecision regions, sampler timing skews, quadrature network errors, and numerical errors in algorithms for converting digital measurements to equivalent analog correlations. It is assumed that all the digital operations, including delay, multiplication, summing, and storage, can be implemented without errors. The study is prompted by the requirement for cross-power measurements in Fourier synthesis radio telescopes.

  6. Quantized Space-Time and Black Hole Entropy

    NASA Astrophysics Data System (ADS)

    Ma, Meng-Sen; Li, Huai-Fan; Zhao, Ren

    2014-06-01

    On the basis of Snyder’s idea of quantized space-time, we derive a new generalized uncertainty principle and a new modified density of states. Accordingly, we obtain a corrected black hole entropy with a logarithmic correction term by employing the new generalized uncertainty principle. In addition, we recalculate the entropy of spherically symmetric black holes using statistical mechanics. Because of the use of the minimal length in quantized space-time as a natural cutoff, the entanglement entropy we obtained does not have the usual form A/4 but has a coefficient dependent on the minimal length, which shows differences between black hole entropy in quantized space-time and that in continuous space-time.

  7. Quantized chaotic dynamics and non-commutative KS entropy

    SciTech Connect

    Klimek, S.; Lesniewski, A.

    1996-06-01

    We study the quantization of two examples of classically chaotic dynamics: the Anosov dynamics of "cat maps" on a two-dimensional torus, and the dynamics of baker's maps. Each of these dynamics is implemented as a discrete group of automorphisms of a von Neumann algebra of functions on a quantized torus. We compute the non-commutative generalization of the Kolmogorov-Sinai entropy, namely the Connes-Størmer entropy, of the generator of this group, and find that its value is equal to the classical value. This can be interpreted as a sign of persistence of chaotic behavior in a dynamical system under quantization. Copyright © 1996 Academic Press, Inc.

  8. Possibility of gravitational quantization under the teleparallel theory of gravitation

    NASA Astrophysics Data System (ADS)

    Ming, Kian; Triyanta; Kosasih, J. S.

    2016-03-01

    Teleparallel gravity (TG), or the teleparallel equivalent of general relativity (TEGR), is an alternative gauge theory for gravity. In TG, tetrad fields are defined to express gravitational fields and act like gauge potentials in standard gauge theory. The Lagrangians for the gravitational field in TG and for the Yang-Mills field in standard gauge theory differ because of the different indices carried by the components of the corresponding fields: two external indices for the tetrad field, and internal and external indices for the Yang-Mills field. Different types of indices lead to different possible contractions and thus to different expressions for the Lagrangians of the Yang-Mills field and the tetrad field. As TG is a gauge theory, it is natural to quantize gravity in TG by applying the same quantization procedure as in standard gauge theory. Here we discuss the possibility of quantizing gravity, canonically and functionally, within the framework of TG.

  9. Quantization of gauge fields, graph polynomials and graph homology

    SciTech Connect

    Kreimer, Dirk; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This effectively implies a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. -- Highlights: •We derive gauge theory Feynman integrands from scalar field theory with 3-valent vertices. •We clarify the role of graph homology and cycle homology. •We use parametric renormalization and the new corolla polynomial.

  10. [An algorithm of a wavelet-based medical image quantization].

    PubMed

    Hou, Wensheng; Wu, Xiaoying; Peng, Chenglin

    2002-12-01

    The compression of medical images is key to the study of telemedicine and PACS. We have studied the statistical distribution of wavelet subimage coefficients and concluded that it closely resembles a Laplacian distribution. Based on the statistical properties of image wavelet decomposition, an image quantization algorithm is proposed. In this algorithm, we selected the sample standard deviation as the key quantization threshold in every wavelet subimage. Tests have shown that the main advantages of this algorithm are its computational simplicity and the predictability of coefficients within each quantization threshold range. High compression efficiency can also be obtained. Therefore, this algorithm can potentially be used in telemedicine and PACS. PMID:12561372
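
    The quantization rule described above can be sketched as follows, under the assumption (ours, not necessarily the authors') that the sample standard deviation of each subband is used directly as a uniform quantization step. A one-level Haar decomposition stands in for whatever wavelet basis the paper actually used.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def quantize_subband(band):
    # Uniform quantizer whose step is the subband's sample standard
    # deviation (our reading of the paper's threshold rule).
    step = band.std(ddof=1)
    if step == 0:
        return band.copy()
    return np.round(band / step) * step

rng = np.random.default_rng(1)
img = rng.normal(loc=128.0, scale=20.0, size=(64, 64))
LL, LH, HL, HH = haar2d(img)
LHq = quantize_subband(LH)   # detail subband after quantization
```

    Because the coefficients of the detail subbands are near-Laplacian and concentrated around zero, a step tied to their spread sends most of them to zero, which is where the compression gain comes from.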

  11. Adaptive Sampling using Support Vector Machines

    SciTech Connect

    D. Mandelli; C. Smith

    2012-11-01

    Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: timing of events is not explicitly modeled; ordering of events is preset by the analyst; and the modeling of complex accident scenarios is driven by expert judgment. For these reasons, there is currently increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.

  12. Effective Field Theory of Fractional Quantized Hall Nematics

    SciTech Connect

    Mulligan, Michael; Nayak, Chetan; Kachru, Shamit (Stanford U., Phys. Dept.; SLAC)

    2012-06-06

    We present a Landau-Ginzburg theory for a fractional quantized Hall nematic state and the transition to it from an isotropic fractional quantum Hall state. This justifies Lifshitz-Chern-Simons theory - which is shown to be its dual - on a more microscopic basis and enables us to compute a ground state wave function in the symmetry-broken phase. In such a state of matter, the Hall resistance remains quantized while the longitudinal DC resistivity due to thermally-excited quasiparticles is anisotropic. We interpret recent experiments at Landau level filling factor ν = 7/3 in terms of our theory.

  13. Semiclassical Landau quantization of spin-orbit coupled systems

    NASA Astrophysics Data System (ADS)

    Li, Tommy; Horovitz, Baruch; Sushkov, Oleg P.

    2016-06-01

    A semiclassical quantization condition is derived for Landau levels in general spin-orbit coupled systems. This generalizes the Onsager quantization condition via a matrix-valued phase which describes spin dynamics along the classical cyclotron trajectory. We discuss measurement of the matrix phase via magnetic oscillations and electron spin resonance, which may be used to probe the spin structure of the precessing wave function. We compare the resulting semiclassical spectrum with exact results which are obtained for a variety of spin-orbit interactions in two-dimensional systems.

  14. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  15. Exact quantization conditions for the relativistic Toda lattice

    NASA Astrophysics Data System (ADS)

    Hatsuda, Yasuyuki; Mariño, Marcos

    2016-05-01

    Inspired by recent connections between spectral theory and topological string theory, we propose exact quantization conditions for the relativistic Toda lattice of N particles. These conditions involve the Nekrasov-Shatashvili free energy, which resums the perturbative WKB expansion, but they require in addition a non-perturbative contribution, which is related to the perturbative result by an S-duality transformation of the Planck constant. We test the quantization conditions against explicit calculations of the spectrum for N = 3. Our proposal can be generalized to arbitrary toric Calabi-Yau manifolds and might solve the corresponding quantum integrable system of Goncharov and Kenyon.

  16. Probing quantized Einstein-Rosen waves with massless scalar matter

    SciTech Connect

    Fernando Barbero, J. G.; Garay, Inaki; Villasenor, Eduardo J. S.

    2006-08-15

    The purpose of this paper is to discuss in detail the use of scalar matter coupled to linearly polarized Einstein-Rosen waves as a probe to study quantum gravity in the restricted setting provided by this symmetry reduction of general relativity. We will obtain the relevant Hamiltonian and quantize it with the techniques already used for the purely gravitational case. Finally, we will discuss the use of particlelike modes of the quantized fields to operationally explore some of the features of quantum gravity within this framework. Specifically, we will study two-point functions, the Newton-Wigner propagator, and radial wave functions for one-particle states.

  17. On precanonical quantization of gravity in spin connection variables

    SciTech Connect

    Kanatchikov, I. V.

    2013-02-21

    The basics of precanonical quantization and its relation to the functional Schroedinger picture in QFT are briefly outlined. The approach is then applied to quantization of Einstein's gravity in vielbein and spin connection variables and leads to a quantum dynamics described by the covariant Schroedinger equation for the transition amplitudes on the bundle of spin connection coefficients over space-time, that yields a novel quantum description of space-time geometry. A toy model of precanonical quantum cosmology based on the example of flat FLRW universe is considered.

  18. Experiments to Study Photoemission of Electron Bubbles from Quantized Vortices

    SciTech Connect

    Konstantinov, Denis; Hirsch, Matthew; Maris, Humphrey J.

    2006-09-07

    At sufficiently low temperatures, electron bubbles (negative ions) can become trapped on quantized vortices in superfluid helium. Previously, the escape of electron bubbles from vortices by thermal excitation and through quantum tunneling has been studied. In this paper, we report on an experiment in which light is used to release bubbles from quantized vortices (photoemission). A CO2 laser is used to excite the electron from the 1S to the 1P state, and it is found that each time a photon is absorbed there is a small probability that the bubble containing the electron escapes from the vortex.

  19. New Exact Quantization Condition for Toric Calabi-Yau Geometries

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Zhang, Guojun; Huang, Min-xin

    2015-09-01

    We propose a new exact quantization condition for a class of quantum mechanical systems derived from local toric Calabi-Yau threefolds. Our proposal includes all contributions to the energy spectrum which are nonperturbative in the Planck constant, and is much simpler than the available quantization condition in the literature. We check that our proposal is consistent with previous works and implies nontrivial relations among the topological Gopakumar-Vafa invariants of the toric Calabi-Yau geometries. Together with the recent developments, our proposal opens a new avenue in the long investigations at the interface of geometry, topology and quantum mechanics.

  1. Enhanced current quantization in high-frequency electron pumps in a perpendicular magnetic field

    SciTech Connect

    Wright, S. J.; Blumenthal, M. D.; Gumbs, Godfrey; Thorn, A. L.; Pepper, M.; Anderson, D.; Jones, G. A. C.; Nicoll, C. A.; Ritchie, D. A.; Janssen, T. J. B. M.; Holmes, S. N.

    2008-12-15

    We present experimental results of high-frequency quantized charge pumping through a quantum dot formed by the electric field arising from applied voltages in a GaAs/AlGaAs system in the presence of a perpendicular magnetic field B. Clear changes are observed in the quantized current plateaus as a function of applied magnetic field. We report on the robustness in the length of the quantized plateaus and improvements in the quantization as a result of the applied B field.

  2. Eddy Current Signature Classification of Steam Generator Tube Defects Using A Learning Vector Quantization Neural Network

    SciTech Connect

    Gabe V. Garcia

    2005-01-03

    A major cause of failure in nuclear steam generators is degradation of their tubes. Although seven primary defect categories exist, one of the principal causes of tube failure is intergranular attack/stress corrosion cracking (IGA/SCC). This type of defect usually begins on the secondary side surface of the tubes and propagates both inwards and laterally. In many cases this defect is found at or near the tube support plates.
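
    The classifier named in the title, a learning vector quantization (LVQ) network, can be sketched with the basic LVQ1 update rule: move the winning prototype toward a training sample of its own class and away from a sample of a different class. The two-dimensional "signature" data below are synthetic stand-ins, not eddy current measurements.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1: pull the winning prototype toward same-class samples,
    push it away from samples of the other class."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(((P - x) ** 2).sum(axis=1))   # winner
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (x - P[j])
    return P

def lvq_predict(X, P, proto_labels):
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[d.argmin(axis=1)]

# Synthetic two-class "signatures" (illustrative, not eddy current data).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(3.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
P = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]), np.array([0, 1]))
acc = (lvq_predict(X, P, np.array([0, 1])) == y).mean()
```

    A real defect classifier would use feature vectors extracted from eddy current signals and several prototypes per defect category; the update rule itself is unchanged.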

  3. Second-quantized molecular time scale generalized Langevin equation theory: Coupled oscillator model

    SciTech Connect

    McDowell, H.K.

    1986-11-15

    A second-quantized, coupled oscillator model is presented which explicitly displays the structure of a second-quantized MTGLE theory. The Adelman ansatz (J. Chem. Phys. 75, 5837 (1981)) for a quantum MTGLE response function is shown to generate the correct response function for the model. This result paves the way for the development of a general second-quantized MTGLE theory.

  4. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  5. Dengue Vectors and their Spatial Distribution

    PubMed Central

    Higa, Yukiko

    2011-01-01

    The distribution of the dengue vectors Ae. aegypti and Ae. albopictus is affected by climatic factors. In addition, since their life cycles are well adapted to the human environment, environmental changes resulting from human activity such as urbanization exert a great impact on vector distribution. The different responses of Ae. aegypti and Ae. albopictus to various environments result in differing spatial distributions along north-south and urban-rural gradients, and between indoors and outdoors. In the north-south gradient, climate associated with survival is an important factor in spatial distribution. In the urban-rural gradient, the differing distributions reflect a difference in adult niches and are modified by geographic and human factors. The direct response of the two species to the environment around houses is related to their different spatial distributions indoors and outdoors. Dengue viruses circulate mainly between humans and vector mosquitoes, and vector presence is a limiting factor of transmission. Therefore, the spatial distribution of dengue vectors is a significant concern in the epidemiology of the disease. Current technologies such as GIS, satellite imagery, and statistical models allow researchers to predict the spatial distribution of vectors in the changing environment. Although it is difficult to confirm the actual effect of environmental and climate changes on vector abundance and vector-borne diseases, environmental changes caused by humans and human behavioral changes due to climate change can be expected to exert an impact on dengue vectors. Longitudinal monitoring of dengue vectors and viruses is therefore necessary. PMID:22500133

  6. Local mesh quantized extrema patterns for image retrieval.

    PubMed

    Koteswara Rao, L; Venkata Rao, D; Reddy, L Pratap

    2016-01-01

    In this paper, we propose a new feature descriptor, named local mesh quantized extrema patterns (LMeQEP), for image indexing and retrieval. Standard local quantized patterns collect the spatial relationship in the form of a larger or deeper texture pattern based on the relative variations in the gray values of the center pixel and its neighbors. Directional local extrema patterns explore the directional information in 0°, 90°, 45° and 135° for a pixel positioned at the center. A mesh structure is created from the quantized extrema to derive significant textural information. Initially, the directional quantized data from the mesh structure are extracted to form the LMeQEP of a given image. Then, an RGB color histogram is built and integrated with the LMeQEP to enhance the performance of the system. In order to test the impact of the proposed method, experiments are conducted on benchmark image repositories such as MIT VisTex and Corel-1k. Average retrieval rate and average retrieval precision are used as the evaluation metrics. The experimental results show a considerable improvement over other recent image retrieval techniques. PMID:27429886

  7. Semiclassical Quantization of the Electron-Dipole System.

    ERIC Educational Resources Information Center

    Turner, J. E.

    1979-01-01

    This paper presents a derivation of the number given by Fermi in 1925, in his semiclassical treatment of the motion of an electron in the field of two stationary positive charges, for Bohr quantization of the electron orbits when the stationary charges are positive, and applies it to an electron moving in the field of a stationary dipole.…

  8. FAST TRACK COMMUNICATION: Quantization over boson operator spaces

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaž; Seligman, Thomas H.

    2010-10-01

    The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.

  9. Quantization method for describing the motion of celestial systems

    NASA Astrophysics Data System (ADS)

    Christianto, Victor; Smarandache, Florentin

    2015-11-01

    Criticism has arisen concerning the use of the quantization method for describing the motion of celestial systems, arguing that the method oversimplifies the problem and cannot explain other phenomena, for instance planetary migration. Using a quantization method as Nottale and Schumacher did, one can expect to predict new exoplanets with remarkable results. The ``conventional'' theories explaining planetary migration normally use fluid theory involving a diffusion process. Gibson has shown that these migration phenomena can be described via a Navier-Stokes approach. Kiehn's argument was based on an exact mapping between the Schrodinger equation and the Navier-Stokes equations, while our method may be interpreted as an oversimplification of the real planetary migration process which took place sometime in the past, providing a useful tool for prediction (e.g., other planetoids, which are likely to be observed in the near future, around 113.8 AU and 137.7 AU). Therefore, the quantization method could be seen as merely a ``plausible'' theory. We would like to emphasize that the quantization method does not have to be the true description of reality with regard to celestial phenomena. This method can explain some phenomena, while perhaps lacking explanations for others.

  10. Second quantization techniques in the scattering of nonidentical composite bodies

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Townsend, L. W.; Deutchman, P. A.

    1986-01-01

    Second quantization techniques for describing elastic and inelastic interactions between nonidentical composite bodies are presented and are applied to nucleus-nucleus collisions involving ground-state and one-particle-one-hole excitations. Evaluations of the resultant collision matrix elements are made through use of Wick's theorem.

  11. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
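
    Of the three encoders compared, the Hadamard transform is the easiest to sketch from scratch. The following toy example (our construction, not the paper's coder) transform-codes an 8x8 block and applies uniform block quantization to the coefficients.

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester's construction
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def block_quantize(block, step):
    H = hadamard(block.shape[0])
    coeffs = H @ block @ H.T            # 2-D Hadamard transform
    q = np.round(coeffs / step)         # uniform block quantization
    return H.T @ (q * step) @ H         # dequantize and invert

x = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth test block
rec = block_quantize(x, step=0.5)
```

    The Karhunen-Loeve encoder replaces the fixed Hadamard basis with the eigenvectors of the data covariance, which is optimal for energy compaction but requires estimating source statistics.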

  12. Dynamic contrast-based quantization for lossy wavelet image compression.

    PubMed

    Chandler, Damon M; Hemami, Sheila S

    2005-04-01

    This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented. PMID:15825476

  13. Equivalent Electrical Circuit Representations of AC Quantized Hall Resistance Standards

    PubMed Central

    Cage, M. E.; Jeffery, A.; Matthews, J.

    1999-01-01

    We use equivalent electrical circuits to analyze the effects of large parasitic impedances existing in all sample probes on four-terminal-pair measurements of the ac quantized Hall resistance RH. The circuit components include the externally measurable parasitic capacitances, inductances, lead resistances, and leakage resistances of ac quantized Hall resistance standards, as well as components that represent the electrical characteristics of the quantum Hall effect device (QHE). Two kinds of electrical circuit connections to the QHE are described and considered: single-series “offset” and quadruple-series. (We eliminated other connections in earlier analyses because they did not provide the desired accuracy with all sample probe leads attached at the device.) Exact, but complicated, algebraic equations are derived for the currents and measured quantized Hall voltages for these two circuits. Only the quadruple-series connection circuit meets our desired goal of measuring RH for both ac and dc currents with a one-standard-deviation uncertainty of 10−8 RH or less during the same cool-down with all leads attached at the device. The single-series “offset” connection circuit meets our other desired goal of also measuring the longitudinal resistance Rx for both ac and dc currents during that same cool-down. We will use these predictions to apply small measurable corrections, and uncertainties of the corrections, to ac measurements of RH in order to realize an intrinsic ac quantized Hall resistance standard of 10−8 RH uncertainty or less.

  14. Quantization of higher abelian gauge theory in generalized differential cohomology

    NASA Astrophysics Data System (ADS)

    Szabo, R.

    We review and elaborate on some aspects of the quantization of certain classes of higher abelian gauge theories using techniques of generalized differential cohomology. Particular emphasis is placed on the examples of generalized Maxwell theory and Cheeger-Simons cohomology, and of Ramond-Ramond fields in Type II superstring theory and differential K-theory.

  15. Wave-vector dispersion versus angular-momentum dispersion of collective modes in small metal particles

    NASA Astrophysics Data System (ADS)

    Ekardt, W.

    1987-09-01

    The wave-vector dispersion of collective modes in small particles is investigated within the time-dependent local-density approximation as applied to a self-consistent jellium particle. It is shown that the dispersion of the volume plasmons can be understood from that in an infinite electron gas. For a given multipole, an optimum wave vector exists for the quasiresonant excitation of the volume mode but not for the surface mode. It is pointed out that, for the volume modes, the hydrodynamic approximation gives a reasonable first guess for the relation between frequencies and size-quantized wave vectors.

  16. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image- content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. 
In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
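The hybrid SQ/CVQ strategy described above can be illustrated with a minimal sketch: split transform coefficients at a magnitude threshold, scalar-quantize the outliers, and map the clustered remainder to a small codebook. The threshold, step size, and codebook below are illustrative assumptions, not the authors' parameters (and the codebook is scalar rather than vector for brevity).

```python
def hybrid_quantize(coeffs, threshold=4.0, step=0.5, codebook=None):
    """Split coefficients at `threshold`; scalar-quantize outliers, VQ the rest."""
    if codebook is None:
        codebook = [-1.0, -0.25, 0.0, 0.25, 1.0]   # toy codebook (scalars for brevity)
    encoded = []
    for c in coeffs:
        if abs(c) >= threshold:                     # outlier -> uniform scalar quantizer
            encoded.append(("SQ", round(c / step)))
        else:                                       # clustered -> nearest codebook entry
            idx = min(range(len(codebook)), key=lambda i: abs(codebook[i] - c))
            encoded.append(("VQ", idx))
    return encoded

def hybrid_dequantize(encoded, step=0.5, codebook=None):
    """Invert hybrid_quantize: scale SQ indices, look up VQ indices."""
    if codebook is None:
        codebook = [-1.0, -0.25, 0.0, 0.25, 1.0]
    return [sym * step if kind == "SQ" else codebook[sym] for kind, sym in encoded]
```

The split keeps the rare large coefficients at full scalar precision while spending very few bits on the dense cluster near zero, which is the intuition behind the reduced distortion the abstract reports.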

  17. Relativistic Landau–He–McKellar–Wilkens quantization and relativistic bound states solutions for a Coulomb-like potential induced by the Lorentz symmetry breaking effects

    SciTech Connect

    Bakke, K.; Belich, H.

    2013-06-15

    In this work, we discuss the relativistic Landau–He–McKellar–Wilkens quantization and relativistic bound states solutions for a Dirac neutral particle under the influence of a Coulomb-like potential induced by the Lorentz symmetry breaking effects. We present new possible scenarios of studying Lorentz symmetry breaking effects by fixing the space-like vector field background in special configurations. It is worth mentioning that the criterion for studying the violation of Lorentz symmetry is preserving the gauge symmetry. -- Highlights: •Two new possible scenarios of studying Lorentz symmetry breaking effects. •Coulomb-like potential induced by the Lorentz symmetry breaking effects. •Relativistic Landau–He–McKellar–Wilkens quantization. •Exact solutions of the Dirac equation.

  18. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

    Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics, nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance; second quantization and the number operator in social interactions, population dynamics and financial trading; and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.

  19. Finite-Horizon Near-Optimal Output Feedback Neural Network Control of Quantized Nonlinear Discrete-Time Systems With Input Constraint.

    PubMed

    Xu, Hao; Zhao, Qiming; Jagannathan, Sarangapani

    2015-08-01

    The output feedback-based near-optimal regulation of uncertain and quantized nonlinear discrete-time systems in affine form with control constraint over a finite horizon is addressed in this paper. First, the effect of the input constraint is handled using a nonquadratic cost functional. Next, a neural network (NN)-based Luenberger observer is proposed to reconstruct both the system states and the control coefficient matrix so that a separate identifier is not needed. Then, an approximate dynamic programming-based actor-critic framework is utilized to approximate the time-varying solution of the Hamilton-Jacobi-Bellman equation using NNs with constant weights and time-dependent activation functions. A new error term is defined and incorporated in the NN update law so that the terminal constraint error is also minimized over time. Finally, a novel dynamic quantizer for the control inputs with adaptive step size is designed to eliminate the quantization error over time, thus overcoming the drawback of the traditional uniform quantizer. The proposed scheme functions in a forward-in-time manner without an offline training phase. Lyapunov analysis is used to investigate the stability. Simulation results are given to show the effectiveness and feasibility of the proposed method. PMID:25794403
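The idea of a dynamic quantizer whose adaptive step size drives the quantization error to zero over time can be sketched as follows. The geometric decay rate and the interface are assumptions for illustration, not the paper's design; the point is only that shrinking the step shrinks the worst-case error bound (step/2) at every call.

```python
def make_dynamic_quantizer(step0=1.0, decay=0.5):
    """Uniform quantizer whose step size decays geometrically after each use.

    Assumed sketch: the quantization error is bounded by step/2, so a decaying
    step makes the error vanish over time, unlike a fixed uniform quantizer.
    """
    state = {"step": step0}

    def quantize(u):
        step = state["step"]
        q = round(u / step) * step      # uniform quantization at the current step
        state["step"] = step * decay    # adapt: shrink step -> shrink error bound
        return q

    return quantize
```

Repeatedly quantizing the same input yields successively finer approximations as the step contracts.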

  20. An Effective Color Quantization Method Using Octree-Based Self-Organizing Maps

    PubMed Central

    Park, Hyun Jun; Kim, Kwang Baek; Cha, Eui-Young

    2016-01-01

    Color quantization is an essential and continuously researched technique in color image processing. It is often used, in particular, as preprocessing for many applications. Self-Organizing Map (SOM) color quantization is one of the most effective methods. However, it is inefficient at obtaining accurate results when it performs quantization with too few colors. In this paper, we present a more effective color quantization algorithm that reduces the palette to a small number of colors by using octree quantization. This generates more natural results with less difference from the original image. The proposed method is evaluated by comparing it with well-known quantization methods. The experimental results show that the proposed method is more effective than the other methods when a small number of colors is used, and it takes only 71.73% of the processing time of the conventional SOM method. PMID:26884748
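A minimal sketch of SOM-style color quantization may clarify the baseline being improved here: each training pixel pulls its nearest palette entry (the best-matching unit) toward itself. The octree initialization of the proposed method is omitted; the learning rate, epoch count, and palette seeding are illustrative assumptions.

```python
def som_quantize_palette(pixels, n_colors=4, epochs=20, lr=0.3):
    """Learn a small RGB palette by moving each pixel's nearest entry toward it.

    Sketch of the SOM baseline only (no neighborhood function, no octree seeding).
    """
    # seed the palette with evenly spaced pixels from the input
    palette = [list(pixels[i * len(pixels) // n_colors]) for i in range(n_colors)]
    for _ in range(epochs):
        for p in pixels:
            # best-matching unit: palette entry closest in RGB space
            bmu = min(range(n_colors),
                      key=lambda i: sum((palette[i][c] - p[c]) ** 2 for c in range(3)))
            for c in range(3):
                palette[bmu][c] += lr * (p[c] - palette[bmu][c])
    return palette
```

On a two-cluster image the two palette entries settle near the cluster centers, which is the behavior the SOM relies on.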

  1. Rotations with Rodrigues' Vector

    ERIC Educational Resources Information Center

    Pina, E.

    2011-01-01

    The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears…
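As background to the abstract above: a rotation matrix can be built from an axis-angle pair by the classical Rodrigues formula R = I + sin(θ)K + (1 − cos(θ))K², where K is the skew-symmetric matrix of the unit axis. This is a pure-Python sketch of that standard formula, not the article's derivation (which parametrizes the rotation by Rodrigues' vector tan(θ/2)n̂).

```python
import math

def rodrigues_matrix(axis, angle):
    """Rotation matrix from axis-angle via R = I + sin(t) K + (1 - cos(t)) K^2."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n               # normalize the rotation axis
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]   # skew matrix of the axis
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    s, c = math.sin(angle), 1.0 - math.cos(angle)
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + c * K2[i][j]
             for j in range(3)] for i in range(3)]
```

For example, a 90° rotation about the z-axis carries (1, 0, 0) to (0, 1, 0).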

  2. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in the FV/VLAD are noise: discarding them via feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations. PMID:27046897
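The importance-sorting-plus-1-bit-quantization recipe can be sketched in a few lines: score each dimension, keep the top-k, and binarize the survivors by sign. Using per-dimension variance as the unsupervised importance score is an illustrative assumption, not necessarily the paper's criterion.

```python
def select_and_binarize(vectors, k):
    """Keep the k highest-variance dimensions, then 1-bit quantize by sign."""
    d = len(vectors[0])
    means = [sum(v[j] for v in vectors) / len(vectors) for j in range(d)]
    var = [sum((v[j] - means[j]) ** 2 for v in vectors) / len(vectors)
           for j in range(d)]
    keep = sorted(range(d), key=lambda j: -var[j])[:k]   # importance sorting
    # 1-bit quantization: each kept dimension becomes its sign bit
    return [[1 if v[j] >= 0 else 0 for j in keep] for v in vectors]
```

The output is k bits per vector instead of k floats, which is where the storage and CPU savings come from.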

  3. Foamy virus vectors.

    PubMed Central

    Russell, D W; Miller, A D

    1996-01-01

    Human foamy virus (HFV) is a retrovirus of the spumavirus family. We have constructed vectors based on HFV that encode neomycin phosphotransferase and alkaline phosphatase. These vectors are able to transduce a wide variety of vertebrate cells by integration of the vector genome. Unlike vectors based on murine leukemia virus, HFV vectors are not inactivated by human serum, and they transduce stationary-phase cultures more efficiently than murine leukemia virus vectors. These properties, as well as their large packaging capacity, make HFV vectors promising gene transfer vehicles. PMID:8523528

  4. VLSI circuit simulation using a vector computer

    NASA Technical Reports Server (NTRS)

    Mcgrogan, S. K.

    1984-01-01

    Simulation of circuits having more than 2000 active devices requires the largest, fastest computers available. A vector computer, such as the CYBER 205, can yield great speed and cost advantages if efforts are made to adapt the simulation program to the strengths of the computer. ASPEC and SPICE (1), two widely used circuit simulation programs, are discussed. ASPECV and VAMOS (5) are respectively vector adaptations of these two simulators. They demonstrate the substantial performance enhancements possible for this class of algorithm on the CYBER 205.

  5. Polymer quantization, stability and higher-order time derivative terms

    NASA Astrophysics Data System (ADS)

    Cumsille, Patricio; Reyes, Carlos M.; Ossandon, Sebastian; Reyes, Camilo

    2016-03-01

    The possibility that the fundamental discreteness implicit in a quantum gravity theory may act as a natural regulator for the ultraviolet singularities arising in quantum field theory has been intensively studied. Here, with the same expectations, we investigate whether a nonstandard representation called the polymer representation can smooth away the large amount of negative energy that afflicts the Hamiltonians of higher-order time derivative theories, which renders the theory unstable when interactions come into play. We focus on the fourth-order Pais-Uhlenbeck model, which can be reexpressed as the sum of two decoupled harmonic oscillators, one producing positive energy and the other negative energy. As expected, the Schrödinger quantization of this model leads to the stability problem or to negative-norm states called ghosts. Within the framework of polymer quantization we show the existence of new regions where the Hamiltonian is well defined and bounded from below.

  6. Abductive learning of quantized stochastic processes with probabilistic finite automata.

    PubMed

    Chattopadhyay, Ishanu; Lipson, Hod

    2013-02-13

    We present an unsupervised learning algorithm (GenESeSS) to infer the causal structure of quantized stochastic processes, defined as stochastic dynamical systems evolving over discrete time and producing quantized observations. Assuming ergodicity and stationarity, GenESeSS infers probabilistic finite state automata models from a sufficiently long observed trace. Our approach is abductive: we attempt to infer a simple hypothesis consistent with the observations and with a modelling framework that essentially fixes the hypothesis class. The probabilistic automata we infer have no initial or terminal states, have no structural restrictions, and are shown to be probably approximately correct-learnable. Additionally, we establish rigorous performance guarantees and data requirements, and show that GenESeSS correctly infers long-range dependencies. Modelling and prediction examples on simulated and real data establish relevance to automated inference of causal stochastic structures underlying complex physical phenomena. PMID:23277601

  7. Radiative cooling: lattice quantization and surface emissivity in thin coatings.

    PubMed

    Suryawanshi, Chetan N; Lin, Chhiu-Tsu

    2009-06-01

    Nanodiamond powder (NDP), multiwall carbon nanotube (MWCNT), and carbon black (CB) were dispersed in an acrylate (AC) emulsion to form composite materials. These materials were coated on aluminum panels (alloy 3003) to give thin coatings. The active phonons of the nanomaterials were designed to act as a cooling fan, termed "molecular fan (MF)". The order of lattice quantization, as investigated by Raman spectroscopy, is MWCNT > CB > NDP. The enhanced surface emissivity of the MF coating (as observed by IR imaging) is well-correlated to lattice quantization, resulting in a better cooling performance by the MWCNT-AC composite. MF coatings with different concentrations (0%, 0.4%, 0.7%, and 1%) of MWCNT were prepared. The equilibrium temperature lowering of the coated panel was observed with an increase in the loading of CNTs and was measured as 17 degrees C for 1% loading of MWCNT. This was attributed to an increased density of active phonons in the MF coating. PMID:20355930

  8. Polymer quantization of the Einstein-Rosen wormhole throat

    SciTech Connect

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2010-01-15

    We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schrödinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.

  9. Deformation quantization of the Pais-Uhlenbeck fourth order oscillator

    NASA Astrophysics Data System (ADS)

    Berra-Montiel, Jasel; Molgado, Alberto; Rojas, Efraín

    2015-11-01

    We analyze the quantization of the Pais-Uhlenbeck fourth order oscillator within the framework of deformation quantization. Our approach exploits the Noether symmetries of the system by proposing integrals of motion as the variables with which to obtain a solution to the ⋆-genvalue equation, namely the Wigner function. We also obtain, by means of a quantum canonical transformation, the wave function associated with the Schrödinger equation of the system. We show that unitary evolution of the system is guaranteed by means of the quantum canonical transformation and via the properties of the constructed Wigner function, even in the so-called equal-frequency limit of the model, in agreement with recent results.

  10. Size quantization of Dirac fermions in graphene constrictions

    NASA Astrophysics Data System (ADS)

    Terrés, B.; Chizhova, L. A.; Libisch, F.; Peiro, J.; Jörger, D.; Engels, S.; Girschik, A.; Watanabe, K.; Taniguchi, T.; Rotkin, S. V.; Burgdörfer, J.; Stampfer, C.

    2016-05-01

    Quantum point contacts are cornerstones of mesoscopic physics and central building blocks for quantum electronics. Although the Fermi wavelength in high-quality bulk graphene can be tuned up to hundreds of nanometres, the observation of quantum confinement of Dirac electrons in nanostructured graphene has proven surprisingly challenging. Here we show ballistic transport and quantized conductance of size-confined Dirac fermions in lithographically defined graphene constrictions. At high carrier densities, the observed conductance agrees excellently with the Landauer theory of ballistic transport without any adjustable parameter. Experimental data and simulations for the evolution of the conductance with magnetic field unambiguously confirm the identification of size quantization in the constriction. Close to the charge neutrality point, bias voltage spectroscopy reveals a renormalized Fermi velocity of ~1.5 × 10⁶ m s⁻¹ in our constrictions. Moreover, at low carrier density transport measurements allow probing the density of localized states at edges, thus offering a unique handle on edge physics in graphene devices.

  11. Quantization of Td- and Oh-symmetric Skyrmions

    NASA Astrophysics Data System (ADS)

    Lau, P. H. C.; Manton, N. S.

    2014-06-01

    The geometrical construction of rational maps using a cubic grid has led to many new Skyrmion solutions, with baryon numbers up to 108. Energy spectra of some of the new Skyrmions are calculated here by semiclassical quantization. Quantization of the B=20 Td-symmetric Skyrmion, which is one of the newly found Skyrmions, is considered, and this leads to the development of a new approach to solving Finkelstein-Rubinstein constraints. Matrix equations are simplified by introducing a Cartesian version of angular momentum basis states, and the computations are easier. The quantum states of all Td-symmetric Skyrmions, constructed from the cubic grid, are classified into three classes, depending on the contribution of vertex points of the cubic grid to the rational maps. The analysis is extended to the larger symmetry group Oh. Quantum states of Oh-symmetric Skyrmions, constructed from the cubic grid, form a subset of the Td-symmetric quantum states.

  12. Conformal Loop quantization of gravity coupled to the standard model

    NASA Astrophysics Data System (ADS)

    Pullin, Jorge; Gambini, Rodolfo

    2016-03-01

    We consider a local conformal invariant coupling of the standard model to gravity that is free of any dimensional parameter. The theory is formulated so as to have a quantized version that admits a spin network description at the kinematical level like that of loop quantum gravity. The Gauss constraint, the diffeomorphism constraint and the conformal constraint are automatically satisfied, and the standard inner product of the spin-network basis still holds. The resulting theory has resemblances with the Bars-Steinhardt-Turok local conformal theory, except that it admits a canonical quantization in terms of loops. By considering a gauge-fixed version of the theory we show that the Standard Model coupled to gravity is recovered and the Higgs boson acquires mass. This in turn induces, via the standard mechanism, masses for the massive bosons, baryons and leptons.

  13. Gauge Invariance of Parametrized Systems and Path Integral Quantization

    NASA Astrophysics Data System (ADS)

    de Cicco, Hernán; Simeone, Claudio

    Gauge invariance of systems whose Hamilton-Jacobi equation is separable is improved by adding surface terms to the action functional. The general form of these terms is given for some complete solutions of the Hamilton-Jacobi equation. The procedure is applied to the relativistic particle and toy universes, which are quantized by imposing canonical gauge conditions in the path integral; in the case of empty models, we first quantize the parametrized system called "ideal clock," and then we examine the possibility of obtaining the amplitude for the minisuperspaces by matching them with the ideal clock. The relation existing between the geometrical properties of the constraint surface and the variables identifying the quantum states in the path integral is discussed.

  14. Comparing conductance quantization in quantum wires and quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Alekseev, Anton Yu.; Cheianov, Vadim V.; Fröhlich, Jürg

    1996-12-01

    We suggest a means to calculate the dc conductance of a one-dimensional electron system described by the Luttinger model. Our approach is based on the ideas of Landauer and Büttiker on transport in ballistic channels and on the methods of current algebra. We analyze in detail the way in which the system can be coupled to external reservoirs. This determines whether the conductance is renormalized or not. We provide a parallel treatment of a quantum wire and a fractional quantum Hall system on a cylinder with two widely separated edges. Although both systems are described by the same effective theory, the physical electrons are identified with different types of excitations, and hence the coupling to external reservoirs is different. As a consequence, the conductance in the wire is quantized in integer units of e²/h per spin orientation whereas the Hall conductance allows for fractional quantization.

  15. Precise quantization of anomalous Hall effect near zero magnetic field

    SciTech Connect

    Bestwick, A. J.; Fox, E. J.; Kou, Xufeng; Pan, Lei; Wang, Kang L.; Goldhaber-Gordon, D.

    2015-05-04

    In this study, we report a nearly ideal quantum anomalous Hall effect in a three-dimensional topological insulator thin film with ferromagnetic doping. Near zero applied magnetic field we measure exact quantization in the Hall resistance to within a part per 10,000 and a longitudinal resistivity under 1 Ω per square, with chiral edge transport explicitly confirmed by nonlocal measurements. Deviations from this behavior are found to be caused by thermally activated carriers, as indicated by an Arrhenius law temperature dependence. Using the deviations as a thermometer, we demonstrate an unexpected magnetocaloric effect and use it to reach near-perfect quantization by cooling the sample below the dilution refrigerator base temperature in a process approximating adiabatic demagnetization refrigeration.

  16. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are always corrupted by noise, can cause `over-smoothness' or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, this work presents a compression method based on the Discrete Wavelet Transform (DWT) that also suppresses noise without losing much flaw-relevant information. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as noise-dominated, signal-dominated, or bi-affected. Better denoising can be realized by selectively thresholding the DWCs. In `local quantization', different quantization strategies are applied to the DWCs according to their classification and the local image properties, allocating the bit rate more efficiently and thus achieving a higher compression ratio. Meanwhile, the decompressed image shows suppressed noise and preserved flaw characteristics.
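A toy version of this pipeline, with a single 1-D Haar DWT level standing in for the full multi-resolution decomposition: small detail coefficients are zeroed (noise suppression) and the surviving ones are quantized. The threshold and quantization step are illustrative assumptions, not the paper's classification rules.

```python
def haar_compress(signal, thresh=1.0, step=0.5):
    """One Haar DWT level, then threshold (denoise) and quantize the details."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    # zero small (noise-dominated) detail coefficients, quantize the rest
    q_detail = [0 if abs(d) < thresh else round(d / step) for d in detail]
    return approx, q_detail

def haar_decompress(approx, q_detail, step=0.5):
    """Invert the Haar step from approximations and quantized details."""
    out = []
    for a, qd in zip(approx, q_detail):
        d = qd * step
        out += [a + d, a - d]
    return out
```

Large edges (big details) survive quantization while small noisy details are discarded, which is the flaw-preserving denoising effect described above.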

  17. Novel properties of the q-analogue quantized radiation field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1993-01-01

    The 'classical limit' of the q-analog quantized radiation field is studied, paralleling conventional quantum optics analyses. The q-generalizations of the phase operator of Susskind and Glogower and that of Pegg and Barnett are constructed. Both generalizations and their associated number-phase uncertainty relations are manifestly q-independent in the |n⟩_q number basis. However, in the q-coherent state |z⟩_q basis, the variance of the generic electric field, (ΔE)², is found to be increased by a factor λ(z), where λ(z) > 1 if q ≠ 1. At large amplitudes, the amplitude itself would be quantized if the available resolution of unity for the q-analog coherent states is accepted in the formulation. These consequences are remarkable versus the conventional q = 1 limit.

  18. Corrected Hawking Temperature in Snyder's Quantized Space-time

    NASA Astrophysics Data System (ADS)

    Ma, Meng-Sen; Liu, Fang; Zhao, Ren

    2015-06-01

    In the quantized space-time of Snyder, a generalized uncertainty relation and noncommutativity are both included. In this paper we analyze the possible form of the corrected Hawking temperature and derive it from both effects. It is shown that the corrected Hawking temperature has a form similar to that of the noncommutative-geometry-inspired Schwarzschild black hole, however with a requirement on the noncommutative parameter θ and the minimal length a.

  19. Polymer quantization and the saddle point approximation of partition functions

    NASA Astrophysics Data System (ADS)

    Morales-Técotl, Hugo A.; Orozco-Borunda, Daniel H.; Rastgoo, Saeed

    2015-11-01

    The saddle point approximation of the path integral partition functions is an important way of deriving the thermodynamical properties of black holes. However, there are certain black hole models and some mathematically analog mechanical models for which this method cannot be applied directly. This is due to the fact that their action evaluated on a classical solution is not finite and its first variation does not vanish for all consistent boundary conditions. These problems can be dealt with by adding a counterterm to the classical action, which is a solution of the corresponding Hamilton-Jacobi equation. In this work we study the effects of polymer quantization on a mechanical model presenting the aforementioned difficulties and contrast it with the above counterterm method. This type of quantization for mechanical models is motivated by the loop quantization of gravity, which is known to play a role in the thermodynamics of black hole systems. The model we consider is a nonrelativistic particle in an inverse square potential, and we analyze two polarizations of the polymer quantization in which either the position or the momentum is discrete. In the former case, Thiemann's regularization is applied to represent the inverse power potential, but we still need to incorporate the Hamilton-Jacobi counterterm, which is now modified by polymer corrections. In the latter, momentum discrete case, however, such regularization could not be implemented. Yet, remarkably, owing to the fact that the position is bounded, we do not need a Hamilton-Jacobi counterterm in order to have a well-defined saddle point approximation. Further developments and extensions are commented upon in the discussion.

  20. Superfield Hamiltonian quantization in terms of quantum antibrackets

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.

    2016-04-01

    We develop a new version of the superfield Hamiltonian quantization. The main new feature is that the BRST-BFV charge and the gauge fixing Fermion are introduced on equal footing within the sigma model approach, which provides for the actual use of the quantum/derived antibrackets. We study in detail the generating equations for the quantum antibrackets and their primed counterparts. We discuss the finite quantum anticanonical transformations generated by the quantum antibracket.

  1. Topological invariance of the Hall conductance and quantization

    NASA Astrophysics Data System (ADS)

    Bracken, Paul

    2015-08-01

    It is shown that the Kubo equation for the Hall conductance can be expressed as an integral which implies quantization of the Hall conductance. The integral can be interpreted as the first Chern class of a U(1) principal fiber bundle on a two-dimensional torus. This accounts for the conductance given as an integer multiple of e²/h. The formalism can be extended to deduce the fractional conductivity as well.

  2. Heavy quarkonium in the basis light-front quantization approach

    NASA Astrophysics Data System (ADS)

    Li, Yang; Vary, James; Maris, Pieter

    2015-10-01

    I present a study of the charmonium and bottomonium spectra using basis light-front quantization. We implement a one-gluon exchange interaction in the leading Fock sector following Ref.. We also adopt a phenomenological confining interaction based on AdS/QCD and light-front holography. The results are compared with the experimental data. Supported by the US DOE Grants DE-SC0008485 (SciDAC/NUCLEI) and DE-FG02-87ER40371.

  3. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
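The rate-distortion bookkeeping behind such an optimizer can be sketched in miniature: for each candidate quantization value, accumulate the squared error it would cause on a set of DCT coefficients, then keep the coarsest value whose distortion stays within a budget. The distortion budget and candidate set are illustrative assumptions; the patented dynamic-programming optimizer also tracks the bit rate per value, which this sketch omits.

```python
def best_quantizer(coeffs, candidates, max_distortion):
    """Pick the coarsest quantization value whose squared error fits the budget.

    Larger steps mean fewer bits, so among the acceptable candidates we keep
    the largest one (a crude stand-in for rate-distortion optimization).
    """
    best = None
    for q in sorted(candidates):
        # distortion this step would cause on the given DCT coefficients
        dist = sum((c - round(c / q) * q) ** 2 for c in coeffs)
        if dist <= max_distortion:
            best = q                     # keep the coarsest acceptable step
    return best
```

Coefficients that happen to be multiples of a large step let that step pass at zero distortion, while "awkward" coefficients force a finer step.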

  4. Combinatorial quantization of the Hamiltonian Chern-Simons theory II

    NASA Astrophysics Data System (ADS)

    Alekseev, Anton Yu.; Grosse, Harald; Schomerus, Volker

    1996-01-01

    This paper further develops the combinatorial approach to quantization of the Hamiltonian Chern-Simons theory advertised in [1]. Using the theory of quantum Wilson lines, we show how the Verlinde algebra appears within the context of quantum group gauge theory. This allows us to discuss flatness of quantum connections so that we can give a mathematically rigorous definition of the algebra of observables A_CS of the Chern-Simons model. It is a *-algebra of “functions on the quantum moduli space of flat connections” and comes equipped with a positive functional ω (“integration”). We prove that this data does not depend on the particular choices made in the construction. Following ideas of Fock and Rosly [2], the algebra A_CS provides a deformation quantization of the algebra of functions on the moduli space along the natural Poisson bracket induced by the Chern-Simons action. We evaluate the volume of the quantized moduli space and prove that it coincides with the Verlinde number. This answer is also interpreted as a partition function of the lattice Yang-Mills theory corresponding to a quantum gauge group.

  5. Canonical quantization of a string describing N branes at angles

    NASA Astrophysics Data System (ADS)

    Pesando, Igor

    2014-12-01

    We study the canonical quantization of a bosonic string in the presence of N twist fields. This generalizes the quantization of the twisted string in two ways: the in and out states are not necessarily twisted, and the number of twist fields N can be bigger than 2. In order to quantize the theory we need to find the normal modes. Then we need to define a conserved product between two modes. To do so we use the Klein-Gordon product and separate the string coordinate into its classical and quantum parts. The quantum part has different boundary conditions than the original string coordinates, but these boundary conditions are precisely those which make the operator describing the equation of motion self-adjoint. The splitting of the string coordinates into a classical and a quantum part allows the formulation of an improved overlap principle. Using this approach we then proceed to compute the generating function for the generic correlator with L untwisted operators and N (excited) twist fields for branes at angles. We recover, as expected, the results previously obtained using the path integral. This construction explains why these correlators are given by a generalization of the Wick theorem.

  6. Reduced Vector Preisach Model

    NASA Technical Reports Server (NTRS)

    Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)

    2002-01-01

    A new vector Preisach model, called the Reduced Vector Preisach model (RVPM), was developed for fast computations. This model, derived from the Simplified Vector Preisach model (SVPM), has individual components that, like the SVPM's, are calculated independently using coupled selection rules for the state vector computation. However, the RVPM does not require the rotational correction. Therefore, it provides a practical alternative for computing the magnetic susceptibility using a differential approach. A vector version, using the framework of the DOK model, is implemented. Simulation results for the reduced vector Preisach model are also presented.

  7. Understanding Singular Vectors

    ERIC Educational Resources Information Center

    James, David; Botteron, Cynthia

    2013-01-01

    matrix yields a surprisingly simple, heuristic approximation to its singular vectors. There are correspondingly good approximations to the singular values. Such rules of thumb provide an intuitive interpretation of the singular vectors that helps explain why the SVD is so…

  8. The vector ruling protractor

    NASA Technical Reports Server (NTRS)

    Zahm, A F

    1924-01-01

    The theory, structure and working of a vector slide rule is presented in this report. This instrument is used for determining a vector in magnitude and position when given its components and its moment about a point in their plane.

  9. Quantized spin waves in single Co/Pt dots detected by anomalous Hall effect based ferromagnetic resonance

    SciTech Connect

    Kikuchi, N.; Furuta, M.; Okamoto, S.; Kitakami, O.; Shimatsu, T.

    2014-12-15

    Anomalous Hall effect (AHE) based ferromagnetic resonance (FMR) measurements were carried out on perpendicularly magnetized Co/Pt multilayer single dots of 0.4–3 μm in diameter. The resonance behavior was measured by detecting the decrease of the perpendicular magnetization component due to magnetization precession. Resonance behavior was observed as a clear decrease of Hall voltages, and the obtained resonance fields were consistent with the results of vector-network-analyzer FMR. Spin waves with cylindrical symmetry became more significant as the dot diameter decreased, and quantized multiple resonances were observed in the dot of 0.4 μm in diameter. The AHE-based FMR proposed here is a powerful method for probing magnetization dynamics, including spin waves and nonlinear behavior, excited in a finite nanostructure.

  10. The Hamiltonian structure of Dirac's equation in tensor form and its Fermi quantization

    NASA Technical Reports Server (NTRS)

    Reifler, Frank; Morris, Randall

    1992-01-01

    Currently, there is some interest in studying the tensor forms of the Dirac equation to elucidate the possibility of the constrained tensor fields admitting Fermi quantization. We demonstrate that the bispinor and tensor Hamiltonian systems have equivalent Fermi quantizations. Although the tensor Hamiltonian system is noncanonical, representing the tensor Poisson brackets as commutators for the Heisenberg operators directly leads to Fermi quantization without the use of bispinors.

  11. Restart 68000 vector remapping

    SciTech Connect

    Gustin, J.

    1984-05-03

    The circuit described allows power-on-reset (POR) vector fetch from ROM for a 68000 microprocessor. It offers programmability of exception vectors, including the restart vector. This method eliminates the need for high-resolution, address-decoder peripheral circuitry.

  12. Rhotrix Vector Spaces

    ERIC Educational Resources Information Center

    Aminu, Abdulhadi

    2010-01-01

    By rhotrix we understand an object that lies in some way between (n x n)-dimensional matrices and (2n - 1) x (2n - 1)-dimensional matrices. Representation of vectors in rhotrices is different from the representation of vectors in matrices. A number of vector spaces in matrices and their properties are known. On the other hand, little seems to be…

  13. MATRIX AND VECTOR SERVICES

    2001-10-18

    PETRA V2 provides matrix and vector services and the ability to construct, query, and use matrix and vector objects that are used and computed by TRILINOS solvers. It provides all basic matrix and vector operations for solvers in TRILINOS.

  14. Insulated Foamy Viral Vectors.

    PubMed

    Browning, Diana L; Collins, Casey P; Hocum, Jonah D; Leap, David J; Rae, Dustin T; Trobridge, Grant D

    2016-03-01

    Retroviral vector-mediated gene therapy is promising, but genotoxicity has limited its use in the clinic. Genotoxicity is highly dependent on the retroviral vector used, and foamy viral (FV) vectors appear relatively safe. However, internal promoters may still potentially activate nearby genes. We developed insulated FV vectors, using four previously described insulators: a version of the well-studied chicken hypersensitivity site 4 insulator (650cHS4), two synthetic CCCTC-binding factor (CTCF)-based insulators, and an insulator based on the CCAAT box-binding transcription factor/nuclear factor I (7xCTF/NF1). We directly compared these insulators for enhancer-blocking activity, effect on FV vector titer, and fidelity of transfer to both proviral long terminal repeats. The synthetic CTCF-based insulators had the strongest insulating activity, but reduced titers significantly. The 7xCTF/NF1 insulator did not reduce titers but had weak insulating activity. The 650cHS4-insulated FV vector was identified as the overall most promising vector. Uninsulated and 650cHS4-insulated FV vectors were both significantly less genotoxic than gammaretroviral vectors. Integration sites were evaluated in cord blood CD34(+) cells and the 650cHS4-insulated FV vector had fewer hotspots compared with an uninsulated FV vector. These data suggest that insulated FV vectors are promising for hematopoietic stem cell gene therapy. PMID:26715244

  15. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    NASA Astrophysics Data System (ADS)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

    In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been applied reliably to human face recognition, is utilized as the feature vector of a VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video stream. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds in length. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
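    As a rough illustration of the APIDQ feature described above, the following sketch quantizes adjacent-pixel intensity differences into a normalized histogram; the bin count, difference range, and use of absolute differences are illustrative assumptions, not the parameters of the paper.

```python
import numpy as np

def apidq_histogram(frame, n_bins=32, max_diff=64):
    """Sketch of an adjacent pixel intensity difference quantization
    (APIDQ) histogram feature. Bin edges here are illustrative."""
    f = frame.astype(np.int16)
    # Absolute differences between horizontally and vertically adjacent pixels.
    dx = np.abs(np.diff(f, axis=1)).ravel()
    dy = np.abs(np.diff(f, axis=0)).ravel()
    diffs = np.clip(np.concatenate([dx, dy]), 0, max_diff - 1)
    # Quantize the differences into n_bins uniform bins and histogram them.
    hist, _ = np.histogram(diffs, bins=n_bins, range=(0, max_diff))
    return hist / hist.sum()  # normalized feature vector
```

    Frames (or DC sequences) with similar texture statistics then yield nearby feature vectors under, e.g., an L1 distance.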

  16. Thermography based breast cancer detection using texture features and minimum variance quantization

    PubMed Central

    Milosevic, Marina; Jankovic, Dragan; Peulic, Aleksandar

    2014-01-01

    In this paper, we present a system based on feature extraction and image segmentation techniques for detecting and diagnosing abnormal patterns in breast thermograms. The proposed system consists of three major steps: feature extraction, classification into normal and abnormal patterns, and segmentation of the abnormal pattern. Computed features based on gray-level co-occurrence matrices (GLCM) are used to evaluate the effectiveness of textural information possessed by mass regions. A total of 20 GLCM features are extracted from thermograms. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a Support Vector Machine classifier, a Naive Bayes classifier, and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross validation and Receiver Operating Characteristic analysis were performed. The verification results show that the proposed algorithm gives the best classification results with the K-Nearest Neighbor classifier, at an accuracy of 92.5%. Image segmentation techniques can play an important role in segmenting and extracting suspected hot regions of interest in breast infrared images. Three image segmentation techniques are discussed: minimum variance quantization, dilation of the image, and erosion of the image. The hottest regions of thermal breast images are extracted and compared to the original images. According to the results, the proposed method has the potential to extract almost the exact shape of tumors. PMID:26417334
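    Minimum variance quantization, the first segmentation technique listed above, can be sketched as a one-dimensional Lloyd (k-means) iteration that picks gray levels minimizing within-cluster variance; the quantile initialization and fixed iteration count are illustrative choices, not the paper's implementation.

```python
import numpy as np

def min_variance_quantize(values, n_levels=4, n_iter=50):
    """Illustrative minimum variance quantization: choose n_levels
    representative gray levels minimizing within-cluster variance
    via a 1-D Lloyd/k-means iteration."""
    v = np.asarray(values, dtype=float).ravel()
    # Initialize centers at evenly spaced quantiles of the data.
    centers = np.quantile(v, np.linspace(0, 1, n_levels))
    for _ in range(n_iter):
        # Assign each value to its nearest center, then recompute means.
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for k in range(n_levels):
            if np.any(labels == k):
                centers[k] = v[labels == k].mean()
    return centers, labels
```

    Pixels assigned to the hottest level(s) would then form the candidate regions of interest.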

  17. Length quantization of DNA partially expelled from heads of a bacteriophage T3 mutant

    SciTech Connect

    Serwer, Philip; Wright, Elena T.; Liu, Zheng; Jiang, Wen

    2014-05-15

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  18. Covariantized vector Galileons

    NASA Astrophysics Data System (ADS)

    Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo

    2016-03-01

    Vector Galileons are ghost-free systems containing higher derivative interactions of vector fields. They break the vector gauge symmetry, and the dynamics of the longitudinal vector polarizations acquire a Galileon symmetry in an appropriate decoupling limit in Minkowski space. Using an Arnowitt-Deser-Misner approach, we carefully reconsider the coupling with gravity of vector Galileons, with the aim of studying the necessary conditions to avoid the propagation of ghosts. We develop arguments that put on a more solid footing the results previously obtained in the literature. Moreover, working in analogy with the scalar counterpart, we find indications for the existence of a "beyond Horndeski" theory involving vector degrees of freedom that avoids the propagation of ghosts thanks to secondary constraints. In addition, we analyze a Higgs mechanism for generating vector Galileons through spontaneous symmetry breaking, and we present its consistent covariantization.

  19. Vectorization of computer programs with applications to computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Gentzsch, W.

    Techniques for adapting serial computer programs to the architecture of modern vector computers are presented and illustrated with examples, mainly from the field of computational fluid dynamics. The limitations of conventional computers are reviewed; the vector computers CRAY-1S and CDC-CYBER 205 are characterized; and chapters are devoted to vectorization of FORTRAN programs, sample-program vectorization on five different vector and parallel-architecture computers, restructuring of basic linear-algebra algorithms, iterative methods, vectorization of simple numerical algorithms, and fluid-dynamics vectorization on CRAY-1 (including an implicit Beam and Warming scheme, an implicit finite-difference method for laminar boundary-layer equations, the Galerkin method and a direct Monte Carlo simulation). Diagrams, charts, tables, and photographs are provided.

  20. Nucleation of Quantized Vortices from Rotating Superfluid Drops

    NASA Technical Reports Server (NTRS)

    Donnelly, Russell J.

    2001-01-01

    The long-term goal of this project is to study the nucleation of quantized vortices in helium II by investigating the behavior of rotating droplets of helium II in a reduced gravity environment. The objective of this ground-based research grant was to develop new experimental techniques to aid in accomplishing that goal. The development of an electrostatic levitator for superfluid helium, described below, and the successful suspension of charged superfluid drops in modest electric fields was the primary focus of this work. Other key technologies of general low temperature use were developed and are also discussed.

  1. Canonical quantization of general relativity in discrete space-times.

    PubMed

    Gambini, Rodolfo; Pullin, Jorge

    2003-01-17

    It has long been recognized that lattice gauge theory formulations, when applied to general relativity, conflict with the invariance of the theory under diffeomorphisms. We analyze discrete lattice general relativity and develop a canonical formalism that allows one to treat constrained theories in Lorentzian signature space-times. The presence of the lattice introduces a "dynamical gauge" fixing that makes the quantization of the theories conceptually clear, albeit computationally involved. The problem of a consistent algebra of constraints is automatically solved in our approach. The approach works successfully in other field theories as well, including topological theories. A simple cosmological application exhibits quantum elimination of the singularity at the big bang. PMID:12570532

  2. Quantized Eigenstates of a Classical Particle in a Ponderomotive Potential

    SciTech Connect

    I.Y. Dodin; N.J. Fisch

    2004-12-21

    The average dynamics of a classical particle under the action of high-frequency radiation resembles quantum particle motion in a conservative field with an effective de Broglie wavelength λ equal to the particle's average displacement over a period of oscillation. In a "quasi-classical" field, with a spatial scale large compared to λ, the guiding center motion is adiabatic. Otherwise, a particle exhibits quantized eigenstates in a ponderomotive potential well, can tunnel through classically forbidden regions, and can experience reflection from an attractive potential. Discrete energy levels are also found for a "crystal" formed by multiple ponderomotive barriers.

  3. Canonical Functional Quantization of Pseudo-Photons in Planar Systems

    SciTech Connect

    Ferreira, P. Castelo

    2008-06-25

    Extended U_e(1)×U_g(1) electromagnetism containing both a photon and a pseudo-photon is introduced at the variational level and is justified by the violation of the Bianchi identities in conceptual systems, either in the presence of magnetic monopoles or non-regular external fields, not being accounted for by the standard Maxwell Lagrangian. A dimensional reduction is carried out that yields a U_e(1)×U_g(1) Maxwell-BF type theory, and a canonical functional quantization in planar systems is considered which may be relevant in Hall systems.

  4. Cavity QED with Quantized Center of Mass Motion

    NASA Astrophysics Data System (ADS)

    Leach, Joe; Rice, P. R.

    2004-09-01

    We investigate the quantum fluctuations of a single atom in a weakly driven cavity, where the center of mass motion of the atom is quantized in one dimension. We present analytic results for the second order intensity correlation function g(2)(τ) and the intensity-field correlation function hθ(τ), for transmitted light in the weak driving field limit. We find that the coupling of the center of mass motion to the intracavity field mode can be deleterious to nonclassical effects in photon statistics and field-intensity correlations, and compare the use of trapped atoms in a cavity to atomic beams.

  5. Landau quantization effects in ultracold atom-ion collisions

    NASA Astrophysics Data System (ADS)

    Simoni, Andrea; Launay, Jean-Michel

    2011-12-01

    We study ultracold atom-ion collisions in the presence of an external magnetic field. At low collision energy the field can drastically modify the translational motion of the ion, which follows quantized cyclotron orbits. We present a rigorous theoretical approach for the calculation of quantum scattering amplitudes in these conditions. Collisions in different magnetic field regimes, identified by the size of the cyclotron radius with respect to the range of the interaction potential, are investigated. Our results are important in cases where use of a magnetic field to control the atom-ion collision dynamics is envisioned.

  6. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

    Some model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The conditions for an optimal trade-off among the quality of the reconstructed images, the hologram data-reduction coefficient, and the number of iterations in the hologram reconstruction process have been estimated through computer modeling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.

  7. Formal verification of communication protocols using quantized Horn clauses

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2016-05-01

    The stochastic nature of quantum communication protocols naturally lends itself to expression via probabilistic logic languages. In this work we describe quantized computation using Horn clauses and base the semantics on quantum probability. Turing computable Horn clauses are very convenient to work with, and the formalism can be extended to general first-order languages. Towards this end we build a Hilbert space of H-interpretations and a corresponding noncommutative von Neumann algebra of bounded linear operators. We demonstrate the expressive power of the language by casting quantum communication protocols as Horn clauses.

  8. Photophysics and photochemistry of quantized ZnO colloids

    SciTech Connect

    Kamat, P.V.; Patrick, B.

    1992-08-06

    The photophysical and photochemical behavior of quantized ZnO colloids in ethanol has been investigated by time-resolved transient absorption and emission measurements. Trapping of electrons at the ZnO surface resulted in broad absorption in the red region. The green emission of ZnO colloids was readily quenched by hole scavengers such as SCN⁻ and I⁻. The photoinduced charge transfer to these hole scavengers was studied by laser flash photolysis. The yield of oxidized product increased considerably when ZnO colloids were coupled with ZnSe. 36 refs., 11 figs., 1 tab.

  9. Basis Light-Front Quantization: Recent Progress and Future Prospects

    NASA Astrophysics Data System (ADS)

    Vary, James P.; Adhikari, Lekha; Chen, Guangyao; Li, Yang; Maris, Pieter; Zhao, Xingbo

    2016-08-01

    Light-front Hamiltonian field theory has advanced to the stage of becoming a viable non-perturbative method for solving forefront problems in strong interaction physics. Physics drivers include hadron mass spectroscopy, generalized parton distribution functions, spin structures of the hadrons, inelastic structure functions, hadronization, particle production by strong external time-dependent fields in relativistic heavy ion collisions, and many more. We review selected recent results and future prospects with basis light-front quantization that include fermion-antifermion bound states in QCD, fermion motion in a strong time-dependent external field and a novel non-perturbative renormalization scheme.

  10. Motion on constant curvature spaces and quantization using Noether symmetries.

    PubMed

    Bracken, Paul

    2014-12-01

    A general approach is presented for quantizing a metric nonlinear system on a manifold of constant curvature. It makes use of a curvature dependent procedure which relies on determining Noether symmetries from the metric. The curvature of the space functions as a constant parameter. For a specific metric which defines the manifold, Lie differentiation of the metric gives these symmetries. A metric is used such that the resulting Schrödinger equation can be solved in terms of hypergeometric functions. This permits the investigation of both the energy spectrum and wave functions exactly for this system. PMID:25554048

  12. Electron g-2 in Light-front Quantization

    NASA Astrophysics Data System (ADS)

    Zhao, Xingbo; Honkanen, Heli; Maris, Pieter; Vary, James P.; Brodsky, Stanley J.

    2014-10-01

    Basis Light-front Quantization has been proposed as a nonperturbative framework for solving quantum field theory. We apply this approach to Quantum Electrodynamics and explicitly solve for the light-front wave function of a physical electron. Based on the resulting light-front wave function, we evaluate the electron anomalous magnetic moment. Nonperturbative mass renormalization is performed. Upon extrapolation to the infinite basis limit our numerical results agree with the Schwinger result obtained in perturbation theory to an accuracy of 0.06%.

  13. Basis light-front quantization approach to positronium

    NASA Astrophysics Data System (ADS)

    Wiecki, Paul; Li, Yang; Zhao, Xingbo; Maris, Pieter; Vary, James P.

    2015-05-01

    We present the first application of the recently developed basis light-front quantization (BLFQ) method to self-bound systems in quantum field theory, using the positronium system as a test case. Within the BLFQ framework, we develop a two-body effective interaction, operating only in the lowest Fock sector, that implements photon exchange, neglecting fermion self-energy effects. We then solve for the mass spectrum of this interaction at the unphysical coupling α =0.3 . The resulting spectrum is in good agreement with the expected Bohr spectrum of nonrelativistic quantum mechanics. We examine in detail the dependence of the results on the regulators of the theory.

  15. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency and high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. Utilizing a variation law of the phase differences, the direct different frequency phase processing, and the phase group synchronization phenomenon, combining an A/D converter and the adaptive phase shifting principle, a counter gate is established in the phase coincidences at one-group intervals, which eliminates the ±1 counter error in the traditional frequency measurement method. More importantly, the direct phase comparison, the measurement, and the control between any periodic signals have been realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can be easily obtained in the frequency measurement, the frequency standard comparison, and the phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields. PMID:24158281

  16. Index Sets and Vectorization

    SciTech Connect

    Keasler, J A

    2012-03-27

    Vectorization is data parallelism (SIMD, SIMT, etc.): an extension of the ISA enabling the same instruction to be performed on multiple data items simultaneously. Many, if not most, CPUs support vectorization in some form. Vectorization is difficult to enable but can yield large efficiency gains. Extra programmer effort is required because: (1) not all algorithms can be vectorized (regular algorithm structure and fine-grain parallelism must be used); (2) most CPUs have data alignment restrictions for load/store operations (obey them or risk incorrect code); (3) special directives are often needed to enable vectorization; and (4) vector instructions are architecture-specific. Vectorization is the best way to optimize for power and performance due to reduced clock cycles. When data is organized properly, a vector load instruction (e.g., movaps) can replace several 'normal' load instructions (e.g., movsd). Vector operations can potentially have a smaller footprint in the instruction cache when fewer instructions need to be executed. Hybrid index sets insulate users from architecture-specific details. We have applied hybrid index sets to achieve optimal vectorization, and we can extend this concept to handle other programming models.
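    As a rough analogy to the point above (one instruction operating on many data items), compare an explicit element-by-element loop with a single whole-array expression; NumPy is used here purely for illustration, while the abstract itself concerns CPU SIMD instructions such as movaps.

```python
import numpy as np

def saxpy_loop(a, x, y):
    """Scalar form: one multiply-add per loop iteration."""
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    """Whole-array form: NumPy dispatches the expression to
    compiled kernels that operate on many elements at once."""
    return a * x + y
```

    In compiled code the analogous transformation lets the compiler emit one vector multiply-add per group of elements instead of one scalar operation per element.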

  17. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
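    The scalar quantization-index-modulation step underlying the schemes above can be sketched as follows: embedding one bit means rounding a transform coefficient to one of two interleaved lattices. The step size delta and the plain scalar (non-sparse) setting are simplifying assumptions for illustration.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit by quantizing a coefficient to one of two
    interleaved lattices offset by delta/2 (illustrative step size)."""
    offset = bit * delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_detect(coeff, delta=8.0):
    """Decode the bit as the lattice whose point is nearest."""
    d0 = np.abs(coeff - qim_embed(coeff, 0, delta))
    d1 = np.abs(coeff - qim_embed(coeff, 1, delta))
    return int(d1 < d0)
```

    In the sparse QIM schemes evaluated above, only selected groups of wavelet coefficients are quantized this way, trading capacity for imperceptibility and robustness.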

  18. Size quantization of Dirac fermions in graphene constrictions

    PubMed Central

    Terrés, B.; Chizhova, L. A.; Libisch, F.; Peiro, J.; Jörger, D.; Engels, S.; Girschik, A.; Watanabe, K.; Taniguchi, T.; Rotkin, S. V.; Burgdörfer, J.; Stampfer, C.

    2016-01-01

    Quantum point contacts are cornerstones of mesoscopic physics and central building blocks for quantum electronics. Although the Fermi wavelength in high-quality bulk graphene can be tuned up to hundreds of nanometres, the observation of quantum confinement of Dirac electrons in nanostructured graphene has proven surprisingly challenging. Here we show ballistic transport and quantized conductance of size-confined Dirac fermions in lithographically defined graphene constrictions. At high carrier densities, the observed conductance agrees excellently with the Landauer theory of ballistic transport without any adjustable parameter. Experimental data and simulations for the evolution of the conductance with magnetic field unambiguously confirm the identification of size quantization in the constriction. Close to the charge neutrality point, bias voltage spectroscopy reveals a renormalized Fermi velocity of ∼1.5 × 10^6 m s^−1 in our constrictions. Moreover, at low carrier density transport measurements allow probing the density of localized states at edges, thus offering a unique handle on edge physics in graphene devices. PMID:27198961

  19. Quantization of redshift differences in isolated galaxy pairs

    SciTech Connect

    Tifft, W.G.; Cocke, W.J.

    1989-01-01

    Improved 21 cm data on isolated galaxy pairs are presented which eliminate questions of inhomogeneity in the data on such pairs and reduce observational error to below 5 km/s. Quantization is sharpened, and the zero peak is shown to be displaced from zero to a location near 24 km/s. An exclusion principle is suggested whereby identical redshifts are forbidden in limited volumes. The radio data and data from Schweizer (1987) are combined with the best optical data on close Karachentsev pairs to provide a cumulative sample of 84 of the best differentials now available. New 21 cm observations are used to test for the presence of small differentials in very wide pairs, and the deficiency near zero is found to continue to very wide spacings. A loss of wide pairs by selection bias cannot produce the observed zero deficiency. A new test using pairs selected from the Fisher-Tully catalog is used to demonstrate quantization properties of third components associated with possible pairs. 27 references.

  20. Universality and quantized response in bosonic mesoscopic tunneling

    NASA Astrophysics Data System (ADS)

    Yin, Shaoyu; Béri, Benjamin

    2016-06-01

    We show that tunneling involving bosonic wires and/or boson integer quantum Hall (bIQH) edges is characterized by features that are far more universal than those in their fermionic counterpart. Considering a pair of minimal geometries, we examine the tunneling conductance as a function of energy (e.g., chemical potential bias) at high and low energy limits, finding a low energy enhancement and a universal high versus zero energy relation that hold for all wire/bIQH edge combinations. Beyond this universality present in all the different topological (bIQH-edge) and nontopological (wire) setups, we also discover a number of features distinguishing the topological bIQH edges, which include a current imbalance to chemical potential bias ratio that is quantized despite the lack of conductance quantization in the bIQH edges themselves. The predicted phenomena require only initial states to be thermal and thus are well suited for tests with ultracold bosons forming wires and bIQH states. For the latter, we highlight a potential realization based on single component bosons in the recently observed Harper-Hofstadter band structure.

  1. Path-memory induced quantization of classical orbits

    PubMed Central

    Fort, Emmanuel; Eddi, Antonin; Boudaoud, Arezki; Moukhtar, Julien; Couder, Yves

    2010-01-01

    A droplet bouncing on a liquid bath can self-propel due to its interaction with the waves it generates. The resulting “walker” is a dynamical association where, at a macroscopic scale, a particle (the droplet) is driven by a pilot-wave field. A specificity of this system is that the wave field itself results from the superposition of the waves generated at the points of space recently visited by the particle. It thus contains a memory of the past trajectory of the particle. Here, we investigate the response of this object to forces orthogonal to its motion. We find that the resulting closed orbits present a spontaneous quantization. This is observed only when the memory of the system is long enough for the particle to interact with the wave sources distributed along the whole orbit. An additional force then limits the possible orbits to a discrete set. The wave-sustained path memory is thus demonstrated to generate a quantization of angular momentum. Because a quantum-like uncertainty was also observed recently in these systems, the nonlocality generated by path memory opens new perspectives.

  2. Efficient, Edge-Aware, Combined Color Quantization and Dithering.

    PubMed

    Huang, Hao-Zhi; Xu, Kun; Martin, Ralph R; Huang, Fei-Yue; Hu, Shi-Min

    2016-03-01

In this paper, we present a novel algorithm to simultaneously accomplish color quantization and dithering of images. This is achieved by minimizing a perception-based cost function, which considers pixel-wise differences between filtered versions of the quantized image and the input image. We use edge-aware filters in defining the cost function to avoid mixing colors on the opposite sides of an edge. The importance of each pixel is weighted according to its saliency. To rapidly minimize the cost function, we use a modified multi-scale iterative conditional mode (ICM) algorithm, which updates one pixel at a time while keeping other pixels unchanged. As ICM is a local method, careful initialization is required to prevent termination at a local minimum far from the global one. To address this problem, we initialize ICM with a palette generated by a modified median-cut method. Compared with previous approaches, our method produces high-quality results with fewer visual artifacts while requiring significantly less computational effort. PMID:26731765
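The ICM-style refinement described above can be sketched in miniature. This is a simplified stand-in, not the paper's algorithm: the cost below is plain squared palette error plus a neighborhood-disagreement penalty, in place of the saliency-weighted, edge-aware filtered cost, and `lam` is an illustrative smoothing weight.

```python
import numpy as np

def icm_quantize(img, palette, lam=0.5, iters=5):
    """Assign each pixel a palette index, then refine with ICM-style
    single-pixel updates: each pixel takes the index minimizing its
    local cost while all other labels are held fixed."""
    h, w, _ = img.shape
    # initial labels: nearest palette color per pixel
    d = ((img[:, :, None, :] - palette[None, None, :, :]) ** 2).sum(-1)
    labels = d.argmin(-1)
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_c = labels[y, x], np.inf
                for k in range(len(palette)):
                    # color fidelity term
                    c = ((img[y, x] - palette[k]) ** 2).sum()
                    # smoothness term over the 4-neighborhood
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            c += lam * (labels[ny, nx] != k)
                    if c < best_c:
                        best, best_c = k, c
                labels[y, x] = best
    return labels
```

On a tiny two-tone image with a matching two-color palette, the labels recover the two flat regions.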

  3. Optical evidence for quantization in transparent amorphous oxide semiconductor superlattice

    NASA Astrophysics Data System (ADS)

    Abe, Katsumi; Nomura, Kenji; Kamiya, Toshio; Hosono, Hideo

    2012-08-01

We fabricated transparent amorphous oxide semiconductor superlattices composed of In-Ga-Zn-O (a-IGZO) well layers and Ga2O3 (a-Ga2O3) barrier layers, and investigated their optical absorption properties to examine energy quantization in the a-IGZO well layer. The Tauc gap of a-IGZO well layers monotonically increases with decreasing well thickness at ≤5 nm. The thickness dependence of the Tauc gap is quantitatively explained by a Kronig-Penney model employing a conduction band offset of 1.2 eV between the a-IGZO and the a-Ga2O3, and the effective masses of 0.35m0 for the a-IGZO well layer and 0.5m0 for the a-Ga2O3 barrier layer, where m0 is the electron rest mass. This result demonstrates the quantization in the a-IGZO well layer. The phase relaxation length of the a-IGZO is estimated to be larger than 3.5 nm.

  4. Optimal transform coding in the presence of quantization noise.

    PubMed

    Diamantaras, K I; Strintzis, M G

    1999-01-01

The optimal linear Karhunen-Loeve transform (KLT) attains the minimum reconstruction error for a fixed number of transform coefficients assuming that these coefficients do not contain noise. In any real coding system, however, the representation of the coefficients using a finite number of bits requires the presence of quantizers. We formulate the optimal linear transform using a data model that incorporates the quantization noise. Our solution does not correspond to an orthogonal transform and in fact, it achieves a smaller mean squared error (MSE) compared to the KLT, in the noisy case. Like the KLT, our solution depends on the statistics of the input signal, but it also depends on the bit-rate used for each coefficient. Especially for images, based on our optimality theory, we propose a simple modification of the discrete cosine transform (DCT). Our coding experiments show a peak signal-to-noise ratio (PSNR) performance improvement over JPEG of the order of 0.2 dB with an overhead less than 0.01 b/pixel. PMID:18267426
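The baseline the authors improve on can be sketched: a KLT followed by uniform quantization of the coefficients, where reconstruction MSE falls as the per-coefficient bit-rate grows. The quantizer design and source here are illustrative, not the paper's optimal noise-aware transform.

```python
import numpy as np

rng = np.random.default_rng(0)

def klt(x):
    """KLT basis: eigenvectors of the data covariance,
    sorted by decreasing eigenvalue."""
    cov = np.cov(x, rowvar=False)
    w, v = np.linalg.eigh(cov)
    return v[:, ::-1]

def uniform_q(c, bits):
    """Per-coefficient uniform quantizer over each column's range."""
    levels = 2 ** bits
    lo, hi = c.min(0), c.max(0)
    step = (hi - lo) / levels
    idx = np.clip(((c - lo) / step).astype(int), 0, levels - 1)
    return lo + (idx + 0.5) * step

# correlated 8-D synthetic source
a = rng.normal(size=(8, 8))
x = rng.normal(size=(4000, 8)) @ a
basis = klt(x)
coeffs = x @ basis
mse = {b: np.mean((uniform_q(coeffs, b) @ basis.T - x) ** 2) for b in (2, 4, 6)}
print(mse)  # more bits per coefficient -> smaller reconstruction error
```

The paper's point is that once this quantization noise is in the model, the MSE-optimal transform is no longer the orthogonal KLT.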

  5. Topos quantum theory on quantization-induced sheaves

    SciTech Connect

    Nakayama, Kunji

    2014-10-15

In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have “coarse-graining” effects on the spaces of the presheaf-based ingredients; many different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.

  6. Genesis of quantization of matter and radiation field

    NASA Astrophysics Data System (ADS)

    de la Peña, Luis; Cetto, Ana María.

    2015-09-01

Are we to accept quantization as a fundamental property of nature, the origin of which does not require or admit further investigation? To get an insight into this question we consider atomic systems as open systems, since they are by necessity in contact with the electromagnetic radiation field. This includes not only photonic radiation, but, more importantly for our purposes, the random zero-point or nonthermal radiation that pervades the Universe. The Heisenberg inequalities, atomic stability and the existence of discrete solutions are explained as a result of the permanent action of this field upon matter and the balance between mean absorbed and emitted powers in the equilibrium regime. A detailed study carried out over the years has led to the usual quantum-mechanical formalism as a powerful and revealing statistical description of the behavior of matter in the radiationless approximation, as well as to the radiative corrections of nonrelativistic QED. The theory presented thus gives a response to the question posed above, within a local, realist and objective framework: quantization appears as an emergent phenomenon due to the matter-field interaction.

  7. A Variant of the Mukai Pairing via Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Ramadoss, Ajay C.

    2012-06-01

Let X be a smooth projective complex variety. The Hochschild homology HH•(X) of X is an important invariant of X, which is isomorphic to the Hodge cohomology of X via the Hochschild-Kostant-Rosenberg isomorphism. On HH•(X), one has the Mukai pairing constructed by Caldararu. An explicit formula for the Mukai pairing at the level of Hodge cohomology was proven by the author in an earlier work (following ideas of Markarian). This formula implies a similar explicit formula for a closely related variant of the Mukai pairing on HH•(X). The latter pairing on HH•(X) is intimately linked to the study of Fourier-Mukai transforms of complex projective varieties. We give a new method to prove a formula computing the aforementioned variant of Caldararu's Mukai pairing. Our method is based on some important results in the area of deformation quantization. In particular, we use part of the work of Kashiwara and Schapira on Deformation Quantization modules together with an algebraic index theorem of Bressler, Nest and Tsygan. Our new method explicitly shows that the "Noncommutative Riemann-Roch" implies the classical Riemann-Roch. Further, it is hoped that our method would be useful for generalization to settings involving certain singular varieties.

  8. Minimally destructive Doppler measurement of a quantized, superfluid flow

    NASA Astrophysics Data System (ADS)

    Anderson, Neil; Kumar, Avinash; Eckel, Stephen; Stringari, Sandro; Campbell, Gretchen

    2016-05-01

Ring shaped Bose-Einstein condensates are of interest because they support the existence of quantized, persistent currents. These currents arise because in a ring trap, the wavefunction of the condensate must be single valued, and thus the azimuthal velocity is quantized. Previously, these persistent current states have only been measured in a destructive fashion via either interference with a phase reference or using the size of a central vortex-like structure that appears in time of flight. Here, we demonstrate a minimally destructive, in-situ measurement of the winding number of a ring shaped BEC. We excite a standing wave of phonon modes in the ring BEC using a perturbation. If the condensate is in a nonzero circulation state, then the frequencies of these phonon modes are Doppler shifted, causing the standing wave to precess about the ring. From the direction and velocity of this precession, we can infer the winding number of the flow. For certain parameters, this technique can detect individual winding numbers with approximately 90% fidelity.

  9. HMM-Based Voice Conversion Using Quantized F0 Context

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Ota, Yuhei; Kobayashi, Takao

    We propose a segment-based voice conversion technique using hidden Markov model (HMM)-based speech synthesis with nonparallel training data. In the proposed technique, the phoneme information with durations and a quantized F0 contour are extracted from the input speech of a source speaker, and are transmitted to a synthesis part. In the synthesis part, the quantized F0 symbols are used as prosodic context. A phonetically and prosodically context-dependent label sequence is generated from the transmitted phoneme and the F0 symbols. Then, converted speech is generated from the label sequence with durations using the target speaker's pre-trained context-dependent HMMs. In the model training, the models of the source and target speakers can be trained separately, hence there is no need to prepare parallel speech data of the source and target speakers. Objective and subjective experimental results show that the segment-based voice conversion with phonetic and prosodic contexts works effectively even if the parallel speech data is not available.
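The F0-quantization step at the heart of the scheme can be sketched as follows. The symbol count, frequency range, and the use of a uniform log-F0 grid are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def f0_symbols(f0, n_sym=8, fmin=80.0, fmax=400.0):
    """Map per-segment F0 means (Hz) to discrete prosodic-context
    symbols by uniform quantization on a log-F0 scale; the symbols
    can then be attached to phoneme labels as prosodic context."""
    lo, hi = np.log(fmin), np.log(fmax)
    z = (np.log(np.clip(f0, fmin, fmax)) - lo) / (hi - lo)
    return np.minimum((z * n_sym).astype(int), n_sym - 1)

print(f0_symbols(np.array([90.0, 120.0, 200.0, 350.0])))
```

Only the symbol indices (plus phoneme identities and durations) need to be transmitted to the synthesis side, which is what makes the scheme very low-rate.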

  10. A short course on quantum mechanics and methods of quantization

    NASA Astrophysics Data System (ADS)

    Ercolessi, Elisa

    2015-07-01

These notes collect the lectures given by the author to the "XXIII International Workshop on Geometry and Physics" held in Granada (Spain) in September 2014. The first part of this paper aims at introducing a mathematically oriented reader to the realm of Quantum Mechanics (QM) and then to present the geometric structures that underlie the mathematical formalism of QM which, contrary to what is usually done in Classical Mechanics (CM), are usually not taught in introductory courses. The mathematics related to Hilbert spaces and Differential Geometry is assumed to be known by the reader. In the second part, we concentrate on some quantization procedures that are founded on the geometric structures of QM — as we have described them in the first part — and represent the ones that are more operatively used in modern theoretical physics. We will discuss first the so-called Coherent State Approach which, mainly complemented by the "Feynman Path Integral Technique", is the method most widely used in quantum field theory. Finally, we will describe the "Weyl Quantization Approach" which is at the origin of modern tomographic techniques, originally used in optics and now in quantum information theory.

  11. Pisot q-coherent states quantization of the harmonic oscillator

    SciTech Connect

    Gazeau, J.P.; Olmo, M.A. del

    2013-03-15

We revisit the quantized version of the harmonic oscillator obtained through a q-dependent family of coherent states. For each q, 0 < q < 1, … Highlights: • Quantized version of the harmonic oscillator (HO) through a q-family of coherent states. • For q, 0 < q < 1, …

  12. q-bosons and the q-analogue quantized field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1995-01-01

The q-analogue coherent states are used to identify physical signatures for the presence of a q-analogue quantized radiation field in the q-CS classical limits where |z| is large. In this quantum-optics-like limit, the fractional uncertainties of most physical quantities (momentum, position, amplitude, phase) which characterize the quantum field are O(1); they only vanish as O(1/|z|) when q = 1. However, for the number operator N and the N-Hamiltonian for a free q-boson gas, H_N = ħω(N + 1/2), the fractional uncertainties do still approach zero. A signature for q-boson counting statistics is that (ΔN)²/⟨N⟩ → 0 as |z| → ∞. Except for its O(1) fractional uncertainty, the q-generalization of the Hermitian phase operator of Pegg and Barnett, φ_q, still exhibits normal classical behavior. The standard number-phase uncertainty relation, ΔN Δφ_q ≥ 1/2, and the approximate commutation relation, [N, φ_q] = i, still hold for the single-mode q-analogue quantized field. So, N and φ_q are almost canonically conjugate operators in the q-CS classical limit. The q-analogue CS's minimize this uncertainty relation for moderate |z|².

  13. Size quantization of Dirac fermions in graphene constrictions.

    PubMed

    Terrés, B; Chizhova, L A; Libisch, F; Peiro, J; Jörger, D; Engels, S; Girschik, A; Watanabe, K; Taniguchi, T; Rotkin, S V; Burgdörfer, J; Stampfer, C

    2016-01-01

Quantum point contacts are cornerstones of mesoscopic physics and central building blocks for quantum electronics. Although the Fermi wavelength in high-quality bulk graphene can be tuned up to hundreds of nanometres, the observation of quantum confinement of Dirac electrons in nanostructured graphene has proven surprisingly challenging. Here we show ballistic transport and quantized conductance of size-confined Dirac fermions in lithographically defined graphene constrictions. At high carrier densities, the observed conductance agrees excellently with the Landauer theory of ballistic transport without any adjustable parameter. Experimental data and simulations for the evolution of the conductance with magnetic field unambiguously confirm the identification of size quantization in the constriction. Close to the charge neutrality point, bias voltage spectroscopy reveals a renormalized Fermi velocity of ∼1.5 × 10⁶ m s⁻¹ in our constrictions. Moreover, at low carrier density transport measurements allow probing the density of localized states at edges, thus offering a unique handle on edge physics in graphene devices. PMID:27198961

  14. SWKB and proper quantization conditions for translationally shape-invariant potentials

    NASA Astrophysics Data System (ADS)

    Mahdi, Kamal; Kasri, Y.; Grandati, Y.; Bérard, A.

    2016-08-01

    Using a recently proposed classification for the primary translationally shape-invariant potentials, we show that the exact quantization rule formulated by Ma and Xu is equivalent to the supersymmetric JWKB quantization condition. The energy levels for the two considered categories of shape-invariant potentials are also derived.
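The exactness of the SWKB condition for a shape-invariant potential can be checked numerically for the harmonic oscillator. In natural units (m = ħ = ω = 1) the superpotential is W(x) = x/√2, and the SWKB rule ∫√(2(Eₙ − W²)) dx = nπ between the turning points gives Eₙ = n, with energies measured from the ground state.

```python
import numpy as np
from scipy.integrate import quad

def swkb_integral(e):
    """SWKB action integral for the harmonic oscillator with
    W(x) = x / sqrt(2), in units m = hbar = omega = 1."""
    x0 = np.sqrt(2 * e)  # turning points where W(x)^2 = e
    f = lambda x: np.sqrt(2 * max(e - 0.5 * x * x, 0.0))
    val, _ = quad(f, -x0, x0)
    return val

for n in (1, 2, 3):
    print(n, swkb_integral(n) / np.pi)  # ratio equals n
```

The integral evaluates analytically to πE, so setting it equal to nπ recovers the exact spectrum, which is the hallmark of SWKB on translationally shape-invariant potentials.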

  15. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  16. Quantized massive collective modes and massive spin fluctuations in high-Tc cuprates

    NASA Astrophysics Data System (ADS)

    Kanazawa, I.; Sasaki, T.

    2015-10-01

    We have analyzed angle-resolved photoemission spectra of the single- and double-layered Bi-family high-Tc superconductors by using quantized massive gauge fields, which might contain effects of spin fluctuations, charge fluctuations, and phonons. It is suggested strongly that the quantized massive gauge fields might be mediating Cooper pairing in high-Tc cuprates.

  17. On the Stochastic Quantization Method: Characteristics and Applications to Singular Systems

    NASA Technical Reports Server (NTRS)

    Kanenaga, Masahiko; Namiki, Mikio

    1996-01-01

    Introducing the generalized Langevin equation, we extend the stochastic quantization method so as to deal with singular dynamical systems beyond the ordinary territory of quantum mechanics. We also show how the uncertainty relation is built up to the quantum mechanical limit with respect to fictitious time, irrespective of its initial value, within the framework of the usual stochastic quantization method.

  18. Number-Phase Quantization Scheme and the Quantum Effects of a Mesoscopic Electric Circuit at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Wang, Shuai

    2009-05-01

For the L-C circuit, a new quantization scheme is proposed in the context of number-phase quantization. In this scheme, the number n of the electric charge q (q = en) is quantized as the charge number operator and the phase difference θ across the capacitor is quantized as the phase operator. Based on the number-phase quantization scheme and thermo field dynamics (TFD), the quantum fluctuations of the charge number and phase difference of a mesoscopic L-C circuit in the thermal vacuum state, the thermal coherent state and the thermal squeezed state have been studied. It is shown that these quantum fluctuations of the charge number and phase difference are related not only to the circuit parameters and the squeezing parameter, but also to the temperature in these quantum states. The number-phase quantization scheme thus proves useful for quantizing mesoscopic electric circuits and analyzing their quantum effects.

  19. Uncertainty in adaptive capacity

    NASA Astrophysics Data System (ADS)

    Adger, W. Neil; Vincent, Katharine

    2005-03-01

The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. To cite this article: W.N. Adger, K. Vincent, C. R. Geoscience 337 (2005).

  20. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness. PMID:25576579
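The logarithmic quantizer in the measurement model can be sketched as follows. The sector bound |q(x) − x| ≤ δ|x| with δ = (1 − ρ)/(1 + ρ) is the standard property that justifies treating quantization error as a norm-bounded uncertainty; the parameters ρ and u0 here are illustrative.

```python
import numpy as np

def log_quantize(x, rho=0.8, u0=1.0):
    """Logarithmic quantizer with levels +/- u0 * rho**i: maps x to
    the level u_i such that x lies in (u_i/(1+delta), u_i/(1-delta)],
    where delta = (1 - rho) / (1 + rho)."""
    if x == 0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    s, a = np.sign(x), abs(x)
    i = np.floor(np.log(a * (1 - delta) / u0) / np.log(rho))
    return s * u0 * rho ** i

print(log_quantize(3.7), log_quantize(-2.0))
```

Because the error is sector-bounded relative to the signal itself (fine near zero, coarse for large values), the filter can absorb it into the same norm-bounded uncertainty description used for the linearization error.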

  1. Flat minimal quantizations of Stäckel systems and quantum separability

    SciTech Connect

    Błaszak, Maciej; Domański, Ziemowit; Silindir, Burcu

    2014-12-15

In this paper, we consider the problem of quantization of classical Stäckel systems and the problem of separability of related quantum Hamiltonians. First, using the concept of Stäckel transform, natural Hamiltonian systems from a given Riemann space are expressed by some flat coordinates of related Euclidean configuration space. Then, the so-called flat minimal quantization procedure is applied in order to construct an appropriate Hermitian operator in the respective Hilbert space. Finally, we distinguish a class of Stäckel systems which remains separable after any of the admissible flat minimal quantizations. - Highlights: • Using Stäckel transform, separable Hamiltonians are expressed by flat coordinates. • The concept of admissible flat minimal quantizations is developed. • The class of Stäckel systems separable after minimal flat quantization is established. • Separability of related stationary Schrödinger equations is presented in explicit form.

  2. On the macroscopic quantization in mesoscopic rings and single-electron devices

    NASA Astrophysics Data System (ADS)

    Semenov, Andrew G.

    2016-05-01

In this letter we investigate the phenomenon of macroscopic quantization, considering a particle on a ring interacting with a dissipative bath as an example. We demonstrate that even in the presence of an environment, there is a macroscopically quantized observable which can take only integer values in the zero temperature limit. This fact follows from the conservation of total angular momentum combined with momentum quantization for a bare particle on the ring. The nontrivial point is that the model under consideration, including the notion of the quantized observable, can be mapped onto the Ambegaokar-Eckern-Schön model of the single-electron box (SEB). We evaluate the SEB observable originating from this mapping and reveal new physics which follows from the macroscopic quantization phenomenon and the existence of an additional conservation law. Some generalizations of the obtained results are also presented.

  3. Performance of peaky template matching under additive white Gaussian noise and uniform quantization

    NASA Astrophysics Data System (ADS)

    Horvath, Matthew S.; Rigling, Brian D.

    2015-05-01

Peaky template matching (PTM) is a special case of a general algorithm known as multinomial pattern matching originally developed for automatic target recognition of synthetic aperture radar data. The algorithm is a model-based approach that first quantizes pixel values into Nq = 2 discrete values, yielding generative Beta-Bernoulli models as class-conditional templates. Here, we consider the case of classification of target chips in AWGN and develop approximations to image-to-template classification performance as a function of the noise power. We focus specifically on the case of a "uniform quantization" scheme, where a fixed number of the largest pixels are quantized high as opposed to using a fixed threshold. This quantization method reduces sensitivity to the scaling of pixel intensities, and quantization in general reduces sensitivity to various nuisance parameters that are difficult to account for a priori. Our performance expressions are verified using forward-looking infrared imagery from the Army Research Laboratory Comanche dataset.
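The two ingredients named above (top-k binarization and a Beta-Bernoulli template) can be sketched as follows. Chip sizes, k, and the Beta hyperparameters are illustrative, not the paper's settings.

```python
import numpy as np

def quantize_top_k(chip, k):
    """'Uniform quantization': set the k largest pixels to 1 and the
    rest to 0, rather than thresholding at a fixed level, which makes
    the code insensitive to global intensity scaling."""
    flat = chip.ravel()
    out = np.zeros_like(flat, dtype=int)
    out[np.argsort(flat)[-k:]] = 1
    return out.reshape(chip.shape)

def beta_bernoulli_template(chips, alpha=1.0, beta=1.0):
    """Per-pixel posterior-mean 'on' probability from binarized
    training chips: a generative Bernoulli template under a
    Beta(alpha, beta) prior."""
    n = len(chips)
    return (np.sum(chips, axis=0) + alpha) / (n + alpha + beta)

rng = np.random.default_rng(1)
chip = rng.normal(size=(8, 8))
bits = quantize_top_k(chip, 10)
template = beta_bernoulli_template(
    [bits, quantize_top_k(rng.normal(size=(8, 8)), 10)])
```

Multiplying a chip by any positive constant leaves its binarization, and hence its template likelihood, unchanged, which is the scaling invariance the abstract refers to.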

  4. Gammaretroviral vectors: biology, technology and application.

    PubMed

    Maetzig, Tobias; Galla, Melanie; Baum, Christopher; Schambach, Axel

    2011-06-01

Retroviruses are evolutionary optimized gene carriers that have naturally adapted to their hosts to efficiently deliver their nucleic acids into the target cell chromatin, thereby overcoming natural cellular barriers. Here we will review, starting with a deeper look into retroviral biology, how Murine Leukemia Virus (MLV), a simple gammaretrovirus, can be converted into an efficient vehicle of genetic therapeutics. Furthermore, we will describe how more rational vector backbones can be designed and how these so-called self-inactivating vectors can be pseudotyped and produced. Finally, we will provide an overview on existing clinical trials and how biosafety can be improved. PMID:21994751

  5. Symbolic computer vector analysis

    NASA Technical Reports Server (NTRS)

    Stoutemyer, D. R.

    1977-01-01

    A MACSYMA program is described which performs symbolic vector algebra and vector calculus. The program can combine and simplify symbolic expressions including dot products and cross products, together with the gradient, divergence, curl, and Laplacian operators. The distribution of these operators over sums or products is under user control, as are various other expansions, including expansion into components in any specific orthogonal coordinate system. There is also a capability for deriving the scalar or vector potential of a vector field. Examples include derivation of the partial differential equations describing fluid flow and magnetohydrodynamics, for 12 different classic orthogonal curvilinear coordinate systems.
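As a present-day analogue of the MACSYMA capabilities described above, SymPy's vector module performs the same symbolic operations. The sketch below builds symbolic fields, applies gradient, divergence, and curl, and verifies two classical identities; the specific fields are arbitrary examples.

```python
from sympy import simplify
from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

# a Cartesian coordinate system with base vectors N.i, N.j, N.k
N = CoordSys3D('N')
f = N.x**2 * N.y + N.z                            # scalar field
F = N.x*N.y*N.z*N.i + N.y**2*N.j + N.x*N.z*N.k    # vector field

print(gradient(f))                     # grad f
print(curl(gradient(f)))               # curl of a gradient: zero vector
print(simplify(divergence(curl(F))))   # div of a curl: 0
```

SymPy also supports other orthogonal curvilinear systems (spherical, cylindrical) via `CoordSys3D` transformations, echoing the 12 coordinate systems handled by the original program.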

  6. Volume quantization of the mouse cerebellum by semiautomatic 3D segmentation of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Sijbers, Jan; Van der Linden, Anne-Marie; Scheunders, Paul; Van Audekerke, Johan; Van Dyck, Dirk; Raman, Erik R.

    1996-04-01

The aim of this work is the development of a non-invasive technique for efficient and accurate volume quantization of the cerebellum of mice. This enables an in-vivo study of the development of the cerebellum in order to detect possible alterations in cerebellum volume of transgenic mice. We concentrate on a semi-automatic segmentation procedure to extract the cerebellum from 3D magnetic resonance data. The proposed technique uses a 3D variant of Vincent and Soille's immersion-based watershed algorithm, applied to the gradient magnitude of the MR data. The algorithm results in a partitioning of the data into volume primitives. The known drawback of the watershed algorithm, over-segmentation, is strongly reduced by a priori application of an adaptive anisotropic diffusion filter on the gradient magnitude data. Over-segmentation is further reduced a posteriori, where necessary, by merging volume primitives based on the minimum description length principle. The outcome of the preceding image processing step is presented to the user for manual segmentation. The first slice containing the object of interest is quickly segmented by the user through selection of basic image regions; the subsequent slices are then segmented automatically. The segmentation results are manually corrected where necessary. The technique was tested on phantom objects, where segmentation errors of less than 2% were observed. Three-dimensional reconstructions of the segmented data are shown for the mouse cerebellum and the mouse brains in toto.

  7. Mode conversion from quantized to propagating spin waves in a rhombic antidot lattice supporting spin wave nanochannels

    NASA Astrophysics Data System (ADS)

    Tacchi, S.; Botters, B.; Madami, M.; Kłos, J. W.; Sokolovskyy, M. L.; Krawczyk, M.; Gubbiotti, G.; Carlotti, G.; Adeyeye, A. O.; Neusser, S.; Grundler, D.

    2012-07-01

We report spin wave excitations in a nanopatterned antidot lattice fabricated from a 30-nm thick Ni80Fe20 film. The 250-nm-wide circular holes are arranged in a rhombic unit cell with a lattice constant of 400 nm. By Brillouin light scattering, we find that quantized spin wave modes transform to propagating ones and vice versa by changing the in-plane orientation of the applied magnetic field H by 30°. Spin waves of either negative or positive group velocity are found. In the latter case, they propagate in narrow channels exhibiting a width of below 100 nm. We use the plane wave method to calculate the spin wave dispersions for the two relevant orientations of H. The theory allows us to explain the wave-vector-dependent characteristics of the prominent modes. Allowed minibands are formed for selected modes only for specific orientations of H and wave vector. The results are important for applications such as spin wave filters and interconnected waveguides in the emerging field of magnonics where the control of spin wave propagation on the nanoscale is key.

  8. Canonical Groups for Quantization on the Two-Dimensional Sphere and One-Dimensional Complex Projective Space

    NASA Astrophysics Data System (ADS)

    A, Sumadi A. H.; H, Zainuddin

    2014-11-01

Using Isham's group-theoretic quantization scheme, we construct the canonical groups of the systems on the two-dimensional sphere and one-dimensional complex projective space, which are homeomorphic. In the first case, we take SO(3) as the natural canonical Lie group of rotations of the two-sphere and find all the possible Hamiltonian vector fields, followed by verifying the commutator and Poisson bracket algebra correspondences with the Lie algebra of the group. In the second case, the same technique is applied to define the Lie group, in this case SU(2), of CP1. We show that one can simply use a coordinate transformation from S2 to CP1 to obtain all the Hamiltonian vector fields of CP1. We explicitly show that the Lie algebra structures of both canonical groups are locally homomorphic. On the other hand, globally their corresponding canonical groups act on different geometries, the latter of which is almost complex. Thus the canonical group for CP1 is the double-covering group of SO(3), namely SU(2). The relevance of the proposed formalism is to understand the idea of CP1 as the space where the qubit lives, known as the Bloch sphere.

  9. Adaptive compression of the ambulatory electrocardiogram.

    PubMed

    Hamilton, P S

    1993-01-01

    Previous work with the MIT/BIH arrhythmia database, on analog tape, investigated compression of ambulatory ECG data by average beat subtraction, residual differencing, and Huffman coding of the residuals. It showed that with a quantization level of 35 μV and a sample rate of 100 samples per second, ECG data could be stored at an average rate of 174 bits per second (bps); because of the variation in ECG signals, however, data rates for individual records ranged from 144 bps to 230 bps. In a practical storage system, it is desirable to fix the maximum data rate and store data with minimum distortion. For this study the previous compression algorithm was modified to adapt its quantization level to different ECG signal conditions. Two adaptation strategies were investigated. Both adapt the quantization-step size according to the number of bytes required for storing the coded signal, beat arrival times, and beat classifications. The new compression algorithm was tested with data from the MIT/BIH database on CD-ROM. With the more successful of the two strategies, the adaptive compression algorithm stored MIT/BIH records with a difference of only 0.8 bps between the records with the highest and lowest data rates. The average data rate for the entire database was 193.3 bps. Signal-to-compression-noise ratios varied from record to record and over time within a given record, with per-record averages ranging from 26.82 to 532.83. PMID:8418967
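
    The adaptation loop described above can be sketched as follows. This is a hedged illustration with hypothetical function names and a simplified byte-budget policy, not Hamilton's actual algorithm (which also conditions the step size on beat arrival times and beat classifications, and Huffman codes the residual):

    ```python
    import numpy as np

    def compress_beat(beat, avg_beat, step):
        """Average-beat subtraction followed by uniform quantization of the
        residual (the integer codes would then be Huffman coded; omitted)."""
        return np.round((beat - avg_beat) / step).astype(int)

    def reconstruct_beat(codes, avg_beat, step):
        """Decoder side: rescale the residual and add the average beat back.
        Reconstruction error is bounded by step/2 per sample."""
        return avg_beat + codes * step

    def adapt_step(step, coded_bytes, budget_bytes, factor=1.25,
                   min_step=5.0, max_step=200.0):
        """One possible policy: widen the quantizer when the last beat
        overran its byte budget, narrow it when there was slack."""
        step = step * factor if coded_bytes > budget_bytes else step / factor
        return float(min(max(step, min_step), max_step))
    ```

    Because both encoder and decoder see the same coded byte counts, they can track the step size in lockstep without transmitting it explicitly, which is one way such a scheme can hold the data rate near a fixed target.
    
    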

  10. Adaptive MPEG-2 video data hiding scheme

    NASA Astrophysics Data System (ADS)

    Sarkar, Anindya; Madhow, Upamanyu; Chandrasekaran, Shivkumar; Manjunath, Bangalore S.

    2007-02-01

    We have investigated adaptive mechanisms for high-volume transform-domain data hiding in MPEG-2 video which can be tuned to sustain varying levels of compression attacks. The data is hidden in the uncompressed domain by scalar quantization index modulation (QIM) on a selected set of low-frequency discrete cosine transform (DCT) coefficients. We propose an adaptive hiding scheme where the embedding rate is varied according to the type of frame and the reference quantization parameter (decided according to the MPEG-2 rate control scheme) for that frame. For a 1.5 Mbps video at 25 frames/sec, we are able to embed almost 7500 bits/sec. The adaptive scheme also hides 20% more data and incurs significantly fewer frame errors (frames for which the embedded data is not fully recovered) than the non-adaptive scheme. Our embedding scheme incurs insertions and deletions at the decoder, which may cause desynchronization and decoding failure. This problem is solved by the use of powerful turbo-like codes and erasures at the encoder. The channel capacity estimate gives an idea of the minimum code redundancy factor required for reliable decoding of hidden data transmitted through the channel. To that end, we have modeled the MPEG-2 video channel using the transition probability matrices given by the data hiding procedure, from which we compute the (hiding-scheme-dependent) channel capacity.
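
    The scalar QIM step mentioned above can be sketched generically. This is a textbook binary-QIM illustration with hypothetical function names, not the authors' implementation: each bit selects one of two interleaved uniform quantizers, and the decoder recovers the bit by finding which lattice the received coefficient is closer to.

    ```python
    import numpy as np

    def qim_embed(coeffs, bits, delta):
        """Embed one bit per coefficient via scalar QIM.

        Bit 0 quantizes onto the lattice delta*Z; bit 1 onto delta*Z + delta/2.
        Embedding distortion is at most delta/2 per coefficient.
        """
        coeffs = np.asarray(coeffs, dtype=float)
        offsets = np.asarray(bits) * (delta / 2.0)
        return np.round((coeffs - offsets) / delta) * delta + offsets

    def qim_extract(coeffs, delta):
        """Recover embedded bits by picking the nearer of the two lattices."""
        coeffs = np.asarray(coeffs, dtype=float)
        d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
        shifted = coeffs - delta / 2.0
        d1 = np.abs(shifted - np.round(shifted / delta) * delta)
        return (d1 < d0).astype(int)
    ```

    The lattice spacing `delta` sets the robustness/distortion trade-off: extraction survives any perturbation smaller than delta/4, which is why the paper can tune the embedding to the expected compression attack strength.
    
    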

  11. Vector theories in cosmology

    SciTech Connect

    Esposito-Farese, Gilles; Pitrou, Cyril; Uzan, Jean-Philippe

    2010-03-15

    This article provides a general study of the Hamiltonian stability and the hyperbolicity of vector field models involving both a general function of the Faraday tensor and its dual, f(F², FF̃), as well as a Proca potential for the vector field, V(A²). In particular it is demonstrated that theories involving only f(F²) do not satisfy the hyperbolicity conditions. It is then shown that in this class of models, the cosmological dynamics always dilutes the vector field. In the case of a nonminimal coupling to gravity, it is established that theories involving Rf(A²) or Rf(F²) are generically pathologic. To finish, we exhibit a model where the vector field is not diluted during the cosmological evolution, because of a nonminimal vector field-curvature coupling which maintains second-order field equations. The relevance of such models for cosmology is discussed.

  12. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.
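
    The converter's core operation is "drawing a full graphic image from vector parameters" into the image buffer. A minimal software sketch of that inner step is Bresenham's classic integer line algorithm; the patent does not name the algorithm its hardware uses, so this is illustrative only:

    ```python
    def draw_vector(buf, x0, y0, x1, y1):
        """Rasterize one vector into a 2D image buffer (list of rows)
        using Bresenham's integer line algorithm."""
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy  # accumulated error term, all integer arithmetic
        while True:
            buf[y0][x0] = 1  # set the pixel
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return buf
    ```

    The appeal for a hardware implementation is that the loop needs only adds, shifts, and comparisons per pixel, which is how a dedicated vector generator can outrun a general-purpose processor feeding raster data over an I/O channel.
    
    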

  13. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.

  14. A deformation quantization theory for noncommutative quantum mechanics

    SciTech Connect

    Costa Dias, Nuno; Prata, Joao Nuno; Gosson, Maurice de; Luef, Franz

    2010-07-15

    We show that the deformation quantization of noncommutative quantum mechanics previously considered by Dias and Prata ['Weyl-Wigner formulation of noncommutative quantum mechanics', J. Math. Phys. 49, 072101 (2008)] and Bastos, Dias, and Prata ['Wigner measures in non-commutative quantum mechanics', e-print arXiv:math-ph/0907.4438v1; Commun. Math. Phys. (to appear)] can be expressed as a Weyl calculus on a double phase space. We study the properties of the star-product thus defined and prove a spectral theorem for the star-genvalue equation using an extension of the methods recently initiated by de Gosson and Luef ['A new approach to the *-genvalue equation', Lett. Math. Phys. 85, 173-183 (2008)].

  15. Theory of the Knight Shift and Flux Quantization in Superconductors

    DOE R&D Accomplishments Database

    Cooper, L. N.; Lee, H. J.; Schwartz, B. B.; Silvert, W.

    1962-05-01

    Consequences of a generalization of the theory of superconductivity that yields a finite Knight shift are presented. In this theory, by introducing an electron-electron interaction that is not spatially invariant, the pairing of electrons with varying total momentum is made possible. An expression for χs (the spin susceptibility in the superconducting state) is derived. In general χs is smaller than χn, but is not necessarily zero. The precise magnitude of χs will vary from sample to sample and will depend on the nonuniformity of the samples. There should be no marked size dependence and no marked dependence on the strength of the magnetic field; this is in accord with observation. The basic superconducting properties are retained, but there are modifications in the various electromagnetic and thermal properties since the electrons paired are not time-reversed partners. Consequences of this generalized theory for flux quantization arguments are presented. (auth)

  16. Paul Weiss and the genesis of canonical quantization

    NASA Astrophysics Data System (ADS)

    Rickles, Dean; Blum, Alexander

    2015-12-01

    This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantisation and (albeit more indirectly) quantum gravity. A student of Dirac and Born, he was interned in Canada during the Second World War as an enemy alien and, after his release, never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the `parameter formalism' often attributed to Dirac), though later discarded, was employed (and viewed at the time as an extremely important tool) by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.

  17. Oscillating magnetocaloric effect in size-quantized diamagnetic film

    SciTech Connect

    Alisultanov, Z. Z.

    2014-03-21

    We investigate the oscillating magnetocaloric effect in a size-quantized diamagnetic film in a transverse magnetic field. We obtain an analytical expression for the thermodynamic potential for an arbitrary carrier spectrum. The entropy change is shown to be an oscillating function of the magnetic field and the film thickness. The nature of this effect is the same as that of the de Haas–van Alphen effect. The magnetic part of the entropy has a maximum at some temperature; such behavior of the entropy is not observed in magnetically ordered materials. We discuss the origin of this unusual behavior of the magnetic entropy and compare our results with data obtained for the 2D and 3D cases.

  18. Quantized Water Transport: Ideal Desalination through Graphyne-4 Membrane

    PubMed Central

    Zhu, Chongqin; Li, Hui; Zeng, Xiao Cheng; Wang, E. G.; Meng, Sheng

    2013-01-01

    Graphyne sheet exhibits promising potential for nanoscale desalination to achieve both high water permeability and salt rejection rate. Extensive molecular dynamics simulations on pore-size effects suggest that γ-graphyne-4, with 4 acetylene bonds between two adjacent phenyl rings, has the best performance, with 100% salt rejection and an unprecedented water permeability, to our knowledge, of ~13 L/cm²/day/MPa: 3 orders of magnitude higher than prevailing commercial membranes based on reverse osmosis, and ~10 times higher than the state-of-the-art nanoporous graphene. Strikingly, water permeability across graphyne exhibits unexpected nonlinear dependence on the pore size. This counter-intuitive behavior is attributed to the quantized nature of water flow at the nanoscale, which has wide implications in controlling nanoscale water transport and designing highly effective membranes. PMID:24196437

  19. Casimir effect for a scalar field via Krein quantization

    SciTech Connect

    Pejhan, H.; Tanhayi, M.R.; Takook, M.V.

    2014-02-15

    In this work, we present a rather simple method to study the Casimir effect on a spherical shell for a massless scalar field with Dirichlet boundary conditions by applying the indefinite-metric field (Krein) quantization technique. In this technique, the field operators are constructed from both negative and positive norm states. Since the negative norm states are unphysical, they are used only as a mathematical tool for renormalizing the theory; one can then get rid of them by imposing suitable physical conditions. -- Highlights: • A modification of QFT is considered to address the vacuum energy divergence problem. • The Casimir energy of a spherical shell is calculated through this approach. • It is shown that, in this technique, the theory is automatically regularized.

  20. Semiclassical Quantization of Spinning Quasiparticles in Ballistic Josephson Junctions.

    PubMed

    Konschelle, François; Bergeret, F. Sebastián; Tokatly, Ilya V.

    2016-06-10

    A Josephson junction made of a generic magnetic material sandwiched between two conventional superconductors is studied in the ballistic semiclassical limit. The spectrum of Andreev bound states is obtained from the single-valuedness of a particle-hole spinor over closed orbits generated by electron-hole reflections at the interfaces between the superconducting and normal materials. The semiclassical quantization condition is shown to depend only on the angle mismatch between the initial and final spin directions along such closed trajectories. For the demonstration, an Andreev-Wilson loop in the composite position-particle-hole-spin space is constructed and shown to depend on only two parameters, namely, a magnetic phase shift and a local precession axis for the spin. The details of the Andreev-Wilson loop can be extracted by measuring the spin-resolved density of states. A Josephson junction can thus be viewed as an analog computer of closed-path-ordered exponentials. PMID:27341251