Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
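The mechanism can be illustrated numerically. Below is a minimal Python sketch (not the authors' code; the parameters a, b, tau, and delta are purely illustrative) of an Euler simulation of x'(t) = a·x(t) − b·Q(x(t − τ)), where Q is a mid-tread quantizer with step delta:

```python
import numpy as np

def simulate_quantized_feedback(a=1.0, b=2.0, tau=0.1, delta=0.05,
                                dt=0.001, T=20.0, x0=0.02):
    """Euler simulation of x'(t) = a*x(t) - b*Q(x(t - tau)),
    where Q is a mid-tread quantizer with step size delta."""
    d = int(round(tau / dt))              # delay in samples
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[:d + 1] = x0                        # constant initial history
    for k in range(d, n):
        q = delta * np.round(x[k - d] / delta)   # quantized delayed feedback
        x[k + 1] = x[k] + dt * (a * x[k] - b * q)
    return x

x = simulate_quantized_feedback()
amplitude = np.max(np.abs(x[-5000:]))     # tail oscillation amplitude
```

Inside the quantizer deadzone the feedback vanishes and the origin is repelling, so the solution neither settles at zero nor diverges: it oscillates with an amplitude on the order of the quantization step, consistent with the behavior described above.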
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It produces no false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes, a feature not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing the header information alone, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results demonstrate the effectiveness of image identification based on the new method.
2-Step scalar deadzone quantization for bitplane image coding.
Auli-Llinas, Francesc
2013-12-01
Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the number of coding passes and the number of symbols emitted by the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
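As a concrete reference point, the USDQ baseline mentioned above can be sketched in a few lines of Python (an illustrative sketch of the deadzone quantizer itself, not of the bitplane coder or of the proposed 2SDQ):

```python
def usdq_index(c, step):
    """Uniform scalar deadzone quantization: the zero bin spans
    (-step, step), twice the width of the other bins."""
    sign = 1 if c >= 0 else -1
    return sign * int(abs(c) // step)

def usdq_reconstruct(i, step):
    """Midpoint reconstruction; index 0 decodes to 0 (the deadzone)."""
    if i == 0:
        return 0.0
    sign = 1 if i > 0 else -1
    return sign * (abs(i) + 0.5) * step
```

For example, with step 1.0, a coefficient of 0.9 falls in the deadzone and decodes to 0, while 1.2 maps to index 1 and decodes to 1.5.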
NASA Technical Reports Server (NTRS)
Wrigley, Chris J.; Hancock, Bruce R.; Newton, Kenneth W.; Cunningham, Thomas J.
2013-01-01
Single-slope analog-to-digital converters (ADCs) are particularly useful for on-chip digitization in focal plane arrays (FPAs) because of their inherent monotonicity, relative simplicity, and efficiency for column-parallel applications, but they are comparatively slow. Square-root encoding can allow the number of code values to be reduced without loss of signal-to-noise ratio (SNR) by keeping the quantization noise just below the signal shot noise. This encoding can be implemented directly by using a quadratic ramp. The reduction in the number of code values can substantially increase the quantization speed. However, in an FPA, the fixed pattern noise (FPN) limits the use of small quantization steps at low signal levels. If the zero-point is adjusted so that the lowest column is on scale, the other columns, including those at the center of the distribution, will be pushed up the ramp where the quantization noise is higher. Additionally, the finite frequency response of the ramp buffer amplifier and the comparator distort the shape of the ramp, so that the effective ramp value at the time the comparator trips differs from the intended value, resulting in errors. Allowing increased settling time decreases the quantization speed, while increasing the bandwidth increases the noise. The FPN problem is solved by breaking the ramp into two portions, with some fraction of the available code values allocated to a linear ramp and the remainder to a quadratic ramp. To avoid large transients, both the value and the slope of the linear and quadratic portions should be equal where they join. The span of the linear portion must cover the minimum offset, but not necessarily the maximum, since the fraction of the pixels above the upper limit will still be correctly quantized, albeit with increased quantization noise.
The required linear span, maximum signal, and ratio of quantization noise to shot noise at high signal, along with the continuity requirement, determine the number of code values that must be allocated to each portion. The distortion problem is solved by using a lookup table to convert captured code values back to signal levels. The values in this table will be similar to the intended ramp values, but with a correction for the finite bandwidth effects. Continuous-time comparators are used, and their bandwidth is set below the step rate, which smooths the ramp and reduces the noise. No settling time is needed, as would be the case for clocked comparators, but the low bandwidth enhances the distortion of the non-linear portion. This is corrected by use of a return lookup table, which differs from the one used to generate the ramp. The return lookup table is obtained by calibrating against a stepped precision DC reference. This results in a residual non-linearity well below the quantization noise. This method can also compensate for differential non-linearity (DNL) in the DAC used to generate the ramp. The use of a ramp with a combination of linear and quadratic portions for a single-slope ADC is novel. The number of steps is minimized by keeping the step size just below the photon shot noise. This in turn maximizes the speed of the conversion. High resolution is maintained by keeping small quantization steps at low signals, and noise is minimized by allowing the lowest analog bandwidth, all without increasing the quantization noise. A calibrated return lookup table allows the system to maintain excellent linearity.
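The two-portion ramp described above can be sketched as follows (a simplified Python illustration with made-up code counts and growth rate, not the flight design): a linear portion with constant step, joined to a quadratic portion whose step grows linearly, with value and slope continuous at the junction.

```python
import numpy as np

def hybrid_ramp(n_lin=64, n_quad=192, step=1.0, growth=0.05):
    """Single-slope ADC ramp: linear portion (constant step, full
    resolution at low signal) joined to a quadratic portion whose step
    grows linearly, tracking photon shot noise. Value and slope are
    continuous at the junction, avoiding transients."""
    steps = np.concatenate([
        np.full(n_lin, step),                              # linear portion
        step * (1.0 + growth * np.arange(1, n_quad + 1)),  # quadratic portion
    ])
    return np.concatenate([[0.0], np.cumsum(steps)])

ramp = hybrid_ramp()

def decode(code):
    # Captured codes are converted back to signal levels via a lookup
    # table; here the ideal ramp itself, in practice a calibrated table.
    return ramp[code]
```

Because the per-code step at the start of the quadratic portion (1.05 here) nearly equals the linear step (1.0), the slope is effectively continuous across the junction.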
Nonlinear Multiscale Transformations: From Synchronization to Error Control
2001-07-01
…transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed… compressed data are all very close; however, the visual quality of the reconstructed image is significantly better for the EC compression algorithm… used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an…
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameter from extracted handcrafted features. The other is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with the input without preprocessing.
Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.
ERIC Educational Resources Information Center
Sindhu, R. S.
2002-01-01
Describes the quantized step-up model, an analogy to atomic structure, developed for evaluating internship in teaching. Assesses prospective teachers' abilities in lesson delivery. (YDS)
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
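The interframe idea can be sketched in Python (an illustrative sketch, not the authors' code; the specific predictor and step size are assumptions): predict each frame by linear extrapolation from the two previously reconstructed frames, and quantize the residual to a fixed step so the maximum positional error is half a step.

```python
import numpy as np

def compress(frames, step=0.01):
    """Lossy interframe coder: predict each frame by linear extrapolation
    from the two previous *reconstructed* frames, then quantize the
    residual to integer multiples of `step` (max error = step/2)."""
    frames = np.asarray(frames, dtype=float)
    rec = [frames[0], frames[1]]          # first two frames stored verbatim
    codes = []
    for x in frames[2:]:
        pred = 2 * rec[-1] - rec[-2]      # linear extrapolation
        idx = np.round((x - pred) / step).astype(int)
        codes.append(idx)                 # small integers: entropy-code these
        rec.append(pred + idx * step)
    return codes, np.array(rec)

rng = np.random.default_rng(1)
frames = np.cumsum(rng.normal(0.0, 0.1, (50, 30)), axis=0)  # toy trajectory
codes, rec = compress(frames, step=0.01)
```

Because residuals of temporally coherent trajectories are small, the integer codes cluster near zero and compress well with a back-end entropy coder.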
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
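The adaptation idea can be sketched in Python, in the spirit of the Abate-style rule mentioned above (illustrative constants; the paper's optimum rule uses two past samples and is more elaborate):

```python
def adaptive_delta_modulate(signal, step0=0.1, grow=1.5, shrink=0.5):
    """One-bit coder: transmit the sign of the prediction error; the step
    grows when successive bits agree (slope-overload region) and shrinks
    when they alternate (granular region)."""
    est, step, prev_bit = 0.0, step0, 1
    bits, recon = [], []
    for x in signal:
        bit = 1 if x >= est else -1
        step *= grow if bit == prev_bit else shrink
        est += bit * step
        bits.append(bit)
        recon.append(est)
        prev_bit = bit
    return bits, recon

ramp = [0.005 * i for i in range(400)]     # slowly rising test input
bits, recon = adaptive_delta_modulate(ramp)
```

Tracking a slow ramp, the step shrinks toward the input slope; on rapid changes it grows geometrically, trading granular noise against slope overload.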
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
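For the ordered-dither version, the threshold-inversion trick can be sketched directly (a minimal Python illustration for two 1-bit channels with a standard Bayer matrix; real use would apply it to calibrated phosphor luminances):

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither(channel, thresholds):
    """1-bit ordered dither: tile the threshold matrix over the image."""
    h, w = channel.shape
    t = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (channel > t).astype(float)

img = np.full((8, 8), 0.5)                 # flat mid-gray test patch
r = ordered_dither(img, BAYER4)            # first channel: normal matrix
g = ordered_dither(img, (15 / 16) - BAYER4)  # second channel: inverted matrix
err_r, err_g = r - img, g - img            # per-pixel quantization errors
```

Because the errors cancel pixel by pixel, their sum (a proxy for the luminance error when the two channels have equal luminance steps) is constant, pushing the residual error into the chromatic direction.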
Qdot Labeled Actin Super Resolution Motility Assay Measures Low Duty Cycle Muscle Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Burghardt, Thomas P.
2013-01-01
Myosin powers contraction in heart and skeletal muscle and is a leading target for mutations implicated in inheritable muscle diseases. During contraction, myosin transduces ATP free energy into the work of muscle shortening against resisting force. Muscle shortening involves relative sliding of myosin and actin filaments. Skeletal actin filaments were fluorescence labeled with a streptavidin-conjugated quantum dot (Qdot) binding biotin-phalloidin on actin. Single Qdots were imaged over time with total internal reflection fluorescence microscopy and then spatially localized to 1-3 nanometers using a super-resolution algorithm as they translated with actin over a surface coated with skeletal heavy meromyosin (sHMM) or full-length β-cardiac myosin (MYH7). The average Qdot-actin velocity matches measurements with rhodamine-phalloidin labeled actin. The sHMM Qdot-actin velocity histogram contains low-velocity events corresponding to actin translation in quantized steps of ~5 nm. The MYH7 velocity histogram has quantized steps at 3 and 8 nm in addition to 5 nm, and shows larger compliance than sHMM depending on MYH7 surface concentration. Low duty cycle skeletal and cardiac myosin present challenges for a single molecule assay because actomyosin dissociates quickly and the freely moving element diffuses away. The in vitro motility assay has modestly more actomyosin interactions, and methylcellulose-inhibited diffusion sustains the complex while preserving a subset of encounters that do not overlap in time on a single actin filament. A single myosin step is isolated in time and space and then characterized using super-resolution. The approach provides quick, quantitative, and inexpensive step-size measurement for low duty cycle muscle myosin. PMID:23383646
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
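The Lagrangian selection step reduces, per candidate, to minimizing J = D + λ·R. A hypothetical Python sketch (the candidate costs below are made up for illustration):

```python
def select_candidate(candidates, lam):
    """Rate-distortion selection: choose the candidate minimizing the
    Lagrangian cost J = D + lam * R."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

# Hypothetical per-QP costs, as measured by re-running the
# transform/quantization modules on the same residuals:
cands = [{"qp": 24, "D": 10.0, "R": 100.0},
         {"qp": 28, "D": 25.0, "R": 60.0},
         {"qp": 32, "D": 55.0, "R": 35.0}]
```

A small multiplier favors low distortion (fine quantization); a large one favors low rate (coarse quantization), which is how the multiplier steers the QP choice.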
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., convolutional neural networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep learning neural networks to hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: applying k-means based adaptive quantization to the learned network weights, and retraining the network with the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The convolutional neural network with reduced weight and input/response precision is demonstrated on two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to match the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
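The server-side weight clustering step can be sketched as follows (a plain Lloyd/k-means Python illustration of the quantization only; the retraining alternation and the paper's exact initialization are omitted):

```python
import numpy as np

def kmeans_quantize(weights, n_levels=16, iters=25):
    """Lloyd/k-means scalar quantization of network weights: each weight
    is replaced by its nearest centroid (~log2(n_levels) bits/weight)."""
    w = np.asarray(weights, dtype=float).ravel()
    centers = np.linspace(w.min(), w.max(), n_levels)  # simple init
    for _ in range(iters):
        idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(n_levels):
            if np.any(idx == k):                       # skip empty clusters
                centers[k] = w[idx == k].mean()
    return centers[idx].reshape(np.shape(weights)), centers

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 2000)            # stand-in for learned weights
wq, centers = kmeans_quantize(w, n_levels=16)
```

Because the centroids adapt to the weight distribution, 16 levels (4 bits) typically approximate a dense, roughly Gaussian weight vector far better than 16 uniformly spaced levels would.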
How quantizable matter gravitates: A practitioner's guide
NASA Astrophysics Data System (ADS)
Schuller, Frederic P.; Witte, Christof
2014-05-01
We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter theory, before any of the two theories can be meaningfully compared to experimental data.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. On the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset, while the mAP of the 32-bit full-precision baseline model is 64.0%.
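A minimal version of a distribution-driven uniform quantizer can be sketched as follows (illustrative Python; the paper fits piecewise Gaussian models per layer, whereas here the clipping range is simply taken as a multiple of the empirical standard deviation):

```python
import numpy as np

def quantize_uniform(x, bits=4, n_sigma=3.0):
    """Symmetric uniform quantizer: the clipping range (and hence the
    step size) is chosen from the spread of the value distribution."""
    x = np.asarray(x, dtype=float)
    n = 2 ** (bits - 1) - 1               # e.g. 7 levels each side for 4 bits
    step = n_sigma * x.std() / n          # step from the distribution
    return np.clip(np.round(x / step), -n, n) * step

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 10_000)          # stand-in for a layer's weights
q = quantize_uniform(w, bits=4)
```

Values within the clipping range incur at most half a step of error; the rare outliers beyond n_sigma standard deviations are saturated, which is the usual trade-off when fixing the interval from the distribution.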
Second quantization in bit-string physics
NASA Technical Reports Server (NTRS)
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one-particle Dirac equation is derived, as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity ±c. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second-quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second-quantized theory. How these free-particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3, 10, 137, 2^127 + 136), and some of the predictive consequences, are sketched.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
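The role of the quantization matrix can be illustrated with a toy example (hypothetical 2x2 step sizes; an actual matrix would be derived from the model's sensitivity predictions for the given display conditions):

```python
import numpy as np

def quantize(coeffs, Q):
    """Quantize DCT coefficients with per-coefficient step sizes Q."""
    return np.round(coeffs / Q)

def dequantize(codes, Q):
    """Reconstruct coefficients from the integer codes."""
    return codes * Q

# Hypothetical matrix: coarser steps for higher-frequency coefficients,
# where the detection model predicts lower visual sensitivity.
Q = np.array([[ 4.0,  8.0],
              [ 8.0, 16.0]])
c = np.array([[100.0, 21.0],
              [-13.0, 30.0]])
rec = dequantize(quantize(c, Q), Q)
```

Each reconstructed coefficient differs from the original by at most half its step size, so setting each step at the visibility threshold keeps every per-coefficient error just below what the model predicts is detectable.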
Chamberlin, Ralph V; Davis, Bryce F
2013-10-01
Disordered systems show deviations from the standard Debye theory of specific heat at low temperatures. These deviations are often attributed to two-level systems of uncertain origin. We find that a source of excess specific heat comes from correlations between quanta of energy if excitations are localized on an intermediate length scale. We use simulations of a simplified Creutz model for a system of Ising-like spins coupled to a thermal bath of Einstein-like oscillators. One feature of this model is that energy is quantized in both the system and its bath, ensuring conservation of energy at every step. Another feature is that the exact entropies of both the system and its bath are known at every step, so that their temperatures can be determined independently. We find that there is a mismatch in canonical temperature between the system and its bath. In addition to the usual finite-size effects in the Bose-Einstein and Fermi-Dirac distributions, if excitations in the heat bath are localized on an intermediate length scale, this mismatch is independent of system size up to at least 10^6 particles. We use a model for correlations between quanta of energy to adjust the statistical distributions and yield a thermodynamically consistent temperature. The model includes a chemical potential for units of energy, as is often used for other types of particles that are quantized and conserved. Experimental evidence for this model comes from its ability to characterize the excess specific heat of imperfect crystals at low temperatures.
Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.
Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V
2013-01-16
Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.
Technical note: Signal resolution increase and noise reduction in a CCD digitizer.
González, A; Martínez, J A; Tobarra, B
2004-03-01
Increasing output resolution is assumed to improve the noise characteristics of a CCD digitizer. In this work, however, we have found that as the quantization step becomes smaller than the analog noise (present in the signal before its conversion to digital), the noise reduction becomes significantly lower than expected. That is the case for values of σ_an/δ larger than 0.6, where σ_an is the standard deviation of the analog noise and δ is the quantization step. The procedure is applied to a commercially available CCD digitizer, and noise reduction by means of signal resolution increase is compared to that obtained by low-pass filtering.
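The saturation effect is easy to reproduce in simulation (an illustrative Python sketch, not the paper's measurement procedure; the step sizes are chosen to straddle the σ_an/δ = 0.6 regime):

```python
import numpy as np

rng = np.random.default_rng(0)
analog = rng.normal(0.0, 1.0, 200_000)    # analog noise, sigma_an = 1

def quantized_sigma(delta):
    """Standard deviation of the signal after quantization with step delta."""
    return (np.round(analog / delta) * delta).std()

coarse = quantized_sigma(2.5)    # sigma_an/delta = 0.4: quantization dominates
fine   = quantized_sigma(0.5)    # sigma_an/delta = 2.0
finer  = quantized_sigma(0.25)   # sigma_an/delta = 4.0: little further gain
```

Once σ_an/δ is well above 0.6, the total noise approaches the analog floor (roughly sqrt(σ_an² + δ²/12)), so halving the step barely lowers it, matching the saturation reported above.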
Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan
2013-11-01
A wide-frequency-range, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing a variation law of the phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with an adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 counting error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals are realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in constructing the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
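The codebook-design step underlying such coders is the generalized Lloyd (LBG) iteration, which alternates nearest-codeword partitioning with centroid updates. The sketch below is a generic LBG step under squared-error distortion, not the paper's hybrid LBG/lattice scheme:

```python
import numpy as np

def nearest_codeword(vectors, codebook):
    """Index of the nearest codeword (squared-error distortion) for each vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def lbg_iteration(vectors, codebook):
    """One LBG step: partition the training set, then move each codeword
    to the centroid of its partition cell."""
    idx = nearest_codeword(vectors, codebook)
    updated = codebook.copy()
    for k in range(len(codebook)):
        cell = vectors[idx == k]
        if len(cell):                      # empty cells keep their old codeword
            updated[k] = cell.mean(axis=0)
    return updated

def distortion(vectors, codebook):
    """Mean squared quantization error of the codebook on the training set."""
    idx = nearest_codeword(vectors, codebook)
    return ((vectors - codebook[idx]) ** 2).sum(axis=1).mean()
```

Iterating `lbg_iteration` never increases distortion, which is why LBG converges to a locally optimal codebook; the finite codebook size and finite training set are exactly the sources of the degradations the abstract lists.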
Visual data mining for quantized spatial data
NASA Technical Reports Server (NTRS)
Braverman, Amy; Kahn, Brian
2004-01-01
In previous papers we've shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu
2018-06-01
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield the total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
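The error-scaling and nonlinear pooling described above can be sketched as follows. The threshold matrix and the Minkowski pooling exponent beta = 4 are illustrative assumptions, not the model's fitted values:

```python
import numpy as np

def quantize_block(dct_coeffs, qmatrix):
    """JPEG-style uniform quantization/dequantization of an 8x8 DCT block."""
    return np.round(dct_coeffs / qmatrix) * qmatrix

def pooled_perceptual_error(dct_coeffs, qmatrix, thresholds, beta=4.0):
    """Scale each coefficient's quantization error by its visual threshold,
    then pool nonlinearly (Minkowski sum) into one perceptual error number."""
    err = np.abs(dct_coeffs - quantize_block(dct_coeffs, qmatrix)) / thresholds
    return (err ** beta).sum() ** (1.0 / beta)
```

One may then search over scalings of `qmatrix` for the coarsest matrix whose pooled error stays below a nominal visibility criterion, which is the bit-rate/perceptual-error trade-off the abstract describes.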
Atomic-scale epitaxial aluminum film on GaAs substrate
NASA Astrophysics Data System (ADS)
Fan, Yen-Ting; Lo, Ming-Cheng; Wu, Chu-Chun; Chen, Peng-Yu; Wu, Jenq-Shinn; Liang, Chi-Te; Lin, Sheng-Di
2017-07-01
Atomic-scale metal films exhibit intriguing size-dependent film stability, electrical conductivity, superconductivity, and chemical reactivity. With advancing methods for preparing ultra-thin and atomically smooth metal films, clear evidence of the quantum size effect has been experimentally collected in the past two decades. However, owing to the problems of small-area fabrication, film oxidation in air, and highly sensitive interfaces between the metal, substrate, and capping layer, the use of quantized metallic films for further ex-situ investigations and applications has been severely limited. To this end, we develop a large-area fabrication method for continuous atomic-scale aluminum films. The self-limited oxidation of aluminum protects and quantizes the metallic film and enables ex-situ characterization and device processing in air. Structure analysis and electrical measurements on the prepared films imply the quantum size effect in the atomic-scale aluminum film. Our work opens the way for further physics studies and device applications using the quantized electronic states in metals.
Can one ADM quantize relativistic bosonic strings and membranes?
NASA Astrophysics Data System (ADS)
Moncrief, Vincent
2006-04-01
The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems, and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S^1)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e. Diff(S^1)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.
Conductance Quantization in Resistive Random Access Memory
NASA Astrophysics Data System (ADS)
Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming
2015-10-01
The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performance, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interfaces, and multiple RS mechanisms, including the formation/rupture of nanoscale to atomic-sized conductive filaments (CFs) incorporated in the RS layer. The conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism at the mesoscopic scale. In this review paper, the operating principles of RRAM are introduced first, followed by a summary of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material systems. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.
Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming
2013-01-01
In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for a generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
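The dead-zone plus uniform threshold quantizer named above can be sketched as follows; the rounding offset of 1/6 is a common H.264-style default, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def deadzone_quantize(x, step, offset=1.0 / 6.0):
    """Dead-zone plus uniform threshold quantization: an offset below 1/2
    widens the zero bin, which suits the peaked (generalized Gaussian)
    distribution of transform residuals."""
    x = np.asarray(x, dtype=float)
    return (np.sign(x) * np.floor(np.abs(x) / step + offset)).astype(int)

def dequantize(indices, step):
    """Nearly uniform reconstruction at each bin's nominal level."""
    return np.asarray(indices) * step
```

With step 1.0 and offset 1/6, any input with |x| < 5/6 falls in the widened dead zone and reconstructs to 0, which is how such quantizers trade a slightly larger zero bin for lower rate at similar distortion.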
Microtiming in Swing and Funk affects the body movement behavior of music expert listeners.
Kilchenmann, Lorenz; Senn, Olivier
2015-01-01
The theory of Participatory Discrepancies (or PDs) claims that minute temporal asynchronies (microtiming) in music performance are crucial for prompting bodily entrainment in listeners, which is a fundamental effect of the "groove" experience. Previous research has failed to find evidence to support this theory. The present study tested the influence of varying PD magnitudes on the beat-related body movement behavior of music listeners. 160 participants (79 music experts, 81 non-experts) listened to 12 music clips in either Funk or Swing style. These stimuli were based on two audio recordings (one in each style) of expert drum and bass duo performances. In one series of six clips, the PDs were downscaled from their originally performed magnitude to complete quantization in steps of 20%. In another series of six clips, the PDs were upscaled from their original magnitude to double magnitude in steps of 20%. The intensity of the listeners' beat-related head movement was measured using video-based motion capture technology and Fourier analysis. A mixed-design Four-Factor ANOVA showed that the PD manipulations had a significant effect on the expert listeners' entrainment behavior. The experts moved more when listening to stimuli with PDs that were downscaled by 60% compared to completely quantized stimuli. This finding offers partial support for PD theory: PDs of a certain magnitude do augment entrainment in listeners. But the effect was found to be small to moderately sized, and it affected music expert listeners only.
PMID:26347694
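The PD scaling used for the stimuli amounts to interpolating (or extrapolating) each onset between the quantized grid and the performed timing. The onset times below are hypothetical; scale = 0.0 is full quantization, 1.0 the original performance, and 2.0 doubled PDs:

```python
def scale_discrepancies(onsets, grid, scale):
    """Move each performed onset toward (scale < 1) or away from (scale > 1)
    its quantized grid position, scaling the participatory discrepancy."""
    return [g + scale * (o - g) for o, g in zip(onsets, grid)]
```

The study's 20% steps correspond to scale values 0.0, 0.2, ..., 1.0 for the downscaled series and 1.0, 1.2, ..., 2.0 for the upscaled series.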
Quenching of the Quantum Hall Effect in Graphene with Scrolled Edges
NASA Astrophysics Data System (ADS)
Cresti, Alessandro; Fogler, Michael M.; Guinea, Francisco; Castro Neto, A. H.; Roche, Stephan
2012-04-01
Edge nanoscrolls are shown to strongly influence transport properties of suspended graphene in the quantum Hall regime. The relatively long arclength of the scrolls in combination with their compact transverse size results in the formation of many nonchiral transport channels in the scrolls. They short circuit the bulk current paths and inhibit the observation of the quantized two-terminal resistance. Unlike competing theoretical proposals, this mechanism of disrupting the Hall quantization in suspended graphene is not caused by ill-chosen placement of the contacts, singular elastic strains, or a small sample size.
2013-01-01
Confined states of positronium (Ps) in spherical and circular quantum dots (QDs) are theoretically investigated in two size-quantization regimes: strong and weak. The two-band approximation of Kane's dispersion law and the parabolic dispersion law of charge carriers are considered. It is shown that electron-positron pair instability is a consequence of dimensionality reduction, not of size quantization. The binding energies of Ps in circular and spherical QDs are calculated, and the dependence of Ps formation on the QD radius is studied. PMID:23826867
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.
Theoretical bases for modeling a digital control system with information transfer via a channel of plural access and a regular quantization cycle are presented. The theory of dynamical systems with random changes of structure, including elements of the theory of Markov random processes, is used for the mathematical description of a networked control system. The characteristics of such control systems are derived, and experimental investigation of the given control systems is carried out.
NASA Astrophysics Data System (ADS)
Pokatilov, E. P.; Nika, D. L.; Askerov, A. S.; Zincenco, N. D.; Balandin, A. A.
2007-12-01
nanometer scale thickness by taking into account multiple quantized electron subbands and the confined optical phonon dispersion. It was shown that the inter-subband electronic transitions play an important role in limiting the electron mobility in the heterostructures when the energy separation between one of the size-quantized excited electron subbands and the Fermi energy becomes comparable to the optical phonon energy. The latter leads to the oscillatory dependence of the electron mobility on the thickness of the heterostructure conduction channel layer. This effect is observable at room temperature and over a wide range of the carrier densities. The developed formalism and calculation procedure are readily applicable to other material systems. The described effect can be used for fine-tuning the confined electron and phonon states in the nanoscale heterostructures in order to achieve performance enhancement of the nanoscale electronic and optoelectronic devices.
Prediction-guided quantization for video tone mapping
NASA Astrophysics Data System (ADS)
Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice
2014-09-01
Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end-user, this tone mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone mapped video content. Our technique provides an appropriate quantization for each mode of both the intra- and inter-prediction performed in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone mapped LDR video content (using HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show, at equal PSNR, average bit-rate reductions over all sequences and TMOs considered of 20.3% and 27.3% for tone-mapped content and of 2.4% and 2.7% for HDR content.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
In quest of a systematic framework for unifying and defining nanoscience
2009-01-01
This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a “central paradigm” (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a Nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles which are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core–shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) exhibited well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties). 
We propose this perspective as a modest first step toward more clearly defining synthetic nanochemistry as well as providing a systematic framework for unifying nanoscience. With further progress, one should anticipate the evolution of future nanoperiodic table(s) suitable for predicting important risk/benefit boundaries in the field of nanoscience. Electronic supplementary material The online version of this article (doi:10.1007/s11051-009-9632-z) contains supplementary material, which is available to authorized users. PMID:21170133
Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan
2016-12-01
In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequence with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization level (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, which is related to the scale of network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, the distributed average consensus can be always achieved with an exponential convergence rate based on merely one bit information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of presented protocol and the correctness of the theoretical results.
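A heavily simplified sketch of consensus under quantized communication is given below, using a plain uniform quantizer, a fixed gain, and a hypothetical 4-agent complete graph; the paper's event-triggered dynamic encoder/decoder and one-bit scheme are considerably more elaborate:

```python
import numpy as np

def quantized_consensus_step(x, adjacency, gain, step):
    """One synchronous iteration: each agent broadcasts a uniformly
    quantized state and moves toward its neighbors' quantized states."""
    q = np.round(x / step) * step          # quantized broadcast states
    degree = adjacency.sum(axis=1)
    return x + gain * (adjacency @ q - degree * q)

# Hypothetical 4-agent complete graph with initial states 0..3
A = np.ones((4, 4)) - np.eye(4)
x = np.array([0.0, 1.0, 2.0, 3.0])
for _ in range(60):
    x = quantized_consensus_step(x, A, gain=0.2, step=0.01)
```

The average of the states is preserved at every step, while the disagreement shrinks until it is on the order of the quantization step; finer quantization (more bits per exchange) tightens the residual set, which is the rate/accuracy trade-off the paper characterizes.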
NREL Senior Research Fellow Honored by The Journal of Physical Chemistry
Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders
2017-06-22
In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
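The quantization choices the study examines, namely the number of gray levels and the binning method, can be sketched as follows (the function name and the equal-width versus equal-frequency conventions are illustrative assumptions):

```python
import numpy as np

def quantize_gray_levels(image, n_levels, method="equal-width"):
    """Requantize image intensities to n_levels gray levels, a standard
    preprocessing step before computing Haralick (GLCM) texture features."""
    img = np.asarray(image, dtype=float)
    if method == "equal-width":
        edges = np.linspace(img.min(), img.max(), n_levels + 1)
    elif method == "equal-frequency":
        edges = np.quantile(img, np.linspace(0.0, 1.0, n_levels + 1))
    else:
        raise ValueError(f"unknown method: {method}")
    return np.digitize(img, edges[1:-1])   # labels 0 .. n_levels - 1
```

Because the resulting co-occurrence statistics depend on both `n_levels` and `method`, texture feature values are only comparable across subjects when both are held fixed, which is the study's central recommendation.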
Electrical and thermal conductance quantization in nanostructures
NASA Astrophysics Data System (ADS)
Nawrocki, Waldemar
2008-10-01
In this paper, problems of electron transport in mesoscopic structures and nanostructures are considered. The electrical conductance of nanowires was measured in a simple experimental system. Investigations were performed in air at room temperature by measuring the conductance between two vibrating metal wires with a standard oscilloscope. Conductance quantization in units of G0 = 2e^2/h = (12.9 kΩ)^-1, up to five quanta of conductance, has been observed for nanowires formed in many metals. The explanation of this universal phenomenon is the formation of a nanometer-sized wire (nanowire) between macroscopic metallic contacts, which induces, according to the theory proposed by Landauer, the quantization of conductance. Thermal problems in nanowires are also discussed.
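The quoted quantum of conductance follows directly from the fundamental constants (the 2019 SI exact values are used below):

```python
# Conductance quantum G0 = 2e^2/h and the corresponding resistance.
e = 1.602176634e-19   # elementary charge in coulombs (exact SI value)
h = 6.62607015e-34    # Planck constant in joule-seconds (exact SI value)

G0 = 2 * e**2 / h     # conductance quantum in siemens
R0 = 1 / G0           # corresponding resistance in ohms, about 12.9 kOhm
print(f"G0 = {G0:.6e} S, 1/G0 = {R0:.1f} Ohm")
```

Each fully transmitting conduction channel in the Landauer picture contributes one such quantum, so the observed staircase appears at integer multiples of G0.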
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network. They are the following: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocating more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results have demonstrated that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performances with smaller memory usages. 
Index Terms: Cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.
Size quantization patterns in self-assembled InAs/GaAs quantum dots
NASA Astrophysics Data System (ADS)
Colocci, M.; Bogani, F.; Carraresi, L.; Mattolini, R.; Bosacchi, A.; Franchi, S.; Frigeri, P.; Taddei, S.; Rosa-Clot, M.
1997-07-01
Molecular beam epitaxy has been used for growing self-assembled InAs quantum dots. A continuous variation of the InAs average coverage across the sample has been obtained by properly aligning the (001) GaAs substrate with respect to the molecular beam. Excitation of a large number of dots (laser spot diameter ≈ 100 μm) results in structured photoluminescence spectra; a clear quantization of the dot sizes is deduced from the distinct luminescence bands separated in energy by an average spacing of 20-30 meV. We ascribe the individual bands of the photoluminescence spectrum after low excitation to families of dots with roughly the same diameter and heights differing by one monolayer.
Quantized magnetoresistance in atomic-size contacts.
Sokolov, Andrei; Zhang, Chunjuan; Tsymbal, Evgeny Y; Redepenning, Jody; Doudin, Bernard
2007-03-01
When the dimensions of a metallic conductor are reduced so that they become comparable to the de Broglie wavelengths of the conduction electrons, the absence of scattering results in ballistic electron transport and the conductance becomes quantized. In ferromagnetic metals, the spin angular momentum of the electrons results in spin-dependent conductance quantization and various unusual magnetoresistive phenomena. Theorists have predicted a related phenomenon known as ballistic anisotropic magnetoresistance (BAMR). Here we report the first experimental evidence for BAMR by observing a stepwise variation in the ballistic conductance of cobalt nanocontacts as the direction of an applied magnetic field is varied. Our results show that BAMR can be positive and negative, and exhibits symmetric and asymmetric angular dependences, consistent with theoretical predictions.
Quantized topological magnetoelectric effect of the zero-plateau quantum anomalous Hall state
Wang, Jing; Lian, Biao; Qi, Xiao-Liang; ...
2015-08-10
The topological magnetoelectric effect in a three-dimensional topological insulator is a novel phenomenon, where an electric field induces a magnetic field in the same direction, with a universal coefficient of proportionality quantized in units of $e^2/2h$. Here we propose that the topological magnetoelectric effect can be realized in the zero-plateau quantum anomalous Hall state of magnetic topological insulators or a ferromagnet-topological insulator heterostructure. The finite-size effect is also studied numerically, where the magnetoelectric coefficient is shown to converge to a quantized value as the thickness of the topological insulator film increases. We further propose a device setup to eliminate nontopological contributions from the side surface.
HVS-based quantization steps for validation of digital cinema extended bitrates
NASA Astrophysics Data System (ADS)
Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.
2009-02-01
In Digital Cinema, video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify transport, storage, distribution, and projection of films. For all these tasks, equipment needs to be developed, so it is mandatory to reduce the complexity of the equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream at 250 Mbps, independently of the input format (4K/24fps, 2K/48fps, or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double or quadruple the input rate without increasing the output rate. The work presented here is intended to define quantization steps ensuring visually lossless compression. Two steps are followed: first, the effect of each subband is evaluated separately, and then the scaling ratio is found. The results obtained show that it is necessary to increase the bitrate limit for cinema material in order to achieve visually lossless quality.
Zhang, Changwang; Xia, Yong; Zhang, Zhiming; ...
2017-03-22
A new strategy for narrowing the size distribution of colloidal quantum dots (QDs) was developed by combining cation exchange and quantized Ostwald ripening. Medium-sized reactant CdS(e) QDs were subjected to cation exchange to form the target PbS(e) QDs, and then small reactant CdS(e) QDs were added, which were converted to small PbS(e) dots via cation exchange. The small-sized ensemble of PbS(e) QDs dissolved rapidly and completely, releasing a large amount of monomers and promoting the growth and size-focusing of the medium-sized ensemble of PbS(e) QDs. The addition of small reactant QDs can be repeated to continuously reduce the size distribution. The new method was applied to synthesize PbSe and PbS QDs with extremely narrow size distributions and, as a bonus, hybrid surface passivation. In conclusion, the size distributions of the prepared PbSe and PbS QDs are as low as 3.6% and 4.3%, respectively, leading to hexagonal close packing in monolayers and highly ordered three-dimensional superlattices.
On-line gas chromatographic analysis of airborne particles
Hering, Susanne V [Berkeley, CA; Goldstein, Allen H [Orinda, CA
2012-01-03
A method and apparatus for the in-situ, chemical analysis of an aerosol. The method may include the steps of: collecting an aerosol; thermally desorbing the aerosol into a carrier gas to provide desorbed aerosol material; transporting the desorbed aerosol material onto the head of a gas chromatography column; analyzing the aerosol material using a gas chromatograph, and quantizing the aerosol material as it evolves from the gas chromatography column. The apparatus includes a collection and thermal desorption cell, a gas chromatograph including a gas chromatography column, heated transport lines coupling the cell and the column; and a quantization detector for aerosol material evolving from the gas chromatography column.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
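As a minimal illustration of the residual structure exploited above, the following sketch quantizes a vector in two stages, each stage coding the residual left by the previous one. The toy codebooks are hypothetical values, not the entropy-constrained designs of the paper:

```python
# Minimal two-stage residual vector quantizer (RVQ) sketch.
# Codebooks are hypothetical toy values, not trained codebooks.

def nearest(codebook, v):
    """Return (index, codeword) of the codeword closest to v (squared error)."""
    return min(enumerate(codebook),
               key=lambda ic: sum((a - b) ** 2 for a, b in zip(ic[1], v)))

def rvq_encode(v, stages):
    """Encode v as one index per stage; each stage quantizes the residual."""
    indices, residual = [], list(v)
    for codebook in stages:
        i, cw = nearest(codebook, residual)
        indices.append(i)
        residual = [r - c for r, c in zip(residual, cw)]
    return indices

def rvq_decode(indices, stages):
    """Reconstruction is the sum of the selected codewords."""
    out = [0.0] * len(stages[0][0])
    for i, codebook in zip(indices, stages):
        out = [o + c for o, c in zip(out, codebook[i])]
    return out

stage1 = [[0.0, 0.0], [4.0, 4.0], [8.0, 0.0]]                # coarse stage
stage2 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]   # refines residual
v = [4.9, 4.1]
idx = rvq_encode(v, [stage1, stage2])
print(idx, rvq_decode(idx, [stage1, stage2]))   # [1, 1] [5.0, 4.0]
```

Because each stage only has to cover the residual of the previous one, the total codebook storage and search cost grow additively rather than multiplicatively with rate, which is the structural advantage the abstract builds on.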
Equivalence of Szegedy's and coined quantum walks
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2017-09-01
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavrilenko, V. I.; Krishtopenko, S. S., E-mail: ds_a-teens@mail.ru; Goiran, M.
2011-01-15
The effect of electron-electron interaction on the spectrum of two-dimensional electron states is studied in InAs/AlSb (001) heterostructures with a GaSb cap layer and one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction, by decreasing the electron energy in the subbands, increases the energy gap between subbands and the spin-orbit splitting of the spectrum over the entire range of electron concentrations at which only the lower size-quantization subband is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensional electrons is demonstrated.
NASA Astrophysics Data System (ADS)
Shibata, K.; Yoshida, K.; Daiguji, K.; Sato, H.; Ii, T.; Hirakawa, K.
2017-10-01
An electric-field control of quantized conductance in metal (gold) quantum point contacts (QPCs) is demonstrated by adopting a liquid-gated electric-double-layer (EDL) transistor geometry. Atomic-scale gold QPCs were fabricated by applying the feedback-controlled electrical break junction method to gold nanojunctions. The electric conductance of the gold QPCs shows quantized conductance plateaus and step-wise increases/decreases by the conductance quantum, G0 = 2e²/h, as the EDL-gate voltage is swept, demonstrating modulation of the conductance of gold QPCs by EDL gating. The electric-field control of conductance in metal QPCs may open a way for their application to local charge sensing at room temperature.
NASA Astrophysics Data System (ADS)
Cho, Hyunjin; Kim, Whi Dong; Lee, Kangha; Lee, Seokwon; Kang, Gil-Seong; Joh, Han-Ik; Lee, Doh C.
2018-01-01
We investigate the product selectivity of CO2 reduction using NiO photocathodes decorated with CdSe quantum dots (QDs) of varying size in a photoelectrochemical (PEC) cell. The size-tunable, quantized conduction-band energy states of CdSe QDs enable systematic control of the electron transfer kinetics from CdSe QDs to NiO. It turns out that different sizes of CdSe QDs result in variations in product selectivity for CO2 reduction. The energy gap between the conduction band edge and the redox potential of each reduction product (e.g., CO and CH4) correlates with its production rate. The size dependence of the electron transfer rate estimated from this energy gap is in agreement with the selectivity of CO2 reduction products for all products but CO. The deviation in the case of CO is attributed to the sequential conversion of CO into CH4 while CO is adsorbed on the electrode surface. On the premise that the CdSe QDs exhibit similar surface configurations regardless of QD size, it is concluded that the electron transfer kinetics alters the selectivity of CO2 reduction.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
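The bin-matching problem described above can be sketched numerically: storage keeps only a coarse quantization index, and reverse mapping must choose among the few fine bins consistent with it. The step sizes below are illustrative assumptions, and the paper's prior-based selection among candidates is not reproduced, only the candidate enumeration:

```python
# Sketch of quantization bin matching: a coefficient quantized with a
# fine step is re-encoded with a coarser step for storage; on download,
# reverse mapping must pick the original fine bin from the small set of
# fine bins whose reconstructions fall in the stored coarse bin.
# Step sizes are illustrative assumptions.

FINE_STEP, COARSE_STEP = 2.0, 6.0    # coarse step = 3x fine step

def q(x, step):
    """Uniform mid-tread quantizer: return the bin index of x."""
    return round(x / step)

x = 7.3
fine_idx = q(x, FINE_STEP)                         # index stored originally
coarse_idx = q(fine_idx * FINE_STEP, COARSE_STEP)  # re-encoded for storage

# Reverse mapping: all fine indices consistent with the stored coarse bin.
candidates = [i for i in range(-100, 101)
              if q(i * FINE_STEP, COARSE_STEP) == coarse_idx]

print(fine_idx, coarse_idx, candidates)   # 4 1 [2, 3, 4]
```

The true fine index is always in the candidate set; the system's sparsity and graph-smoothness priors are what disambiguate among the candidates.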
Fabrication of Subnanometer-Precision Nanopores in Hexagonal Boron Nitride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, S. Matt; Dunn, Gabriel; Azizi, Amin; ...
2017-11-08
Here, we demonstrate the fabrication of individual nanopores in hexagonal boron nitride (h-BN) with atomically precise control of the pore shape and size. Previous methods of pore production in other 2D materials typically create pores with irregular geometry and imprecise diameters. In contrast, other studies have shown that with careful control of electron irradiation, defects in h-BN grow with pristine zig-zag edges at quantized triangular sizes, but they have failed to demonstrate production and control of isolated defects. In this work, we combine these techniques to yield a method in which we can create individual size-quantized triangular nanopores through an h-BN sheet. The pores are created using the electron beam of a conventional transmission electron microscope, which can strip away multiple layers of h-BN to expose single-layer regions, introduce single vacancies, and preferentially grow vacancies only in the single-layer region. We further demonstrate how the geometry of these pores can be altered beyond triangular by changing beam conditions. Precisely size- and geometry-tuned nanopores could find application in molecular sensing, DNA sequencing, water desalination, and molecular separation.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree-search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to variations in input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
Pbte Nanostructures for Spin Filtering and Detecting
NASA Astrophysics Data System (ADS)
Grabecki, G.
2005-08-01
The uniqueness of lead telluride (PbTe) lies in its combination of excellent semiconducting properties, such as high electron mobility and tunable carrier concentration, with paraelectric behavior leading to a huge dielectric constant at low temperatures. The present article reviews our experimental work on PbTe nanostructures. The main result is the observation of one-dimensional quantization of the electron motion under much less pure conditions than in any other system studied so far. We explain this in terms of dielectric screening of the Coulomb potentials produced by charged defects. Furthermore, in an external magnetic field, the conductance quantization steps show very pronounced spin splitting, already visible at several kilogauss. This indicates that PbTe nanostructures have potential as local spin-filtering devices.
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing, learning a hashing function that embeds high-dimensional features into Hamming space is a key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing methods: most of these methods adopt PCA projection functions to project the original data onto several real-valued dimensions, and each projected dimension is then quantized into one bit by thresholding. The variances of the projected dimensions differ, and real-valued projection introduces quantization error. To avoid real-valued projection with large quantization error, in this paper we propose to use a cosine-similarity (angle) projection for each dimension; the angle projection preserves the original structure and yields more compact codes. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
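The thresholding step described above, one bit per projected dimension, can be sketched as follows. The projection directions here are fixed toy vectors, not learned PCA/ITQ projections, and the bit is the sign of the dot product (equivalently, the sign of the cosine similarity, since norms are positive):

```python
# Binarization by thresholding projections at zero: each direction
# contributes one bit. Directions are toy values, not learned ones.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def binary_code(x, directions):
    """One bit per direction: 1 if the projection is non-negative."""
    return tuple(1 if dot(x, w) >= 0 else 0 for w in directions)

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(p != q for p, q in zip(a, b))

W = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]       # toy projection directions
a, b, c = [2.0, 1.0], [1.5, 0.5], [-1.0, -2.0]

ca, cb, cc = (binary_code(v, W) for v in (a, b, c))
print(ca, cb, cc)                          # (1, 1, 1) (1, 1, 1) (0, 0, 0)
print(hamming(ca, cb), hamming(ca, cc))    # 0 3
```

Nearby vectors (a and b) receive identical codes while the opposite-quadrant vector c lands at maximal Hamming distance, which is the property retrieval relies on.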
Direct Solution of the Chemical Master Equation Using Quantized Tensor Trains
Kazeev, Vladimir; Khammash, Mustafa; Nip, Michael; Schwab, Christoph
2014-01-01
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and simulation of models of biochemical reaction networks. Yet direct solutions of the CME have remained elusive. Although several approaches overcome the infinite dimensional nature of the CME through projections or other means, a common feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements in the number of problem dimensions. We present a novel approach that has the potential to “lift” this curse of dimensionality. The approach is based on the use of the recently proposed Quantized Tensor Train (QTT) formatted numerical linear algebra for the low-parametric numerical representation of tensors. The QTT decomposition admits both algorithms for basic tensor arithmetic with complexity scaling linearly in the dimension (number of species) and sub-linearly in the mode size (maximum copy number), and a numerical tensor rounding procedure which is stable and quasi-optimal. We show how the CME can be represented in QTT format, then use the exponentially converging hp-discontinuous Galerkin discretization in time to reduce the CME evolution problem to a set of QTT-structured linear equations to be solved at each time step using an algorithm based on Density Matrix Renormalization Group (DMRG) methods from quantum chemistry. Our method automatically adapts the “basis” of the solution at every time step, guaranteeing that it is large enough to capture the dynamics of interest but no larger than necessary, as this would increase the computational complexity. Our approach is demonstrated by applying it to three different examples from systems biology: an independent birth-death process, an enzymatic futile cycle, and a stochastic switch model.
The numerical results on these examples demonstrate that the proposed QTT method achieves dramatic speedups and several orders of magnitude storage savings over direct approaches. PMID:24626049
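The storage advantage claimed above can be illustrated with a toy entry count: a vector of length 2^d stored directly versus a rank-bounded quantized tensor train with d cores of mode size 2. The rank bound r is an illustrative assumption, not a value from the paper:

```python
# Storage-count sketch for the QTT idea: direct storage of a length-2**d
# vector is exponential in d, while a rank-r QTT needs at most d cores
# of size r x 2 x r (the end cores are smaller), i.e. linear in d.

def full_storage(d):
    """Entries needed to store a length-2**d vector directly."""
    return 2 ** d

def qtt_storage_bound(d, r):
    """Upper bound: d cores, each with at most r * 2 * r entries."""
    return d * r * 2 * r

d, r = 30, 8
print(full_storage(d))          # 1073741824
print(qtt_storage_bound(d, r))  # 3840
```

For d = 30 and a modest rank bound r = 8, the QTT bound is smaller by about five orders of magnitude, consistent with the "dramatic storage savings" reported in the abstract (the actual savings depend on the ranks the solution exhibits).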
Competing Classical and Quantum Effects in Shape Relaxation of a Metallic Nanostructure
NASA Technical Reports Server (NTRS)
Chen, Dongmin; Okamoto, Hiroshi; Yamada, Toshishi; Biegel, Bryan (Technical Monitor)
2003-01-01
We demonstrate for the first time that the quantum size effect (QSE) plays a competing role alongside the classical thermodynamic effect in the shape relaxation of a small metallic island. Together, these effects transform a lead (Pb) island grown on a Si(111) substrate from its initial flat-top faceted morphology to a peculiar ring-shaped island, a process catalysed by the tip electric field of a scanning tunnelling microscope (STM). We show for the first time how the QSE affects the relaxation process dynamically. In particular, it leads to a novel strip-flow growth and double-step growth on selective strips of a plateau inside the ring, defined by the substrate steps more than 60 Å below. It appears that atoms diffusing on the plateau can clearly "sense" the quantized energy states inside the island and preferentially attach to regions that further reduce the surface energy as a result of the QSE, limiting their own growth and stabilizing the ring shape. The mechanism proposed here offers a sound explanation for the ring-shaped metal and semiconductor islands observed in other systems as well.
Statistical characterization of speckle noise in coherent imaging systems
NASA Astrophysics Data System (ADS)
Yaroslavsky, Leonid; Shefler, A.
2003-05-01
Speckle noise imposes a fundamental limitation on image quality in coherent-radiation-based imaging and optical metrology systems. Speckle noise phenomena are associated with the property of objects to diffusely scatter irradiation, and with the fact that in recording the wave field a number of signal distortions inevitably occur due to technical limitations inherent to hologram sensors. The statistical theory of speckle noise was developed with regard only to the limited resolving power of coherent imaging devices. It is valid only asymptotically, insofar as the central limit theorem of probability theory can be applied, and in applications this assumption does not always hold. Moreover, in treating the speckle noise problem one should also consider other sources of hologram deterioration. In this paper, the statistical properties of speckle due to the limitation of hologram size, dynamic range, and hologram signal quantization are studied by Monte-Carlo simulation for holograms recorded in near and far diffraction zones. The simulation experiments show that, for limited resolving power of the imaging system, the widely accepted opinion that speckle contrast is equal to one holds only for a rather severe hologram size limitation. For moderate limitations, speckle contrast changes gradually from zero (no limitation) to one (limitation to less than about 20% of the hologram size). The results obtained for the limitation of the hologram sensor's dynamic range and hologram signal quantization reveal that speckle noise due to these distortions is not multiplicative and is directly associated with the severity of the limitation and quantization. On the basis of the simulation results, analytical models are suggested.
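The fully developed speckle baseline discussed above, contrast near one, can be checked with a small Monte-Carlo sketch. The phasor count, sample count, and seed are illustrative assumptions, not the paper's hologram-limitation setup:

```python
# Monte-Carlo sketch of fully developed speckle: the intensity of a sum
# of many unit phasors with uniform random phases has contrast
# std(I)/mean(I) approaching 1. All parameters are illustrative.
import cmath
import math
import random

random.seed(0)

def speckle_intensity(n_phasors):
    """Intensity of a sum of unit-amplitude phasors with random phases."""
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_phasors))
    return abs(field) ** 2

samples = [speckle_intensity(64) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
contrast = math.sqrt(var) / mean

print(f"speckle contrast ~ {contrast:.3f}")   # close to 1.0
```

The paper's point is precisely that this unit-contrast asymptote breaks down once hologram size, dynamic range, or quantization limitations are modeled explicitly.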
NASA Astrophysics Data System (ADS)
Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung
2010-02-01
While the scattering phase for several one-dimensional potentials can be exactly derived, less is known in multi-dimensional quantum systems. This work provides a method to extend the one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated in the example of Bogomolny's transfer operator method applied in two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as Gutzwiller trace formula, dynamical zeta functions, and semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.
Song, Can-Li; Wang, Lili; He, Ke; Ji, Shuai-Hua; Chen, Xi; Ma, Xu-Cun; Xue, Qi-Kun
2015-05-01
Scanning tunneling microscopy and spectroscopy have been used to investigate the femtosecond dynamics of Dirac fermions in the topological insulator Bi2Se3 ultrathin films. At the two-dimensional limit, bulk electrons become quantized and the quantization can be controlled by the film thickness at a single quintuple layer level. By studying the spatial decay of standing waves (quasiparticle interference patterns) off steps, we measure directly the energy and film thickness dependence of the phase relaxation length lϕ and inelastic scattering lifetime τ of topological surface-state electrons. We find that τ exhibits a remarkable (E - EF)^(-2) energy dependence and increases with film thickness. We show that the features revealed are typical for electron-electron scattering between surface and bulk states.
Constraining the loop quantum gravity parameter space from phenomenology
NASA Astrophysics Data System (ADS)
Brahma, Suddhasattwa; Ronco, Michele
2018-03-01
Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.
A neural net based architecture for the segmentation of mixed gray-level and binary pictures
NASA Technical Reports Server (NTRS)
Tabatabai, Ali; Troudet, Terry P.
1991-01-01
A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
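The one-bit nonparametric quantization step mentioned above can be sketched as a two-level block coder: threshold each block at its mean and reconstruct each side by its own mean. This is a BTC-like scheme given for illustration; the paper's exact quantizer and neural segmentor are not reproduced here:

```python
# One-bit (two-level) block quantizer sketch: a binary map plus two
# reconstruction levels per block. A simplified, BTC-like illustration.

def one_bit_quantize(block):
    """Return (bitmap, low_level, high_level) for a flat list of pixels."""
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    hi = [p for p, b in zip(block, bitmap) if b]
    lo = [p for p, b in zip(block, bitmap) if not b]
    high = sum(hi) / len(hi) if hi else mean   # mean of bright pixels
    low = sum(lo) / len(lo) if lo else mean    # mean of dark pixels
    return bitmap, low, high

def reconstruct(bitmap, low, high):
    return [high if b else low for b in bitmap]

block = [10, 12, 200, 210, 11, 9, 205, 198]   # bimodal toy block
bitmap, low, high = one_bit_quantize(block)
print(bitmap, low, high)                       # [0, 0, 1, 1, 0, 0, 1, 1] 10.5 203.25
print(reconstruct(bitmap, low, high))
```

Each block is thus stored as one bit per pixel plus two levels, which is the compression the abstract quantifies against picture quality.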
NASA Astrophysics Data System (ADS)
Jacak, Janusz E.
2018-01-01
We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy essentially depends on the magnetic field strength, resulting in multiloop trajectories under specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)hc/e with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent agreement with experiments. The path integral has then been developed into a sum over configurations displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some formerly unclear details in the longitudinal resistivity observed in experiments.
Navarro, A; Cristaldo, P E; Díaz, M P; Eynard, A R
2000-01-01
Food pictures are suitable visual tools for quantize food and nutrient consumption avoiding bias due to self-assessments. To determine the perception of food portion size and to establish the efficacy of food pictures for dietaries assessments. A food frequency questionnaire (FFQ) including 118 food items of daily consumption was applied to 30 adults representative of Córdoba, Argentina, population. Among several food models (paper maché, plastics) and pictures, those which more accurately filled the purpose were selected. 3 small, median and large standard portion size were determined. Data were evaluated with descriptive statistics tools and Chi square adherence test. The assessment of 51 percent of the food was assayed in concordance with the reference size. In general, the remainder was overestimated. The 90 percent of volunteers concluded that the pictures were the best visual resource. The photographic atlas of food is an useful material for quantize the dietary consumption, suitable for many types of dietaries assessments. In conclusion, comparison among pictures of three portions previously standardized for each food is highly recommendable.
NASA Astrophysics Data System (ADS)
Smith, L. W.; Al-Taie, H.; Lesage, A. A. J.; Thomas, K. J.; Sfigakis, F.; See, P.; Griffiths, J. P.; Farrer, I.; Jones, G. A. C.; Ritchie, D. A.; Kelly, M. J.; Smith, C. G.
2016-04-01
We study 95 split gates of different size on a single chip using a multiplexing technique. Each split gate defines a one-dimensional channel on a modulation-doped GaAs /AlGaAs heterostructure, through which the conductance is quantized. The yield of devices showing good quantization decreases rapidly as the length of the split gates increases. However, for the subset of devices showing good quantization, there is no correlation between the electrostatic length of the one-dimensional channel (estimated using a saddle-point model) and the gate length. The variation in electrostatic length and the one-dimensional subband spacing for devices of the same gate length exceeds the variation in the average values between devices of different lengths. There is a clear correlation between the curvature of the potential barrier in the transport direction and the strength of the "0.7 anomaly": the conductance value of the 0.7 anomaly reduces as the barrier curvature becomes shallower. These results highlight the key role of the electrostatic environment in one-dimensional systems. Even in devices with clean conductance plateaus, random fluctuations in the background potential are crucial in determining the potential landscape in the active device area such that nominally identical gate structures have different characteristics.
Berggren, K.-F.; Pepper, M.
2010-01-01
In this article, we present a summary of the current status of the study of the transport of electrons confined to one dimension in very low disorder GaAs–AlGaAs heterostructures. By means of suitably located gates and the application of a voltage to ‘electrostatically squeeze’ the electronic wave functions, it is possible to produce controllable size quantization and a transition from two-dimensional to one-dimensional transport. If the length of the electron channel is sufficiently short, then transport is ballistic and the quantized subbands each have a conductance equal to the fundamental quantum value 2e²/h, where the factor of 2 arises from the spin degeneracy. This mode of conduction is discussed, and it is shown that a number of many-body effects can be observed. These effects are discussed, as is the spin-incoherent regime, which is entered when the separation of the electrons is increased and the exchange energy is less than kT. Finally, results are presented in the regime where the confinement potential is decreased and the electron configuration relaxes to minimize the electron–electron repulsion, moving towards a two-dimensional array. It is shown that the ground state is no longer a single line determined by size quantization alone, but becomes two distinct rows arising from minimization of the electrostatic energy, the precursor of a two-dimensional Wigner lattice. PMID:20123751
Quantization of charged fields in the presence of critical potential steps
NASA Astrophysics Data System (ADS)
Gavrilov, S. P.; Gitman, D. M.
2016-02-01
QED with strong external backgrounds that can create particles from the vacuum is well developed for the so-called t-electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at the time infinity. For example, these are time-independent nonuniform electric fields that are concentrated in restricted space areas. The latter backgrounds represent a kind of spatial x-electric potential steps for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in the t-electric potential steps are not directly applicable to the x-electric potential steps, and their generalization for x-electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with an x-electric potential step and have found in- and out-creation and annihilation operators that allow one to have a particle interpretation of the physical system under consideration. To identify the in- and out-operators we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x-electric potential step, with a subsequent QFT analysis of the correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as, for example, scattering, reflection, and electron-positron pair creation, without radiation corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in- and out-particles and photons.
These diagrams formally have the usual form, but contain special propagators. Expressions for these propagators in terms of the in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x-electric potential steps, namely, the Sauter potential and the Klein step.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Seung-Ho; Chen, Chen; Cha, Sang-Ho
Detailed understanding of the mechanism of dielectrophoresis (DEP) and the drastic improvement of its efficiency for small size-quantized nanoparticles (NPs) open the door for the convergence of microscale and nanoscale technologies. It is hindered, however, by the severe reduction of DEP force in particles with volumes below a few hundred cubic nanometers. We report here DEP assembly of size-quantized CdTe nanoparticles (NPs) with a diameter of 4.2 nm under AC voltage of 4–10 V. Calculations of the nominal DEP force for these NPs indicate that it is several orders of magnitude smaller than the force of the Brownian motion destroying the assemblies even for the maximum applied AC voltage. Despite this, very efficient formation of NP bridges between electrodes separated by a gap of 2 μm was observed even for AC voltages of 6 V and highly diluted NP dispersions. The resolution of this conundrum was found in the intrinsic ability of CdTe NPs to self-assemble. The species being assembled by DEP are substantially bigger than the individual NPs. DEP assembly should be treated as a process taking place for NP chains with a length of ~140 nm. The self-assembled chains increase the nominal volume where the polarization of the particles takes place, while retaining the size-quantized nature of the material. The produced NP bridges were found to be photoactive, producing photocurrent upon illumination. DEP bridges of quantum confined NPs can be used in fast parallel manufacturing of novel MEMS components, sensors, and optical and optoelectronic devices. Purposeful engineering of self-assembling properties of NPs makes possible further facilitation of the DEP and increase of complexity of the produced nano- and microscale structures.
Chern Numbers Hiding in Time of Flight Images
NASA Astrophysics Data System (ADS)
Satija, Indubala; Zhao, Erhai; Ghosh, Parag; Bray-Ali, Noah
2011-03-01
Since the experimental realization of synthetic magnetic fields in neutral ultracold atoms, transport measurements such as quantized Hall conductivity remain an open challenge. Here we propose a novel and feasible scheme to measure the topological invariants, namely the Chern numbers, in time of flight images. We study both the commensurate and the incommensurate flux, with the latter being the main focus here. The central concept underlying our proposal is the mapping between the Chern numbers and the size of the dimerized states that emerge when the two-dimensional hopping is tuned to the highly anisotropic limit. In an uncoupled double quantum Hall system exhibiting time reversal invariance, only odd-sized dimer correlation functions are non-zero and hence encode quantized spin current. Finally, we illustrate that in spite of the highly fragmented spectrum, a finite set of Chern numbers remains meaningful. Our results are supported by direct numerical computation of the transverse conductivity. NBA acknowledges support from a National Research Council postdoctoral research associateship.
NASA Astrophysics Data System (ADS)
Korgel, Brian Allan
1997-11-01
Phosphatidylcholine vesicles provide reaction compartments for synthesis of size-quantized CdS nanocrystals of dimension predicted to within ±2 Å based on initial encapsulated CdCl2 concentration and vesicle diameter. Vesicle formation by detergent dialysis of phosphatidylcholine/hexylglucoside mixed micelles yields highly monodisperse lipid capsules within which monodisperse CdS nanoparticles are precipitated with sulfide. Size-quantized CdS nanocrystals, with diameters ranging from 20 to 60 Å, have been produced with typical standard deviations about the mean diameter of ±8% as measured by transmission electron microscopy. By including ZnCl2 or HgCl2 in the dialyzate prior to vesicle formation, quantum-sized Zn_yCd_{1-y}S or Hg_yCd_{1-y}S nanocrystal alloys with controlled stoichiometry are generated. Spectrophotometric and spectrofluorimetric measurements are consistent with highly crystalline, monodisperse particles with few core or surface defects. The alloyed nanocrystal spectra shift consistently with composition, indicating a high degree of compositional control. Measured exciton energies for CdS show excellent agreement with data in the literature. The empirical pseudopotential model presented by Ramakrishna and Friesner for a cubic CdS lattice, correcting for experimentally measured lattice contractions, best fits the data. Size-quantized CdS nanocrystals serve as photocatalysts for nitrate reduction at neutral pH under conditions that mimic illumination by sunlight, with overall product quantum yields of up to 4% for ~20 Å, amine-terminated particles. Due to the effects of quantum confinement on electron and hole redox potentials, photocatalyzed nitrate reduction rates depend strongly on the particle size, and the fastest reduction rates are observed with the smallest nanocrystals.
Using a Tafel plot and the empirical pseudopotential model to estimate electron redox potentials, the apparent electron transfer coefficient and the apparent standard rate constant are estimated at 0.23 and 4.0×10^-12 cm/sec, respectively, for amine-terminated particles. Nitrate adsorption is important in this system, and the effect on photoreduction rates is described well by a Langmuir-Hinshelwood expression. Nitrate reduction rates are reduced two-fold or more on negatively charged, carboxy-terminated nanocrystals that electrostatically repel nitrate. Reaction rates are additionally influenced by competitive chloride adsorption and surface charge modification due to solution pH.
Photon induced non-linear quantized double layer charging in quaternary semiconducting quantum dots.
Nair, Vishnu; Ananthoju, Balakrishna; Mohapatra, Jeotikanta; Aslam, M
2018-03-15
Room temperature quantized double layer charging was observed in 2 nm Cu2ZnSnS4 (CZTS) quantum dots. In addition, we observed a distinct non-linearity in the quantized double layer charging arising from UV light modulation of the double layer. UV light irradiation resulted in a 26% increase in the integral capacitance at the semiconductor-dielectric (CZTS-oleylamine) interface of the quantum dot without any change in its core size, suggesting that the cause is photocapacitive. The increasing charge separation at the semiconductor-dielectric interface due to highly stable and mobile photogenerated carriers causes larger electrostatic forces between the quantum dot and electrolyte, leading to an enhanced double layer. This idea was supported by a decrease in the differential capacitance, possibly due to an enhanced double layer. Furthermore, the UV-illumination-enhanced double layer gives an AC excitation dependent differential double layer capacitance, which confirms that the charging process is non-linear. This ultimately illustrates the utility of a colloidal quantum dot-electrolyte interface as a non-linear photocapacitor. Copyright © 2017 Elsevier Inc. All rights reserved.
A class of all digital phase locked loops - Modeling and analysis
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase lock operation are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase step and frequency step inputs for different levels of quantization without loop filter are studied. The analytical results are checked by simulating the actual system on the digital computer.
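The loop described above, which corrects its phase estimate once per carrier cycle from a quantized error measurement with no loop filter, can be sketched minimally as follows. The gain K and quantizer step below are illustrative assumptions, not the paper's parameters:

```python
# Minimal sketch of a first-order all-digital PLL with a quantized phase
# detector (illustrative parameters, not those of the paper).

def run_dpll(phase_in, steps=200, K=0.5, q_step=0.05):
    """Track a constant input phase; the error is quantized before feedback."""
    est = 0.0
    for _ in range(steps):
        err = phase_in - est
        err_q = q_step * round(err / q_step)   # quantized phase detector
        est += K * err_q                        # loop update, no loop filter
    return est

final = run_dpll(phase_in=1.0)
print(abs(1.0 - final))   # residual error limited by the quantizer step
```

Once the error falls inside the quantizer's dead zone, the loop freezes, so the steady-state phase error is bounded by half the quantization step.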
Manipulating topological-insulator properties using quantum confinement
NASA Astrophysics Data System (ADS)
Kotulla, M.; Zülicke, U.
2017-07-01
Recent discoveries have spurred the theoretical prediction and experimental realization of novel materials that have topological properties arising from band inversion. Such topological insulators are insulating in the bulk but have conductive surface or edge states. Topological materials show various unusual physical properties and are surmised to enable the creation of exotic Majorana-fermion quasiparticles. How the signatures of topological behavior evolve when the system size is reduced is interesting from both a fundamental and an application-oriented point of view, as such understanding may form the basis for tailoring systems to be in specific topological phases. This work considers the specific case of quantum-well confinement defining two-dimensional layers. Based on the effective-Hamiltonian description of bulk topological insulators, and using a harmonic-oscillator potential as an example for a softer-than-hard-wall confinement, we have studied the interplay of band inversion and size quantization. Our model system provides a useful platform for systematic study of the transition between the normal and topological phases, including the development of band inversion and the formation of massless-Dirac-fermion surface states. The effects of bare size quantization, two-dimensional-subband mixing, and electron-hole asymmetry are disentangled and their respective physical consequences elucidated.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable to those of existing coders, and superior for some images with low correlation.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown through simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
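As a rough illustration of quantized message passing (not the paper's optimized quantizer design), the sketch below maps LLR messages onto an assumed 3-bit codebook and performs one min-sum check-node update on the quantized values:

```python
# Hedged sketch: 3-bit symmetric message quantization for belief propagation,
# with one min-sum check-node update. The reconstruction levels are assumed
# for illustration, not the mutual-information-optimized levels of the paper.
levels = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]   # 3-bit codebook

def quantize(llr):
    """Map an LLR to the nearest of the 8 reconstruction levels."""
    return min(levels, key=lambda v: abs(v - llr))

def check_node(msgs):
    """Min-sum update: sign product times min magnitude of the other messages."""
    out = []
    for i in range(len(msgs)):
        rest = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in rest:
            sign *= 1 if m >= 0 else -1
        out.append(sign * min(abs(m) for m in rest))
    return out

msgs = [quantize(x) for x in [0.3, -2.2, 4.0, -0.9]]
print(msgs)                # [0.5, -2.5, 3.5, -0.5]
print(check_node(msgs))    # [0.5, -0.5, 0.5, -0.5]
```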
Electron Transport in Nanowires - An Engineer's View
NASA Astrophysics Data System (ADS)
Nawrocki, W.
In the paper, technological problems connected to electron transport in mesoscopic structures and nanostructures are considered. The electrical conductance of nanowires formed by metallic contacts has been investigated in an experimental setup proposed by Costa-Kramer et al. The investigation has been performed in air at room temperature by measuring the conductance between two vibrating metal wires with a standard oscilloscope. Conductance quantization in units of G₀ = 2e²/h = (12.9 kΩ)⁻¹ up to five quanta of conductance has been observed for nanowires formed in many metals. The explanation of this universal phenomenon is the formation of a nanometer-sized wire (nanowire) between macroscopic metallic contacts, which induces, according to the theory proposed by Landauer, the quantization of conductance. Thermal problems in nanowires are also discussed in the paper.
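The quoted conductance quantum can be checked directly from the physical constants; this small computation reproduces G₀ = 2e²/h ≈ (12.9 kΩ)⁻¹:

```python
# Numerical check of the conductance quantum G0 = 2e^2/h.
# CODATA 2018 exact SI values are hard-coded for self-containment.
e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J*s

G0 = 2 * e**2 / h        # conductance quantum, siemens
R0 = 1 / G0              # corresponding resistance, ohms

print(f"G0 = {G0:.4e} S, 1/G0 = {R0 / 1000:.1f} kOhm")
```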
PHASE QUANTIZATION STUDY OF SPATIAL LIGHT MODULATOR FOR EXTREME HIGH-CONTRAST IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Jiangpei; Ren, Deqing, E-mail: jpdou@niaot.ac.cn, E-mail: jiangpeidou@gmail.com
2016-11-20
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problem so that the solution is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10^-10. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10^-10 in comparison to that obtained by using a deformable mirror.
Weakly supervised visual dictionary learning by harnessing image attributes.
Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang
2014-12-01
Bag-of-features (BoFs) representation has been extensively applied to deal with various computer vision applications. To extract discriminative and descriptive BoF, one important step is to learn a good dictionary to minimize the quantization loss between local features and codewords. While most existing visual dictionary learning approaches are engaged with unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping of the observed states, where the supervised information stems from the hidden states of the HMRF. In such a way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms the state-of-the-art unsupervised dictionary learning approaches.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
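The DPCM idea described above can be sketched on toy data (not from the article): the differential transform is itself lossless and tends to concentrate values near zero, which helps a general-purpose entropy coder. Here zlib stands in for the Huffman/LZW coders the article discusses:

```python
# Illustrative DPCM sketch on synthetic data: store the first pixel, then
# differences between neighbors (mod 256); the inverse transform recovers
# the row exactly, so the transform is lossless.
import zlib

row = bytes(128 + (i % 64) for i in range(2048))   # sawtooth "image" row
dpcm = bytes([row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))])

# Lossless inverse: cumulative sum modulo 256 recovers the original row.
rec = bytearray([dpcm[0]])
for d in dpcm[1:]:
    rec.append((rec[-1] + d) % 256)

# Compare compressed sizes of the raw row and its differential version.
print(len(zlib.compress(row, 9)), len(zlib.compress(dpcm, 9)))
```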
Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery
NASA Technical Reports Server (NTRS)
Xie, Hua; Klimesh, Matthew A.
2009-01-01
This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
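A minimal sketch of the quantize-the-residual idea (illustrative, not NASA's actual algorithm): predicting each sample from the previously *reconstructed* sample and quantizing residuals with step 2d+1 bounds the per-sample reconstruction error by d, exactly as the paragraph describes:

```python
# Hedged sketch of near-lossless predictive coding. The trivial
# previous-sample predictor is an assumption for illustration.

def near_lossless_encode(samples, d):
    """Return quantized residual indices; max reconstruction error is d."""
    step = 2 * d + 1
    indices, prev = [], 0
    for x in samples:
        r = x - prev                       # prediction residual
        q = (r + d) // step if r >= 0 else -((-r + d) // step)
        indices.append(q)
        prev = prev + q * step             # track the decoder's reconstruction
    return indices

def near_lossless_decode(indices, d):
    step = 2 * d + 1
    out, prev = [], 0
    for q in indices:
        prev = prev + q * step
        out.append(prev)
    return out

data = [10, 12, 15, 40, 41, 39, 100]
d = 2
rec = near_lossless_decode(near_lossless_encode(data, d), d)
print(rec)   # every value within +/- d of the original
```

Because the encoder predicts from reconstructed values rather than originals, the quantization error never accumulates across samples.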
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown, in many cases, reductions as high as 90% in both storage space and inter-frame delay.
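The quantization and temporal difference-encoding steps can be sketched as follows on a toy one-dimensional "volume" (octree merging is omitted; the step size and data are assumptions for illustration):

```python
# Hedged sketch of the pipeline idea: quantize voxel values, then
# difference-encode consecutive time steps so unchanged regions cost nothing.

def quantize_volume(vol, step):
    """Voxel-level quantization: map each value to an integer bin index."""
    return [round(v / step) for v in vol]

def diff_encode(prev_q, cur_q):
    """Store only (index, value) pairs that changed since the last time step."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev_q, cur_q)) if p != c]

def diff_decode(prev_q, delta):
    out = list(prev_q)
    for i, v in delta:
        out[i] = v
    return out

t0 = [0.0, 0.1, 0.5, 0.52, 0.9]
t1 = [0.0, 0.1, 0.5, 0.8, 0.9]    # only one voxel changed meaningfully
q0, q1 = quantize_volume(t0, 0.1), quantize_volume(t1, 0.1)
delta = diff_encode(q0, q1)
print(delta)                       # [(3, 8)]
```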
A class of all digital phase locked loops - Modelling and analysis.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1972-01-01
An all digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase lock operation are explained in detail. The general digital loop operation is governed by a non-linear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase step, and frequency step inputs for different levels of quantization without loop filter, are studied. The analytical results are checked by simulating the actual system on the digital computer.
BRST quantization of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armendariz-Picon, Cristian; Şengör, Gizem
2016-11-08
BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.
Self-assembly of concentric quantum double rings.
Mano, Takaaki; Kuroda, Takashi; Sanguinetti, Stefano; Ochiai, Tetsuyuki; Tateno, Takahiro; Kim, Jongsu; Noda, Takeshi; Kawabe, Mitsuo; Sakoda, Kazuaki; Kido, Giyuu; Koguchi, Nobuyuki
2005-03-01
We demonstrate the self-assembled formation of concentric quantum double rings with high uniformity and excellent rotational symmetry using the droplet epitaxy technique. Varying the growth process conditions can control each ring's size. Photoluminescence spectra emitted from an individual quantum ring complex show peculiar quantized levels that are specified by the carriers' orbital trajectories.
Deformation of second and third quantization
NASA Astrophysics Data System (ADS)
Faizal, Mir
2015-03-01
In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
NASA Astrophysics Data System (ADS)
Sasano, Koji; Okajima, Hiroshi; Matsunaga, Nobutomo
Recently, fractional-order PID (FO-PID) control, an extension of PID control, has attracted attention. Although FO-PID requires a high-order filter, such a filter is difficult to realize due to the memory limitations of digital computers. Implementation of FO-PID requires approximations of the fractional integrator and differentiator. The short memory principle (SMP) is one of the effective approximation methods; however, it has the disadvantage that the approximated filter cannot eliminate the steady-state error. To address this problem, we introduce a distributed implementation of the integrator and a dynamic quantizer to make efficient use of the permissible memory. The objective of this study is to clarify how to implement an accurate FO-PID with limited memory. In this paper, we propose an implementation method for FO-PID under memory constraints using a dynamic quantizer. The trade-off between the approximation of the fractional elements and the quantized data size is examined so as to approach the ideal FO-PID response. The effectiveness of the proposed method is evaluated by a numerical example and by an experiment on temperature control of a heat plate.
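The SMP drawback mentioned above can be illustrated numerically. The sketch below (an assumed Grunwald-Letnikov discretization, chosen as the standard setting for SMP; not the paper's proposed method) computes a half-order integral with full memory and with a truncated memory window, showing that truncating the memory of an integrator leaves a large steady-state error:

```python
# Hedged sketch: Grunwald-Letnikov fractional integration with short
# memory principle (SMP) truncation, checked against the exact half-order
# integral of f(t) = 1, which is 2*sqrt(t/pi).
import math

def gl_weights(alpha, n):
    """Binomial weights w_k = (-1)^k C(alpha, k) via the standard recursion."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_integrate(samples, h, order, memory):
    """Approximate D^(-order) f on the grid, keeping `memory` past samples."""
    alpha = -order                     # integration = negative-order derivative
    w = gl_weights(alpha, memory)
    out = []
    for n in range(len(samples)):
        k_max = min(n, memory)
        out.append(h ** order * sum(w[k] * samples[n - k] for k in range(k_max + 1)))
    return out

h, N = 0.001, 1000
f = [1.0] * (N + 1)                    # f(t) = 1 on [0, 1]
exact = 2.0 * math.sqrt(1.0 / math.pi) # exact half-order integral at t = 1
err_full = abs(frac_integrate(f, h, 0.5, N)[-1] - exact)
err_smp = abs(frac_integrate(f, h, 0.5, 200)[-1] - exact)
print(err_full, err_smp)               # SMP truncation leaves a large error
```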
Design and Implementation of Multi-Input Adaptive Signal Extractions.
1982-09-01
...deflected gradient) algorithm requiring only N+1 multiplications per adaptation step. Additional quantization is introduced to eliminate all multiplications...
Full Spectrum Conversion Using Traveling Pulse Wave Quantization
2017-03-01
Kappes, Michael S.; Waltari, Mikko E.; IQ-Analog Corporation, San Diego, California
We present a temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete ... pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a ...
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
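The white-sequence quantizer-error model referenced above leads to the standard effective-SNR estimate for a uniform quantizer. The sketch below (a generic illustration under the usual full-scale-sine assumption, not the paper's derivation) compares the exact computation with the familiar 6.02b + 1.76 dB rule:

```python
# Effective SNR of a uniform b-bit quantizer under the white-noise model:
# noise power = step^2 / 12, full-scale sine signal power = A^2 / 2.
import math

def quantizer_snr_db(bits):
    full_scale = 1.0                       # sine amplitude (assumed)
    delta = 2 * full_scale / 2 ** bits     # quantization step over [-1, 1]
    signal_power = full_scale ** 2 / 2
    noise_power = delta ** 2 / 12          # uniform white-noise model
    return 10 * math.log10(signal_power / noise_power)

for b in (3, 8, 12):
    print(b, round(quantizer_snr_db(b), 2), round(6.02 * b + 1.76, 2))
```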
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and the propagation delay of SET inverter. A new analytical model for the noise margin of SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter quantization threshold which is introduced for the first time in this paper. Quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that SET inverter designed with CT:CG=1/3 (where CT and CG are tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds
NASA Astrophysics Data System (ADS)
Schlichenmaier, Martin
2018-04-01
For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product) are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star product of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization are treated. It turns out that all three are equivalent, but different.
Metamaterial bricks and quantization of meta-surfaces
Memoli, Gianluca; Caleap, Mihai; Asakawa, Michihiro; Sahoo, Deepak R.; Drinkwater, Bruce W.; Subramanian, Sriram
2017-01-01
Controlling acoustic fields is crucial in diverse applications such as loudspeaker design, ultrasound imaging and therapy or acoustic particle manipulation. The current approaches use fixed lenses or expensive phased arrays. Here, using a process of analogue-to-digital conversion and wavelet decomposition, we develop the notion of quantal meta-surfaces. The quanta here are small, pre-manufactured three-dimensional units—which we call metamaterial bricks—each encoding a specific phase delay. These bricks can be assembled into meta-surfaces to generate any diffraction-limited acoustic field. We apply this methodology to show experimental examples of acoustic focusing, steering and, after stacking single meta-surfaces into layers, the more complex field of an acoustic tractor beam. We demonstrate experimentally single-sided air-borne acoustic levitation using meta-layers at various bit-rates: from a 4-bit uniform to 3-bit non-uniform quantization in phase. This powerful methodology dramatically simplifies the design of acoustic devices and provides a key-step towards realizing spatial sound modulators. PMID:28240283
NASA Astrophysics Data System (ADS)
Moskalenko, Sveatoslav A.; Podlesny, Igor V.; Dumanov, Evgheni V.; Liberman, Michael A.
2015-09-01
We consider the energy spectrum of two-dimensional cavity polaritons under the influence of strong magnetic and electric fields perpendicular to the surface of GaAs-type quantum wells (QWs) with a p-type valence band embedded in resonators. As a first step in this direction, the Landau quantization (LQ) of the electrons and heavy holes (hh) was investigated, taking into account the Rashba spin-orbit coupling (RSOC) with third-order chirality terms for hh and with nonparabolicity terms in their dispersion law, including the Zeeman splitting (ZS) effects as well. The nonparabolicity term is proportional to the strength of the electric field and was introduced to avoid the collapse of the semiconductor energy gap under the influence of the third-order chirality terms. The exact solutions for the eigenfunctions and eigenenergies were obtained using the Rashba method [E.I. Rashba, Fiz. Tverd. Tela 2, 1224 (1960) [Sov. Phys. Solid State 2, 1109 (1960)]].
A point particle model of lightly bound skyrmions
NASA Astrophysics Data System (ADS)
Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin
2017-04-01
A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
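Frequency-sensitive competitive learning can be sketched as follows: each codeword's distance to the input is scaled by how often that codeword has already won, so rarely used codewords eventually capture inputs and codebook entries are used more evenly. A hypothetical minimal version (the paper's exact update rule and parameters are not reproduced here):

```python
import numpy as np

def fscl_codebook(data, n_codes=8, epochs=5, lr=0.1, seed=0):
    """Train a VQ codebook with frequency-sensitive competitive learning."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    wins = np.ones(n_codes)                            # per-codeword win counts
    for _ in range(epochs):
        for x in data:
            # Frequency-scaled distance: frequent winners are penalized.
            d = np.linalg.norm(codes - x, axis=1) * wins
            k = np.argmin(d)
            codes[k] += lr * (x - codes[k])            # move winner toward input
            wins[k] += 1
    return codes, wins

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))                       # toy 2-D training vectors
codes, wins = fscl_codebook(data)
```

With the win-count scaling, no single codeword can dominate the codebook, which is the property that motivates the "frequency-sensitive" name.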
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.
2012-04-01
This article studies Markovian stochastic motion of a particle on a graph with finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that plays a role of an efficient tool for treating the current quantization effects.
Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei
2017-11-07
Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also brings about bad tracking performance as a result of information loss after quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and the increment of the data length affects our scheme only a little. Its tracking performance improves by only 4.4% from 2- to 3-bit, which means our scheme weakly depends on the number of data bits. Moreover, our scheme also weakly depends on the number of participate sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with 4 × 4 × 4 sensor networks, the number of participant sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme can achieve the pursuit of data-efficiency, which fits the requirements of low-bandwidth UWSNs.
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
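The basic operation (dividing each DCT coefficient by a matrix entry and rounding) looks like this; the matrix below is the well-known JPEG luminance example table, not one produced by the paper's perceptual optimization:

```python
import numpy as np

# Example JPEG luminance quantization matrix (Annex K of the JPEG standard);
# a perceptually optimized matrix would replace these entries.
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize_block(dct_coeffs, Q):
    """Uniformly quantize an 8x8 block of DCT coefficients."""
    return np.round(dct_coeffs / Q).astype(int)

def dequantize_block(q, Q):
    return q * Q

rng = np.random.default_rng(0)
block = rng.normal(scale=100, size=(8, 8))   # stand-in for real DCT coefficients
q = quantize_block(block, Q)
rec = dequantize_block(q, Q)
# The per-coefficient reconstruction error is at most half the matrix entry,
# which is why larger entries trade bit rate for (ideally invisible) distortion.
```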
A Kalman Filter Implementation for Precision Improvement in Low-Cost GPS Positioning of Tractors
Gomez-Gil, Jaime; Ruiz-Gonzalez, Ruben; Alonso-Garcia, Sergio; Gomez-Gil, Francisco Javier
2013-01-01
Low-cost GPS receivers provide geodetic positioning information using the NMEA protocol, usually with eight digits for latitude and nine digits for longitude. When these geodetic coordinates are converted into Cartesian coordinates, the positions fit in a quantization grid of some decimeters in size, the dimensions of which vary depending on the point of the terrestrial surface. The aim of this study is to reduce the quantization errors of some low-cost GPS receivers by using a Kalman filter. Kinematic tractor model equations were employed to particularize the filter, which was tuned by applying Monte Carlo techniques to eighteen straight trajectories, to select the covariance matrices that produced the lowest Root Mean Square Error in these trajectories. Filter performance was tested by using straight tractor paths, which were either simulated or real trajectories acquired by a GPS receiver. The results show that the filter can reduce the quantization error in distance by around 43%. Moreover, it reduces the standard deviation of the heading by 75%. Data suggest that the proposed filter can satisfactorily preprocess the low-cost GPS receiver data when used in an assistance guidance GPS system for tractors. It could also be useful to smooth tractor GPS trajectories that are sharpened when the tractor moves over rough terrain. PMID:24217355
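The idea can be illustrated with a one-dimensional constant-velocity Kalman filter smoothing positions that have been snapped to a quantization grid (a toy stand-in for the paper's kinematic tractor model; the grid size, time step, and noise covariances below are made-up values):

```python
import numpy as np

def kalman_smooth(z, dt=0.2, grid=0.3, q=1e-3):
    """1-D constant-velocity Kalman filter for grid-quantized positions.

    z    : quantized position measurements (m)
    grid : quantization step (m); measurement noise variance is grid**2 / 12
    """
    F = np.array([[1, dt], [0, 1]])           # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                # only position is measured
    Qn = q * np.eye(2)                        # process noise covariance
    R = grid**2 / 12                          # quantization noise variance
    x, P = np.array([z[0], 0.0]), np.eye(2)
    out = []
    for zi in z:
        x, P = F @ x, F @ P @ F.T + Qn        # predict
        K = P @ H.T / (H @ P @ H.T + R)       # Kalman gain
        x = x + (K * (zi - H @ x)).ravel()    # update with the innovation
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# True straight path at 1 m/s, quantized to a 0.3 m grid.
t = np.arange(0, 20, 0.2)
true = 1.0 * t
meas = 0.3 * np.round(true / 0.3)
est = kalman_smooth(meas)
```

After the initial transient, the filtered positions should lie closer to the true straight path than the raw quantized measurements, which is the effect the paper quantifies.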
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated simulation method for MPS based on vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
Bulk dimensional nanocomposites for thermoelectric applications
Nolas, George S
2014-06-24
Thermoelectric elements may be used for heat sensors, heat pumps, and thermoelectric generators. In a quantum-dot or nanoscale-grain-size polycrystalline material, the effects of size quantization are present inside the nanocrystals. A thermoelectric element composed of a densified Group IV-VI material, such as a chalcogenide-based material, is doped with a metal or chalcogenide so that interference barriers form along the grains. The dopant used is either silver or sodium. These chalcogenide materials form nanoparticles of highly crystalline grains, specifically between 1 and 100 nm in size. The compound is densified by spark plasma sintering.
NASA Astrophysics Data System (ADS)
Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.
2018-01-01
We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
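The particle-in-a-box model invoked above gives quantized valence-band energies that scale inversely with the square of the confinement width L (here set by the depth of the P dopant layer beneath the surface), which is why the number and spacing of the quantized VB states depend on the dopant depth:

```latex
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^{*} L^{2}}, \qquad n = 1, 2, 3, \ldots
```

Here m* is the carrier effective mass; a deeper dopant layer (larger L) yields more states with smaller energy separations, consistent with the trend described in the abstract.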
NASA Astrophysics Data System (ADS)
Du, Haifeng; Liang, Dong; Jin, Chiming; Kong, Lingyao; Stolt, Matthew J.; Ning, Wei; Yang, Jiyong; Xing, Ying; Wang, Jian; Che, Renchao; Zang, Jiadong; Jin, Song; Zhang, Yuheng; Tian, Mingliang
2015-07-01
Magnetic skyrmions are topologically stable whirlpool-like spin textures that offer great promise as information carriers for future spintronic devices. To enable such applications, particular attention has been focused on the properties of skyrmions in highly confined geometries such as one-dimensional nanowires. Hitherto, it is still experimentally unclear what happens when the width of the nanowire is comparable to that of a single skyrmion. Here, we achieve this by measuring the magnetoresistance in ultra-narrow MnSi nanowires. We observe quantized jumps in magnetoresistance versus magnetic field curves. By tracking the size dependence of the jump number, we infer that skyrmions are assembled into cluster states with a tunable number of skyrmions, in agreement with the Monte Carlo simulations. Our results enable an electric reading of the number of skyrmions in the cluster states, thus laying a solid foundation to realize skyrmion-based memory devices.
Theory of the Knight Shift and Flux Quantization in Superconductors
DOE R&D Accomplishments Database
Cooper, L. N.; Lee, H. J.; Schwartz, B. B.; Silvert, W.
1962-05-01
Consequences of a generalization of the theory of superconductivity that yields a finite Knight shift are presented. In this theory, by introducing an electron-electron interaction that is not spatially invariant, the pairing of electrons with varying total momentum is made possible. An expression for Xs (the spin susceptibility in the superconducting state) is derived. In general Xs is smaller than Xn, but is not necessarily zero. The precise magnitude of Xs will vary from sample to sample and will depend on the nonuniformity of the samples. There should be no marked size dependence and no marked dependence on the strength of the magnetic field; this is in accord with observation. The basic superconducting properties are retained, but there are modifications in the various electromagnetic and thermal properties since the electrons paired are not time reversed. Consequences of this generalized theory for flux quantization arguments are presented. (auth)
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng
DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.
Dimensional quantization effects in the thermodynamics of conductive filaments
NASA Astrophysics Data System (ADS)
Niraula, D.; Grice, C. R.; Karpov, V. G.
2018-06-01
We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.
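The conductance quantization mentioned above is conventionally expressed through the Landauer picture: a filament thin enough to support only N transverse modes conducts in integer steps of the conductance quantum,

```latex
G = N\,G_0, \qquad G_0 = \frac{2e^2}{h} \approx 77.5~\mu\mathrm{S},
```

so as dimensional quantization removes modes from a shrinking filament, the conductance drops in discrete steps rather than continuously.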
Nearly associative deformation quantization
NASA Astrophysics Data System (ADS)
Vassilevich, Dmitri; Oliveira, Fernando Martins Costa
2018-04-01
We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full-RDO encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed that uses a pseudo-entropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
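The 4-point Walsh-Hadamard transform at the heart of such low-complexity distortion estimates needs only additions and subtractions. A sketch of the butterfly and of the nonzero-coefficient rate proxy (the paper's exact 4×4 and 8×8 orthogonal correction matrices and quantization step are not reproduced here; `qstep` below is a made-up value):

```python
import numpy as np

def wht4(x):
    """4-point Walsh-Hadamard transform via a two-stage butterfly
    (additions and subtractions only, no multiplications)."""
    s0, d0 = x[0] + x[1], x[0] - x[1]
    s1, d1 = x[2] + x[3], x[2] - x[3]
    return np.array([s0 + s1, d0 + d1, s0 - s1, d0 - d1])

def wht4x4(block):
    """Separable 4x4 WHT: transform rows, then columns."""
    tmp = np.array([wht4(r) for r in block])
    return np.array([wht4(c) for c in tmp.T]).T

# Texture-rate proxy: count nonzero quantized transform coefficients,
# as in the CU-level rate estimate described in the abstract.
residual = np.arange(16.0).reshape(4, 4)   # toy prediction residual block
coeffs = wht4x4(residual)
qstep = 16                                  # hypothetical quantization step
nnz = np.count_nonzero(np.round(coeffs / qstep))
```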
NASA Astrophysics Data System (ADS)
Panmand, Rajendra P.; Kumar, Ganapathy; Mahajan, Satish M.; Kulkarni, Milind V.; Amalnerkar, D. P.; Kale, Bharat B.; Gosavi, Suresh. W.
2011-02-01
We report optical studies and magneto-optical properties of Bi2S3 quantum dot/wire-glass nanocomposites. The size of the Q-dots was observed to be in the range 3-15 nm, along with 11 nm Q-wires. The optical study clearly demonstrated the size quantization effect, with a drastic band gap variation with size. Faraday rotation tests on the glass nanocomposites show variation of the Verdet constant with Q-dot size. The Bi2S3 Q-dot/wire glass nanocomposite demonstrated a 190-fold enhancement of the Verdet constant compared to the host glass. These prima facie observations exemplify the significant enhancement in the Verdet constant of Q-dot glass nanocomposites, which will have potential application in magneto-optical devices.
Topological quantization in units of the fine structure constant.
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng
2010-10-15
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
On the Dequantization of Fedosov's Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2003-08-01
To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle of M to the formal neighborhood of the diagonal of the product M × M̃, where M̃ is a copy of M with the opposite Poisson structure. We call it the dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.
Quantum Computing and Second Quantization
Makaruk, Hanna Ewa
2017-02-10
Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization and briefly discuss some of its most important formulations.
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can readily be applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
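The core quantization step, mapping a descriptor to a bit-vector with no trained codebook, can be sketched by thresholding each dimension against the descriptor's own median. This is a deliberate simplification (the published BSIFT scheme is more elaborate), but it shows why near-duplicate descriptors end up with small Hamming distances:

```python
import numpy as np

def binary_sift(desc):
    """Quantize a 128-D descriptor to a 128-bit vector, data-independently."""
    return (desc > np.median(desc)).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
d1 = rng.random(128)                           # a descriptor
d2 = d1 + rng.normal(scale=0.01, size=128)     # slightly perturbed copy of d1
d3 = rng.random(128)                           # an unrelated descriptor

code1, code2, code3 = binary_sift(d1), binary_sift(d2), binary_sift(d3)
# Near-duplicates flip only the few bits near the median, so
# hamming(code1, code2) is far smaller than hamming(code1, code3).
```

Taking `code1[:32]` as the inverted-file code word, as the abstract describes, then requires no codebook at all.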
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
A new compression format for fiber tracking datasets.
Presseau, Caroline; Jodoin, Pierre-Marc; Houde, Jean-Christophe; Descoteaux, Maxime
2015-04-01
A single diffusion MRI streamline fiber tracking dataset may contain hundreds of thousands, and often millions, of streamlines and can take up to several gigabytes of memory. This amount of data is not only heavy to compute, but also difficult to visualize and hard to store on disk (especially when dealing with a collection of brains). These problems call for a fiber-specific compression format that simplifies its manipulation. As of today, no fiber compression format has yet been adopted, and the need for one is becoming an issue for future connectomics research. In this work, we propose a new compression format, .zfib, for streamline tractography datasets reconstructed from diffusion magnetic resonance imaging (dMRI). Tracts contain a large amount of redundant information and are relatively smooth; hence, they are highly compressible. The proposed method is a processing pipeline containing a linearization, a quantization and an encoding step. Our pipeline is tested and validated under a wide range of DTI and HARDI tractography configurations (step size, streamline number, deterministic and probabilistic tracking) and compression options. Similar to JPEG, the user has one parameter to select: a worst-case maximum tolerance error in millimeters (mm). Overall, we find a compression factor of more than 96% for a maximum error of 0.1 mm without any perceptual change or change of diffusion statistics (mean fractional anisotropy and mean diffusivity) along bundles. This opens new opportunities for connectomics and tractometry applications.
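The quantization stage of such a pipeline can be sketched as snapping streamline points to a uniform grid whose step is derived from the user's worst-case error tolerance in mm (illustrative only; the actual .zfib format also linearizes and entropy-codes the points):

```python
import numpy as np

def quantize_streamline(points, max_err_mm=0.1):
    """Quantize Nx3 streamline points (mm) so that the Euclidean error
    of every reconstructed point is at most max_err_mm."""
    # Per-axis rounding error is at most step/2, so the worst-case
    # Euclidean error is (step/2) * sqrt(3); solve for the step.
    step = 2.0 * max_err_mm / np.sqrt(3.0)
    idx = np.round(points / step).astype(np.int32)   # small ints, easy to encode
    return idx, step

def dequantize_streamline(idx, step):
    return idx * step

rng = np.random.default_rng(0)
pts = rng.uniform(-80, 80, size=(1000, 3))           # toy tract points in mm
idx, step = quantize_streamline(pts)
rec = dequantize_streamline(idx, step)
err = np.linalg.norm(rec - pts, axis=1)              # always <= 0.1 mm
```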
Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes
NASA Astrophysics Data System (ADS)
Saini, Sahil; Singh, Parampreet
2018-03-01
We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to a lot of online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced during the quantization process, where the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1,000 times, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9202% while maintaining comparable performance to the state-of-the-art methods.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image is much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image is distorted by Gaussian blurring, AWGN, JPEG compression and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
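A minimal sketch of the block feature described above (quantized 2D-DCT followed by SVD of sub-blocks); the block size, quantization step and sub-block layout here are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def dct2(block):
    """2-D DCT-II via the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def block_feature(block, q_step=16.0):
    """Feature of one overlapping block: quantize its DCT coefficients,
    split into four sub-blocks, keep each sub-block's largest singular
    value (q_step and the quadrant layout are assumed values)."""
    d = np.round(dct2(block) / q_step)
    h = block.shape[0] // 2
    quads = [d[:h, :h], d[:h, h:], d[h:, :h], d[h:, h:]]
    return tuple(np.linalg.svd(qd, compute_uv=False)[0] for qd in quads)

rng = np.random.default_rng(42)
img = rng.random((64, 64))
img[32:40, 32:40] = img[0:8, 0:8]          # simulate a copy-move forgery
f_src = block_feature(img[0:8, 0:8])
f_dst = block_feature(img[32:40, 32:40])
assert f_src == f_dst                      # duplicated blocks match exactly
```

In the full method, features from all overlapping blocks are lexicographically sorted so that duplicated blocks become adjacent and can be matched by their offset.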
Pseudo-Kähler Quantization on Flag Manifolds
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.
Instant-Form and Light-Front Quantization of Field Theories
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James
2018-05-01
In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantizations at appropriate places.
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should generally be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
Sarkar, Sujit
2018-04-12
An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterization for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results on duality and its relation to topological quantization are presented here. A symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long-range interaction. We also present quite a few exact solutions with physical explanations. Finally, we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
BF actions for the Husain-Kuchař model
NASA Astrophysics Data System (ADS)
Barbero G., J. Fernando; Villaseñor, Eduardo J.
2001-04-01
We show that the Husain-Kuchař model can be described in the framework of BF theories. This is a first step towards its quantization by standard perturbative quantum field theory techniques or the spin-foam formalism introduced in the space-time description of general relativity and other diff-invariant theories. The actions that we will consider are similar to the ones describing the BF-Yang-Mills model and some mass generating mechanisms for gauge fields. We will also discuss the role of diffeomorphisms in the new formulations that we propose.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are codebit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes. IPNPR Volume 42-185.
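One of the decoder details mentioned above, the min* function and an approximation to it, can be illustrated directly; the cutoff threshold below is an assumed value, not the article's:

```python
import math

def min_star(a, b):
    """Exact min*: -ln(e^-a + e^-b) = min(a, b) - ln(1 + e^-|a - b|)."""
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def min_star_approx(a, b, cutoff=2.5):
    """Approximate min* of the kind discussed in the article: drop the
    correction term once |a - b| is large (cutoff is an assumed value).
    Such approximations trade accuracy for decoder speed."""
    if abs(a - b) > cutoff:
        return min(a, b)
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

# The correction term is largest when the two inputs are equal:
assert abs(min_star(1.0, 1.0) - (1.0 - math.log(2.0))) < 1e-12
# Far apart, the approximation collapses to a plain min:
assert min_star_approx(0.0, 10.0) == 0.0
```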
Noncommutative gerbes and deformation quantization
NASA Astrophysics Data System (ADS)
Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter
2010-11-01
We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.
Quantized discrete space oscillators
NASA Technical Reports Server (NTRS)
Uzes, C. A.; Kapuscik, Edward
1993-01-01
A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
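The spatial-frequency relation used by the model can be written directly:

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of wavelet level `level` on a
    display with visual resolution r pixels/degree: f = r * 2**(-level)."""
    return r * 2.0 ** (-level)

# At 32 pixels/degree, level-1 wavelets sit at 16 cy/deg and level-5
# wavelets at 1 cy/deg; finer levels mean higher spatial frequency.
assert wavelet_spatial_frequency(32, 1) == 16.0
assert wavelet_spatial_frequency(32, 5) == 1.0
```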
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to the standard batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the minimum-distance centroid, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
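The recursive moving-average centroid update described above can be sketched as follows; fed a fixed set of vectors, the recursion reproduces the batch centroid:

```python
import numpy as np

def update_centroid(centroid, count, vector):
    """Recursive moving-average update: after encoding `vector` into the
    cell whose centroid is `centroid` (matched `count` times so far),
    move the centroid toward the new vector by 1/(count+1) of the gap."""
    count += 1
    centroid = centroid + (vector - centroid) / count
    return centroid, count

rng = np.random.default_rng(0)
data = rng.random((100, 4))
c, n = np.zeros(4), 0
for v in data:
    c, n = update_centroid(c, n, v)
# The recursion agrees with the batch mean of the fixed vector set.
assert np.allclose(c, data.mean(axis=0))
```

In a full adaptive VQ codec, one such running centroid is kept per codevector and only the centroid nearest the encoded vector is updated.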
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with, under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition leads to stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Thermal field theory and generalized light front quantization
NASA Astrophysics Data System (ADS)
Weldon, H. Arthur
2003-04-01
The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ_{μν} is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T ḡ^{00}). Light-front quantization fails since ḡ^{00} = 0; however, various related quantizations are possible.
Generalized radiation-field quantization method and the Petermann excess-noise factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305
2003-10-01
We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.
Simultaneous fault detection and control design for switched systems with two quantized signals.
Li, Jian; Park, Ju H; Ye, Dan
2017-01-01
The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
BFV approach to geometric quantization
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1994-12-01
A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space, with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.
Deformation quantization of fermi fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.
2008-04-15
Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates the procedure considerably. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and, finally, the Dirac propagator have been found with the use of these variables.
Polymer-Fourier quantization of the scalar field revisited
NASA Astrophysics Data System (ADS)
Garcia-Chung, Angel; Vergara, J. David
2016-10-01
The polymer quantization of the Fourier modes of the real scalar field is studied within an algebraic scheme. We replace the positive linear functional of the standard Poincaré-invariant quantization by a singular one. This singular positive linear functional is constructed to mimic the singular limit of the complex structure of the Poincaré-invariant Fock quantization. The resulting symmetry group of such a polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré-invariant Fock vacuum with the polymer Fourier vacuum.
Quantized Rabi oscillations and circular dichroism in quantum Hall systems
NASA Astrophysics Data System (ADS)
Tran, D. T.; Cooper, N. R.; Goldman, N.
2018-06-01
The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.
Instabilities caused by floating-point arithmetic quantization.
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1972-01-01
It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability is treated in which only one quantizer is in operation.
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
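A logarithmic quantizer of the kind mentioned above is commonly defined by levels ±u₀ρⁱ together with a relative error bound; the following sketch uses assumed parameter values, not those of the paper:

```python
import math

def log_quantize(v, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels ±u0 * rho**i (i integer,
    0 < rho < 1).  Mapping v to the level whose sector contains it
    guarantees the relative error bound |q(v) - v| <= delta * |v|,
    with delta = (1 - rho) / (1 + rho).  Parameter values are illustrative."""
    if v == 0.0:
        return 0.0
    mag = abs(v)
    # find i such that u_i*(1+rho)/2 < mag <= u_i*(1+rho)/(2*rho)
    x = 2.0 * mag / (u0 * (1.0 + rho))
    i = math.floor(math.log(x) / math.log(rho)) + 1
    return math.copysign(u0 * rho ** i, v)

delta = (1.0 - 0.5) / (1.0 + 0.5)
for v in [0.01, 0.3, 0.7, 1.0, 1.4, 5.0, -2.2]:
    q = log_quantize(v)
    # relative (not absolute) error bound: coarser steps for larger signals
    assert abs(q - v) <= delta * abs(v) + 1e-12
```

The sector bound |q(v) − v| ≤ δ|v| is what lets such quantization errors be treated as norm-bounded uncertainties in a robust filter design.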
Direct comparison of fractional and integer quantized Hall resistance
NASA Astrophysics Data System (ADS)
Ahlers, Franz J.; Götz, Martin; Pierz, Klaus
2017-08-01
We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.
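Ideal quantization predicts the measured ratio to be exactly 1, since the quantized Hall resistance at filling factor ν is (h/e²)/ν; a quick arithmetic check:

```python
# Ideal quantized Hall resistance at filling factor nu: R[nu] = R_K / nu,
# with R_K = h/e^2 the von Klitzing constant (approx. 25812.807 ohm).
R_K = 25812.807

R_one_third = 3.0 * R_K       # filling factor 1/3  ->  3 * h/e^2
R_two = R_K / 2.0             # filling factor 2    ->  h/(2 e^2)

ratio = R_one_third / (6.0 * R_two)
assert abs(ratio - 1.0) < 1e-12   # ideal quantization predicts exactly 1
```

The experiment's result, 1 − (5.3 ± 6.3) × 10⁻⁸, is consistent with this ideal value within the stated uncertainty.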
DOE Office of Scientific and Technical Information (OSTI.GOV)
Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl
In the paper an invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of position and momentum operators as well as their appropriate ordering in arbitrary curvilinear coordinates is demonstrated. Finally, the extension of the presented formalism onto the non-flat case and related ambiguities of the process of quantization are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of presented formalism onto non-flat case and related ambiguities of the quantization process are discussed.
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
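The compression-amplifier/expansion-network scheme described above is a form of companding; the sketch below uses μ-law as a stand-in, since the thesis's exact compression law is not given in the abstract:

```python
import math

MU = 255.0  # mu-law parameter; assumed stand-in for the thesis's
            # unspecified compression characteristic

def compress(x):
    """Compression amplifier: boost low-amplitude (consonant) components
    before uniform quantization.  x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Expansion network: invert the companding after the DAC."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(y, levels=8):
    """Uniform mid-rise quantizer with `levels` levels on [-1, 1]."""
    step = 2.0 / levels
    idx = min(levels - 1, max(0, int((y + 1.0) / step)))
    return -1.0 + (idx + 0.5) * step

# With only eight levels, companding shrinks the error on small amplitudes:
x = 0.02
plain = abs(quantize(x) - x)
companded = abs(expand(quantize(compress(x))) - x)
assert companded < plain
```

This is why the companded channel tolerates eight levels: the quantization steps are effectively finer exactly where the low-amplitude consonant sounds live.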
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding using images that have been originally stored in the JPEG format as cover-images for spatial-domain steganography.
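The compatibility test described above can be sketched as follows: a block that came out of a JPEG decoder has DCT coefficients that are integer multiples of the quantization steps (pixel rounding, which the real method must account for, is ignored in this idealized sketch):

```python
import numpy as np

def dct2(block):
    """2-D DCT-II via the orthonormal DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def jpeg_compatible(block, Q, tol=1e-6):
    """Idealized compatibility check: every DCT coefficient of the block
    must lie within tol of an integer multiple of its step in Q."""
    d = dct2(block)
    return bool(np.all(np.abs(d - np.round(d / Q) * Q) <= tol))

Q = np.full((8, 8), 16.0)
rng = np.random.default_rng(1)
coeffs = np.round(rng.normal(0.0, 3.0, (8, 8))) * Q   # quantized coefficients
n = 8
k = np.arange(n)
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
C[0, :] = np.sqrt(1.0 / n)
pixels = C.T @ coeffs @ C                             # inverse 2-D DCT

assert jpeg_compatible(pixels, Q)                     # genuine JPEG output
tampered = pixels.copy()
tampered[0, 0] += 1.0                                 # e.g. an embedding change
assert not jpeg_compatible(tampered, Q)               # incompatibility exposed
```

Even a one-unit change to a single pixel perturbs every DCT coefficient of the block, pushing them off the quantization lattice.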
Coherent state quantization of quaternions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com
Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, the quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.
Educational Information Quantization for Improving Content Quality in Learning Management Systems
ERIC Educational Resources Information Center
Rybanov, Alexander Aleksandrovich
2014-01-01
The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
NASA Astrophysics Data System (ADS)
Binz, Ernst; Pods, Sonja
2006-01-01
In these notes we associate a natural Heisenberg group bundle Ha with a singularity-free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.
BFV quantization on hermitian symmetric spaces
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1995-02-01
The gauge-invariant BFV approach to geometric quantization is applied to the case of Hermitian symmetric spaces G/H. In particular, gauge-invariant quantization on the Lobachevski plane and the sphere is carried out. Due to the presence of symmetry, the master equations for the first-class constraints, quantum observables, and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation, and Berezin quantization approaches.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.
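The cross-channel vector construction and nearest-neighbor encoding described above can be sketched as follows; the channel values and codebook entries are invented for illustration, and real codebooks would be trained (e.g., by a Lloyd-type algorithm) rather than hand-picked.

```python
def make_vectors(channels):
    # Stack the pixel at each image location across all channels
    # into one vector, as in the multispectral VQ described above.
    npix = len(channels[0])
    return [[ch[i] for ch in channels] for i in range(npix)]

def vq_encode(vectors, codebook):
    # Map each vector to the index of its nearest codeword (squared error).
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vectors]

def vq_decode(indices, codebook):
    # Reconstruction substitutes each index by its codeword.
    return [codebook[k] for k in indices]
```

Only the indices need to be stored or transmitted, which is where the compression comes from; a lossless coder (such as the Huffman variant mentioned above) can then compress the index stream further.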
Timing Calibration in PET Using a Time Alignment Probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, William W.; Thompson, Christopher J.
2006-05-05
We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and conventional methods is equivalent.
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems to it. First, there is no background structure in the theory which could be used, e.g., to regularize constraint operators, to identify a "time," or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization.
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
Quantization of Electromagnetic Fields in Cavities
NASA Technical Reports Server (NTRS)
Kakazu, Kiyotaka; Oshiro, Kazunori
1996-01-01
A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
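For a uniform scalar quantizer with step q applied to transform coefficients, the classic high-rate model predicts a mean squared quantization error of q²/12. A quick numerical check of that figure (the uniformly distributed test data and the step value are assumptions for illustration):

```python
import random

def quantize(x, q):
    # Mid-tread uniform quantizer: round to the nearest multiple of q.
    return q * round(x / q)

random.seed(0)
q = 0.5
samples = [random.uniform(-10, 10) for _ in range(200000)]
mse = sum((x - quantize(x, q)) ** 2 for x in samples) / len(samples)
# The empirical MSE should be close to the model value q*q/12.
```

The agreement holds because, for a smoothly distributed input and a small step, the quantization error is approximately uniform on [-q/2, q/2], whose second moment is q²/12.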
Quantized impedance dealing with the damping behavior of the one-dimensional oscillator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jinghao; Zhang, Jing; Li, Yuan
2015-11-15
A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy-level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to coincide with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first- and third-order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low-rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics into the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
Quantized impedance dealing with the damping behavior of the one-dimensional oscillator
NASA Astrophysics Data System (ADS)
Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping
2015-11-01
A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy-level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to coincide with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first- and third-order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives for distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on average, which can be further reduced computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and argue that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate, through extensive experiments, a clear advantage in estimation performance over typical designs and previously published design techniques.
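The cyclic generalized Lloyd framework mentioned above alternates, in its classical squared-error form, between nearest-codeword assignment and centroid update. A minimal one-dimensional sketch of that iteration (the paper's KL-divergence cost replaces the squared-error distortion used here, which serves only to illustrate the cyclic structure):

```python
def lloyd(samples, codebook, iters=20):
    # Generalized Lloyd iteration with squared-error distortion:
    # (1) assign each sample to its nearest codeword,
    # (2) move each codeword to the centroid of its cell.
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for x in samples:
            j = min(range(len(codebook)), key=lambda k: (x - codebook[k]) ** 2)
            cells[j].append(x)
        codebook = [sum(c) / len(c) if c else codebook[k]
                    for k, c in enumerate(cells)]
    return codebook
```

Each step can only decrease (or keep) the total distortion, which is why the cyclic design converges; with the squared-error cost the limit is in general only a local optimum, whereas the paper argues global convergence for its convex KL-based cost.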
Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS
NASA Astrophysics Data System (ADS)
Landsman, N. P.
Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.
A study of non-local holography in the AdS/CFT correspondence
NASA Astrophysics Data System (ADS)
Hamilton, Alex
This thesis is broadly composed of three topics. After giving a brief overview of the origins of the AdS/CFT duality, we describe a way of representing local bulk fields as quasi-local CFT operators. We show how these smeared boundary operators encode the holographic radial-scale duality, and how this can lead to degrees of freedom consistent with Bekenstein's entropy. We also gain insight into the BTZ black hole, with the horizon, singularity, and thermality arising naturally via these operators. As another aspect of AdS/CFT, we will be interested in the fate of giant gravitons under a marginal deformation. We review the construction and fluctuation spectrum of giants, and then proceed to evaluate them in two different Penrose limits of Lunin and Maldacena's gamma deformed geometry. We find only one to be stable, and describe how the degeneracy of the spectrum is partially broken. Finally, we make a first step towards cosmological particle production in string theory by introducing a first quantized alternative approach to the standard method of calculation. We show how the same calculation can be done with Green's Functions---objects which are well defined in a first quantized setting (such as string theory).
Application of State Quantization-Based Methods in HEP Particle Transport Simulation
NASA Astrophysics Data System (ADS)
Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo
2017-10-01
Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is an accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while it was 10 times slower in the case with zero volume boundaries.
Light-cone quantization of two dimensional field theory in the path integral approach
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.
1999-05-01
A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
Relational symplectic groupoid quantization for constant Poisson structures
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin
2017-09-01
As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.
Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.
Karayiannis, N B; Pai, P I
1999-02-01
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
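The abstract describes algorithms that update all prototypes of a competitive network through unsupervised learning. A generic fuzzy-weighted update step can be sketched as follows; the inverse-distance membership function used here is a simple stand-in chosen for illustration, not the specific FALVQ membership family evaluated in the paper.

```python
def fuzzy_lvq_step(x, prototypes, lr=0.1, eps=1e-9):
    # Update every prototype toward the input vector x, weighted by a
    # normalized inverse-distance membership: closer prototypes move more,
    # but (unlike crisp competitive learning) all prototypes are updated.
    d2 = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) + eps for w in prototypes]
    inv = [1.0 / d for d in d2]
    s = sum(inv)
    u = [v / s for v in inv]  # memberships sum to 1
    return [[wi + lr * ui * (xi - wi) for xi, wi in zip(x, w)]
            for w, ui in zip(prototypes, u)]
```

In the MR segmentation setting above, x would be the feature vector of local relaxation parameters at a voxel, and the converged prototypes would represent the tissue classes.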
Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.
2006-09-15
Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.
Response of two-band systems to a single-mode quantized field
NASA Astrophysics Data System (ADS)
Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.
2016-03-01
The response of topological insulators (TIs) to an external weak classical field can be expressed in terms of the Kubo formula, which predicts the quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
Universe creation from the third-quantized vacuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuigan, M.
1989-04-15
Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.
Universe creation from the third-quantized vacuum
NASA Astrophysics Data System (ADS)
McGuigan, Michael
1989-04-01
Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.
4D Sommerfeld quantization of the complex extended charge
NASA Astrophysics Data System (ADS)
Bulyzhenkov, Igor E.
2017-12-01
Gravitational fields and accelerations cannot change quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can quantitatively introduce the fine structure constant and the Planck length.
The coordinate coherent states approach revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn
2013-02-15
We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between the annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we question this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective greybody (deformed) factor. Highlights: Suggest a canonical quantization in the coordinate coherent states approach. Prove the validity of the concept of point particles. Apply the canonical quantization to the Unruh effect and Hawking radiation. Find no deformations in the Unruh temperature and Hawking temperature. Provide the modified spectra of the Unruh effect and Hawking radiation.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery: QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove the wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
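The soft-thresholding step used in such wavelet-domain denoising shrinks each coefficient toward zero by the threshold and zeroes small coefficients entirely. A minimal sketch of that operator follows; the entropy-aware choice of the threshold factor itself is the paper's contribution and is not reproduced here.

```python
def soft_threshold(x, t):
    # Shrink |x| by t; coefficients with magnitude below t are zeroed.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0
```

In an iterative projected Landweber loop, this operator would be applied to the wavelet coefficients of the current estimate at every iteration, with t set by the thresholding model.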
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
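The online vector-quantization rule the abstract describes can be sketched compactly: each new input either merges into its nearest existing center (updating that center's coefficient with the "redundant" data) or becomes a new center. The Gaussian kernel, step size, and quantization size below are illustrative assumptions.

```python
import math

def gauss(a, b, sigma=1.0):
    # Gaussian kernel between two input vectors.
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))

class QKLMS:
    def __init__(self, step=0.5, eps=1.0):
        self.step, self.eps = step, eps   # learning rate, quantization size
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        e = y - self.predict(x)           # prediction error on the new sample
        if not self.centers:
            self.centers.append(x)
            self.alphas.append(self.step * e)
            return
        j = min(range(len(self.centers)),
                key=lambda k: sum((u - v) ** 2
                                  for u, v in zip(self.centers[k], x)))
        dist = math.sqrt(sum((u - v) ** 2 for u, v in zip(self.centers[j], x)))
        if dist <= self.eps:
            self.alphas[j] += self.step * e   # merge: no network growth
        else:
            self.centers.append(x)            # new center
            self.alphas.append(self.step * e)
```

The quantization size eps directly trades network compactness against resolution: a larger eps merges more inputs and keeps the radial basis function structure small.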
Hao, Li-Ying; Park, Ju H; Ye, Dan
2017-09-01
In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, where two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding mode stability analysis, and the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller and a quantization parameter adjustment strategy is then proposed. By utilizing the H∞ control analytical technique, once the system is in the sliding mode, disturbance attenuation and fault tolerance from the initial time can be established without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
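The first stage described above is the classic optimal bit-allocation problem: under the standard high-rate model, the Lagrangian solution gives each sub-band the average rate plus half the log of its variance relative to the geometric mean. A sketch under that textbook model (ignoring the non-negativity and integer constraints that the Karush-Kuhn-Tucker conditions handle in the paper):

```python
import math

def allocate_bits(variances, total_bits):
    # High-rate optimal allocation: b_k = B/N + 0.5*log2(var_k / geo_mean).
    # Sub-bands with larger variance receive proportionally more bits.
    n = len(variances)
    log_gm = sum(math.log2(v) for v in variances) / n  # log of geometric mean
    return [total_bits / n + 0.5 * (math.log2(v) - log_gm) for v in variances]
```

The allocations sum exactly to the total budget, and each factor-of-4 increase in sub-band variance buys one extra bit, which matches the intuition that busier wavelet sub-bands deserve finer quantization.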
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that a gain of about 4.5 dB over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system on a noisy channel, with a small loss of clean-channel performance.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
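The underlying DCT-matrix quantization step can be sketched as follows. This is a generic illustration of matrix quantization, not the patented perceptual-matrix computation; the coefficients and matrix entries are made up.

```python
def quantize_block(dct_block, qmatrix):
    """Divide each DCT coefficient by its matrix entry and round.
    Larger matrix entries discard more (less visible) detail."""
    return [[round(c / q) for c, q in zip(row, qrow)]
            for row, qrow in zip(dct_block, qmatrix)]

def dequantize_block(qblock, qmatrix):
    """Approximate reconstruction: multiply codes back by the entries."""
    return [[c * q for c, q in zip(row, qrow)]
            for row, qrow in zip(qblock, qmatrix)]

codes = quantize_block([[100, 50], [20, 10]], [[16, 11], [12, 14]])
# codes == [[6, 5], [2, 1]]
```

The perceptual adaptation described above amounts to choosing the `qmatrix` entries per image so that the rounding error stays just below the visibility threshold.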
Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity
NASA Astrophysics Data System (ADS)
Veraguth, Olivier J.; Wang, Charles H.-T.
2017-10-01
Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry to allow a reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.
Tribology of the lubricant quantized sliding state.
Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio
2009-11-07
In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Because quantization codes vary only slightly among different pixel exposures of the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed from both the coarse and fine quantization results. This equivalent operation reduces the total number of bits required for quantization. In a 0.18 µm CMOS process, two versions of a 16-stage digital-domain CMOS TDI image sensor chain based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumption per slice of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively, while the linearity of the two versions is 99.74% and 99.99%, respectively.
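The composition of a full code from the two pixel groups can be sketched as plain bit concatenation. This is a simplified illustration of the idea, not the sensor's actual accumulation logic; the code values and bit widths are made up.

```python
def compose_code(coarse, fine, low_bits):
    """Form a complete quantization code from a coarse result (high
    bits) and the low bits of a fine result."""
    return (coarse << low_bits) | (fine & ((1 << low_bits) - 1))

# 5 coarse high bits + 5 fine low bits -> one 10-bit code.
full = compose_code(0b10110, 0b01101, low_bits=5)
# full == 0b1011001101
```

The power saving comes from each group resolving only part of the code, so neither ADC pass needs the full 10-bit precision.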
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
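Subtractive dithering can be sketched as follows: add a reproducible pseudo-random offset before rounding, and subtract the same offset on restore. This is a minimal sketch in the spirit of the method described above, not the exact fpack algorithm; the step size and seeding are illustrative.

```python
import random

def quantize(values, delta, seed=1):
    """Quantize to scaled integers with subtractive dither."""
    rng = random.Random(seed)
    return [round(v / delta + (rng.random() - 0.5)) for v in values]

def restore(codes, delta, seed=1):
    """Regenerate the same dither stream and subtract it, bounding
    the per-pixel error by delta/2."""
    rng = random.Random(seed)
    return [(q - (rng.random() - 0.5)) * delta for q in codes]
```

Because the dither decorrelates the rounding error from the pixel values, photometric measurements averaged over many pixels remain unbiased even at coarse step sizes.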
Quantized Majorana conductance
NASA Astrophysics Data System (ADS)
Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.
2018-04-01
Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e²/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e²/h, with a recent observation of a peak height close to 2e²/h. Here we report a quantized conductance plateau at 2e²/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.
Controlling charge quantization with quantum fluctuations.
Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F
2016-08-04
In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.
Robustness of topological Hall effect of nontrivial spin textures
NASA Astrophysics Data System (ADS)
Jalil, Mansoor B. A.; Tan, Seng Ghee
2014-05-01
We analyze the topological Hall conductivity (THC) of topologically nontrivial spin textures like magnetic vortices and skyrmions and investigate its possible application in the readback for magnetic memory based on those spin textures. Under adiabatic conditions, such spin textures would theoretically yield quantized THC values, which are related to topological invariants such as the winding number and polarity, and as such are insensitive to fluctuations and smooth deformations. However, in a practical setting, the finite size of spin texture elements and the influence of edges may cause them to deviate from their ideal configurations. We calculate the degree of robustness of the THC output in practical magnetic memories in the presence of edge and finite size effects.
Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
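The "copy-and-permute" operation can be sketched as lifting a small base (protograph) matrix with cyclic-shift permutation blocks. This is a standard construction consistent with the description above; the base matrix, lift size, and shifts below are made up.

```python
def lift(base, shifts, Z):
    """Copy-and-permute: each nonzero protograph entry becomes a Z x Z
    cyclically shifted identity block; each zero a Z x Z zero block."""
    R, C = len(base), len(base[0])
    H = [[0] * (C * Z) for _ in range(R * Z)]
    for r in range(R):
        for c in range(C):
            if base[r][c]:
                for k in range(Z):
                    H[r * Z + k][c * Z + (k + shifts[r][c]) % Z] = 1
    return H

# A 2 x 2 protograph lifted by Z = 3 gives a 6 x 6 parity-check matrix.
H = lift([[1, 1], [0, 1]], [[0, 1], [0, 2]], Z=3)
```

The lifted graph inherits the protograph's node degrees, which is why small, regular protographs control decoding behavior at every derived block size.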
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor, and gimbal torquers are given. Decoupling, feedforward, and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays, and quantization of control signals are specified.
Mesoscopic Vortex–Meissner currents in ring ladders
NASA Astrophysics Data System (ADS)
Haug, Tobias; Amico, Luigi; Dumke, Rainer; Kwek, Leong-Chuan
2018-07-01
Recent experimental progress has revealed Meissner and vortex phases in low-dimensional ultracold-atom systems. Atomtronic setups can realize ring ladders while explicitly taking the finite size of the system into account. This enables the engineering of quantized chiral currents and of phase slips between them. We find that the mesoscopic scale modifies the current. Full control of the lattice configuration reveals a reentrant behavior of the vortex and Meissner phases. Our approach allows a feasible diagnostic of the currents' configuration through time-of-flight measurements.
Modulated error diffusion CGHs for neural nets
NASA Astrophysics Data System (ADS)
Vermeulen, Pieter J. E.; Casasent, David P.
1990-05-01
New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
Perturbation theory in light-cone quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langnau, A.
1992-01-01
A thorough investigation of light-cone properties which are characteristic for higher dimensions is very important. The easiest way of addressing these issues is by analyzing the perturbative structure of light-cone field theories first. Perturbative studies cannot be substituted for an analysis of problems related to a nonperturbative approach. However, in order to lay down groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least in second and fourth-order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of the Feynman calculations in higher-order perturbation theory, it is desirable to automatize Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we are elaborating on here is known as light-cone perturbation theory.
La genèse du concept de champ quantique
NASA Astrophysics Data System (ADS)
Darrigol, O.
This is a historical study of the roots of a concept which has proved to be essential in modern particle physics: the concept of the quantum field. The first steps were accomplished by two young theoreticians: Pascual Jordan quantized the free electromagnetic field in 1925 by means of the formal rules of the just-discovered matrix mechanics, and Paul Dirac quantized the whole system of charges + field in 1927. Using Dirac's equation for electrons (1928) and Jordan's idea of quantized matter waves (second quantization), Werner Heisenberg and Wolfgang Pauli provided in 1929-1930 an extension of Dirac's radiation theory and the proof of its relativistic invariance. Meanwhile, Enrico Fermi independently discovered a more elegant and pedagogical formulation. To appreciate the degree of historical necessity of the quantization of fields, and the value of contemporaneous criticisms of this approach, it was necessary to investigate some of the history of the old radiation theory. We present the various arguments, however provisional, naïve, or wrong they may appear in retrospect. We thus hope to contribute to a more vivid picture of notions which, once deprived of their historical setting, might seem abstruse to the modern user.
NASA Astrophysics Data System (ADS)
Deng, Jinyu; Li, Huihui; Dong, Kaifeng; Li, Run-Wei; Peng, Yingguo; Ju, Ganping; Hu, Jiangfeng; Chow, Gan Moog; Chen, Jingsheng
2018-03-01
We find that misfit strain may lead to oscillatory size distributions of heteroepitaxial nanostructures. In heteroepitaxial FePt thin films grown on single-crystal MgO substrates, ⟨110⟩-oriented mazelike and granular patterns with "quantized" feature sizes are observed in scanning-electron-microscope images. The physical mechanism responsible for the size oscillations is related to the oscillatory nature of the misfit strain energy in the domain-matching epitaxial FePt/MgO system, which is observed by transmission electron microscopy. Based on the experimental observations, a model is built whose results suggest that when the FePt island sizes are an integer multiple of the misfit dislocation period, the misfit strain can be completely canceled by the misfit dislocations. By applying this mechanism, small and uniform grains are obtained on a TiN (200) polycrystalline underlayer, which is suitable for practical application. This finding may offer a way to synthesize nanostructured materials with well-controlled size and size distribution by tuning the lattice mismatch in the epitaxially grown heterostructure.
Evaluation of NASA speech encoder
NASA Technical Reports Server (NTRS)
1976-01-01
Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.
Landau quantization in monolayer GaAs
NASA Astrophysics Data System (ADS)
Chung, Hsien-Ching; Ho, Ching-Hong; Chang, Cheng-Peng; Chen, Chun-Nan; Chiu, Chih-Wei; Lin, Ming-Fa
In the past decade, the discovery of graphene has opened the possibility of two-dimensional materials in both fundamental research and technological applications. However, the gapless feature limits the applications of pristine graphene. Recently, researchers have found new challenges and opportunities in post-graphene two-dimensional nanomaterials, such as silicene (Si), germanene (Ge), and tinene (Sn), owing to their sufficiently large energy gaps (of a size comparable to the thermal energy at room temperature). Apart from the graphene analogs of group IV elements, buckled honeycomb lattices of binary compositions of group III-V elements have been proposed as a new class of post-graphene two-dimensional nanomaterials. In this study, a generalized tight-binding model incorporating spin-orbit coupling is used to investigate the essential properties of monolayer GaAs. The Landau quantization, band structure, wave functions, and density of states are discussed in detail. One of us (Hsien-Ching Chung) thanks Ming-Hui Chung and Su-Ming Chen for financial support. This work was supported in part by the Ministry of Science and Technology of Taiwan under Grant Number MOST 105-2811-M-017-003.
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such an operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
Face biometrics with renewable templates
NASA Astrophysics Data System (ADS)
van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei
2006-02-01
In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
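The binary-extraction idea can be sketched as mean-thresholding restricted to the most reliable components. This is a simplified illustration of reliable-component selection, not the exact helper-data construction; all values below are made up.

```python
def extract_bits(features, means, reliabilities, k):
    """Keep the k most reliable components, then quantize each kept
    component to one bit by thresholding at its population mean."""
    keep = sorted(sorted(range(len(features)),
                         key=lambda i: -reliabilities[i])[:k])
    return [1 if features[i] > means[i] else 0 for i in keep]

bits = extract_bits([0.9, 0.1, 0.5, 0.7], [0.5] * 4,
                    [0.2, 0.9, 0.4, 0.8], k=2)
# bits == [0, 1]  (components 1 and 3 are the most reliable)
```

Selecting only reliable components is what keeps the extracted bit string stable across enrollment measurements, which in turn bounds the FRR.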
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
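The gradient-plus-shrinkage idea behind such sparse-representation solvers can be made concrete with plain iterative soft-thresholding (ISTA), a simpler relative of HDA shown only for illustration; the dictionary and data are made up.

```python
def ista(A, y, lam, step=1.0, iters=50):
    """Iterative soft-thresholding for
    min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual of the current reconstruction.
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            # Gradient step followed by soft-thresholding (shrinkage).
            v = x[j] + step * sum(A[i][j] * r[i] for i in range(m))
            x[j] = max(v - lam, 0.0) if v > 0 else min(v + lam, 0.0)
    return x

# With an identity dictionary, ISTA reduces to soft-thresholding of y,
# so x converges to approximately [0.8, 0.0].
x = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.1], lam=0.2)
```

HDA replaces the analog coordinate exchanges of such a solver with quantized (spike-like) messages, which is the source of its energy efficiency.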
Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme
NASA Astrophysics Data System (ADS)
Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen
2016-06-01
Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In existing studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes, and the Hawk-Dove game, have been extensively explored using the quantization approach. Following a similar method, several game models of opinion formation are quantized here on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can strikingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.
NASA Astrophysics Data System (ADS)
Jurčo, B.; Schlieker, M.
1995-07-01
In this paper, Fock-space representations (contragredient Verma modules) of quantized enveloping algebras that are natural from the geometrical point of view are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.
NASA Astrophysics Data System (ADS)
DeWitt, Bryce S.
2017-06-01
During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation, which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
Topologies on quantum topoi induced by quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakayama, Kunji
2013-07-15
In the present paper, we consider effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one, which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.
Bulk-edge correspondence in topological transport and pumping
NASA Astrophysics Data System (ADS)
Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro
2018-03-01
The bulk-edge correspondence (BEC) refers to a one-to-one relation between the bulk and edge properties ubiquitous in topologically nontrivial systems. Depending on the setup, BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is an old theoretical concept, BEC in the pump was established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit, associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup and in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk; its quantization is guaranteed by a compensation between the bulk and the edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates and search locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
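The general mechanism can be sketched as follows; this is a hedged illustration only (the median thresholding and the (image, box) labeling are assumptions, not the paper's exact scheme). Real-valued descriptors are quantized into integer bit-vector keys, which address an inverted file mapping each key to candidate boxes:

```python
import numpy as np
from collections import defaultdict

def binary_quantize(desc, thresholds):
    """Quantize a float descriptor into an integer bit-vector key."""
    key = 0
    for bit in (desc > thresholds):
        key = (key << 1) | int(bit)
    return key

rng = np.random.default_rng(1)
descs = rng.normal(size=(1000, 16))      # toy stand-ins for SIFT descriptors
thresholds = np.median(descs, axis=0)    # per-dimension split points (an assumption)

# Inverted file: bit-vector key -> list of (image_id, box_id) pairs.
inverted = defaultdict(list)
for i, d in enumerate(descs):
    inverted[binary_quantize(d, thresholds)].append((i // 10, i % 10))

# Query: descriptors hashing to the same key are the candidate boxes.
candidates = inverted[binary_quantize(descs[42], thresholds)]
print(candidates)   # includes (4, 2), the box descriptor 42 was indexed under
```

Because the key is a small integer, the whole index fits in main memory, which is the property the abstract emphasizes.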
Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun
2015-10-01
This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of such lower bound for each case, respectively. Simulation examples are also presented to illustrate the established results.
From black holes to white holes: a quantum gravitational, symmetric bounce
NASA Astrophysics Data System (ADS)
Olmedo, Javier; Saini, Sahil; Singh, Parampreet
2017-11-01
Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.
From Weyl to Born-Jordan quantization: The Schrödinger representation revisited
NASA Astrophysics Data System (ADS)
de Gosson, Maurice A.
2016-03-01
The ordering problem has been one of the long standing and much discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one used short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, and this without affecting the main existing theory; for instance quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. 
Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution reducing interference effects.
Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection
NASA Technical Reports Server (NTRS)
Messing, B. S.
1980-01-01
A terrain modeling algorithm that would reconstruct the sensed ground images formed by the triangulation scheme, and classify as unsafe any terrain feature that would pose a hazard to a roving vehicle is described. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.
Classical BV Theories on Manifolds with Boundary
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Mnev, Pavel; Reshetikhin, Nicolai
2014-12-01
In this paper we extend the classical BV framework to gauge theories on spacetime manifolds with boundary. In particular, we connect the BV construction in the bulk with the BFV construction on the boundary and we develop its extension to strata of higher codimension in the case of manifolds with corners. We present several examples including electrodynamics, Yang-Mills theory and topological field theories coming from the AKSZ construction, in particular, the Chern-Simons theory, the BF theory, and the Poisson sigma model. This paper is the first step towards developing the perturbative quantization of such theories on manifolds with boundary in a way consistent with gluing.
Conduction quantization in monolayer MoS2
NASA Astrophysics Data System (ADS)
Li, T. S.
2016-10-01
We study the ballistic conduction of a monolayer MoS2 subject to a spatially modulated magnetic field by using the Landauer-Buttiker formalism. The band structure depends sensitively on the field strength, and its change has profound influence on the electron conduction. The conductance is found to demonstrate multi-step behavior due to the discrete number of conduction channels. The sharp peak and rectangular structures of the conductance are stretched out as temperature increases, due to the thermal broadening of the derivative of the Fermi-Dirac distribution function. Finally, quantum behavior in the conductance of MoS2 can be observed at temperatures below 10 K.
An analogue of Weyl’s law for quantized irreducible generalized flag manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no
2015-09-15
We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.
Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R
2003-09-10
We present an investigation into the phase errors in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and add a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles apply equally well to other phase-measuring techniques, yielding a phase error distribution caused by the camera bit depth.
Performance of customized DCT quantization tables on scientific data
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh; Livny, Miron
1994-01-01
We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
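The mechanism being tuned can be made concrete with a small, hedged sketch (the tables below are the standard JPEG luminance defaults and a uniform table of equal average step, not the paper's customized matrices): an 8x8 orthonormal DCT, coefficient scaling by a quantization matrix, rounding, and reconstruction.

```python
import numpy as np

N = 8
# Orthonormal 8x8 DCT-II basis matrix, so idct2 is the exact inverse of dct2.
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)

def dct2(block):
    return C @ block @ C.T

def idct2(coef):
    return C.T @ coef @ C

def quantize_block(block, Q):
    """DCT, divide each frequency by its table entry, round, and reconstruct."""
    coef = np.round(dct2(block) / Q)
    return idct2(coef * Q)

# Standard JPEG luminance table vs. a flat table with the same average step.
Q_jpeg = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)
Q_flat = np.full((N, N), Q_jpeg.mean())

rng = np.random.default_rng(0)
block = rng.normal(scale=50.0, size=(N, N))   # toy data standing in for pixels
for name, Q in (("jpeg", Q_jpeg), ("flat", Q_flat)):
    mse = np.mean((block - quantize_block(block, Q)) ** 2)
    print(name, round(mse, 2))
```

Customizing the matrix Q to the statistics of a particular data source is exactly the degree of freedom the paper exploits; the sketch only shows where Q enters the pipeline.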
Gravitational surface Hamiltonian and entropy quantization
NASA Astrophysics Data System (ADS)
Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav
2017-02-01
The surface Hamiltonian corresponding to the surface part of a gravitational action has the xp structure, where p is the momentum conjugate to x. Moreover, on the horizon of a black hole it reduces to TS, where T and S are the temperature and entropy of the horizon. Imposing a hermiticity condition, we quantize this Hamiltonian, which leads to an equidistant spectrum of its eigenvalues. Using this, we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of that earlier result, as the calculation is based on direct quantization of the Hamiltonian in the sense of usual quantum mechanics.
Quantization of Non-Lagrangian Systems
NASA Astrophysics Data System (ADS)
Kochan, Denis
A novel method for the quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined on extended velocity space. In this setting, classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems, the presented method reduces to the standard Feynman approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity, and quantum mechanics in the presence of friction are analyzed in detail.
New variables for classical and quantum gravity in all dimensions: I. Hamiltonian analysis
NASA Astrophysics Data System (ADS)
Bodendorfer, N.; Thiemann, T.; Thurn, A.
2013-02-01
Loop quantum gravity (LQG) relies heavily on a connection formulation of general relativity such that (1) the connection Poisson commutes with itself and (2) the corresponding gauge group is compact. This can be achieved starting from the Palatini or Holst action when imposing the time gauge. Unfortunately, this method is restricted to D + 1 = 4 spacetime dimensions. However, interesting string theories and supergravity theories require higher dimensions and it would therefore be desirable to have higher dimensional supergravity loop quantizations at one’s disposal in order to compare these approaches. In this series of papers we take first steps toward this goal. The present first paper develops a classical canonical platform for a higher dimensional connection formulation of the purely gravitational sector. The new ingredient is a different extension of the ADM phase space than the one used in LQG which does not require the time gauge and which generalizes to any dimension D > 1. The result is a Yang-Mills theory phase space subject to Gauß, spatial diffeomorphism and Hamiltonian constraint as well as one additional constraint, called the simplicity constraint. The structure group can be chosen to be SO(1, D) or SO(D + 1) and the latter choice is preferred for purposes of quantization.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; only this index is sent to the channel. Reconstruction of the image is done using a table lookup technique, where the label is simply used as an address into a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate of about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, which uses a probability transition matrix to select the best subcodebook for encoding the image, is developed.
In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 that review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
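The codebook design and table-lookup decoding steps described above can be sketched generically; this is an illustration of plain generalized-Lloyd VQ, not the thesis's Address VQ, and the toy data and parameters are assumptions:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Generalized Lloyd (k-means) codebook design under squared-error distortion."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each training vector to its nearest codeword...
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # ...then move each codeword to the centroid of its cell.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Best-match search; only these indices are sent to the channel."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def decode(indices, codebook):
    """Table lookup at the receiver: the index addresses a representative vector."""
    return codebook[indices]

# Toy data: two well-separated clusters of 4-dimensional "image vectors".
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.5, (200, 4)), rng.normal(8.0, 0.5, (200, 4))])
cb = train_codebook(data, k=2)
rec = decode(encode(data, cb), cb)
dist = np.mean((data - rec) ** 2)
print("distortion:", round(dist, 3))
```

The Address VQ extensions in chapters 3 and 4 then exploit correlations between neighboring indices to cut the bit rate further; that layer is not shown here.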
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits.
Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
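The two-bit quantization described above can be explored numerically. The following hedged sketch (the SNR convention and the magnitude threshold are assumptions, not the paper's exact construction) quantizes correlated Gaussian samples at Alice and Bob into a sign bit (MSB) and a magnitude bit (LSB) and estimates the mutual information between the resulting 2-bit symbols from an empirical joint histogram:

```python
import numpy as np

def two_bit(x, t):
    """Symbol in {0..3}: MSB = sign bit, LSB = magnitude bit with threshold t."""
    return 2 * (x > 0).astype(int) + (np.abs(x) > t).astype(int)

def mutual_info(a, b, levels=4):
    """Plug-in estimate of I(A;B) in bits from an empirical joint histogram."""
    joint = np.zeros((levels, levels))
    np.add.at(joint, (a, b), 1.0)
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])))

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)                      # Alice's Gaussian samples
snr = 10 ** (-3 / 10)                       # -3 dB
y = x + rng.normal(size=n) / np.sqrt(snr)   # Bob's noisy observations
t = 0.6745                                  # median of |N(0,1)| (assumed threshold)
mi = mutual_info(two_bit(x, t), two_bit(y, t))
print("estimated I(A;B):", round(mi, 3), "bits per sample")
```

Estimating the joint-symbol mutual information, rather than treating the MSB and LSB strings independently, mirrors the paper's point that only joint (non-binary) coding approaches the achievable limit.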
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inomata, A.; Junker, G.; Wilson, R.
1993-08-01
The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.
2014-07-01
establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we ... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these ... analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive
Differential calculus on quantized simple Lie groups
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1991-07-01
Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU_q(2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups is used. An equivalence of Woronowicz's enveloping algebra, generated by the dual space to the left-invariant differential forms, and the corresponding quantized universal enveloping algebra is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.
Deformation quantizations with separation of variables on a Kähler manifold
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
1996-10-01
We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.
Extension of loop quantum gravity to f(R) theories.
Zhang, Xiangdong; Ma, Yongge
2011-04-29
The four-dimensional metric f(R) theories of gravity are cast into connection-dynamical formalism with real su(2) connections as configuration variables. Through this formalism, the classical metric f(R) theories are quantized by extending the loop quantization scheme of general relativity. Our results imply that the nonperturbative quantization procedure of loop quantum gravity is valid not only for general relativity but also for a rather general class of four-dimensional metric theories of gravity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonçalves, L.A.; Olavo, L.S.F., E-mail: olavolsf@gmail.com
Dissipation in Quantum Mechanics took some time to become a robust field of investigation after the birth of the field. The main issue hindering developments is that the quantization process has always been tightly connected to the Hamiltonian formulation of Classical Mechanics. In this paper we present a quantization process that does not depend upon the Hamiltonian formulation of Classical Mechanics (although it still departs from Classical Mechanics) and thus overcomes the problem of finding, from first principles, a completely general Schrödinger equation encompassing dissipation. This generalized process of quantization is shown to be nothing but an extension of a more restricted version that is shown to produce the Schrödinger equation for Hamiltonian systems from first principles (even for Hamiltonian velocity-dependent potentials). - Highlights: • A quantization process independent of the Hamiltonian formulation of Classical Mechanics is proposed. • This quantization method is applied to dissipative or absorptive systems. • A dissipative Schrödinger equation is derived from first principles.
Application of heterogeneous pulse coupled neural network in image quantization
NASA Astrophysics Data System (ADS)
Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun
2016-11-01
On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models and have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, the HPCNN accords with human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results are optimal and more suitable for human observation. At the same time, experimental results on natural images from the standard image library show the validity and efficiency of our proposed quantization method.
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
Nonperturbative light-front Hamiltonian methods
NASA Astrophysics Data System (ADS)
Hiller, J. R.
2016-09-01
We examine the current state-of-the-art in nonperturbative calculations done with Hamiltonians constructed in light-front quantization of various field theories. The language of light-front quantization is introduced, and important (numerical) techniques, such as Pauli-Villars regularization, discrete light-cone quantization, basis light-front quantization, the light-front coupled-cluster method, the renormalization group procedure for effective particles, sector-dependent renormalization, and the Lanczos diagonalization method, are surveyed. Specific applications are discussed for quenched scalar Yukawa theory, ϕ4 theory, ordinary Yukawa theory, supersymmetric Yang-Mills theory, quantum electrodynamics, and quantum chromodynamics. The content should serve as an introduction to these methods for anyone interested in doing such calculations and as a rallying point for those who wish to solve quantum chromodynamics in terms of wave functions rather than random samplings of Euclidean field configurations.
Photonic topological boundary pumping as a probe of 4D quantum Hall physics
NASA Astrophysics Data System (ADS)
Zilberberg, Oded; Huang, Sheng; Guglielmon, Jonathan; Wang, Mohan; Chen, Kevin P.; Kraus, Yaacov E.; Rechtsman, Mikael C.
2018-01-01
When a two-dimensional (2D) electron gas is placed in a perpendicular magnetic field, its in-plane transverse conductance becomes quantized; this is known as the quantum Hall effect. It arises from the non-trivial topology of the electronic band structure of the system, where an integer topological invariant (the first Chern number) leads to quantized Hall conductance. It has been shown theoretically that the quantum Hall effect can be generalized to four spatial dimensions, but so far this has not been realized experimentally because experimental systems are limited to three spatial dimensions. Here we use tunable 2D arrays of photonic waveguides to realize a dynamically generated four-dimensional (4D) quantum Hall system experimentally. The inter-waveguide separation in the array is constructed in such a way that the propagation of light through the device samples over momenta in two additional synthetic dimensions, thus realizing a 2D topological pump. As a result, the band structure has 4D topological invariants (known as second Chern numbers) that support a quantized bulk Hall response with 4D symmetry. In a finite-sized system, the 4D topological bulk response is carried by localized edge modes that cross the sample when the synthetic momenta are modulated. We observe this crossing directly through photon pumping of our system from edge to edge and corner to corner. These crossings are equivalent to charge pumping across a 4D system from one three-dimensional hypersurface to the spatially opposite one and from one 2D hyperedge to another. Our results provide a platform for the study of higher-dimensional topological physics.
MPEG-1 low-cost encoder solution
NASA Astrophysics Data System (ADS)
Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven
1995-02-01
A solution for real-time compression of digital YCRCB video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size, and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 × 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses has been applied that can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video-sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having a relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
Focal-Plane Arrays of Quantum-Dot Infrared Photodetectors
NASA Technical Reports Server (NTRS)
Gunapala, Sarath; Wilson, Daniel; Hill, Cory; Liu, John; Bandara, Sumith; Ting, David
2007-01-01
Focal-plane arrays of semiconductor quantum-dot infrared photodetectors (QDIPs) are being developed as superior alternatives to prior infrared imagers, including imagers based on HgCdTe devices and, especially, those based on quantum-well infrared photodetectors (QWIPs). HgCdTe devices and arrays thereof are difficult to fabricate and operate, and they exhibit large nonuniformities and high 1/f (where f signifies frequency) noise. QWIPs are easier to fabricate and operate, can be made nearly uniform, and exhibit lower 1/f noise, but they exhibit larger dark currents, and their quantization only along the growth direction prevents them from absorbing photons at normal incidence, thereby limiting their quantum efficiencies. Like QWIPs, QDIPs offer the advantages of greater ease of operation, greater uniformity, and lower 1/f noise, but without the disadvantages: QDIPs exhibit lower dark currents, and quantum efficiencies of QDIPs are greater because the three-dimensional quantization of QDIPs is favorable to the absorption of photons at normal or oblique incidence. Moreover, QDIPs can be operated at higher temperatures (around 200 K) than are required for operation of QWIPs. The main problem in the development of QDIP imagers is to fabricate quantum dots with the requisite uniformity of size and spacing. A promising approach to be tested soon involves the use of electron-beam lithography to define the locations and sizes of quantum dots. A photoresist-covered GaAs substrate would be exposed to the beam generated by an advanced, high-precision electron-beam apparatus. The exposure pattern would consist of spots typically having a diameter of 4 nm and typically spaced 20 nm apart. The exposed photoresist would be developed by either a high-contrast or a low-contrast method. In the high-contrast method, the spots would be etched in such a way as to form steep-wall holes all the way down to the substrate.
The holes would be wider than the electron-beam spots, perhaps as wide as 15 to 20 nm, but may still be sufficient to control the growth of the quantum dots. In the low-contrast method, the resist would be etched in such a way as to form dimples, the shapes of which would mimic the electron-beam density profile. Then, by use of a transfer etching process that etches the substrate faster than it etches the resist, either the pattern of holes or a pattern comprising the narrow, lowest portions of the dimples would be imparted to the substrate. The resulting holes or dimples in the substrate would serve as nucleation sites for the growth of quantum dots of controlled size. The patterned substrate would then be cleaned and placed in a molecular-beam-epitaxy (MBE) chamber, where native oxide would be thermally desorbed and the quantum dots would be grown.
Vacuum Energy Induced by an Impenetrable Flux Tube of Finite Radius
NASA Astrophysics Data System (ADS)
Gorkavenko, V. M.; Sitenko, Yu. A.; Stepanov, O. B.
2011-06-01
We consider the effect of the magnetic field background in the form of a tube of the finite transverse size on the vacuum of the quantized charged massive scalar field which is subject to the Dirichlet boundary condition at the edge of the tube. The vacuum energy is induced, being periodic in the value of the magnetic flux enclosed in the tube. The dependence of the vacuum energy density on the distance from the tube and on the coupling to the space-time curvature scalar is comprehensively analyzed.
Quantized phase coding and connected region labeling for absolute phase retrieval.
Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian
2016-12-12
This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
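The labeling idea above can be sketched in a toy form; the level count, the embedded code sequence, and the helper names below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

# Toy sketch of quantized phase coding: wrapped phase is quantized onto a
# few discrete levels, and each fringe period gets a 3-digit code from its
# own level together with those of its two neighbors, so that the absolute
# period index can be recovered by table lookup.

def quantize_phase(phase, levels):
    """Quantize wrapped phase in [0, 2*pi) onto `levels` discrete codes."""
    step = 2 * np.pi / levels
    return np.floor(phase / step).astype(int)

def triplet_codes(code_seq):
    """3-digit code per period: (left neighbor, current, right neighbor)."""
    n = len(code_seq)
    return [(code_seq[(i - 1) % n], code_seq[i], code_seq[(i + 1) % n])
            for i in range(n)]

code_seq = [0, 1, 2, 0, 2, 1, 0, 0, 1]          # assumed embedded sequence
lookup = {t: k for k, t in enumerate(triplet_codes(code_seq))}
print(lookup[(1, 2, 0)])                        # absolute period index: 2
```

In the actual method the codes are carried by three extra coded fringes, and connected-region labeling makes the per-period codes robust against noise at fringe boundaries; the lookup principle is the same.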
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
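A minimal sketch of the core operation, uniform scalar quantization of wavelet coefficients with a dead zone, might look as follows; the step size and dead-zone factor are illustrative assumptions, not values from the WSQ specification:

```python
import numpy as np

# Uniform scalar quantization with a dead zone around zero (a common
# choice for wavelet coefficients), plus midpoint dequantization.

def quantize(coeffs, step, deadzone=1.2):
    """Map coefficients to integer bin indices; wider bin around zero."""
    sign = np.sign(coeffs)
    mag = np.abs(coeffs)
    q = np.where(mag < deadzone * step / 2, 0,
                 sign * np.floor((mag - deadzone * step / 2) / step + 1))
    return q.astype(int)

def dequantize(q, step, deadzone=1.2):
    """Reconstruct at the center of each quantization cell."""
    sign = np.sign(q)
    return np.where(q == 0, 0.0,
                    sign * (deadzone * step / 2 + (np.abs(q) - 0.5) * step))

c = np.array([0.1, -0.4, 2.7, -5.3])
q = quantize(c, step=1.0)
print(q, dequantize(q, step=1.0))   # small coefficients collapse to zero
```

The dead zone is what makes such quantizers effective on wavelet data: most subband coefficients are near zero, and mapping them all to the zero bin is where the bulk of the compression comes from.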
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
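The look-up principle can be illustrated with a simplified one-parameter version; the step size, table grid, and statistic below are assumptions for illustration (the actual Voyager-era table estimates joint signal and noise parameters):

```python
import numpy as np

# Illustrative sketch: estimate the standard deviation of a Gaussian
# signal from 4-bit quantized samples by inverting a precomputed table
# of expected quantized power versus sigma.

rng = np.random.default_rng(0)
STEP = 0.5                                # assumed quantizer step size

def quantize4(x):
    """4-bit midtread quantizer: 16 levels, clipped to [-8, 7] * STEP."""
    return np.clip(np.round(x / STEP), -8, 7) * STEP

# Table: sigma -> expected mean square of the quantized samples.
sigmas = np.linspace(0.2, 3.0, 57)
table = np.array([np.mean(quantize4(rng.normal(0, s, 200_000)) ** 2)
                  for s in sigmas])

def estimate_sigma(samples):
    """Look up the sigma whose expected quantized power is closest."""
    p = np.mean(quantize4(samples) ** 2)
    return sigmas[np.argmin(np.abs(table - p))]

est = estimate_sigma(rng.normal(0, 1.5, 100_000))
print(est)   # close to 1.5
```

Because the same quantizer distorts both the table and the observed statistic, the inversion remains consistent even where clipping and coarse steps bias the raw power estimate.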
Digital television system design study
NASA Technical Reports Server (NTRS)
Huth, G. K.
1976-01-01
The use of digital techniques for transmission of pictorial data is discussed for multi-frame images (television). Video signals are processed in a manner which includes quantization and coding such that they are separable from the noise introduced into the channel. The performance of digital television systems is determined by the nature of the processing techniques (i.e., whether the video signal itself or, instead, something related to the video signal is quantized and coded) and by the quantization and coding schemes employed.
Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment
NASA Astrophysics Data System (ADS)
Fonseca, I. C.; Bakke, K.
2016-01-01
Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.
1991-11-01
Quantization assigned two values: one for demodulation values larger than zero and another for demodulation values smaller than zero (for maximum-likelihood decisions). Logic 0 was assigned for a positive demodulation value and logic 1 for a negative one.
Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan
2013-02-01
Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Synthesis of nanocrystalline CdS thin film by SILAR and their characterization
NASA Astrophysics Data System (ADS)
Mukherjee, A.; Satpati, B.; Bhattacharyya, S. R.; Ghosh, R.; Mitra, P.
2015-01-01
Cadmium sulphide (CdS) thin film was prepared by the successive ion layer adsorption and reaction (SILAR) technique using ammonium sulphide as the anionic precursor. Characterization techniques of XRD, SEM, TEM, FTIR and EDX were utilized to study the microstructure of the films. Structural characterization by X-ray diffraction reveals the polycrystalline nature of the films. Cubic structure is revealed from X-ray diffraction and selected area diffraction (SAD) patterns. The particle size estimated using the X-ray line broadening method is approximately 7 nm; instrumental broadening was taken into account during particle size estimation. TEM shows CdS nanoparticles in the range 5-15 nm. Elemental mapping using EFTEM reveals good stoichiometric composition of CdS. The characteristic stretching vibration mode of CdS was observed in the absorption band of the FTIR spectrum. Optical absorption study exhibits a distinct blue shift, with a band gap energy value of about 2.56 eV, which confirms the size quantization.
Reformulation of the covering and quantizer problems as ground states of interacting particles.
Torquato, S
2010-11-01
It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. 
In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
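The quantizer error that these bounds concern can be estimated by Monte Carlo. The sketch below compares a square grid with a random (Poisson-like) point set of the same density on the unit torus; the point counts and sample size are arbitrary illustrative choices:

```python
import numpy as np

# Quantizer error of a point set = mean squared distance from a random
# point of space to its nearest quantizer point. We work on the 2D unit
# torus so no boundary correction is needed.

rng = np.random.default_rng(0)

def quantizer_mse(points, samples):
    """Mean squared torus distance from `samples` to their nearest point."""
    d = np.abs(samples[:, None, :] - points[None, :, :])
    d = np.minimum(d, 1 - d)                     # wrap-around metric
    return float(np.mean(((d ** 2).sum(-1)).min(axis=1)))

samples = rng.random((20_000, 2))
grid = np.array([[(i + 0.5) / 4, (j + 0.5) / 4]  # 4x4 square lattice
                 for i in range(4) for j in range(4)])
random_pts = rng.random((16, 2))                 # same density, random

print(quantizer_mse(grid, samples), quantizer_mse(random_pts, samples))
```

The square lattice is not optimal in 2D (the hexagonal lattice does slightly better), but it comfortably beats a random configuration, illustrating why structured point sets matter for the quantizer problem.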
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
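A compact sketch of error diffusion generalized to multiple levels, using the classic Floyd-Steinberg weights; the weights and level count are common defaults, not necessarily the threshold parameters studied in the paper:

```python
import numpy as np

# Multi-level error diffusion: each pixel is snapped to the nearest of
# `levels` gray values and the residual error is diffused to the
# not-yet-processed neighbors with Floyd-Steinberg weights.

def error_diffuse(img, levels=4):
    img = img.astype(float).copy()
    out = np.zeros_like(img)
    step = 255.0 / (levels - 1)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = np.clip(np.round(old / step), 0, levels - 1) * step
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((8, 8), 100.0)        # flat mid-gray test patch
q = error_diffuse(gray, levels=4)
print(np.unique(q))                  # mixes the neighboring levels
```

On a flat patch the output dithers between the two quantization levels bracketing the input value, preserving the local mean, which is exactly the behavior the multi-level threshold parameters control.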
Natural inflation from polymer quantization
NASA Astrophysics Data System (ADS)
Ali, Masooma; Seahra, Sanjeev S.
2017-11-01
We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.
Optimal sampling and quantization of synthetic aperture radar signals
NASA Technical Reports Server (NTRS)
Wu, C.
1978-01-01
Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
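The bit-budget trade-off described above can be illustrated with a deliberately crude model; the budget, the quantization degradation factor, and the quality metric below are all assumptions for illustration, not the paper's derivation:

```python
import math

# Toy model of the fixed bit-volume trade-off: with B total bits and b
# bits per sample, one can afford N = B // b samples (looks). Suppose
# quantization preserves a fraction (1 - 2**(-2*b)) of per-look quality;
# effective radiometric quality then scales like N * (1 - 2**(-2*b)).

B = 4096                              # assumed total bit budget
for b in (1, 2, 4, 8):
    looks = B // b
    quality = looks * (1 - 2 ** (-2 * b))
    print(b, looks, round(10 * math.log10(quality), 2))
# Under this model, coarse quantization (small b) wins: the extra looks
# outweigh the added quantizer noise, consistent with the conclusion above.
```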
Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers
NASA Astrophysics Data System (ADS)
Shan, Shaukat Ali; Haque, Q.
2018-01-01
The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense degenerate, and quantized magnetic field plasma. The linear drift ion acoustic wave propagation along with the nonlinear structures like double layers and solitary waves has been found to be strongly dependent on the drift speed, magnetic field quantization parameter β, and the temperature degeneracy. The graphical illustrations show that the frequency of linear waves and the amplitude of the solitary waves increase with the increase in temperature degeneracy and Landau quantization effect, while the amplitude of the double layers decreases with the increase in η and T. The relevance of the present study is pointed out in the plasma environment of fast ignition inertial confinement fusion, the white dwarf stars, and short pulsed petawatt laser technology.
Time-Symmetric Quantization in Spacetimes with Event Horizons
NASA Astrophysics Data System (ADS)
Kobakhidze, Archil; Rodd, Nicholas
2013-08-01
The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox, which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism, thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly, as there is a consistent alternative, we argue that choosing to use time-asymmetric quantization is a necessary condition for the black hole information loss paradox.
Ao, Wei; Song, Yongdong; Wen, Changyun
2017-05-01
In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and errors caused by quantization, and a range of the parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that a uniformly ultimately bounded output tracking error is guaranteed by the controller, and the signals of the closed-loop system are ensured to be bounded, even in the presence of at most m − q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
On a canonical quantization of 3D Anti de Sitter pure gravity
NASA Astrophysics Data System (ADS)
Kim, Jihun; Porrati, Massimo
2015-10-01
We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach, we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of conformal field theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology-changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed, and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly curved AdS3.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
NASA Astrophysics Data System (ADS)
Mezey, Paul G.
2017-11-01
Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive-volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive-volume, otherwise arbitrarily small electron density fragment. In this contribution, some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space", the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system by grouping highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 × 100 MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 × 100 MHz OFDM channels with a QAM order of 16384. The equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. In addition, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse coding modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
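The sample-grouping and clustering step can be sketched as below with a generic Lloyd's-algorithm implementation on a synthetic waveform; the vector length, codebook size, and test signal are assumptions, not the experimental setup:

```python
import numpy as np

# Group consecutive waveform samples into short vectors and quantize each
# vector to the nearest of k centroids learned by k-means (Lloyd's algorithm).

def kmeans(vectors, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest centroid
        labels = np.argmin(
            ((vectors[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = vectors[labels == j].mean(axis=0)
    return centroids, labels

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 4000)) + 0.05 * rng.normal(size=4000)
vectors = signal.reshape(-1, 4)          # 4 neighboring samples per vector
centroids, labels = kmeans(vectors, k=16)
recon = centroids[labels].ravel()
print(np.mean((signal - recon) ** 2))    # small quantization MSE
```

Because neighboring samples are highly correlated, the vectors occupy only a thin region of the 4-dimensional space, so a small codebook captures them far more efficiently than quantizing each sample independently; this is the source of the spectral-efficiency gain claimed above.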
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
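The quantization-consistency constraint at the heart of this approach can be sketched as a simple projection step; a uniform midtread quantizer is assumed here purely for illustration:

```python
import numpy as np

# After a model-based smoothing update, each transform coefficient is
# projected back into the quantization cell [q*step - step/2, q*step + step/2]
# defined by the compressed image, so the estimate stays consistent with
# the received data.

def project_to_cells(coeffs, q_indices, step):
    lo = q_indices * step - step / 2
    hi = q_indices * step + step / 2
    return np.clip(coeffs, lo, hi)

step = 1.0
q = np.array([0, 2, -3])                    # received quantizer indices
smoothed = np.array([0.8, 1.2, -2.1])       # after a model-based update
print(project_to_cells(smoothed, q, step))  # [0.5, 1.5, -2.5]
```

Iterating between the image-model update and this projection is what yields a reconstruction that both fits the MRF prior and decodes to exactly the received quantizer indices.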
Landau quantization effects on hole-acoustic instability in semiconductor plasmas
NASA Astrophysics Data System (ADS)
Sumera, P.; Rasheed, A.; Jamil, M.; Siddique, M.; Areeb, F.
2017-12-01
The growth rate of hole acoustic waves (HAWs) excited in a magnetized semiconductor quantum plasma pumped by an electron beam has been investigated. The instability analysis includes quantum effects: the exchange and correlation potential, the Bohm potential, Fermi-degenerate pressure, and the magnetic quantization of the semiconductor plasma species. The effects of various plasma parameters on the growth rate of the HAWs are discussed, including the relative concentration of plasma particles, beam electron temperature, beam speed, plasma temperature (of electrons/holes), and the Landau electron orbital magnetic quantization parameter η. The numerical study of our acoustic-wave model is applied, as an example, to a GaAs semiconductor exposed to an electron beam in a magnetic field. An increase in either the concentration of the semiconductor electrons or the speed of the beam electrons, in the presence of magnetic quantization of fermion orbital motion, markedly enhances the growth rate of the HAWs. Although the growth rate decreases as the thermal temperature of the plasma species rises, at a given temperature the instability is higher due to the contribution of the magnetic quantization of fermions.
Quantization of Space-like States in Lorentz-Violating Theories
NASA Astrophysics Data System (ADS)
Colladay, Don
2018-01-01
Lorentz violation frequently induces modified dispersion relations that can yield space-like states that impede the standard quantization procedures. In certain cases, an extended Hamiltonian formalism can be used to define observer-covariant normalization factors for field expansions and phase space integrals. These factors extend the theory to include non-concordant frames in which there are negative-energy states. This formalism provides a rigorous way to quantize certain theories containing space-like states and allows for the consistent computation of Cherenkov radiation rates in arbitrary frames while avoiding singular expressions.
Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity
NASA Astrophysics Data System (ADS)
Vijayakrishnan, V.; Balakrishnan, S.
2018-05-01
The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and CNOT complexity, where CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we find its applicability in quantum game theory.
Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models
NASA Astrophysics Data System (ADS)
Pandey, Sachin; Pal, Sridip; Banerjee, Narayan
2018-06-01
The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, as far as Brans-Dicke theory is concerned, survives quantization of cosmological models arising as solutions of the Brans-Dicke theory. We work with the Wheeler-DeWitt quantization scheme and take up quite a few anisotropic cosmological models as examples. We effectively show that the transformation from the Jordan to the Einstein frame is a canonical one, and hence the two frames furnish equivalent descriptions of the same physical scenario.
Gauge fixing and BFV quantization
NASA Astrophysics Data System (ADS)
Rogers, Alice
2000-01-01
Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.
NASA Astrophysics Data System (ADS)
Jarvis, P. D.; Corney, S. P.; Tsohantjis, I.
1999-12-01
A covariant spinor representation of iosp(d,2/2) is constructed for the quantization of the spinning relativistic particle. It is found that, with appropriately defined wavefunctions, this representation can be identified with the state space arising from the canonical extended BFV-BRST quantization of the spinning particle with admissible gauge fixing conditions after a contraction procedure. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigoryan, G.V.; Grigoryan, R.P.
1995-09-01
The canonical quantization of a (D=2n)-dimensional Dirac particle with spin in an arbitrary external electromagnetic field is performed in a gauge that makes it possible to describe simultaneously particles and antiparticles (both massive and massless) already at the classical level. A pseudoclassical Foldy-Wouthuysen transformation is used to find the canonical (Newton-Wigner) coordinates. The connection between this quantization scheme and Blount's picture describing the behavior of a Dirac particle in an external electromagnetic field is discussed.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
JND measurements of the speech formants parameters and its implication in the LPC pole quantization
NASA Astrophysics Data System (ADS)
Orgad, Yaakov
1988-08-01
The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment, necessary to induce subjective perception of the distortion, with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other, not yet fully understood, factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.
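The design rule implicit here, that quantization need be no finer than the JND, can be sketched as a uniform quantizer whose step is derived from a measured JND. The 2×JND step and function name are assumptions for illustration, not the paper's quantization tables.

```python
def jnd_step_quantize(value, jnd):
    """Uniform quantizer with step 2 * JND: the worst-case quantization
    error equals the JND, so distortion sits at the threshold of
    perception and no bits are wasted on inaudible precision."""
    step = 2.0 * jnd
    return round(value / step) * step
```

For a formant parameter with JND 10 Hz, values within 10 Hz of a level map to that level, e.g. `jnd_step_quantize(103.0, 10.0)` gives 100.0.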
NASA Astrophysics Data System (ADS)
Wuthrich, Christian
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. 
Finally, a scheme is developed for how the re-emergence of smooth spacetime from the underlying discrete quantum structure could be understood.
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
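The core idea, predicting each value and coding only the mismatch of the IEEE-754 bit patterns, can be sketched with the simplest possible predictor (the previous value). The real scheme uses data-dependent predictors and an entropy coder, both omitted here; accurate predictions give residuals with many leading zero bits, which compress well, and the XOR round trip is exactly lossless.

```python
import struct

def float_to_bits(x):
    """IEEE-754 double as a 64-bit integer."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def bits_to_float(b):
    return struct.unpack('<d', struct.pack('<Q', b))[0]

def encode(values):
    """Predict each value by its predecessor; emit the XOR of the bit
    patterns (a real coder would then entropy-code the residuals)."""
    prev, out = 0, []
    for v in values:
        bits = float_to_bits(v)
        out.append(bits ^ prev)
        prev = bits
    return out

def decode(residuals):
    """Invert encode(): XOR each residual with the previous bit pattern."""
    prev, vals = 0, []
    for r in residuals:
        bits = r ^ prev
        vals.append(bits_to_float(bits))
        prev = bits
    return vals
```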
On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations
NASA Astrophysics Data System (ADS)
Batalin, I. A.; Tyutin, I. V.
The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation, treated perturbatively, possesses a local formal solution.
Fill-in binary loop pulse-torque quantizer
NASA Technical Reports Server (NTRS)
Lory, C. B.
1975-01-01
The fill-in binary (FIB) loop provides constant heating of the torque generator, an advantage of binary current switching. At the same time, it avoids the mode-related dead zone and data delay of binary quantization, an advantage of ternary quantization.
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
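Quantization as described, generating a state update only when the trajectory crosses a quantum level boundary rather than at every sample, can be sketched as:

```python
def quantized_updates(samples, quantum):
    """Emit (sample_index, level_value) pairs only at quantum level
    crossings, instead of transmitting every sample."""
    updates = []
    last_level = None
    for i, x in enumerate(samples):
        level = int(x // quantum)
        if level != last_level:
            updates.append((i, level * quantum))
            last_level = level
    return updates
```

For a slowly varying trajectory the number of transmitted updates is proportional to the total variation divided by the quantum, not to the sampling rate, which is the message-traffic reduction the abstract refers to.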
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
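A one-pass adaptive VQ of this general flavor can be sketched as follows. This is not the LAVQ algorithm itself: the spawn threshold and the learning-rate update are illustrative assumptions standing in for its adaptation rule.

```python
def adaptive_vq(vectors, threshold, rate=0.1):
    """One-pass adaptive VQ sketch: encode each vector with the nearest
    codeword, nudging that codeword toward the input; spawn a new
    codeword when no existing one is close enough (squared distance
    above threshold). No training pass or source statistics needed."""
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            j = min(range(len(codebook)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))
            dist = sum((a - b) ** 2 for a, b in zip(v, codebook[j]))
        else:
            j, dist = -1, float('inf')
        if dist > threshold:
            codebook.append(tuple(v))
            j = len(codebook) - 1
        else:
            codebook[j] = tuple(c + rate * (a - c) for a, c in zip(v, codebook[j]))
        indices.append(j)
    return codebook, indices
```

Because new codewords are created on demand, outlier vectors (fine image detail) get their own entries instead of being averaged away, one plausible reading of the detail-preservation claim.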
Landau quantization of Dirac fermions in graphene and its multilayers
NASA Astrophysics Data System (ADS)
Yin, Long-Jing; Bai, Ke-Ke; Wang, Wen-Xiao; Li, Si-Yu; Zhang, Yu; He, Lin
2017-08-01
When electrons are confined in a two-dimensional (2D) system, typical quantum-mechanical phenomena such as Landau quantization can be detected. Graphene systems, including the single atomic layer and few-layer stacked crystals, are ideal 2D materials for studying a variety of quantum-mechanical problems. In this article, we review the experimental progress in the unusual Landau quantized behaviors of Dirac fermions in monolayer and multilayer graphene by using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS). Through STS measurements in strong magnetic fields, distinct Landau-level spectra and rich level-splitting phenomena are observed in different graphene layers. These unique properties provide an effective method for identifying the number of layers, as well as the stacking order, and for investigating the fundamental physical phenomena of graphene. Moreover, in the presence of strain and charged defects, the Landau quantization of graphene can be significantly modified, leading to unusual spectroscopic and electronic properties.
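For reference, the standard Landau-level spectrum of massless Dirac fermions in monolayer graphene, whose characteristic square-root dependence on field and index underlies the STS identification described above:

```latex
E_n = \operatorname{sgn}(n)\,\sqrt{2 e \hbar v_F^2\,|n|\,B},
\qquad n = 0,\,\pm 1,\,\pm 2,\,\dots
```

Here $v_F$ is the Fermi velocity and $B$ the magnetic field; the field-independent $n = 0$ level at zero energy is a hallmark of Dirac fermions, in contrast to the equally spaced levels of a conventional 2D electron gas.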
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable-rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
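The Lagrangian encoding rule at the heart of entropy-constrained design can be sketched for a single stage: pick the codeword minimizing distortion plus λ times its codelength. The scalar setting and the ideal codelength −log₂ p are simplifying assumptions for illustration.

```python
import math

def ec_encode(x, reconstructions, probs, lam):
    """Entropy-constrained encoding sketch: choose the codeword j that
    minimizes J = (x - r_j)^2 + lam * (-log2 p_j), trading distortion
    against the bits an entropy coder would spend on codeword j."""
    best_cost, best_j = float('inf'), None
    for j, (r, p) in enumerate(zip(reconstructions, probs)):
        cost = (x - r) ** 2 + lam * (-math.log2(p))
        if cost < best_cost:
            best_cost, best_j = cost, j
    return best_j
```

At λ = 0 this is nearest-neighbor encoding; as λ grows, frequent (cheap) codewords win even at higher distortion, sweeping out the operational rate-distortion curve.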
More on quantum groups from the quantization point of view
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1994-12-01
Star products on the classical double group of a simple Lie group and on corresponding symplectic groupoids are given so that the quantum double and the "quantized tangent bundle" are obtained in the deformation description. "Complex" quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further, we discuss the quantization of the Poisson structure on the symmetric algebra S(g) leading to the quantized enveloping algebra U_h(g) as an example of biquantization in the sense of Turaev. Description of U_h(g) in terms of the generators of the bicovariant differential calculus on F(G_q) is very convenient for this purpose. Finally, we interpret in the deformation framework some well known properties of compact quantum groups as simple consequences of corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov's universal character formula is given for the unitary irreducible representations in the compact case.
Quantization of gauge fields, graph polynomials and graph homology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van
2013-09-15
We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This implies effectively a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.
Augmenting Phase Space Quantization to Introduce Additional Physical Effects
NASA Astrophysics Data System (ADS)
Robbins, Matthew P. G.
Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates the H∞ state estimation problem for a class of semi-Markovian jump discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save limited communication resources. Second, a novel communication framework employs a logarithmic quantizer that quantizes the data and reduces the transmission rate in the network, which improves the communication efficiency of the network. Third, a stabilization criterion is derived based on a sufficient condition which guarantees a prescribed H∞ performance level for the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
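A logarithmic quantizer of the kind referred to here can be sketched as follows; the density parameter ρ and unit level u₀ are illustrative choices. Its defining property is a bounded relative error, δ = (1 − ρ)/(1 + ρ), which is what lets such quantizers be absorbed into sector-bound uncertainty in LMI-based analysis.

```python
import math

def log_quantize(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer sketch: levels u0 * rho^i, cell boundaries
    at arithmetic midpoints, so |q(v) - v| <= delta * |v| with
    delta = (1 - rho) / (1 + rho)."""
    if v == 0:
        return 0.0
    s = 1.0 if v > 0 else -1.0
    m = abs(v)
    # Index i such that m lies in (u0 * rho^(i+1), u0 * rho^i].
    i = math.floor(math.log(m / u0) / math.log(rho))
    hi, lo = u0 * rho ** i, u0 * rho ** (i + 1)
    return s * (hi if m > (hi + lo) / 2 else lo)
```

Unlike a uniform quantizer, the absolute error shrinks with the signal, so small estimation errors near the equilibrium are quantized finely while large transients use coarse levels.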
Model predictive control of non-linear systems over networks with data quantization and packet loss.
Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping
2015-11-01
This paper studies the approach of model predictive control (MPC) for non-linear systems in a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector-bound uncertainties by applying the sector bound approach. The quantized data are then transmitted over the communication networks and may suffer packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
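The channel model used here, quantization followed by Bernoulli packet drops, can be simulated in a few lines. The hold-last-value receiver policy is an assumption for illustration, not necessarily the paper's compensation scheme.

```python
import random

def transmit(samples, quantum, loss_prob, seed=0):
    """Quantize each sample uniformly, then drop packets i.i.d. with
    probability loss_prob (Bernoulli process); the receiver holds the
    last successfully received value."""
    rng = random.Random(seed)
    received, last = [], 0.0
    for x in samples:
        q = round(x / quantum) * quantum
        if rng.random() >= loss_prob:  # packet arrives
            last = q
        received.append(last)
    return received
```

A controller designed against this channel must tolerate both the sector-bound quantization error and stretches of stale data between successful packets.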
Thermal distributions of first, second and third quantization
NASA Astrophysics Data System (ADS)
McGuigan, Michael
1989-05-01
We treat first quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d-1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^{-a} exp[(bm)^{2(d-1)/d}]. This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ≤ 2, that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second quantized canonical momenta and particle number.
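Written out, the quoted density of states and the borderline case read (my reading of the abstract's claim):

```latex
\sigma_d(m) \sim m^{-a}\,\exp\!\left[(b m)^{\frac{2(d-1)}{d}}\right].
```

For $d = 2$ the exponent $2(d-1)/d = 1$, so the growth is exponential, $\sigma \sim m^{-a} e^{bm}$: a Hagedorn-type spectrum with limiting temperature $T_H = 1/b$, below which thermal equilibrium is possible. For $d > 2$ the exponent exceeds 1, the growth is faster than exponential, and the canonical partition function diverges at every temperature, which is why equilibrium fails unless $d \le 2$.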
Fine structure constant and quantized optical transparency of plasmonic nanoarrays.
Kravets, V G; Schedin, F; Grigorenko, A N
2012-01-24
Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.
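The graphene benchmark invoked above is the universal optical absorption of a suspended monolayer, set by the fine structure constant alone (in Gaussian units; in SI, $\alpha = e^2/4\pi\varepsilon_0\hbar c$):

```latex
A = \pi\alpha = \frac{\pi e^2}{\hbar c} \approx 2.3\%,
\qquad \alpha \approx \frac{1}{137}.
```

The plasmonic result described in the abstract extends this kind of $\alpha$-only universality from a 2D electron system to an engineered nanoparticle array.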
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection-based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
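The cost-cutting idea, fitting the code-learning step on a sub-selected subset of rows rather than the full data matrix, can be sketched as follows. This is a loose illustration only: the random projection and the subsampling fraction are assumptions, not the paper's matrix-manipulation algorithm.

```python
import random

def learn_projection(data, dim_out, sample_frac=0.1, seed=0):
    """Sub-selection sketch: estimate the statistics needed for code
    learning (here just the mean) from a random subset of rows, then fix
    a random projection for hashing. Cost scales with the subset size."""
    rng = random.Random(seed)
    n = len(data)
    subset = [data[i] for i in rng.sample(range(n), max(1, int(n * sample_frac)))]
    d = len(subset[0])
    mean = [sum(row[j] for row in subset) / len(subset) for j in range(d)]
    proj = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(dim_out)]
    return mean, proj

def binary_code(x, mean, proj):
    """Quantize a descriptor to bits: sign of each centered projection."""
    return tuple(
        1 if sum(p[j] * (x[j] - mean[j]) for j in range(len(x))) >= 0 else 0
        for p in proj)
```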
Optical design of cipher block chaining (CBC) encryption mode by using digital holography
NASA Astrophysics Data System (ADS)
Gil, Sang Keun; Jeon, Seok Hee; Jung, Jong Rae; Kim, Nam
2016-03-01
We propose an optical design of cipher block chaining (CBC) encryption by using a digital holographic technique, which has higher security than the conventional electronic method because of the analog-type randomized cipher text with a 2-D array. In this paper, an optical design of the CBC encryption mode is implemented by a 2-step quadrature phase-shifting digital holographic encryption technique using orthogonal polarization. A block of plain text is encrypted with the encryption key by applying 2-step phase-shifting digital holography, and it is changed into cipher text blocks which are digital holograms. These ciphered digital holograms with the encrypted information are Fourier transform holograms and are recorded on CCDs with intensities quantized to 256 gray levels. Decryption is computed from the encrypted digital holograms of the cipher text, the same encryption key, and the previous cipher text block. Results of computer simulations are presented to verify that the proposed method demonstrates the feasibility of a highly secure CBC encryption system.
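The CBC chaining structure the optical system implements is the standard one, Cᵢ = E_K(Pᵢ ⊕ Cᵢ₋₁) with C₀ the IV. The sketch below uses a toy XOR "cipher" purely as a stand-in for the holographic block encryption, to show the chaining and why identical plaintext blocks yield distinct ciphertext blocks.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(blocks, iv, encrypt_block):
    """CBC mode: C_i = E_K(P_i XOR C_{i-1}), with C_0 = IV."""
    prev, out = iv, []
    for p in blocks:
        c = encrypt_block(xor_bytes(p, prev))
        out.append(c)
        prev = c
    return out

def cbc_decrypt(cipher_blocks, iv, decrypt_block):
    """Inverse chaining: P_i = D_K(C_i) XOR C_{i-1}."""
    prev, out = iv, []
    for c in cipher_blocks:
        out.append(xor_bytes(decrypt_block(c), prev))
        prev = c
    return out
```

Note that decrypting block i needs only Cᵢ, Cᵢ₋₁, and the key, matching the abstract's statement that decryption uses the hologram, the key, and the previous cipher text.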
Perspectives of Light-Front Quantized Field Theory: Some New Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Prem P.
1999-08-13
A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is equally appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of the QCD in covariant gauges among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x^± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.
Quantization and Quantum-Like Phenomena: A Number Amplitude Approach
NASA Astrophysics Data System (ADS)
Robinson, T. R.; Haven, E.
2015-12-01
Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.
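The creation/annihilation structure invoked here can be made concrete in a truncated Fock basis. This standalone sketch (standard textbook matrices, not the paper's construction) builds a, its adjoint, and the number operator, and shows that truncation spoils [a, a†] = 1 only in the last basis state.

```python
import math

def annihilation(n):
    """Matrix of the annihilation operator a in a Fock basis truncated
    at n levels: a|k> = sqrt(k) |k-1>."""
    A = [[0.0] * n for _ in range(n)]
    for k in range(1, n):
        A[k - 1][k] = math.sqrt(k)
    return A

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    """Adjoint of a real matrix: a_dagger = transpose(a)."""
    return [list(row) for row in zip(*X)]
```

With `a = annihilation(5)` and `ad = transpose(a)`, the number operator `matmul(ad, a)` is diagonal with eigenvalues 0..4, and `matmul(a, ad) - matmul(ad, a)` is the identity except in the last diagonal entry, a finite-dimensional truncation artifact.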
Generalized noise terms for the quantized fluctuational electrodynamics
NASA Astrophysics Data System (ADS)
Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani
2017-03-01
The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that commutes with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, the tuning of the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter exceed the emissivity of a corresponding electric emitter.
Bfv Quantization of Relativistic Spinning Particles with a Single Bosonic Constraint
NASA Astrophysics Data System (ADS)
Rabello, Silvio J.; Vaidya, Arvind N.
Using the BFV approach we quantize a pseudoclassical model of the spin-1/2 relativistic particle that contains a single bosonic constraint, contrary to the usual locally supersymmetric models that display first and second class constraints.
Minimum uncertainty and squeezing in diffusion processes and stochastic quantization
NASA Technical Reports Server (NTRS)
Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe
1994-01-01
We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.
A consistent covariant quantization of the Brink-Schwarz superparticle
NASA Astrophysics Data System (ADS)
Eisenberg, Yeshayahu
1992-02-01
We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.
Two-step single slope/SAR ADC with error correction for CMOS image sensor.
Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin
2014-01-01
Conventional two-step ADC for CMOS image sensor requires full resolution noise performance in the first stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first stage single slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 kμm² · cycles/sample.
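The role of the redundant bit can be illustrated with a toy digital model: the coarse stage may mis-decide (modeled here as additive comparator noise of up to half a coarse LSB), and the half-range shift carried by the redundant bit lets the digital recombination cancel the error exactly. This is a hypothetical simplification for intuition, not the fabricated design or its error-correction algorithm:

```python
def two_step_adc(v, comparator_noise=0):
    """Toy 11-bit two-step conversion: 3-bit coarse stage + 8-bit fine stage
    with one redundant bit absorbing coarse decision errors."""
    assert 0 <= v < 2048 and -128 < comparator_noise < 128
    coarse = max(0, min(7, (v + comparator_noise) // 256))  # noisy 3-bit stage
    residue = v - coarse * 256              # analog residue, stays in (-128, 384)
    fine = residue + 128                    # 8-bit SAR code plus redundant bit
    return coarse * 256 + fine - 128        # error cancels in recombination

print(two_step_adc(1000, comparator_noise=100))   # -> 1000 despite the error
```

Without the +128 shift, any coarse error would push the residue outside the fine stage's range and corrupt the output code.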
Covariant scalar representation of ? and quantization of the scalar relativistic particle
NASA Astrophysics Data System (ADS)
Jarvis, P. D.; Tsohantjis, I.
1996-03-01
A covariant scalar representation of iosp(d,2/2) is constructed and analysed in comparison with existing BFV-BRST methods for the quantization of the scalar relativistic particle. It is found that, with appropriately defined wavefunctions, this iosp(d,2/2) produced representation can be identified with the state space arising from the canonical BFV-BRST quantization of the modular-invariant, unoriented scalar particle (or antiparticle) with admissible gauge-fixing conditions. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.
NASA Astrophysics Data System (ADS)
Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An
2012-12-01
This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.
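The sector bound approach mentioned above models a logarithmic quantizer's error as sector-bounded: with level ratio ρ, the error satisfies |q(v) − v| ≤ δ|v| with δ = (1 − ρ)/(1 + ρ). A sketch of that standard construction (the parameter values are illustrative; this is not the paper's controller design):

```python
import math

def log_quantizer(v, rho=0.5, u0=1.0):
    """Logarithmic quantizer with levels u0 * rho**i and sector-bounded error."""
    if v == 0.0:
        return 0.0
    # pick the level whose interval (rho^i*u0*(1+rho)/2, rho^(i-1)*u0*(1+rho)/2]
    # contains |v|
    x = 2.0 * abs(v) / (u0 * (1.0 + rho))
    i = math.floor(math.log(x) / math.log(rho)) + 1
    return math.copysign(u0 * rho ** i, v)

rho = 0.5
delta = (1 - rho) / (1 + rho)
for v in [0.37, -1.9, 12.3, 0.004]:
    assert abs(log_quantizer(v, rho) - v) <= delta * abs(v) + 1e-12
print("sector bound holds")
```

The controller synthesis then treats the quantization error like a sector-bounded uncertainty, which is what makes the LMI-based H∞ machinery applicable.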
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS during tests on the six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
Constraints on operator ordering from third quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo
2016-02-15
In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.
Information efficiency in visual communication
NASA Astrophysics Data System (ADS)
Alter-Gartenberg, Rachel; Rahman, Zia-ur
1993-08-01
This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
NASA Astrophysics Data System (ADS)
Karyakin, Yu. E.; Nekhozhin, M. A.; Pletnev, A. A.
2013-07-01
A method for calculating the quantity of moisture in a metal-concrete container in the process of its charging with spent nuclear fuel is proposed. A computing method and results obtained by it for conservative estimation of the time of vacuum drying of a container charged with spent nuclear fuel by technologies with quantization and without quantization of the lower fuel element cluster are presented. It has been shown that the absence of quantization in loading spent fuel increases several times the time of vacuum drying of the metal-concrete container.
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but rather lie in a chaotic region; nevertheless, past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data to predict future data under limited weight quantization constraints. This helps predict future information, providing better timely estimation for intelligent control systems. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits, and the color segmentation problem with 7 or more bits, of weight quantization. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight-quantization bits are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
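The round-off versus truncation comparison can be sketched with a generic fixed-point weight quantizer (a hedged illustration of the idea; the bit layout and range are assumptions, not the CEP training code):

```python
def quantize_weight(w, bits, mode="round"):
    """Fixed-point quantization of a weight in [-1, 1) with 'bits' total bits
    (sign included). 'round' maps to the nearest level; 'trunc' drops the
    fraction, biasing quantized weights toward zero."""
    scale = 2 ** (bits - 1)
    x = w * scale
    q = round(x) if mode == "round" else int(x)   # int() truncates toward 0
    q = max(-scale, min(scale - 1, q))            # clamp to representable range
    return q / scale

w = 0.3701
print(quantize_weight(w, 4, "round"), quantize_weight(w, 4, "trunc"))  # 0.375 0.25
```

The systematic pull toward zero under truncation is one plausible reason the abstract's error surfaces are less symmetric for that technique than for round-off.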
Group theoretical quantization of isotropic loop cosmology
NASA Astrophysics Data System (ADS)
Livine, Etera R.; Martín-Benito, Mercedes
2012-06-01
We achieve a group theoretical quantization of the flat Friedmann-Robertson-Walker model coupled to a massless scalar field adopting the improved dynamics of loop quantum cosmology. Deparametrizing the system using the scalar field as internal time, we first identify a complete set of phase space observables whose Poisson algebra is isomorphic to the su(1,1) Lie algebra. It is generated by the volume observable and the Hamiltonian. These observables describe faithfully the regularized phase space underlying the loop quantization: they account for the polymerization of the variable conjugate to the volume and for the existence of a kinematical nonvanishing minimum volume. Since the Hamiltonian is an element in the su(1,1) Lie algebra, the dynamics is now implemented as SU(1, 1) transformations. At the quantum level, the system is quantized as a timelike irreducible representation of the group SU(1, 1). These representations are labeled by a half-integer spin, which gives the minimal volume. They provide superselection sectors without quantization anomalies and no factor ordering ambiguity arises when representing the Hamiltonian. We then explicitly construct SU(1, 1) coherent states to study the quantum evolution. They not only provide semiclassical states but also truly dynamical coherent states. Their use further clarifies the nature of the bounce that resolves the big bang singularity.
Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Sang, Xinzhu; Wang, Kuiru; Wu, Qiang; Yan, Binbin; Li, Feng; Zhou, Xian; Zhong, Kangping; Zhou, Guiyao; Yu, Chongxiu; Farrell, Gerald; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.
2016-01-01
High performance all-optical quantizer based on silicon waveguide is believed to have significant applications in photonic integratable optical communication links, optical interconnection networks, and real-time signal processing systems. In this paper, we propose an integratable all-optical quantizer for on-chip and low power consumption all-optical analog-to-digital converters. The quantization is realized by the strong cross-phase modulation and interference in a silicon-organic hybrid (SOH) slot waveguide based Mach-Zehnder interferometer. By carefully designing the dimension of the SOH waveguide, large nonlinear coefficients up to 16,000 and 18,069 W−1/m for the pump and probe signals can be obtained respectively, along with a low pulse walk-off parameter of 66.7 fs/mm, and all-normal dispersion in the wavelength regime considered. Simulation results show that the phase shift of the probe signal can reach 8π at a low pump pulse peak power of 206 mW and propagation length of 5 mm such that a 4-bit all-optical quantizer can be realized. The corresponding signal-to-noise ratio is 23.42 dB and effective number of bit is 3.89-bit. PMID:26777054
Noncommutative Line Bundles and Gerbes
NASA Astrophysics Data System (ADS)
Jurčo, B.
We introduce noncommutative line bundles and gerbes within the framework of deformation quantization. The Seiberg-Witten map is used to construct the corresponding noncommutative Čech cocycles. Morita equivalence of star products and quantization of twisted Poisson structures are discussed from this point of view.
Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gorjala, Bhargavi
1991-01-01
Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge Correcting DPCM and Edge Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. Therefore these coding schemes increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge Correcting DPCM is simulated by considering the edge information under the asynchronous class. For the simulation of the Edge Preserving scheme, the amount of information sent each time is fixed, but the length of the packet or the bit rate for that packet is chosen depending on the available capacity. The performance of the network and the performance of the image coding algorithms are studied.
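The overload situation both schemes target is easy to see in a basic first-order DPCM loop, where the predictor is the previous reconstruction and a sharp edge drives the prediction error past the quantizer's range. A generic sketch (step size, level count, and threshold are hypothetical, not the thesis implementation):

```python
def dpcm_encode(samples, step=4, levels=8):
    """First-order DPCM: quantize the prediction error, track the decoder's
    reconstruction, and flag overload (the 'edge' case discussed above)."""
    recon, codes, overloads = 0, [], []
    half = levels // 2
    for i, s in enumerate(samples):
        e = s - recon                                  # prediction error
        q = max(-half, min(half - 1, round(e / step))) # mid-rise quantizer index
        if abs(e) > (half - 0.5) * step:
            overloads.append(i)                        # quantizer overload at an edge
        codes.append(q)
        recon = recon + q * step                       # decoder-tracked value
    return codes, overloads

codes, ov = dpcm_encode([10, 11, 12, 90, 91, 92])  # sharp edge at index 3
print(ov)
```

The run of overloads after the edge shows the reconstruction slewing toward the new level one maximum step at a time, which is exactly the slope-overload distortion the edge-correcting side information is meant to remove.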
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru
2016-02-15
Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization on the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.
Combining Vector Quantization and Histogram Equalization.
ERIC Educational Resources Information Center
Cosman, Pamela C.; And Others
1992-01-01
Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…
Introduction to quantized Lie groups and algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjin, T.
1992-10-10
In this paper, the authors give a self-contained introduction to the theory of quantum groups according to Drinfeld, highlighting the formal aspects as well as the applications to the Yang-Baxter equation and representation theory. Introductions to Hopf algebras, Poisson structures and deformation quantization are also provided. After defining Poisson Lie groups the authors study their relation to Lie bialgebras and the classical Yang-Baxter equation. Then the authors explain in detail the concept of quantization for them. As an example the quantization of sl(2) is explicitly carried out. Next, the authors show how quantum groups are related to the Yang-Baxter equation and how they can be used to solve it. Using the quantum double construction, the authors explicitly construct the universal R matrix for the quantum sl(2) algebra. In the last section, the authors deduce all finite-dimensional irreducible representations for q a root of unity. The authors also give their tensor product decomposition (fusion rules), which is relevant to conformal field theory.
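For reference, the quantized sl(2) algebra whose universal R matrix the article constructs has the standard Drinfeld-Jimbo presentation (standard textbook conventions, which may differ in detail from the author's):

```latex
K E K^{-1} = q^{2} E, \qquad
K F K^{-1} = q^{-2} F, \qquad
[E, F] = \frac{K - K^{-1}}{q - q^{-1}} .
```

In the classical limit q → 1 (with K = q^H) these relations reduce to the familiar sl(2) commutators [H, E] = 2E, [H, F] = −2F, [E, F] = H.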
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Liang; Yang, Yi; Harley, Ronald Gordon
A system is for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
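The front end of the described pipeline, quantizing an RMS profile into state-values and collapsing runs into (state, duration) pairs, can be sketched as follows (the step size and the example profile are hypothetical; this is not the patented classifier):

```python
def state_sequence(rms_profile, step=100.0):
    """Quantize a power/current RMS profile into state-values, then merge
    consecutive equal states into (state-value, state-duration) pairs."""
    states = [round(p / step) for p in rms_profile]   # quantized state-values
    seq = []
    for s in states:
        if seq and seq[-1][0] == s:
            seq[-1] = (s, seq[-1][1] + 1)             # extend state-duration
        else:
            seq.append((s, 1))                        # new state begins
    return seq

profile = [0, 0, 510, 505, 498, 240, 238, 242, 239, 0]   # watts, say
print(state_sequence(profile))
```

The resulting sequence is the kind of compact start-up signature a finite state machine model can then match against known load types.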
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate a image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
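The rate-distortion trade-off at the heart of the patent can be illustrated per DCT frequency: for each candidate quantization step, measure distortion and an entropy-based rate estimate, then pick the step minimizing a Lagrangian cost. This toy version (hypothetical coefficients, Lagrange multiplier, and candidate steps) illustrates the objective, not the patent's dynamic-programming algorithm:

```python
import math

def pick_step(coeffs, steps, lam=0.1):
    """Choose a quantization step for one DCT frequency by minimizing
    mean squared distortion + lam * entropy-rate of the quantized values."""
    best = None
    for q in steps:
        quantized = [round(c / q) for c in coeffs]
        dist = sum((c - v * q) ** 2 for c, v in zip(coeffs, quantized))
        counts = {}
        for v in quantized:
            counts[v] = counts.get(v, 0) + 1
        n = len(quantized)
        rate = -sum(k / n * math.log2(k / n) for k in counts.values())
        cost = dist / n + lam * rate
        if best is None or cost < best[0]:
            best = (cost, q)
    return best[1]

print(pick_step([14.2, -3.1, 7.9, 0.4, -12.5, 5.0], [1, 2, 4, 8, 16]))
```

Sweeping `lam` trades bits for fidelity; doing this jointly over all 64 frequencies under a total-rate constraint is what yields a rate-distortion-optimal table.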
Quantization of Poisson Manifolds from the Integrability of the Modular Function
NASA Astrophysics Data System (ADS)
Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.
2014-10-01
We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.
NASA Astrophysics Data System (ADS)
Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong
2018-03-01
We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve additional N-stage spectral compression, thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2-bit higher than that of its uni-directional counterpart.
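Effective number of bits, the figure of merit used to compare the bi-directional and uni-directional schemes, is conventionally derived from the measured signal-to-noise ratio. A generic sketch using the standard formula ENOB = (SNR − 1.76)/6.02; quantization papers sometimes adopt other conventions, so this is not necessarily the authors' exact metric (the SNR value below is illustrative):

```python
def enob(snr_db):
    """Effective number of bits from SNR in dB (ideal-ADC convention:
    SNR = 6.02*ENOB + 1.76 dB, inverted)."""
    return (snr_db - 1.76) / 6.02

print(round(enob(25.18), 2))   # an SNR of 25.18 dB corresponds to ~3.89 bits
```

Each extra effective bit thus requires roughly 6 dB of additional SNR, which is why multi-stage spectral compression translates directly into quantization resolution.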
Quantization ambiguities and bounds on geometric scalars in anisotropic loop quantum cosmology
NASA Astrophysics Data System (ADS)
Singh, Parampreet; Wilson-Ewing, Edward
2014-02-01
We study quantization ambiguities in loop quantum cosmology that arise for space-times with non-zero spatial curvature and anisotropies. Motivated by lessons from different possible loop quantizations of the closed Friedmann-Lemaître-Robertson-Walker cosmology, we find that using open holonomies of the extrinsic curvature, which due to gauge-fixing can be treated as a connection, leads to the same quantum geometry effects that are found in spatially flat cosmologies. More specifically, in contrast to the quantization based on open holonomies of the Ashtekar-Barbero connection, the expansion and shear scalars in the effective theories of the Bianchi type II and Bianchi type IX models have upper bounds, and these are in exact agreement with the bounds found in the effective theories of the Friedmann-Lemaître-Robertson-Walker and Bianchi type I models in loop quantum cosmology. We also comment on some ambiguities present in the definition of inverse triad operators and their role.
The electronic structure of Au25 clusters: between discrete and continuous
NASA Astrophysics Data System (ADS)
Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E.; Kumar, Challa S. S. R.; Losovyj, Yaroslav
2016-08-01
Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies. Electronic supplementary information (ESI) available: Experimental details including chemicals, sample preparation, and characterization methods. Computation techniques, SV-AUC, GIWAXS, XPS, UPS, MALDI-TOF, ESI data of Au25 clusters. See DOI: 10.1039/c6nr02374f
NASA Astrophysics Data System (ADS)
Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.
2017-10-01
Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related with each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in a block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity. In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.
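The matrix-product construction is easiest to see in its one-dimensional special case: each homogeneous layer contributes a characteristic matrix, and the stack's transfer matrix is their product. A sketch of that standard textbook method at normal incidence (the indices and thicknesses are illustrative; the paper's 2D Fourier-basis construction generalizes this):

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one lossless homogeneous layer
    (refractive index n, thickness d) at normal incidence."""
    phi = 2 * np.pi * n * d / wavelength          # phase thickness
    return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                     [1j * n * np.sin(phi), np.cos(phi)]])

def stack_matrix(layers, wavelength):
    """Transfer matrix of a multilayer: ordered product over the layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    return M

# a 3-period two-material stack probed at 600 nm (hypothetical values)
M = stack_matrix([(1.5, 100e-9), (2.3, 80e-9)] * 3, 600e-9)
print(abs(np.linalg.det(M)))   # lossless stack: |det M| = 1
```

Reflection and transmission coefficients are then read off from the entries of M, which is exactly the block-matrix relationship to the scattering matrix described in the abstract.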
Spaceborne Imaging Radar-C instrument
NASA Technical Reports Server (NTRS)
Huneycutt, Bryan L.
1993-01-01
The Spaceborne Imaging Radar-C is the next radar in the series of spaceborne radar experiments, which began with Seasat and continued with SIR-A and SIR-B. The SIR-C instrument has been designed to obtain simultaneous multifrequency and simultaneous multipolarization radar images from a low earth orbit. It is a multiparameter imaging radar that will be flown during at least two different seasons. The instrument operates in the squint alignment mode, the extended aperture mode, the ScanSAR mode, and the interferometry mode. The instrument uses engineering techniques such as beam nulling for echo tracking, pulse repetition frequency hopping for Doppler centroid tracking, generating the frequency step chirp for radar parameter flexibility, block floating-point quantizing for data rate compression, and elevation beamwidth broadening for increasing the swath illumination.
An improved maximum power point tracking method for a photovoltaic system
NASA Astrophysics Data System (ADS)
Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes
2016-06-01
In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve both a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared with other approaches from the literature: the classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size MPPT approach. Simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
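The variable step-size idea can be sketched as a perturb-and-observe update in which the duty-cycle perturbation is scaled by the observed power slope, so the step shrinks near the maximum power point. A generic sketch under assumed parameter names, not the authors' exact scaling function:

```python
def mppt_step(power, prev_power, duty, prev_duty,
              scale=0.001, d_min=0.05, d_max=0.95):
    """One perturb-and-observe update with a power-scaled step size:
    the duty-cycle perturbation shrinks as dP/dD approaches zero (MPP)."""
    dP = power - prev_power
    dD = duty - prev_duty
    # Step size proportional to |dP/dD|; small near the MPP where dP -> 0.
    step = scale * abs(dP / dD) if dD != 0 else scale
    # Keep perturbing in the same direction while power still rises.
    direction = 1.0 if dP * dD > 0 else -1.0
    new_duty = duty + direction * step
    return min(max(new_duty, d_min), d_max)
```

A fixed step-size tracker is the special case where `step` is a constant; the scaling term is what buys fast transients without steady-state oscillation.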
Exact quantization of Einstein-Rosen waves coupled to massless scalar matter.
Barbero G, J Fernando; Garay, Iñaki; Villaseñor, Eduardo J S
2005-07-29
We show in this Letter that gravity coupled to a massless scalar field with full cylindrical symmetry can be exactly quantized by an extension of the techniques used in the quantization of Einstein-Rosen waves. This system provides a useful test bed to discuss a number of issues in quantum general relativity, such as the emergence of the classical metric, microcausality, and large quantum gravity effects. It may also provide an appropriate framework to study gravitational critical phenomena from a quantum point of view, issues related to black hole evaporation, and the consistent definition of test fields and particles in quantum gravity.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.
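With a shared global codebook, encoding reduces to a nearest-codevector search per image block and decoding to a table lookup. A minimal sketch of that stage (the multi-scale wavelet decomposition and codebook training are omitted):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each block (row vector) to the index of its nearest codevector."""
    # Squared Euclidean distance between every block and every codevector.
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by looking up codevectors."""
    return codebook[indices]
```

Only the indices (and the codebook, once) need to be stored or transmitted, which is the source of the compression.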
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
One size fits all electronics for insole-based activity monitoring.
Hegde, Nagaraj; Bries, Matthew; Melanson, Edward; Sazonov, Edward
2017-07-01
Footwear-based wearable sensors are becoming prominent in many areas of health and wellness monitoring, such as gait and activity monitoring. In our previous research we introduced an insole-based wearable system, SmartStep, which is completely integrated in a socially acceptable package. From a manufacturing perspective, SmartStep's electronics had to be custom made for each shoe size, greatly complicating the manufacturing process. In this work we explore the possibility of making a universal electronics platform for SmartStep - SmartStep 3.0 - which can be used in the most common insole sizes without modification. A pilot human subject experiment was run to compare the accuracy of the one-size-fits-all SmartStep 3.0 against the custom-sized SmartStep 2.0. A total of ~10 hours of data was collected in the pilot study, involving three participants performing different activities of daily living while wearing SmartStep 2.0 and SmartStep 3.0. Leave-one-out cross validation resulted in a 98.5% average accuracy for SmartStep 2.0 and 98.3% for SmartStep 3.0, suggesting that SmartStep 3.0 can be as accurate as SmartStep 2.0 while fitting the most common shoe sizes.
NASA Astrophysics Data System (ADS)
Yu, Yong; Yao, Qiaofeng; Luo, Zhentao; Yuan, Xun; Lee, Jim Yang; Xie, Jianping
2013-05-01
In very recent years, thiolate-protected metal nanoclusters (or thiolated MNCs) with core sizes smaller than 2 nm have emerged as a new direction in nanoparticle research due to their discrete and size dependent electronic structures and molecular-like properties, such as HOMO-LUMO transitions in optical absorptions, quantized charging, and strong luminescence. Synthesis of monodisperse thiolated MNCs in sufficiently large quantities (up to several hundred micrograms) is necessary for establishing reliable size-property relationships and exploring potential applications. This Feature Article reviews recent progress in the development of synthetic strategies for the production of monodisperse thiolated MNCs. The preparation of monodisperse thiolated MNCs is viewed as an engineerable process where both the precursors (input) and their conversion chemistry (processing) may be rationally designed to achieve the desired outcome - monodisperse thiolated MNCs (output). Several strategies for tailoring the precursor and the conversion process are analyzed to arrive at a unifying understanding of the processes involved.
Second quantization techniques in the scattering of nonidentical composite bodies
NASA Technical Reports Server (NTRS)
Norbury, J. W.; Townsend, L. W.; Deutchman, P. A.
1986-01-01
Second quantization techniques for describing elastic and inelastic interactions between nonidentical composite bodies are presented and are applied to nucleus-nucleus collisions involving ground-state and one-particle-one-hole excitations. Evaluations of the resultant collision matrix elements are made through use of Wick's theorem.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
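The core idea (prototypes paired with small unique binary codes; data points take the code of their nearest prototype and are then compared by Hamming distance) can be sketched as follows. The alternating optimization that actually learns the prototypes and codes is omitted, and all names are illustrative:

```python
import numpy as np

def assign_codes(data, prototypes, codes):
    """Prototype-based binary quantization sketch: each point receives the
    short binary code of its nearest prototype."""
    # Squared distance from every point to every prototype.
    d = ((data[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return codes[d.argmin(axis=1)]

def hamming(a, b):
    """Hamming distance between two binary code vectors."""
    return int(np.count_nonzero(a != b))
```

Search then compares query codes against database codes with `hamming`, which is far cheaper than distances in the original feature space.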
Quantized vortices and superflow in arbitrary dimensions: structure, energetics and dynamics
NASA Astrophysics Data System (ADS)
Goldbart, Paul M.; Bora, Florin
2009-05-01
The structure and energetics of superflow around quantized vortices, and the motion inherited by these vortices from this superflow, are explored in the general setting of a superfluid in arbitrary dimensions. The vortices may be idealized as objects of codimension 2, such as one-dimensional loops and two-dimensional closed surfaces, respectively, in the cases of three- and four-dimensional superfluidity. By using the analogy between the vortical superflow and Ampère-Maxwell magnetostatics, the equilibrium superflow containing any specified collection of vortices is constructed. The energy of the superflow is found to take on a simple form for vortices that are smooth and asymptotically large, compared with the vortex core size. The motion of vortices is analyzed in general, as well as for the special cases of hyper-spherical and weakly distorted hyper-planar vortices. In all dimensions, vortex motion reflects vortex geometry. In dimension 4 and higher, this includes not only extrinsic but also intrinsic aspects of the vortex shape, which enter via the first and second fundamental forms of classical geometry. For hyper-spherical vortices, which generalize the vortex rings of three-dimensional superfluidity, the energy-momentum relation is determined. Simple scaling arguments recover the essential features of these results, up to numerical and logarithmic factors.
Quantization of simple parametrized systems
NASA Astrophysics Data System (ADS)
Ruffini, G.
2005-11-01
I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle displays only the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations of amplitudes for the BFV and Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.
Vortex creation during magnetic trap manipulations of spinor Bose-Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Itin, A. P.; Space Research Institute, RAS, Moscow; Morishita, T.
2006-06-15
We investigate several mechanisms of vortex creation during splitting of a spinor Bose-Einstein condensate (BEC) in a magnetic double-well trap controlled by a pair of current-carrying wires and bias magnetic fields. Our study is motivated by a recent MIT experiment on splitting BECs with a similar trap [Y. Shin et al., Phys. Rev. A 72, 021604 (2005)], where an unexpected fork-like structure appeared in the interference fringes, indicating the presence of a singly quantized vortex in one of the interfering condensates. It is well known that in a spin-1 BEC in a quadrupole trap, a doubly quantized vortex is topologically produced by a 'slow' reversal of the bias magnetic field B_z. Since in the experiment a doubly quantized vortex had never been seen, Shin et al. ruled out the topological mechanism and concentrated on the nonadiabatic mechanical mechanism to explain the vortex creation. We find, however, that in the magnetic trap considered both mechanisms are possible: singly quantized vortices can be formed in a spin-1 BEC topologically (for example, during the magnetic field switching-off process). We therefore provide a possible alternative explanation for the interference patterns observed in the experiment. We also present a numerical example of creation of singly quantized vortices due to 'fast' splitting, i.e., by a dynamical (nonadiabatic) mechanism.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
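The recursively indexed scalar quantization (RISQ) mentioned at the end uses a uniform quantizer with a bounded index alphabet: an input whose quantized value falls outside the representable range emits the boundary index, and the remainder is quantized recursively. A minimal sketch with assumed parameter names:

```python
def risq_encode(x, step, max_index):
    """Encode x as a list of indices from the bounded alphabet
    [-max_index, ..., max_index]; out-of-range values emit the
    boundary index and recurse on the remainder."""
    indices = []
    bound = max_index * step
    while True:
        q = round(x / step)
        if abs(q) < max_index:
            indices.append(q)
            return indices
        sign = 1 if q >= 0 else -1
        indices.append(sign * max_index)
        x -= sign * bound  # quantize the remainder on the next pass

def risq_decode(indices, step):
    """Reconstruction is simply the sum of the indexed levels."""
    return sum(indices) * step
```

Because large inputs map to runs of the boundary index, the small alphabet stays well matched to an entropy coder even when the source statistics drift, which is the adaptivity exploited in the report.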
NASA Astrophysics Data System (ADS)
Menezes, G.; Svaiter, N. F.
2006-07-01
We use the method of stochastic quantization in a topological field theory defined in Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.
Symplectic Quantization of a Reducible Theory
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. It is a reducible theory in the sense that not all of its constraints are independent. A ghost-of-ghost procedure like that of the BFV method has to be used, but in terms of Lagrange multipliers.
ERIC Educational Resources Information Center
DeBuvitz, William
2014-01-01
I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…
Techniques for decoding speech phonemes and sounds: A concept
NASA Technical Reports Server (NTRS)
Lokerson, D. C.; Holby, H. G.
1975-01-01
The techniques studied involve conversion of speech sounds into machine-compatible pulse trains. (1) A voltage-level quantizer produces a number of output pulses proportional to the amplitude characteristics of vowel-type phoneme waveforms. (2) Pulses produced by the quantizer from the first speech formants are compared with pulses produced from the second formants.
A new apparatus for studies of quantized vortex dynamics in dilute-gas Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Newman, Zachary L.
The presence of quantized vortices and a high level of control over trap geometries and other system parameters make dilute-gas Bose-Einstein condensates (BECs) a natural environment for studies of vortex dynamics and quantum turbulence in superfluids, primary interests of the BEC group at the University of Arizona. Such research may lead to a deeper understanding of the nature of quantum fluid dynamics and far-from-equilibrium phenomena. Despite the importance of quantized vortex dynamics in the fields of superfluidity, superconductivity and quantum turbulence, direct imaging of vortices in trapped BECs remains a significant technical challenge. This is primarily due to the small size of the vortex core in a trapped gas, which is typically a few hundred nanometers in diameter. In this dissertation I present the design and construction of a new 87Rb BEC apparatus with the goal of studying vortex dynamics in trapped BECs. The heart of the apparatus is a compact vacuum chamber with a custom, all-glass science cell designed to accommodate the use of commercial high-numerical-aperture microscope objectives for in situ imaging of vortices. The designs for the new system are, in part, based on prior work in our group on in situ imaging of vortices. Here I review aspects of our prior work and discuss some of the successes and limitations that are relevant to the new apparatus. The bulk of the thesis is used to describe the major subsystems of the new apparatus, which include the vacuum chamber, the laser systems, the magnetic transfer system and the final magnetic trap for the atoms. Finally, I demonstrate the creation of a BEC of ~2 x 10^6 87Rb atoms in our new system and show that the BEC can be transferred into a weak, spherical, magnetic trap with a well defined magnetic field axis that may be useful for future vortex imaging studies.
Coupled binary embedding for large-scale image retrieval.
Zheng, Liang; Wang, Shengjin; Tian, Qi
2014-08-01
Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered as a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated in our framework. As an extension, we explore the fusion of binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when global color feature is integrated, our method yields competitive performance with the state-of-the-arts.
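The matching-verification rule described here (a keypoint pair counts only if both points quantize to the same visual word and their embedded binary signatures are close in Hamming distance) can be sketched as follows; the threshold and signature length are illustrative:

```python
import numpy as np

def is_match(word_a, word_b, sig_a, sig_b, ham_thresh=8):
    """Hamming-embedding-style verification: same visual word plus a
    binary-signature Hamming check to filter false positive matches."""
    if word_a != word_b:
        return False  # quantized to different visual words: no match
    return int(np.count_nonzero(sig_a != sig_b)) <= ham_thresh
```

Coupling extra binary features (e.g. a binary color signature) tightens this test further, which is the precision gain the paper reports.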
On the quantization of the massless Bateman system
NASA Astrophysics Data System (ADS)
Takahashi, K.
2018-03-01
The so-called Bateman system for the damped harmonic oscillator is reduced to a genuine dual dissipation system (DDS) by setting the mass to zero. We explore herein the condition under which the canonical quantization of the DDS is consistently performed. The roles of the observable and auxiliary coordinates are discriminated. The results show that the complete and orthogonal Fock space of states can be constructed on the stable vacuum if an anti-Hermite representation of the canonical Hamiltonian is adopted. The amplitude of the one-particle wavefunction is consistent with the classical solution. The fields can be quantized as bosonic or fermionic. For bosonic systems, the quantum fluctuation of the field is directly associated with the dissipation rate.
Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!
NASA Astrophysics Data System (ADS)
Nutku, Yavuz
2003-07-01
Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovchavtsev, A. P., E-mail: kap@isp.nsc.ru; Tsarenko, A. V.; Guzev, A. A.
The influence of electron energy quantization in a space-charge region on the accumulation capacitance of InAs-based metal-oxide-semiconductor capacitors (MOSCAPs) has been investigated by modeling and comparison with experimental data from Au/anodic layer (4-20 nm)/n-InAs(111)A MOSCAPs. The accumulation capacitance for MOSCAPs has been calculated by solution of the Poisson equation with different assumptions and by the self-consistent solution of the Schrödinger and Poisson equations with quantization taken into account. It was shown that quantization should be taken into consideration in MOSCAP accumulation capacitance calculations for correct determination of the interface state density by the Terman method and for evaluation of the gate dielectric thickness from capacitance-voltage measurements.
Canonical methods in classical and quantum gravity: An invitation to canonical LQG
NASA Astrophysics Data System (ADS)
Reyes, Juan D.
2018-04-01
Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background independent canonical quantization of Einstein’s general relativity theory. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, Hamiltonian classical methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and/or non-specialists curious about LQG.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
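The fuzzy K-means family referenced here assigns each training vector a graded membership to every codevector rather than a hard assignment; one membership update step looks like the following (fuzzifier m=2 assumed; the paper's acceleration techniques are not shown):

```python
import numpy as np

def fuzzy_memberships(data, centers, m=2.0, eps=1e-12):
    """One fuzzy K-means membership update:
    u[i, k] is proportional to 1 / d(i, k)^(2/(m-1)), normalized over k."""
    # Euclidean distance from every data vector to every center.
    d = np.sqrt(((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)) + eps
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)
```

Centers are then recomputed as membership-weighted means, and the two steps alternate; reducing the number of such iterations is one of the two acceleration levers the paper proposes.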
Fractional quantization of the magnetic flux in cylindrical unconventional superconductors.
Loder, F; Kampf, A P; Kopp, T
2013-07-26
The magnetic flux threading a conventional superconducting ring is typically quantized in units of Φ0=hc/2e. The factor of 2 in the denominator of Φ0 originates from the existence of two different types of pairing states with minima of the free energy at even and odd multiples of Φ0. Here we show that spatially modulated pairing states exist with energy minima at fractional flux values, in particular, at multiples of Φ0/2. In such states, condensates with different center-of-mass momenta of the Cooper pairs coexist. The proposed mechanism for fractional flux quantization is discussed in the context of cuprate superconductors, where hc/4e flux periodicities were observed.
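For scale, the conventional flux quantum and the fractional values discussed above can be evaluated numerically. The SI form is h/2e; the abstract's hc/2e is the same quantity expressed in Gaussian units:

```python
# Exact SI constants (2019 redefinition)
h = 6.62607015e-34   # Planck constant, J*s
e = 1.602176634e-19  # elementary charge, C

phi0 = h / (2 * e)    # conventional flux quantum, about 2.068e-15 Wb
half_phi0 = phi0 / 2  # the fractional Phi0/2 minima discussed above
quarter = h / (4 * e) # the hc/4e periodicity observed in cuprates
```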
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
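Floating-point quantization of this kind can be simulated by rounding each signal value to the nearest number representable with a given mantissa length; a sketch one could drop into such a digital simulation (the mantissa width is an assumed parameter, not one from the report):

```python
import math

def quantize_float(x, mantissa_bits):
    """Round x to a floating-point grid with the given mantissa length,
    mimicking the roundoff a short-word-length digital controller sees."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))     # exponent of the leading bit
    scale = 2.0 ** (exp - mantissa_bits)    # grid spacing at this exponent
    return round(x / scale) * scale
```

Running the controller simulation once with `quantize_float` applied at each arithmetic step and once without gives the output error sequences whose maximum and rms values the report's program tabulates.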
The uniform quantized electron gas revisited
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Høye, Johan S.
2017-11-01
In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0. As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well-known RPA (random phase approximation) is recovered as a basic result, which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit remarkable agreement with well-known results of a standard parameterization of Monte Carlo correlation energies.
Consistency of certain constitutive relations with quantum electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horsley, S. A. R.
2011-12-15
Recent work by Philbin [New J. Phys. 12, 123008 (2010)] has provided a Lagrangian theory that establishes a general method for the canonical quantization of the electromagnetic field in any dispersive, lossy, linear dielectric. Working from this theory, we extend the Lagrangian description to reciprocal and nonreciprocal magnetoelectric (bianisotropic) media, showing that some versions of the constitutive relations are inconsistent with a real Lagrangian, and hence with quantization. This amounts to a restriction on the magnitude of the magnetoelectric coupling. Moreover, from the point of view of quantization, moving media are shown to be fundamentally different from stationary magnetoelectrics, despite the formal similarity in the constitutive relations.
N-terminus of Cardiac Myosin Essential Light Chain Modulates Myosin Step-Size
Wang, Yihua; Ajtai, Katalin; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta; Burghardt, Thomas P.
2016-01-01
Muscle myosin cyclically hydrolyzes ATP to translate actin. Ventricular cardiac myosin (βmys) moves actin with three distinct unitary step-sizes resulting from its lever-arm rotation and with step-frequencies that are modulated in a myosin regulation mechanism. The lever-arm associated essential light chain (vELC) binds actin by its 43 residue N-terminal extension. Unitary steps were proposed to involve the vELC N-terminal extension with the 8 nm step engaging the vELC/actin bond facilitating an extra ~19 degrees of lever-arm rotation while the predominant 5 nm step forgoes vELC/actin binding. A minor 3 nm step is the unlikely conversion of the completed 5 to the 8 nm step. This hypothesis was tested using a 17 residue N-terminal truncated vELC in porcine βmys (Δ17βmys) and a 43 residue N-terminal truncated human vELC expressed in transgenic mouse heart (Δ43αmys). Step-size and step-frequency were measured using the Qdot motility assay. Both Δ17βmys and Δ43αmys had significantly increased 5 nm step-frequency and coincident loss in the 8 nm step-frequency compared to native proteins suggesting the vELC/actin interaction drives step-size preference. Step-size and step-frequency probability densities depend on the relative fraction of truncated vELC and relate linearly to pure myosin species concentrations in a mixture containing native vELC homodimer, two truncated vELCs in the modified homodimer, and one native and one truncated vELC in the heterodimer. Step-size and step-frequency, measured for native homodimer and at two or more known relative fractions of truncated vELC, are surmised for each pure species by using a new analytical method. PMID:26671638
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
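Block-matching motion estimation, the second paper's topic, finds for each block the displacement in the reference frame that minimizes a distortion measure such as the sum of absolute differences (SAD). A serial sketch of the inner search over a small window; the paper's parallel-architecture mapping is not shown:

```python
import numpy as np

def best_match(block, ref, top, left, radius=4):
    """Exhaustive block matching: return the (dy, dx) displacement within
    +/- radius of (top, left) that minimizes the SAD against ref."""
    h, w = block.shape
    best = (0, 0)
    best_sad = np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Each candidate displacement is independent of the others, which is what makes the exhaustive search a natural fit for the simple parallel architecture described.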
Quantization of higher abelian gauge theory in generalized differential cohomology
NASA Astrophysics Data System (ADS)
Szabo, R.
We review and elaborate on some aspects of the quantization of certain classes of higher abelian gauge theories using techniques of generalized differential cohomology. Particular emphasis is placed on the examples of generalized Maxwell theory and Cheeger-Simons cohomology, and of Ramond-Ramond fields in Type II superstring theory and differential K-theory.
Radiation dose-rate meter using an energy-sensitive counter
Kopp, Manfred K.
1988-01-01
A radiation dose-rate meter is provided which uses an energy-sensitive detector and combines charge quantization and pulse-rate measurement to monitor radiation dose rates. The charge from each detected photon is quantized by level-sensitive comparators so that the resulting total output pulse rate is proportional to the dose-rate.
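The charge-quantization principle can be illustrated with a toy model (parameter names and values are hypothetical, not taken from the patent): each level-sensitive comparator contributes one pulse per quantization step of deposited charge, so the total pulse rate tracks the total deposited charge, i.e. the dose rate.

```python
def pulses_for_photon(charge, step):
    # A photon depositing `charge` crosses charge // step comparator
    # levels, producing that many output pulses.
    return int(charge // step)

def pulse_rate(charges_per_second, step):
    # Total pulse rate ~ (total deposited charge per second) / step,
    # which is proportional to the dose rate.
    return sum(pulses_for_photon(q, step) for q in charges_per_second)
```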
Quantized Chiral Magnetic Current from Reconnections of Magnetic Flux.
Hirono, Yuji; Kharzeev, Dmitri E; Yin, Yi
2016-10-21
We introduce a new mechanism for the chiral magnetic effect that does not require an initial chirality imbalance. The chiral magnetic current is generated by reconnections of magnetic flux that change the magnetic helicity of the system. The resulting current is entirely determined by the change of magnetic helicity, and it is quantized.
Dirac’s magnetic monopole and the Kontsevich star product
NASA Astrophysics Data System (ADS)
Soloviev, M. A.
2018-03-01
We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star-products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant ħ. We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich’s graphs.
Monopoles for gravitation and for higher spin fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunster, Claudio; Portugues, Ruben; Cnockaert, Sandrine
2006-05-15
We consider massless higher spin gauge theories with both electric and magnetic sources, with a special emphasis on the spin two case. We write the equations of motion at the linear level (with conserved external sources) and introduce Dirac strings so as to derive the equations from a variational principle. We then derive a quantization condition that generalizes the familiar Dirac quantization condition, and which involves the conserved charges associated with the asymptotic symmetries for higher spins. Next we discuss briefly how the result extends to the nonlinear theory. This is done in the context of gravitation, where the Taub-NUT solution provides the exact solution of the field equations with both types of sources. We rederive, in analogy with electromagnetism, the quantization condition from the quantization of the angular momentum. We also observe that the Taub-NUT metric is asymptotically flat at spatial infinity in the sense of Regge and Teitelboim (including their parity conditions). It follows, in particular, that one can consistently consider in the variational principle configurations with different electric and magnetic masses.

Combinatorial quantization of the Hamiltonian Chern-Simons theory II
NASA Astrophysics Data System (ADS)
Alekseev, Anton Yu.; Grosse, Harald; Schomerus, Volker
1996-01-01
This paper further develops the combinatorial approach to quantization of the Hamiltonian Chern-Simons theory advertised in [1]. Using the theory of quantum Wilson lines, we show how the Verlinde algebra appears within the context of quantum group gauge theory. This allows us to discuss flatness of quantum connections so that we can give a mathematically rigorous definition of the algebra of observables A_CS of the Chern-Simons model. It is a *-algebra of “functions on the quantum moduli space of flat connections” and comes equipped with a positive functional ω (“integration”). We prove that this data does not depend on the particular choices which have been made in the construction. Following ideas of Fock and Rosly [2], the algebra A_CS provides a deformation quantization of the algebra of functions on the moduli space along the natural Poisson bracket induced by the Chern-Simons action. We evaluate a volume of the quantized moduli space and prove that it coincides with the Verlinde number. This answer is also interpreted as a partition function of the lattice Yang-Mills theory corresponding to a quantum gauge group.
NASA Astrophysics Data System (ADS)
Pedersen, K.; Kristensen, T. B.; Pedersen, T. G.; Morgen, P.; Li, Z.; Hoffmann, S. V.
2002-05-01
Thin noble metal films (Ag, Au and Cu) on Si (111) have been investigated by optical second-harmonic generation (SHG) in combination with synchrotron radiation photoemission spectroscopy. The valence band spectra of Ag films show a quantization of the sp-band in the 4-eV energy range from the Fermi level down to the onset of the d-bands. For Cu and Au the corresponding energy range is much narrower and quantization effects are less visible. Quantization effects in SHG are observed as oscillations in the signal as a function of film thickness. The oscillations are strongest for Ag and less pronounced for Cu, in agreement with valence band photoemission spectra. In the case of Au, a reacted layer floating on top of the Au film masks the observation of quantum well levels by photoemission. However, SHG shows a well-developed quantization of levels in the Au film below the reacted layer. For Ag films, the relation between film thickness and photon energy of the SHG resonances indicates different types of resonances, some of which involve both quantum well and substrate states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guedes, Carlos; Oriti, Daniele; Raasakka, Matti
The phase space given by the cotangent bundle of a Lie group appears in the context of several models for physical systems. A representation for the quantum system in terms of non-commutative functions on the (dual) Lie algebra, and a generalized notion of (non-commutative) Fourier transform, different from standard harmonic analysis, has been recently developed, and found several applications, especially in the quantum gravity literature. We show that this algebra representation can be defined on the sole basis of a quantization map of the classical Poisson algebra, and identify the conditions for its existence. In particular, the corresponding non-commutative star-product carried by this representation is obtained directly from the quantization map via deformation quantization. We then clarify under which conditions a unitary intertwiner between such algebra representation and the usual group representation can be constructed giving rise to the non-commutative plane waves and consequently, the non-commutative Fourier transform. The compact groups U(1) and SU(2) are considered for different choices of quantization maps, such as the symmetric and the Duflo map, and we exhibit the corresponding star-products, algebra representations, and non-commutative plane waves.
The fundamental role of quantized vibrations in coherent light harvesting by cryptophyte algae
NASA Astrophysics Data System (ADS)
Kolli, Avinash; O'Reilly, Edward J.; Scholes, Gregory D.; Olaya-Castro, Alexandra
2012-11-01
The influence of fast vibrations on energy transfer and conversion in natural molecular aggregates is an issue of central interest. This article shows the important role of high-energy quantized vibrations and their non-equilibrium dynamics for energy transfer in photosynthetic systems with highly localized excitonic states. We consider the cryptophyte antennae protein phycoerythrin 545 and show that coupling to quantized vibrations that are quasi-resonant with excitonic transitions is fundamental for biological function, as it generates non-cascaded transport with rapid and wider spatial distribution of excitation energy. Our work also indicates that the non-equilibrium dynamics of such vibrations can manifest itself in ultrafast beating of both excitonic populations and coherences at room temperature, with time scales in agreement with those reported in experiments. Moreover, we show that mechanisms supporting coherent excitonic dynamics assist coupling to selected modes that channel energy to preferential sites in the complex. We therefore argue that, in the presence of strong coupling between electronic excitations and quantized vibrations, a concrete and important advantage of quantum coherent dynamics is precisely to tune resonances that promote fast and effective energy distribution.
Gao, Zhiyuan; Yang, Congjie; Xu, Jiangtao; Nie, Kaiming
2015-11-06
This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high-speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −T_clk to +T_clk. A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high-speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4 bit improvement over the SNDR and ENOB without calibration.
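A behavioral sketch of a generic two-step (coarse/fine) TDC conversion follows; parameter names are assumed for illustration, and the paper's delay-skew calibration is not modeled:

```python
def two_step_tdc(interval, coarse_lsb, fine_bits):
    """Two-step time-to-digital conversion sketch: a coarse counter
    quantizes `interval` in units of coarse_lsb, and a fine stage
    resolves the residue with an LSB of coarse_lsb / 2**fine_bits."""
    coarse = int(interval // coarse_lsb)
    residue = interval - coarse * coarse_lsb
    fine_lsb = coarse_lsb / (1 << fine_bits)
    fine = int(residue // fine_lsb)
    reconstructed = coarse * coarse_lsb + fine * fine_lsb
    return coarse, fine, reconstructed
```

Splitting the conversion this way shortens the fine stage's range, which is what raises the overall conversion rate.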
Unconventional topological Hall effect in skyrmion crystals caused by the topology of the lattice
NASA Astrophysics Data System (ADS)
Göbel, Börge; Mook, Alexander; Henk, Jürgen; Mertig, Ingrid
2017-03-01
The hallmark of a skyrmion crystal (SkX) is the topological Hall effect (THE). In this article we predict and explain an unconventional behavior of the topological Hall conductivity in SkXs. In simple terms, the spin texture of the skyrmions causes an inhomogeneous emergent magnetic field whose associated Lorentz force acts on the electrons. By making the emergent field homogeneous, the THE is mapped onto the quantum Hall effect (QHE). Consequently, each electronic band of the SkX is assigned to a Landau level. This correspondence of THE and QHE allows us to explain the unconventional behavior of the THE of electrons in SkXs. For example, a skyrmion crystal on a triangular lattice exhibits a quantized topological Hall conductivity with steps of 2 e²/h below and with steps of 1 e²/h above the van Hove singularity. On top of this, the conductivity shows a prominent sign change at the van Hove singularity. These unconventional features are deeply connected to the topology of the structural lattice.
NASA Astrophysics Data System (ADS)
Villa, Carlos; Kumavor, Patrick; Donkor, Eric
2008-04-01
Photonic analog-to-digital converters (ADCs) utilize a train of optical pulses to sample an electrical input waveform applied to an electro-optic modulator or a reverse-biased photodiode. In the former, the resulting train of amplitude-modulated optical pulses is detected (converted to electrical form) and quantized using a conventional electronic ADC, as at present there are no practical, cost-effective optical quantizers available with performance that rivals electronic quantizers. In the latter, the electrical samples are directly quantized by the electronic ADC. In both cases, however, the sampling rate is limited by the speed with which the electronic ADC can quantize the electrical samples. One way to increase the sampling rate by a factor N is the time-interleaved technique, which consists of a parallel array of N electronic ADCs having the same sampling rate but different sampling phases, each operating at a quantization rate of fs/N, where fs is the aggregate sampling rate. In a system with no real-time operation, the digital outputs of the N channels are stored in memory and then aggregated (multiplexed) to obtain the digital representation of the analog input waveform. For real-time systems, by contrast, reducing the storage time in the multiplexing process is desired to improve the time response of the ADC. Eliminating memories completely comes at the expense of concurrent timing and synchronization in the aggregation of the digital signals, which become critical for a good digital representation of the analog waveform. In this paper we propose and demonstrate a novel optically synchronized encoder and multiplexer scheme for interleaved photonic ADCs that utilizes the N optical signals used to sample different phases of an analog input signal to synchronize the multiplexing of the resulting N digital output channels into a single digital output port.
As a proof of concept, four 320 Megasamples/sec 12-bit of resolution digital signals were multiplexed to form an aggregated 1.28 Gigasamples/sec single digital output signal.
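The interleaving of N staggered channel outputs into one aggregate stream can be sketched as a purely logical model (the paper's contribution, the optical synchronization hardware, is abstracted away here):

```python
def interleave(channel_samples):
    """Aggregate N time-interleaved ADC channels, each sampling at fs/N
    with a staggered phase, into a single stream at rate fs. Sample i of
    the aggregate stream comes from channel i mod N."""
    n = len(channel_samples)
    length = len(channel_samples[0])
    return [channel_samples[i % n][i // n] for i in range(n * length)]
```

For example, four channels that each captured every fourth sample of a ramp reassemble into the original ramp.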
Wavelet subband coding of computer simulation output using the A++ array class library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
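A minimal dead-zone uniform scalar quantizer of the kind commonly applied to DWT subbands can be sketched as follows (step size and dead-zone width are illustrative, not the FBI-standard values):

```python
def deadzone_quantize(x, step, z=1.0):
    """Uniform scalar quantizer with a widened zero bin of half-width
    z*step: small coefficients map to 0, larger ones to uniform bins."""
    if abs(x) < z * step:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int((abs(x) - z * step) // step + 1)

def dequantize(q, step, z=1.0):
    """Reconstruct at the midpoint of the quantization bin."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (z * step + (abs(q) - 0.5) * step)
```

The widened zero bin discards near-zero wavelet coefficients, which carry little energy in smooth data, at almost no cost in reconstruction quality.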
FPGA implementation of self organizing map with digital phase locked loops.
Hikawa, Hiroomi
2005-01-01
The self-organizing map (SOM) has found applicability in a wide range of application areas. Recently, new SOM hardware with phase-modulated pulse signals and digital phase-locked loops (DPLLs) has been proposed (Hikawa, 2005). The system uses the DPLL as a computing element since the operation of the DPLL is very similar to that of the SOM's computation. The system also uses square-waveform phase to hold the value of each input vector element. This paper discusses the hardware implementation of the DPLL SOM architecture. For effective hardware implementation, some components are redesigned to reduce the circuit size. The proposed SOM architecture is described in VHDL and implemented on a field-programmable gate array (FPGA). Its feasibility is verified by experiments. Results show that the proposed SOM implemented on the FPGA has a good quantization capability, and its circuit size is very small.
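The SOM computation that such hardware approximates can be sketched in a few lines (a software model only, with the neighborhood function omitted for brevity; it does not reflect the DPLL implementation):

```python
import numpy as np

def som_step(weights, x, lr=0.1):
    """One SOM update: find the best-matching unit (BMU) for input x
    and pull its weight vector toward x by the learning rate."""
    bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
    weights[bmu] += lr * (x - weights[bmu])
    return bmu

def quantization_error(weights, data):
    """Mean distance from each sample to its best-matching unit; the
    figure of merit behind 'good quantization capability'."""
    return float(np.mean([np.min(np.linalg.norm(weights - x, axis=1))
                          for x in data]))
```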
Level Anticrossing of Impurity States in Semiconductor Nanocrystals
Baimuratov, Anvar S.; Rukhlenko, Ivan D.; Turkov, Vadim K.; Ponomareva, Irina O.; Leonov, Mikhail Yu.; Perova, Tatiana S.; Berwick, Kevin; Baranov, Alexander V.; Fedorov, Anatoly V.
2014-01-01
The size dependence of the quantized energies of elementary excitations is an essential feature of quantum nanostructures, underlying most of their applications in science and technology. Here we report on a fundamental property of impurity states in semiconductor nanocrystals that appears to have been overlooked—the anticrossing of energy levels exhibiting different size dependencies. We show that this property is inherent to the energy spectra of charge carriers whose spatial motion is simultaneously affected by the Coulomb potential of the impurity ion and the confining potential of the nanocrystal. The coupling of impurity states, which leads to the anticrossing, can be induced by interactions with elementary excitations residing inside the nanocrystal or an external electromagnetic field. We formulate physical conditions that allow a straightforward interpretation of level anticrossings in the nanocrystal energy spectrum and an accurate estimation of the states' coupling strength. PMID:25369911
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2016-12-01
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images for possible diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a lossless Hadamard transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
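The core near-lossless guarantee, that quantizing a prediction residual with step 2δ+1 bounds the reconstruction error by δ, can be sketched as follows (an illustrative property of near-lossless coding in general, not the paper's exact codec):

```python
def quantize_residual(r, delta):
    """Quantize a prediction residual with step 2*delta + 1, rounding
    to the nearest bin; the reconstruction error never exceeds delta."""
    step = 2 * delta + 1
    q = (abs(r) + delta) // step
    return q if r >= 0 else -q

def reconstruct(q, delta):
    """Decoder-side residual reconstruction (bin center)."""
    return q * (2 * delta + 1)
```

Here δ is the pre-defined distortion bound: setting δ = 0 recovers lossless coding, and larger δ trades bounded per-pixel error for a smaller residual alphabet.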
Duszenko, Nikolas
2017-01-01
Many, but not all, organisms use quinones to conserve energy in their electron transport chains. Fermentative bacteria and methane-producing archaea (methanogens) do not produce quinones but have devised other ways to generate ATP. Methanophenazine (MPh) is a unique membrane electron carrier found in Methanosarcina species that plays the same role as quinones in the electron transport chain. To extend the analogy between quinones and MPh, we compared the MPh pool sizes between two well-studied Methanosarcina species, Methanosarcina acetivorans C2A and Methanosarcina barkeri Fusaro, to the quinone pool size in the bacterium Escherichia coli. We found the quantity of MPh per cell increases as cultures transition from exponential growth to stationary phase, and absolute quantities of MPh were 3-fold higher in M. acetivorans than in M. barkeri. The concentration of MPh suggests the cell membrane of M. acetivorans, but not of M. barkeri, is electrically quantized as if it were a single conductive metal sheet and near optimal for rate of electron transport. Similarly, stationary (but not exponentially growing) E. coli cells also have electrically quantized membranes on the basis of quinone content. Consistent with our hypothesis, we demonstrated that the exogenous addition of phenazine increases the growth rate of M. barkeri to three times that of M. acetivorans. Our work suggests electron flux through MPh is naturally higher in M. acetivorans than in M. barkeri and that hydrogen cycling is less efficient at conserving energy than scalar proton translocation using MPh. IMPORTANCE Can we grow more from less? The ability to optimize and manipulate metabolic efficiency in cells is the difference between commercially viable and nonviable renewable technologies. Much can be learned from methane-producing archaea (methanogens) which evolved a successful metabolic lifestyle under extreme thermodynamic constraints. 
Methanogens use highly efficient electron transport systems and supramolecular complexes to optimize electron and carbon flow to control biomass synthesis and the production of methane. Worldwide, methanogens are used to generate renewable methane for heat, electricity, and transportation. Our observations suggest Methanosarcina acetivorans, but not Methanosarcina barkeri, has electrically quantized membranes. Escherichia coli, a model facultative anaerobe, has optimal electron transport at the stationary phase but not during exponential growth. This study also suggests the metabolic efficiency of bacteria and archaea can be improved using exogenously supplied lipophilic electron carriers. The enhancement of methanogen electron transport through methanophenazine has the potential to increase renewable methane production at an industrial scale. PMID:28710268
The UOSAT magnetometer experiment
NASA Technical Reports Server (NTRS)
Acuna, M. H.
1982-01-01
The magnetometer aboard the University of Surrey satellite (UOSAT) and its associated electronics are described. The basic fluxgate magnetometer employed has a dynamic range of plus or minus 8000 nT with outputs digitized by a 12-bit successive approximation A-D converter having a resolution of plus or minus 2 nT. Noise in the 3-13 Hz bandwidth is less than 1 nT. A bias field generator extends the dynamic range to plus or minus 64,000 nT with quantization steps of 8000 nT. The magnetometer experiment is expected to provide information on the secular variation of the geomagnetic field, and the decay rate of the dipole term. Special emphasis will be placed on the acquisition of real time and memory data over the poles which can be correlated with that from Magsat.
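The range-extension scheme can be modeled numerically from the figures quoted in the abstract (function and variable names are hypothetical): the bias generator offsets the field in 8000 nT steps so that the residue falls within the ±8000 nT span of the 12-bit ADC.

```python
def digitize_field(b_nt):
    """Sketch of bias-assisted digitization: coarse bias steps of
    8000 nT extend the +/-8000 nT ADC span to +/-64,000 nT. The 12-bit
    ADC LSB over 16,000 nT is ~3.9 nT (the abstract quotes +/-2 nT,
    roughly half an LSB)."""
    bias_steps = round(b_nt / 8000.0)          # coarse bias setting
    residue = b_nt - bias_steps * 8000.0       # falls within ADC span
    lsb = 16000.0 / 4096.0                     # 12 bits over +/-8000 nT
    code = int(round(residue / lsb))           # fine ADC output
    return bias_steps, code, bias_steps * 8000.0 + code * lsb
```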
Semiclassical states on Lie algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsobanjan, Artur, E-mail: artur.tsobanjan@gmail.com
2015-03-15
The effective technique for analyzing representation-independent features of quantum systems based on the semiclassical approximation (developed elsewhere) has been successfully used in the context of the canonical (Weyl) algebra of the basic quantum observables. Here, we perform the important step of extending this effective technique to the quantization of a more general class of finite-dimensional Lie algebras. The case of a Lie algebra with a single central element (the Casimir element) is treated in detail by considering semiclassical states on the corresponding universal enveloping algebra. Restriction to an irreducible representation is performed by “effectively” fixing the Casimir condition, following the methods previously used for constrained quantum systems. We explicitly determine the conditions under which this restriction can be consistently performed alongside the semiclassical truncation.
FAST TRACK COMMUNICATION: Quantization over boson operator spaces
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Seligman, Thomas H.
2010-10-01
The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.
Quantized Vector Potential and the Photon Wave-function
NASA Astrophysics Data System (ADS)
Meis, C.; Dahoo, P. R.
2017-12-01
The vector potential function α⃗_kλ(r⃗, t) for a k-mode and λ-polarization photon, with the quantized amplitude α_0k(ω_k) = ξω_k, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian H̃ = −iħc∇⃗.
A heat kernel proof of the index theorem for deformation quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.; Vallely, D. P.
1978-01-01
This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
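The general idea of simulating a filter in both full and reduced floating-point precision and comparing the outputs can be sketched as follows (a generic model of the technique, not the paper's program; the first-order filter and mantissa length are assumptions):

```python
import numpy as np

def quantize_fp(x, mantissa_bits):
    """Round x to a float with the given mantissa length, modeling the
    roundoff introduced by a reduced-precision arithmetic unit."""
    if x == 0.0:
        return 0.0
    m, e = np.frexp(x)                  # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return float(np.ldexp(np.round(m * scale) / scale, e))

def filter_error(x, a, mantissa_bits):
    """Run y[n] = a*y[n-1] + x[n] in full and quantized precision and
    return the per-sample output quantization error."""
    y_full = y_q = 0.0
    errs = []
    for xn in x:
        y_full = a * y_full + xn
        y_q = quantize_fp(a * y_q + xn, mantissa_bits)
        errs.append(y_q - y_full)
    return errs
```

Running the two recursions side by side exposes how roundoff accumulates through the filter's feedback path, which is the quantity such an error analysis estimates.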
Investigating Students' Mental Models about the Quantization of Light, Energy, and Angular Momentum
ERIC Educational Resources Information Center
Didis, Nilüfer; Eryilmaz, Ali; Erkoç, Sakir
2014-01-01
This paper is the first part of a multiphase study examining students' mental models about the quantization of physical observables--light, energy, and angular momentum. Thirty-one second-year physics and physics education college students who were taking a modern physics course participated in the study. The qualitative analysis of data revealed…
Phase-Quantized Block Noncoherent Communication
2013-07-01
IEEE Transactions on Communications, Vol. 61, No. 7, July 2013, p. 2828. Jaspreet Singh and Upamanyu… We consider phase-quantized block noncoherent communication in a carrier-asynchronous system. Specifically, we consider transmission over the block noncoherent additive white Gaussian noise channel, and … block noncoherent channel. Several results, based on the symmetry inherent in the channel model, are provided to characterize this transition density.
Tachyon field in loop quantum cosmology: An example of traversable singularity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Lifang; Zhu Jianyang
2009-06-15
Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. But LQC has an ambiguity about the quantization scheme. Recently, the authors in [Phys. Rev. D 77, 124008 (2008)] proposed a new quantization scheme. Similar to others, this new quantization scheme also replaces the big bang singularity with the quantum bounce. More interestingly, it introduces a quantum singularity, which is traversable. We investigate this novel dynamics quantitatively with a tachyon scalar field, which gives us a concrete example. Our result shows that our universe can evolve through the quantum singularity regularly, which is different from the classical big bang singularity. So this singularity is only a weak singularity.
Obliquely propagating ion acoustic solitary structures in the presence of quantized magnetic field
NASA Astrophysics Data System (ADS)
Iqbal Shaukat, Muzzamal
2017-10-01
The effects of linear and nonlinear propagation of electrostatic waves have been studied in degenerate magnetoplasma, taking into account the effects of electron trapping and finite temperature in a quantizing magnetic field. The formation of solitary structures has been investigated by employing the small-amplitude approximation for both fully and partially degenerate quantum plasma. It is observed that the inclusion of the quantizing magnetic field significantly affects the propagation characteristics of the solitary wave. Importantly, the Zakharov-Kuznetsov equation under consideration has been found to allow the formation of compressive solitary structures only. The present investigation may be beneficial for understanding the propagation of nonlinear electrostatic structures in dense astrophysical environments such as those found in white dwarfs.
There are many ways to spin a photon: Half-quantization of a total optical angular momentum
Ballantine, Kyle E.; Donegan, John F.; Eastham, Paul R.
2016-01-01
The angular momentum of light plays an important role in many areas, from optical trapping to quantum information. In the usual three-dimensional setting, the angular momentum quantum numbers of the photon are integers, in units of the Planck constant ħ. We show that, in reduced dimensions, photons can have a half-integer total angular momentum. We identify a new form of total angular momentum, carried by beams of light, comprising an unequal mixture of spin and orbital contributions. We demonstrate the half-integer quantization of this total angular momentum using noise measurements. We conclude that for light, as is known for electrons, reduced dimensionality allows new forms of quantization. PMID:28861467
Third Quantization and Quantum Universes
NASA Astrophysics Data System (ADS)
Kim, Sang Pyo
2014-01-01
We study the third quantization of the Friedmann-Robertson-Walker cosmology with N-minimal massless fields. The third quantized Hamiltonian for the Wheeler-DeWitt equation in the minisuperspace consists of an infinite number of intrinsic time-dependent, decoupled oscillators. The Hamiltonian has a pair of invariant operators for each universe with conserved momenta of the fields that play the role of the annihilation and creation operators and that construct various quantum states for the universe. The closed universe exhibits an interesting feature of transitions from stable states to tachyonic states depending on the conserved momenta of the fields. In the classically forbidden unstable regime, the quantum states have googolplex-growing position and conjugate momentum dispersions, which defy any measurement of the position of the universe.
Validation of a quantized-current source with 0.2 ppm uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Friederike; Fricke, Lukas, E-mail: lukas.fricke@ptb.de; Scherer, Hansjörg
2015-09-07
We report on high-accuracy measurements of quantized current, sourced by a tunable-barrier single-electron pump at frequencies f up to 1 GHz. The measurements were performed with an ultrastable picoammeter instrument, traceable to the Josephson and quantum Hall effects. Current quantization according to I = ef, with e being the elementary charge, was confirmed at f = 545 MHz with a total relative uncertainty of 0.2 ppm, improving the state of the art by about a factor of 5. The accuracy of a possible future quantum current standard based on single-electron transport was experimentally validated to be better than the best (indirect) realization of the ampere within the present SI.
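The quantization relation validated in this experiment is elementary: each pump cycle transfers exactly one electron, so the sourced current is the elementary charge times the drive frequency. A minimal arithmetic sketch (illustrative only; the operating point follows the abstract above):

```python
# Ideal single-electron pump: one electron is transferred per drive cycle,
# so the quantized current is I = e * f.

E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def pump_current(frequency_hz: float) -> float:
    """Current in amperes sourced by an ideal single-electron pump."""
    return E_CHARGE * frequency_hz

# At the 545 MHz operating point reported above, I is roughly 87.3 pA.
print(pump_current(545e6))
```

The pA-scale magnitude of this current is what makes an ultrastable, traceable picoammeter necessary for a 0.2 ppm validation.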
Quantum no-singularity theorem from geometric flows
NASA Astrophysics Data System (ADS)
Alsaleh, Salwa; Alasfar, Lina; Faizal, Mir; Ali, Ahmed Farag
2018-04-01
In this paper, we analyze the classical geometric flow as a dynamical system. We obtain an action for this system, such that its equation of motion is the Raychaudhuri equation. This action will be used to quantize this system. As the Raychaudhuri equation is the basis for deriving the singularity theorems, we will be able to understand the effects and such a quantization will have on the classical singularity theorems. Thus, quantizing the geometric flow, we can demonstrate that a quantum space-time is complete (nonsingular). This is because the existence of a conjugate point is a necessary condition for the occurrence of singularities, and we will be able to demonstrate that such conjugate points cannot occur due to such quantum effects.
Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction
NASA Astrophysics Data System (ADS)
Rosaler, Joshua
2018-03-01
This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum-classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit ħ → 0, and, on the other, a certain generalization of Ehrenfest's theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act—specifically, via restriction to narrow wave packet states. Here, we describe a certain geometrical re-formulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization, but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.
Factorial Comparison of Working Memory Models
van den Berg, Ronald; Awh, Edward; Ma, Wei Ji
2014-01-01
Three questions have been prominent in the study of visual working memory limitations: (a) What is the nature of mnemonic precision (e.g., quantized or continuous)? (b) How many items are remembered? (c) To what extent do spatial binding errors account for working memory failures? Modeling studies have typically focused on comparing possible answers to a single one of these questions, even though the result of such a comparison might depend on the assumed answers to the other two. Here, we consider every possible combination of previously proposed answers to the individual questions. Each model is then a point in a 3-factor model space containing a total of 32 models, of which only 6 have been tested previously. We compare all models on data from 10 delayed-estimation experiments from 6 laboratories (for a total of 164 subjects and 131,452 trials). Consistently across experiments, we find that (a) mnemonic precision is not quantized but continuous and not equal but variable across items and trials; (b) the number of remembered items is likely to be variable across trials, with a mean of 6.4 in the best model (median across subjects); (c) spatial binding errors occur but explain only a small fraction of responses (16.5% at set size 8 in the best model). We find strong evidence against all 6 documented models. Our results demonstrate the value of factorial model comparison in working memory. PMID:24490791
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, save, and disseminate them. Lossy compression is becoming more popular in such situations. But lossy compression has to be applied carefully, providing an acceptable level of introduced distortions so as not to lose valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze possibilities of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to compressing single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. One more innovation deals with the possibility of employing a limited number (percentage) of blocks for which the DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified on real-life RS data.
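The core dependence this approach exploits can be sketched directly: for an orthonormal transform, Parseval's relation makes the pixel-domain MSE equal to the quantization error of the DCT coefficients, so a small sample of blocks suffices to predict the output PSNR without running the full codec. The following is a simplified illustration, not the authors' predictor; the 8×8 block size, uniform quantizer, and sampling fraction are assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def predict_psnr(image, step, block=8, fraction=0.1, seed=0):
    """Predict output PSNR from the DCT-coefficient quantization error of a
    random subset of blocks; for an orthonormal transform the
    coefficient-domain MSE equals the pixel-domain MSE (Parseval)."""
    d = dct_matrix(block)
    h, w = image.shape
    blocks = [image[i:i + block, j:j + block]
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(blocks), max(1, int(fraction * len(blocks))),
                       replace=False)
    errs = []
    for idx in picks:
        c = d @ blocks[idx] @ d.T              # forward 2-D DCT
        cq = step * np.round(c / step)         # uniform quantization
        errs.append(np.mean((c - cq) ** 2))    # coefficient-domain block MSE
    mse = float(np.mean(errs))
    return 10 * np.log10(255.0 ** 2 / mse)     # PSNR for 8-bit dynamic range
```

With fraction=1.0 the prediction is exact for an orthonormal transform; smaller fractions trade accuracy for speed, which is the acceleration described in the abstract.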
Tomalia, Donald A; Khanna, Shiv N
2016-02-24
Development of a central paradigm is undoubtedly the single most influential force responsible for advancing Dalton's 19th century atomic/molecular chemistry concepts to the current maturity enjoyed by traditional chemistry. A similar central dogma for guiding and unifying nanoscience has been missing. This review traces the origins, evolution, and current status of such a critical nanoperiodic concept/framework for defining and unifying nanoscience. Based on parallel efforts and a mutual consensus now shared by both chemists and physicists, a nanoperiodic/systematic framework concept has emerged. This concept is based on the well-documented existence of discrete, nanoscale collections of traditional inorganic/organic atoms referred to as hard and soft superatoms (i.e., nanoelement categories). These nanometric entities are widely recognized to exhibit nanoscale atom mimicry features reminiscent of traditional picoscale atoms. All unique superatom/nanoelement physicochemical features are derived from quantized structural control defined by six critical nanoscale design parameters (CNDPs), namely, size, shape, surface chemistry, flexibility/rigidity, architecture, and elemental composition. These CNDPs determine all intrinsic superatom properties, their combining behavior to form stoichiometric nanocompounds/assemblies as well as to exhibit nanoperiodic properties leading to new nanoperiodic rules and predictive Mendeleev-like nanoperiodic tables, and they portend possible extension of these principles to larger quantized building blocks including meta-atoms.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Recently, much research has been done on obtaining an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we run a steepest descent procedure as a C++ program. We implemented it on an unconstrained optimization test problem with two variables and compared the numerical results of each step size procedure. Based on the numerical experiments, we draw conclusions about the general computational features and weaknesses of each procedure in each case of the problem.
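The procedure benchmarked above iterates x_{k+1} = x_k - α_k ∇f(x_k), with the step-size rule α_k as the pluggable component. A minimal sketch (in Python rather than the paper's C++, on a hypothetical two-variable quadratic test problem of our own choosing):

```python
import numpy as np

def steepest_descent(grad, x0, step_rule, tol=1e-8, max_iter=10000):
    """Minimize f by iterating x_{k+1} = x_k - alpha_k * grad(x_k);
    step_rule(x, g) supplies the step size alpha_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient vanishes
            break
        x = x - step_rule(x, g) * g
    return x

# Hypothetical test problem: f(x, y) = (x - 1)^2 + 4*(y + 2)^2,
# minimized at (1, -2); its Hessian is diag(2, 8).
grad = lambda x: np.array([2 * (x[0] - 1), 8 * (x[1] + 2)])

fixed_step = lambda x, g: 0.1   # constant step size
# Exact line search for this quadratic: alpha = (g.g) / (g.Hg)
exact_step = lambda x, g: (g @ g) / (g @ np.array([2 * g[0], 8 * g[1]]))
```

Both rules converge here; a fixed step must be small enough for the problem's curvature, while the exact rule adapts per iteration, which is the kind of trade-off between step-size procedures the review examines.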
Synthesis and characterization of surface-modified colloidal CdTe Quantum Dots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajh, T.; Micic, O.I.; Nozik, A.J.
1993-11-18
The controlled synthesis of quantized colloidal CdTe nanocrystals (in aqueous solutions) with narrow size distributions and stabilized against rapid oxidation was achieved by capping the quantum dot particles with 3-mercapto-1,2-propanediol. Nanocrystals (i.e., quantum dots) with mean diameters of 20, 25, 35, and 40 A were produced. Optical absorption spectra showed strong excitonic peaks at the smallest size; the absorption coefficient was shown to follow an inverse cube dependence on particle diameter, while the extinction coefficient per particle remained constant. The quantum yield for photoluminescence increased with decreasing particle size and reached 20% at 20 A. The valence band edges of the CdTe quantum dots were determined by pulse radiolysis experiments (hole injection from oxidizing radicals); the bandgaps were estimated from pulse radiolysis data (redox potentials of hole and electron injecting radicals) and from the optical spectra. The dependence of the CdTe bandgap on quantum dot size was found to be much weaker than predicted by the effective mass approximation; this result is consistent with recently published theoretical calculations by several groups. 36 refs., 5 figs., 1 tab.
Quantum state of the multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robles-Perez, Salvador; Gonzalez-Diaz, Pedro F.
2010-04-15
A third quantization formalism is applied to a simplified multiverse scenario. A well-defined quantum state of the multiverse is obtained which agrees with standard boundary condition proposals. These states are found to be squeezed, and related to accelerating universes: they share similar properties to those obtained previously by Grishchuk and Siderov. We also comment on related works that have criticized the third quantization approach.
Hadron Spectra, Decays and Scattering Properties Within Basis Light Front Quantization
NASA Astrophysics Data System (ADS)
Vary, James P.; Adhikari, Lekha; Chen, Guangyao; Jia, Shaoyang; Li, Meijian; Li, Yang; Maris, Pieter; Qian, Wenyang; Spence, John R.; Tang, Shuo; Tuchin, Kirill; Yu, Anji; Zhao, Xingbo
2018-07-01
We survey recent progress in calculating properties of the electron and hadrons within the basis light front quantization (BLFQ) approach. We include applications to electromagnetic and strong scattering processes in relativistic heavy ion collisions. We present an initial investigation into the glueball states by applying BLFQ with multigluon sectors, introducing future research possibilities on multi-quark and multi-gluon systems.
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
NASA Astrophysics Data System (ADS)
Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo
2010-01-01
With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a technique that allows large reductions in data storage and computational effort. One of the most recent VQ techniques to handle the poor estimation of vector centroids due to biased data from undersampling is the fuzzy declustering-based vector quantization (FDVQ) technique. In this paper, we are therefore motivated to propose a justification of the FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. The performance evaluation and comparison of the recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) are carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are very similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database using hypothesis t-tests. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.
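For reference, the LBG baseline named above is a generalized Lloyd iteration: alternate nearest-codeword assignment with centroid updates over the training vectors. A minimal sketch (random initialization rather than the original codebook-splitting procedure, and not the paper's implementation):

```python
import numpy as np

def lbg_codebook(data, size, iters=50, seed=0):
    """Train a VQ codebook with the generalized Lloyd (LBG-style) iteration:
    assign each vector to its nearest codeword, then move each codeword to
    the centroid of its cell."""
    rng = np.random.default_rng(seed)
    code = data[rng.choice(len(data), size, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None, :] - code[None, :, :], axis=2)
        nearest = dist.argmin(axis=1)
        for k in range(size):
            cell = data[nearest == k]
            if len(cell):               # leave empty cells untouched
                code[k] = cell.mean(axis=0)
    return code

def quantize(data, code):
    """Map each vector to the index of its nearest codeword."""
    dist = np.linalg.norm(data[:, None, :] - code[None, :, :], axis=2)
    return dist.argmin(axis=1)
```

In an HMM front end, the index stream produced by `quantize` is what serves as the discrete observation sequence.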
Quantized Self-Assembly of Discotic Rings in a Liquid Crystal Confined in Nanopores
NASA Astrophysics Data System (ADS)
Sentker, Kathrin; Zantop, Arne W.; Lippmann, Milena; Hofmann, Tommy; Seeck, Oliver H.; Kityk, Andriy V.; Yildirim, Arda; Schönhals, Andreas; Mazza, Marco G.; Huber, Patrick
2018-02-01
Disklike molecules with aromatic cores spontaneously stack up in linear columns with high, one-dimensional charge carrier mobilities along the columnar axes, making them prominent model systems for functional, self-organized matter. We show by high-resolution optical birefringence and synchrotron-based x-ray diffraction that confining a thermotropic discotic liquid crystal in cylindrical nanopores induces a quantized formation of annular layers consisting of concentric circular bent columns, unknown in the bulk state. Starting from the walls, this ring self-assembly propagates layer by layer towards the pore center in the supercooled domain of the bulk isotropic-columnar transition and thus allows one to reversibly switch single, nanosized rings on and off through small temperature variations. By establishing a Gibbs free energy phase diagram, we trace the phase transition quantization to the discreteness of the layers' excess bend deformation energies in comparison to the thermal energy, even for this near-room-temperature system. Monte Carlo simulations yielding spatially resolved nematic order parameters, density maps, and bond-orientational order parameters corroborate the universality and robustness of the confinement-induced columnar ring formation as well as its quantized nature.
Generalized centripetal force law and quantization of motion constrained on 2D surfaces
NASA Astrophysics Data System (ADS)
Liu, Q. H.; Zhang, J.; Lian, D. K.; Hu, L. D.; Li, Z.
2017-03-01
For a particle of mass μ moving on a 2D surface f(x) = 0 embedded in the 3D Euclidean space of coordinates x, it is an open and controversial problem whether Dirac's canonical quantization scheme for the constrained motion allows for the geometric potential that has been experimentally confirmed. We note that Dirac's scheme hypothesizes that the symmetries indicated by classical brackets among the positions x, the momenta p, and the Hamiltonian H_c remain in quantum mechanics, i.e., the Dirac brackets [x, H_c]_D and [p, H_c]_D hold true after quantization, in addition to the fundamental ones [x, x]_D, [x, p]_D, and [p, p]_D. This set of hypotheses implies that the Hamiltonian operator is simultaneously determined during the quantization. The quantum mechanical relations corresponding to the classical mechanical ones p/μ = [x, H_c]_D directly give the geometric momenta. The time t derivative of the momenta, ṗ = [p, H_c]_D, in classical mechanics is in fact the generalized centripetal force law for a particle on the 2D surface, which in quantum mechanics permits both the geometric momenta and the geometric potential.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δⁿ(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.
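The mechanism underlying stochastic quantization, a Langevin equation whose stochastic equilibrium reproduces the path-integral measure, can be illustrated with a zero-dimensional toy action (not the constrained O(N) model treated above). For the assumed free action S(φ) = φ²/2, the equilibrium distribution ∝ exp(-S) is a standard Gaussian, so long-run samples should have unit variance:

```python
import numpy as np

def langevin_sample(dS, n_steps=200_000, dt=0.01, seed=0):
    """Euler-Maruyama integration of the Langevin equation
    dphi/dtau = -dS/dphi + eta,  <eta(t) eta(t')> = 2 delta(t - t'),
    whose stochastic equilibrium distribution is proportional to exp(-S)."""
    rng = np.random.default_rng(seed)
    kicks = rng.standard_normal(n_steps) * np.sqrt(2 * dt)
    phi, out = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        phi += -dS(phi) * dt + kicks[i]   # drift down the action + noise
        out[i] = phi
    return out

# Toy free action S(phi) = phi**2 / 2, so dS/dphi = phi; the equilibrium is
# a standard Gaussian and the long-run variance should be close to 1.
samples = langevin_sample(lambda p: p)
```

Expectation values in the quantized theory correspond to averages over the fictitious-time trajectory once it has equilibrated, which is why the treatment of the equilibrium (Fokker-Planck) state is central in the paper above.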
TBA-like integral equations from quantized mirror curves
NASA Astrophysics Data System (ADS)
Okuyama, Kazumi; Zakany, Szabolcs
2016-03-01
Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.
On two mathematical problems of canonical quantization. IV
NASA Astrophysics Data System (ADS)
Kirillov, A. I.
1992-11-01
A method is presented for solving the problem of reconstructing a measure starting from its logarithmic derivative. The method completes that of solving the stochastic differential equation via Dirichlet forms proposed by S. Albeverio and M. Rockner. As a result one obtains the mathematical apparatus for stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ^{2n}/(1 + K²φ^{2n}) models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables us to find Nelson's stochastic trajectories without determining the wave functions.
In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle a
Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2016-01-01
Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements coupling motor impulse to the myosin filament backbone, providing transduction/mechanical-coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a "second characterization" is step-frequency, which adjusts a longer step-size to a lower frequency, maintaining a linear actin velocity identical to that from a shorter step-size and higher frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease or aging relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo, combining single myosin mechanical and whole muscle physiological characterizations in one model organism. The Qdot and Z assays cover "bottom-up" and "top-down" assaying of myosin characteristics. PMID:26728749
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
Electromagnetic properties of proximity systems
NASA Astrophysics Data System (ADS)
Kresin, Vladimir Z.
1985-07-01
Magnetic screening in the proximity system Sα-Mβ, where Mβ is a normal metal N, semiconductor (semimetal), or a superconductor, is studied. Main attention is paid to the low-temperature region where nonlocality plays an important role. The thermodynamic Green's-function method is employed in order to describe the behavior of the proximity system in an external field. The temperature and thickness dependences of the penetration depth λ are obtained. The dependence λ(T) differs in a striking way from the dependence in usual superconductors. The strong-coupling effect is taken into account. A special case of screening in a superconducting film backed by a size-quantizing semimetal film is considered. The results obtained are in good agreement with experimental data.
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
A review on preparation of silver nano-particles
NASA Astrophysics Data System (ADS)
Haider, Adawiya J.; Haider, Mohammad J.; Mehde, Mohammad S.
2018-05-01
The term "nanoparticle" (NP) refers to a particle whose diameter is measured in nanometers. Nanoparticles contain a small number of constituent atoms or molecules and differ in their properties from their bulk counterparts; they are found in various forms such as spherical, triangular, cubic, pentagonal, rod-shaped, shells, elliptical, and so on. In this chapter, the theoretical concepts of the preparation of AgNPs as powders and colloidal nanoparticles are presented, along with techniques of preparation and their characterization (morphology, charge sign and potential value, particle distribution, etc.). Also included are unique properties of AgNPs that differ from those of the bulk material, such as the high surface-area-to-volume-ratio effect and the quantization of electronic and vibrational properties.
Hydrodynamic Electron Flow and Hall Viscosity
NASA Astrophysics Data System (ADS)
Scaffidi, Thomas; Nandi, Nabhanila; Schmidt, Burkhard; Mackenzie, Andrew P.; Moore, Joel E.
2017-06-01
In metallic samples of small enough size and sufficiently strong momentum-conserving scattering, the viscosity of the electron gas can become the dominant process governing transport. In this regime, momentum is a long-lived quantity whose evolution is described by an emergent hydrodynamical theory. Furthermore, breaking time-reversal symmetry leads to the appearance of an odd component to the viscosity called the Hall viscosity, which has attracted considerable attention recently due to its quantized nature in gapped systems but still eludes experimental confirmation. Based on microscopic calculations, we discuss how to measure the effects of both the even and odd components of the viscosity using hydrodynamic electronic transport in mesoscopic samples under applied magnetic fields.
NASA Astrophysics Data System (ADS)
Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.
2016-12-01
Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have enabled the conversion of satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing the joint entropy, minimizing the total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method is able to determine more efficient optimal networks by avoiding redundant stations between which hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the selection of the quantization method should be considered carefully, because the rankings and optimal networks change accordingly.
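The entropy objectives in this design method reduce, after quantizing each series into discrete levels, to plug-in estimates over bin counts. A minimal sketch of joint entropy and total correlation under uniform-width quantization (the bin count and the plug-in estimator are assumptions; this is not the study's code):

```python
import numpy as np
from collections import Counter

def joint_entropy(series, bins=10):
    """Joint Shannon entropy (bits) of one or more series after uniform-width
    quantization of each series into `bins` levels."""
    edges = [np.linspace(np.min(s), np.max(s), bins + 1)[1:-1] for s in series]
    labels = [np.digitize(s, e) for s, e in zip(series, edges)]
    counts = Counter(zip(*labels))          # joint histogram over label tuples
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def total_correlation(series, bins=10):
    """Redundancy among stations: sum of marginal entropies minus the joint
    entropy; near zero for independent series, large for duplicated ones."""
    return (sum(joint_entropy([s], bins=bins) for s in series)
            - joint_entropy(series, bins=bins))
```

Maximizing the joint entropy of a candidate station set while minimizing its total correlation then selects stations that are individually informative but mutually non-redundant, and changing the quantization (the bin edges) changes both objectives, which is the sensitivity the study assesses.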
NASA Astrophysics Data System (ADS)
Keum, J.; Coulibaly, P. D.
2017-12-01
Obtaining quality hydrologic observations is the first step toward successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has not diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are increasingly important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline aims not at optimum network design but at avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method determines more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be selected carefully because the rankings and optimal networks are subject to change accordingly.
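The entropy objectives named above can be sketched for hypothetical discretized gauge records (a minimal illustration, not the authors' design system; the series, bin count, and noise levels are invented):

```python
# Sketch (not the authors' code) of the joint-entropy and total-correlation
# measures used in information-theoretic network design, applied to
# hypothetical discretized gauge records.
import numpy as np

def entropy(*series, bins=4):
    """Joint Shannon entropy (bits) of one or more discretized series."""
    joint, _ = np.histogramdd(np.column_stack(series), bins=bins)
    p = joint.ravel() / joint.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(series_list, bins=4):
    """Redundancy: sum of marginal entropies minus the joint entropy."""
    marginals = sum(entropy(s, bins=bins) for s in series_list)
    return marginals - entropy(*series_list, bins=bins)

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = a + 0.1 * rng.normal(size=1000)   # nearly redundant with station a
c = rng.normal(size=1000)             # independent of station a

# A redundant pair of stations carries less new information than an
# independent pair, so a design maximizing joint entropy while minimizing
# total correlation would prefer (a, c) over (a, b).
assert total_correlation([a, b]) > total_correlation([a, c])
```

The quantization cases mentioned in the abstract correspond to different choices of `bins` (or other discretization rules), which change the estimated entropies and hence the station rankings.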
The electronic structure of Au25 clusters: between discrete and continuous.
Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E; Kumar, Challa S S R; Losovyj, Yaroslav
2016-08-21
Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal increases hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.
The Analysis of Design of Robust Nonlinear Estimators and Robust Signal Coding Schemes.
1982-09-16
[Abstract not recoverable: OCR fragments. Recoverable content concerns the mean-square error of uniform versus nonuniform quantizers and median filtering of windowed signals.]
NASA Astrophysics Data System (ADS)
Shifman, M.; Yung, A.
2018-03-01
Non-Abelian strings are considered in non-supersymmetric theories with fermions in various appropriate representations of the gauge group U(N). We derive the electric charge quantization conditions and the index theorems counting fermion zero modes in the string background both for the left-handed and right-handed fermions. In both cases we observe a non-trivial N dependence.
NASA Astrophysics Data System (ADS)
Myrheim, J.
Contents
1 Introduction: 1.1 The concept of particle statistics; 1.2 Statistical mechanics and the many-body problem; 1.3 Experimental physics in two dimensions; 1.4 The algebraic approach: Heisenberg quantization; 1.5 More general quantizations
2 The configuration space: 2.1 The Euclidean relative space for two particles; 2.2 Dimensions d=1,2,3; 2.3 Homotopy; 2.4 The braid group
3 Schroedinger quantization in one dimension
4 Heisenberg quantization in one dimension: 4.1 The coordinate representation
5 Schroedinger quantization in dimension d ≥ 2: 5.1 Scalar wave functions; 5.2 Homotopy; 5.3 Interchange phases; 5.4 The statistics vector potential; 5.5 The N-particle case; 5.6 Chern-Simons theory
6 The Feynman path integral for anyons: 6.1 Eigenstates for position and momentum; 6.2 The path integral; 6.3 Conjugation classes in S_N; 6.4 The non-interacting case; 6.5 Duality of Feynman and Schroedinger quantization
7 The harmonic oscillator: 7.1 The two-dimensional harmonic oscillator; 7.2 Two anyons in a harmonic oscillator potential; 7.3 More than two anyons; 7.4 The three-anyon problem
8 The anyon gas: 8.1 The cluster and virial expansions; 8.2 First and second order perturbative results; 8.3 Regularization by periodic boundary conditions; 8.4 Regularization by a harmonic oscillator potential; 8.5 Bosons and fermions; 8.6 Two anyons; 8.7 Three anyons; 8.8 The Monte Carlo method; 8.9 The path integral representation of the coefficients G_P; 8.10 Exact and approximate polynomials; 8.11 The fourth virial coefficient of anyons; 8.12 Two polynomial theorems
9 Charged particles in a constant magnetic field: 9.1 One particle in a magnetic field; 9.2 Two anyons in a magnetic field; 9.3 The anyon gas in a magnetic field
10 Interchange phases and geometric phases: 10.1 Introduction to geometric phases; 10.2 One particle in a magnetic field; 10.3 Two particles in a magnetic field; 10.4 Interchange of two anyons in potential wells; 10.5 Laughlin's theory of the fractional quantum Hall effect
Quantization of Simple Parametrized Systems
NASA Astrophysics Data System (ADS)
Ruffini, Giulio
1995-01-01
I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism I first analyze for each case the construction of the classical reduced phase space. There are two separate features of these systems that may make this construction difficult: (a) Because of the boundary conditions used, the actions are not gauge invariant at the boundaries. (b) The constraints may have a disconnected solution space. The relativistic particle and minisuperspace have such complicated constraints, while the non-relativistic particle displays only the first feature. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with the first feature above. Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. In the cases of the relativistic particle and minisuperspace I consider first the quantization of one branch of the constraint at a time and then discuss the backgrounds in which it is possible to quantize both branches simultaneously. I motivate and define the inner product, and obtain, for example, the Klein-Gordon inner product for the relativistic case. Then I show how to construct phase space path integral representations for amplitudes in these approaches--the Batalin-Fradkin-Vilkovisky (BFV) and the Faddeev path integrals--from which one can then derive the path integrals in coordinate space--the Faddeev-Popov path integral and the geometric path integral. In particular I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals.
I also examine the class of paths that contribute in the path integrals and how they affect space-time covariance, concluding that it is consistent to take paths that move forward in time only when there is no electric field. The key elements in this analysis are the space-like paths and the behavior of the action under the non-trivial ( Z_2) element of the reparametrization group.
Can The Periods of Some Extra-Solar Planetary Systems be Quantized?
NASA Astrophysics Data System (ADS)
El Fady Morcos, Abd
A simple formula was derived previously by Morcos (2013) to relate the quantum numbers of planetary systems to their periods. The formula works well for the solar-system planets and for some extra-solar planets orbiting stars of approximately one solar mass. It has been used here to estimate the periods of some extra-solar planets of known quantum numbers; the quantum numbers used were calculated previously by other authors. A comparison between the observed periods and those estimated from the formula has been carried out, and the differences for the extra-solar systems have been calculated and tabulated. The error is found to be in the range of 10%. The same formula has also been used to find the quantum numbers of some exo-planets of known period. Keywords: Quantization; Periods; Extra-Planetary; Extra-Solar Planet. REFERENCES [1] Agnese, A. G. and Festa, R., "Discretization on the Cosmic Scale Inspired from the Old Quantum Mechanics," 1998. http://arxiv.org/abs/astro-ph/9807186 [2] Agnese, A. G. and Festa, R., "Discretizing ups-Andromedae Planetary System," 1999. http://arxiv.org/abs/astro-ph/9910534 [3] Barnothy, J. M., "The Stability of the Solar System and of Small Stellar Systems," Proceedings of IAU Symposium 62, Warsaw, 5-8 September 1973, pp. 23-31. [4] Morcos, A. B., "Confrontation between Quantized Periods of Some Extra-Solar Planetary Systems and Observations," International Journal of Astronomy and Astrophysics, Vol. 3, 2013, pp. 28-32. [5] Nottale, L., "Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity," World Scientific, London, 1994. [6] Nottale, L., "Scale-Relativity and Quantization of Extra-Solar Planetary Systems," Astronomy & Astrophysics, Vol. 315, 1996, pp. L9-L12. [7] Nottale, L., Schumacher, G. and Gay, J., "Scale-Relativity and Quantization of the Solar Systems," Astronomy & Astrophysics, Vol. 322, 1997, pp. 1018-10 [8] Nottale, L., "Scale-Relativity and Quantization of Exo-planet Orbital Semi-Major Axes," Astronomy & Astrophysics, Vol. 361, 2000, pp. 379-387.
NASA Astrophysics Data System (ADS)
Zender, Charles S.
2016-09-01
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. 
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
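The alternating shave/set operation described above can be sketched for IEEE-754 float32 values (an illustration of the idea, not the NCO implementation; the digits-to-bits mapping and mask layout are assumptions):

```python
# Illustrative sketch of the Bit Grooming idea: alternately shave (zero) and
# set (one) the insignificant mantissa bits of consecutive float32 values,
# preserving `nsd` significant decimal digits. Not the NCO implementation.
import math
import numpy as np

def bit_groom(values, nsd):
    bits_needed = math.ceil(nsd * math.log2(10))   # mantissa bits for nsd digits
    drop = max(0, 23 - bits_needed)                # float32 has 23 explicit bits
    u = values.astype(np.float32).view(np.uint32)
    shave_mask = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    set_mask = np.uint32((1 << drop) - 1)
    out = u.copy()
    out[0::2] &= shave_mask   # even elements: shave low bits toward zero
    out[1::2] |= set_mask     # odd elements: set low bits away from zero
    return out.view(np.float32)

x = np.array([3.141592653589793] * 4, dtype=np.float32)
g = bit_groom(x, nsd=3)
# Every groomed value still agrees with pi to ~3 significant digits, and the
# shave/set alternation straddles the original value, removing the low bias
# of pure Bit Shaving.
assert np.all(np.abs(g - x) / np.abs(x) < 1e-3)
assert g[0] <= x[0] <= g[1]
```

The groomed low-order bits are then highly compressible by DEFLATE, which is where the storage savings reported above come from.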
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L²-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper, which specifies the Hilbert space as L²(ℝⁿ), the Heisenberg rule [p_i, q_j] = -iℏδ_ij with p = -iℏ∇, the free Hamiltonian H = -ℏ²Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related to the quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle).
Moreover spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
Unique Fock quantization of scalar cosmological perturbations
NASA Astrophysics Data System (ADS)
Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier; Velhinho, José M.
2012-05-01
We investigate the ambiguities in the Fock quantization of the scalar perturbations of a Friedmann-Lemaître-Robertson-Walker model with a massive scalar field as matter content. We consider the case of compact spatial sections (thus avoiding infrared divergences), with the topology of a three-sphere. After expanding the perturbations in series of eigenfunctions of the Laplace-Beltrami operator, the Hamiltonian of the system is written up to quadratic order in them. We fix the gauge of the local degrees of freedom in two different ways, reaching in both cases the same qualitative results. A canonical transformation, which includes the scaling of the matter-field perturbations by the scale factor of the geometry, is performed in order to arrive at a convenient formulation of the system. We then study the quantization of these perturbations in the classical background determined by the homogeneous variables. Based on previous work, we introduce a Fock representation for the perturbations in which: (a) the complex structure is invariant under the isometries of the spatial sections and (b) the field dynamics is implemented as a unitary operator. These two properties select not only a unique unitary equivalence class of representations, but also a preferred field description, picking up a canonical pair of field variables among all those that can be obtained by means of a time-dependent scaling of the matter field (completed into a linear canonical transformation). Finally, we present an equivalent quantization constructed in terms of gauge-invariant quantities. We prove that this quantization can be attained by a mode-by-mode time-dependent linear canonical transformation which admits a unitary implementation, so that it is also uniquely determined.
Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin
2017-01-01
Myosin motors in cardiac ventriculum convert ATP free energy to the work of moving blood volume under pressure. The actin bound motor cyclically rotates its lever-arm/light-chain complex linking motor generated torque to the myosin filament backbone and translating actin against resisting force. Previous research showed that the unloaded in vitro motor is described with high precision by single molecule mechanical characteristics including unitary step-sizes of approximately 3, 5, and 8 nm and their relative step-frequencies of approximately 13, 50, and 37%. The 3 and 8 nm unitary step-sizes are dependent on myosin essential light chain (ELC) N-terminus actin binding. Step-size and step-frequency quantitation specifies in vitro motor function including duty-ratio, power, and strain sensitivity metrics. In vivo, motors integrated into the muscle sarcomere form the more complex and hierarchically functioning muscle machine. The goal of the research reported here is to measure single myosin step-size and step-frequency in vivo to assess how tissue integration impacts motor function. A photoactivatable GFP tags the ventriculum myosin lever-arm/light-chain complex in the beating heart of a live zebrafish embryo. Detected single GFP emission reports time-resolved myosin lever-arm orientation interpreted as step-size and step-frequency providing single myosin mechanical characteristics over the active cycle. Following step-frequency of cardiac ventriculum myosin transitioning from low to high force in relaxed to auxotonic to isometric contraction phases indicates that the imposition of resisting force during contraction causes the motor to down-shift to the 3 nm step-size accounting for >80% of all the steps in the near-isometric phase. At peak force, the ATP initiated actomyosin dissociation is the predominant strain inhibited transition in the native myosin contraction cycle. 
The proposed model for motor down-shifting and strain sensing involves ELC N-terminus actin binding. Overall, the approach is a unique bottom-up single molecule mechanical characterization of a hierarchically functional native muscle myosin. PMID:28423017
Effect of signal intensity and camera quantization on laser speckle contrast analysis
Song, Lipei; Elson, Daniel S.
2012-01-01
Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion, as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values, due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate for the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
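The quantization effect on measured contrast can be illustrated with simulated fully developed speckle (hypothetical ADC parameters; this is not the paper's PDF-based correction formula):

```python
# Sketch of the quantization bias in speckle contrast (illustrative only):
# fully developed speckle has a negative-exponential intensity PDF, so its
# contrast K = sigma/mean = 1; coarse ADC quantization at low signal level
# biases the measured K.
import numpy as np

rng = np.random.default_rng(1)
speckle = rng.exponential(scale=1.0, size=200_000)  # negative-exponential PDF

def contrast(intensity):
    return intensity.std() / intensity.mean()

def quantize(intensity, mean_counts, levels=256):
    """Scale to a mean digital level, then round to integer ADC counts."""
    return np.clip(np.round(intensity * mean_counts), 0, levels - 1)

k_true = contrast(speckle)                  # ~1 for fully developed speckle
k_bright = contrast(quantize(speckle, 50))  # many counts: small bias
k_dim = contrast(quantize(speckle, 2))      # few counts: larger bias
assert abs(k_true - 1.0) < 0.02
assert abs(k_bright - 1.0) < abs(k_dim - 1.0)
```

The paper's contribution is an analytical form of this intensity-to-contrast relationship, which allows the bias to be corrected rather than merely observed.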
Symmetries and Invariants of Twisted Quantum Algebras and Associated Poisson Algebras
NASA Astrophysics Data System (ADS)
Molev, A. I.; Ragoucy, E.
We construct an action of the braid group B_N on the twisted quantized enveloping algebra U_q'(o_N) where the elements of B_N act as automorphisms. In the classical limit q → 1, we recover the action of B_N on the polynomial functions on the space of upper triangular matrices with ones on the diagonal. The action preserves the Poisson bracket on the space of polynomials which was introduced by Nelson and Regge in their study of quantum gravity and rediscovered in the mathematical literature. Furthermore, we construct a Poisson bracket on the space of polynomials associated with another twisted quantized enveloping algebra U_q'(sp_{2n}). We use the Casimir elements of both twisted quantized enveloping algebras to reproduce and construct some well-known and new polynomial invariants of the corresponding Poisson algebras.
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs), with the aid of interval-parameter systems established using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control, with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate the theoretical results.
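A logarithmic quantizer of the sector-bounded kind commonly used in quantized control can be sketched as follows (illustrative parameters and an infinite-level idealization, not the paper's controller; real devices truncate the level set and add a dead zone):

```python
# Sketch of a sector-bounded logarithmic quantizer: levels u_i = u0 * rho**i
# with density rho in (0,1) and delta = (1-rho)/(1+rho), guaranteeing
# |q(x) - x| <= delta * |x| for all x. Parameters are illustrative.
import math

def log_quantize(x, rho=0.5, u0=1.0):
    if x == 0.0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    a = abs(x)
    # unique integer i with u0 * rho**i in [a*(1-delta), a*(1+delta))
    i = math.floor(math.log(a * (1 + delta) / u0) / math.log(rho)) + 1
    return math.copysign(u0 * rho**i, x)

rho = 0.5
delta = (1 - rho) / (1 + rho)
for x in [0.03, 0.7, 1.0, 5.2, -3.9]:
    q = log_quantize(x, rho)
    # sector bound: quantization error stays proportional to the signal,
    # which is what makes Lyapunov-based synchronization proofs go through
    assert abs(q - x) <= delta * abs(x) + 1e-12
```

The coarseness parameter rho trades communication bandwidth against the quantization error the controller must tolerate.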
Measurement analysis and quantum gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albers, Mark; Kiefer, Claus; Reginatto, Marcel
2008-09-15
We consider the question of whether consistency arguments based on measurement theory show that the gravitational field must be quantized. Motivated by the argument of Eppley and Hannah, we apply a DeWitt-type measurement analysis to a coupled system that consists of a gravitational wave interacting with a mass cube. We also review the arguments of Eppley and Hannah and of DeWitt, and investigate a second model in which a gravitational wave interacts with a quantized scalar field. We argue that one cannot conclude from the existing gedanken experiments that gravity has to be quantized. Despite the many physical arguments which speak in favor of a quantum theory of gravity, it appears that the justification for such a theory must be based on empirical tests and does not follow from logical arguments alone.
Quantized Faraday and Kerr rotation and axion electrodynamics of a 3D topological insulator
NASA Astrophysics Data System (ADS)
Wu, Liang; Salehi, M.; Koirala, N.; Moon, J.; Oh, S.; Armitage, N. P.
2016-12-01
Topological insulators have been proposed to be best characterized as bulk magnetoelectric materials that show response functions quantized in terms of fundamental physical constants. Here, we lower the chemical potential of three-dimensional (3D) Bi2Se3 films to ~30 meV above the Dirac point and probe their low-energy electrodynamic response in the presence of magnetic fields with high-precision time-domain terahertz polarimetry. For fields higher than 5 tesla, we observed quantized Faraday and Kerr rotations, whereas the dc transport is still semiclassical. A nontrivial Berry’s phase offset to these values gives evidence for axion electrodynamics and the topological magnetoelectric effect. The time structure used in these measurements allows a direct measure of the fine-structure constant based on a topological invariant of a solid-state system.
Mass quantization of the Schwarzschild black hole
NASA Astrophysics Data System (ADS)
Vaz, Cenalo; Witten, Louis
1999-07-01
We examine the Wheeler-DeWitt equation for a static, eternal Schwarzschild black hole in Kuchař-Brown variables and obtain its energy eigenstates. Consistent solutions vanish in the exterior of the Kruskal manifold and are nonvanishing only in the interior. The system is reminiscent of a particle in a box. States of definite parity avoid the singular geometry by vanishing at the origin. These definite-parity states admit a discrete energy spectrum, depending on one quantum number which determines the Arnowitt-Deser-Misner mass of the black hole according to a relation conjectured long ago by Bekenstein, M ~ √n M_P. If attention is restricted only to these quantized energy states, a black hole is described not only by its mass but also by its parity. States of indefinite parity do not admit a quantized mass spectrum.
Polymer quantization of the Einstein-Rosen wormhole throat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunstatter, Gabor; Peltola, Ari; Louko, Jorma
2010-01-15
We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schroedinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.
Quantized conductance operation near a single-atom point contact in a polymer-based atomic switch
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Muruganathan, Manoharan; Tsuruoka, Tohru; Mizuta, Hiroshi; Aono, Masakazu
2017-06-01
Highly controlled conductance quantization is achieved near a single-atom point contact in a redox-based atomic switch device, in which a poly(ethylene oxide) (PEO) film is sandwiched between Ag and Pt electrodes. Current-voltage measurements revealed reproducible quantized conductance of ~1 G0 for more than 10^2 continuous voltage sweep cycles under a specific condition, indicating the formation of a well-defined single-atom point contact of Ag in the PEO matrix. The device exhibited a conductance state distribution centered at 1 G0, with distinct half-integer multiples of G0 and small fractional variations. First-principles density functional theory simulations showed that the experimental observations could be explained by the existence of a tunneling gap and the structural rearrangement of an atomic point contact.
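As a quick numerical aside (not from the paper), the conductance quantum against which these plateaus are measured follows directly from the exact SI-defined constants:

```python
# The conductance quantum G0 = 2e^2/h, the unit in which the single-atom
# point-contact plateaus above are expressed.
e_charge = 1.602176634e-19   # elementary charge, C (exact in SI)
h_planck = 6.62607015e-34    # Planck constant, J*s (exact in SI)

G0 = 2 * e_charge**2 / h_planck   # ~7.748e-5 S
R0 = 1.0 / G0                     # ~12.9 kOhm per single-atom contact

assert abs(G0 - 7.748091729e-5) < 1e-12
assert abs(R0 - 12906.40372) < 1e-3
```

A fully open single-atom contact thus presents a resistance of about 12.9 kOhm, and the half-integer states reported above sit near multiples of G0/2.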
Quantization of the nonlinear sigma model revisited
NASA Astrophysics Data System (ADS)
Nguyen, Timothy
2016-08-01
We revisit the subject of perturbatively quantizing the nonlinear sigma model in two dimensions from a rigorous, mathematical point of view. Our main contribution is to make precise the cohomological problem of eliminating potential anomalies that may arise when trying to preserve symmetries under quantization. The symmetries we consider are twofold: (i) diffeomorphism covariance for a general target manifold; (ii) a transitive group of isometries when the target manifold is a homogeneous space. We show that there are no anomalies in case (i) and that (ii) is also anomaly-free under additional assumptions on the target homogeneous space, in agreement with the work of Friedan. We carry out some explicit computations for the O(N)-model. Finally, we show how a suitable notion of the renormalization group establishes the Ricci flow as the one loop renormalization group flow of the nonlinear sigma model.
Novel properties of the q-analogue quantized radiation field
NASA Technical Reports Server (NTRS)
Nelson, Charles A.
1993-01-01
The 'classical limit' of the q-analog quantized radiation field is studied paralleling conventional quantum optics analyses. The q-generalizations of the phase operator of Susskind and Glogower and that of Pegg and Barnett are constructed. Both generalizations and their associated number-phase uncertainty relations are manifestly q-independent in the |n⟩_q number basis. However, in the q-coherent state |z⟩_q basis, the variance of the generic electric field, (ΔE)², is found to be increased by a factor λ(z), where λ(z) > 1 if q ≠ 1. At large amplitudes, the amplitude itself would be quantized if the available resolution of unity for the q-analog coherent states is accepted in the formulation. These consequences are remarkable compared with the conventional q = 1 limit.
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have been proposed to accelerate convergence when identifying sparse impulse responses. When the excitation signal is colored, as speech is, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more of the information in the input signals; however, its steady-state performance is limited by its constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This yields fast convergence with a large step size when the identification error is large, and then considerably decreases the steady-state misalignment with a small step size after the adaptive filter has converged. Simulation results show that the proposed approach greatly improves the steady-state misalignment without sacrificing the fast convergence of PAPA.
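The proportionate idea can be sketched with a simplified proportionate NLMS (i.e., affine projection of order one; the paper's variable-step PAPA is more elaborate, and all signals and parameters here are synthetic):

```python
# Sketch of a proportionate NLMS adaptive filter for sparse system
# identification (a simplified stand-in for PAPA, not the paper's algorithm):
# per-coefficient gains proportional to |w| concentrate adaptation on the
# few active taps of a sparse impulse response.
import numpy as np

def proportionate_nlms(x, d, taps, mu=0.5, rho=0.01, eps=1e-6):
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]       # regressor, most recent first
        e = d[n] - w @ xn                      # a priori estimation error
        g = np.maximum(np.abs(w), rho * max(np.max(np.abs(w)), eps))
        g /= g.sum()                           # proportionate gain vector
        w += mu * e * g * xn / (xn @ (g * xn) + eps)
    return w

rng = np.random.default_rng(0)
h = np.zeros(64)
h[[5, 20, 40]] = [1.0, -0.5, 0.3]             # sparse impulse response
x = rng.normal(size=20_000)                   # white excitation (noiseless case)
d = np.convolve(x, h)[:len(x)]
w = proportionate_nlms(x, d, taps=64)
assert np.linalg.norm(w - h) / np.linalg.norm(h) < 0.05
```

A variable step size of the kind proposed in the article would replace the fixed `mu` with a value driven toward canceling the a posteriori error, large early and small near convergence.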
Closed almost-periodic orbits in semiclassical quantization of generic polygons
Biswas
2000-05-01
Periodic orbits are the central ingredients of modern semiclassical theories and corrections to these are generally nonclassical in origin. We show here that, for the class of generic polygonal billiards, the corrections are predominantly classical in origin owing to the contributions from closed almost-periodic (CAP) orbit families. Furthermore, CAP orbit families outnumber periodic families but have comparable weights. They are hence indispensable for semiclassical quantization.
Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Tang, Xu-Bing
For the quantized LC electric circuit, when taking the Joule thermal effect into account, we argue that physical observables should be evaluated in the context of ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. Fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory in which vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure analogous to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results agree with those found in the literature using the Dirac method.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
Selecting Representative Points in Normal Populations.
1983-01-14
An early paper on quantization is by Steinhaus [1956], who demonstrates two necessary (but not sufficient) conditions for an optimal quantization. These two necessary conditions suggest an iterative procedure for selecting representative points. Reference: Steinhaus, H. (1956). Sur la division des corps matériels en parties. Bulletin de l'Académie Polonaise des Sciences, Cl. III.
Information preserving coding for multispectral data
NASA Technical Reports Server (NTRS)
Duan, J. R.; Wintz, P. A.
1973-01-01
A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels by reserving two codewords in the codebook to perform a folding over in quantization is implemented for error free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
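As a hedged illustration, here is a minimal DPCM loop (previous-sample predictor, uniform quantizer). The folding mechanism described above, which reserves two codewords to expand the quantization levels on overload, is noted in a comment but not implemented, since its exact codebook layout is not specified here.

```python
import numpy as np

def dpcm(signal, step=0.1):
    """Minimal DPCM sketch: predict each sample by the previous
    reconstruction and quantize the prediction error uniformly.
    (The paper's 'folding' extension would reserve two codewords to
    signal an instantaneous expansion of the quantization levels.)"""
    pred = 0.0
    recon = []
    for s in signal:
        q = step * round((s - pred) / step)   # quantized prediction error
        pred += q                             # decoder-trackable reconstruction
        recon.append(pred)
    return np.array(recon)
```

With an unbounded quantizer the reconstruction error stays within step/2 per sample; a fixed finite codebook would additionally need overload handling, which is the problem folding addresses.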
Particle localization, spinor two-valuedness, and Fermi quantization of tensor systems
NASA Technical Reports Server (NTRS)
Reifler, Frank; Morris, Randall
1994-01-01
Recent studies of particle localization show that square-integrable positive energy bispinor fields in a Minkowski space-time cannot be physically distinguished from constrained tensor fields. In this paper we generalize this result by characterizing all classical tensor systems that admit Fermi quantization as those having unitary Lie-Poisson brackets. Examples include Euler's tensor equation for a rigid body and Dirac's equation in tensor form.
Field quantization and squeezed states generation in resonators with time-dependent parameters
NASA Technical Reports Server (NTRS)
Dodonov, V. V.; Klimov, A. B.; Nikonov, D. E.
1992-01-01
The problem of electromagnetic field quantization is usually considered in textbooks under the assumption that the field occupies some empty box. The case when a nonuniform time-dependent dielectric medium is confined in some space region with time-dependent boundaries is studied. The basis of the subsequent consideration is the system of Maxwell's equations in linear passive time-dependent dielectric and magnetic medium without sources.
Bohr-Sommerfeld quantization condition for Dirac states derived from an Ermakov-type invariant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thylwe, Karl-Erik; McCabe, Patrick
2013-05-15
It is shown that solutions of the second-order decoupled radial Dirac equations satisfy Ermakov-type invariants. These invariants lead to amplitude-phase-type representations of the radial spinor solutions, with exact relations between their amplitudes and phases. Implications leading to a Bohr-Sommerfeld quantization condition for bound states, and a few particular atomic/ionic and nuclear/hadronic bound-state situations are discussed.
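As a simple, hedged illustration of a Bohr-Sommerfeld condition at work (the non-relativistic textbook version, not the Dirac-state condition derived in the paper), the rule ∮ p dx = 2π(n + 1/2) with m = ω = ħ = 1 reproduces the harmonic-oscillator energies E_n = n + 1/2; the sketch below solves it numerically by bisection.

```python
import math

def action_integral(E, num=20_000):
    """Phase-space action oint p dx for V(x) = x^2/2 with m = hbar = 1,
    evaluated by the midpoint rule over the classically allowed region."""
    a = math.sqrt(2 * E)                       # classical turning point
    h = 2 * a / num
    s = 0.0
    for i in range(num):
        x = -a + (i + 0.5) * h
        s += math.sqrt(max(0.0, 2 * (E - 0.5 * x * x))) * h
    return 2 * s                               # both halves of the closed orbit

def bohr_sommerfeld_energy(n, lo=0.01, hi=50.0):
    """Solve action_integral(E) = 2*pi*(n + 1/2) for E by bisection."""
    target = 2 * math.pi * (n + 0.5)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if action_integral(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The amplitude-phase representations in the paper lead to a condition of the same quantization form, but with phases built from the exact radial spinor amplitudes.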
Fedosov Deformation Quantization as a BRST Theory
NASA Astrophysics Data System (ADS)
Grigoriev, M. A.; Lyakhovich, S. L.
The relationship is established between the Fedosov deformation quantization of a general symplectic manifold and the BFV-BRST quantization of constrained dynamical systems. The original symplectic manifold M is presented as a second class constrained surface in the fibre bundle T*ρM, which is a certain modification of the usual cotangent bundle equipped with a natural symplectic structure. The second class system is converted into a first class one by continuation of the constraints into the extended manifold, a direct sum of T*ρM and the tangent bundle TM. This extended manifold is equipped with a nontrivial Poisson bracket which naturally involves two basic ingredients of Fedosov geometry: the symplectic structure and the symplectic connection. The constructed first class constrained theory, being equivalent to the original symplectic manifold, is quantized through the BFV-BRST procedure. The existence theorem is proven for the quantum BRST charge and the quantum BRST invariant observables. The adjoint action of the quantum BRST charge is identified with the Abelian Fedosov connection, while any observable, proven to be the unique BRST invariant continuation of values defined on the original symplectic manifold, is identified with the Fedosov flat section of the Weyl bundle. The Fedosov fibrewise star multiplication is thus recognized as a conventional product of the quantum BRST invariant observables.
Thermal Counterflow in a Periodic Channel with Solid Boundaries
NASA Astrophysics Data System (ADS)
Baggaley, Andrew W.; Laurie, Jason
2015-01-01
We perform numerical simulations of finite temperature quantum turbulence produced through thermal counterflow in superfluid He, using the vortex filament model. We investigate the effects of solid boundaries along one of the Cartesian directions, assuming a laminar normal fluid with a Poiseuille velocity profile, whilst varying the temperature and the normal fluid velocity. We analyze the distribution of the quantized vortices, reconnection rates, and quantized vorticity production as a function of the wall-normal direction. We find that the quantized vortex lines tend to concentrate close to the solid boundaries with their position depending only on temperature and not on the counterflow velocity. We offer an explanation of this phenomenon by considering the balance of two competing effects, namely the rate of turbulent diffusion of an isotropic tangle near the boundaries and the rate of quantized vorticity production at the center. Moreover, this yields the observed scaling of the position of the peak vortex line density with the mutual friction parameter. Finally, we provide evidence that upon the transition from laminar to turbulent normal fluid flow, there is a dramatic increase in the homogeneity of the tangle, which could be used as an indirect measure of the transition to turbulence in the normal fluid component for experiments.
NASA Astrophysics Data System (ADS)
Huang, Yingyi; Setiawan, F.; Sau, Jay D.
2018-03-01
A weak superconducting proximity effect in the vicinity of the topological transition of a quantum anomalous Hall system has been proposed as a venue to realize a topological superconductor (TSC) with chiral Majorana edge modes (CMEMs). A recent experiment [Science 357, 294 (2017), 10.1126/science.aag2792] claimed to have observed such CMEMs in the form of a half-integer quantized conductance plateau in the two-terminal transport measurement of a quantum anomalous Hall-superconductor junction. Although the presence of a superconducting proximity effect generically splits the quantum Hall transition into two phase transitions with a gapped TSC in between, in this Rapid Communication we propose that a nearly flat conductance plateau, similar to that expected from CMEMs, can also arise from the percolation of quantum Hall edges well before the onset of the TSC or at temperatures much above the TSC gap. Our Rapid Communication, therefore, suggests that, in order to confirm the TSC, it is necessary to supplement the observation of the half-quantized conductance plateau with a hard superconducting gap (which is unlikely for a disordered system) from the conductance measurements or the heat transport measurement of the transport gap. Alternatively, the half-quantized thermal conductance would also serve as a smoking-gun signature of the TSC.
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, given that the Gaussian variable is zero mean (which is de facto the case), have small magnitudes and are easily disturbed by noise. Transmission over longer distances increases the losses, lowering the effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose Permutation Modulation (PM) as a means of quantizing Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. Fractional bit rates per sample are easily achieved using PM at very reasonable computational cost. Ordered statistics is used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the samples observed at Bob.
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering over high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336.
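A hedged sketch of the equi-depth building block (QED's per-query boundary computation over bit-sliced indices is more involved; the function names here are mine):

```python
import numpy as np

def equi_depth_bins(values, k):
    """Equal-frequency quantization: bin boundaries at the empirical
    quantiles, so each of the k bins holds roughly the same number
    of points."""
    cuts = np.quantile(values, np.linspace(0, 1, k + 1)[1:-1])
    return np.searchsorted(cuts, values, side='right')  # bin index per value

def query_bins(column, query_value, k):
    """QED-style step (illustrative): bin the distances from the query,
    so boundaries adapt to each query rather than being fixed offline."""
    return equi_depth_bins(np.abs(np.asarray(column) - query_value), k)
```

Because the boundaries are recomputed relative to the query, points near the query fall into fine-grained bins regardless of where the query lands in the data distribution.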
Awan, Muaaz Gul; Saeed, Fahad
2017-08-01
Modern high resolution Mass Spectrometry instruments can generate millions of spectra in a single systems biology experiment. Each spectrum consists of thousands of peaks but only a small number of peaks actively contribute to deduction of peptides. Therefore, pre-processing of MS data to detect noisy and non-useful peaks are an active area of research. Most of the sequential noise reducing algorithms are impractical to use as a pre-processing step due to high time-complexity. In this paper, we present a GPU based dimensionality-reduction algorithm, called G-MSR, for MS2 spectra. Our proposed algorithm uses novel data structures which optimize the memory and computational operations inside GPU. These novel data structures include Binary Spectra and Quantized Indexed Spectra (QIS) . The former helps in communicating essential information between CPU and GPU using minimum amount of data while latter enables us to store and process complex 3-D data structure into a 1-D array structure while maintaining the integrity of MS data. Our proposed algorithm also takes into account the limited memory of GPUs and switches between in-core and out-of-core modes based upon the size of input data. G-MSR achieves a peak speed-up of 386x over its sequential counterpart and is shown to process over a million spectra in just 32 seconds. The code for this algorithm is available as a GPL open-source at GitHub at the following link: https://github.com/pcdslab/G-MSR.
Women in Physics in the Philippines: Quantized Yet Taking Steps Toward a Mature Science Culture
NASA Astrophysics Data System (ADS)
Villagonzalo, Cristine; Bornales, Jinky; Betoya-Nonesa, Jelly Grace
2009-04-01
Scientific culture in the Philippines is young, and physics is no exception. There are only four physics PhD-granting universities with research laboratories. More than 10 universities offer a bachelor's or master's degree in physics. In line with the world trend, these physics institutions are male dominated. However, four of the leading universities already have female PhD faculty members in physics holding positions of assistant professor or better. On a positive note, female physicists are no longer limited to working in the national capital region but have carved out careers in other parts of the country. Female physicists have also spread into non-physics-degree-granting universities or found work in the industrial sector. The number of female graduates in physics at the undergraduate and graduate levels has slowly but steadily increased since 2002. With this observed increase, a working group for women in physics in the Philippines was created this year. In order to provide recommendations to regulators and policy makers, the group's first step is to monitor the number of female students and physicists, their study and work environments, and the scholarships and opportunities for development available to them.
NASA Astrophysics Data System (ADS)
Kurbatova, N. V.; Galyautdinov, M. F.; Shtyrkov, E. I.; Nuzhdin, V. I.; Stepanov, A. L.
2010-06-01
The modification of the shape of ion-synthesized silver and copper nanoparticles in a silica glass during laser annealing has been studied for the first time by Raman spectroscopy at a temperature of 77 K. The laser annealing has been carried out for a wavelength of 694 nm at the edge of the plasmon absorption spectrum of nanoparticles. A comparison of the experimental spectra and the calculated modes of in-phase bending vibrations of the “harmonica” type in nanostrings of the corresponding metals has demonstrated their good agreement. The effects observed have been discussed from the standpoint of the size quantization of vibrations in metal nanowires. This methodical approach has made it possible to estimate the sizes of the Ag and Cu nanoparticles under the assumption that they have an elongated form; in this case, their average lengths are equal to 2.5 and 1.4 nm, respectively.
New vertices and canonical quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Sergei
2010-07-15
We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the Engle-Pereira-Rovelli-Livine model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to loop quantum gravity. Second, we however argue that the quantization procedure used to derive the new models is inconsistent since it relies on the symplectic structure of the unconstrained BF theory.
Quantum paradoxes, entanglement and their explanation on the basis of quantization of fields
NASA Astrophysics Data System (ADS)
Melkikh, A. V.
2017-01-01
Quantum entanglement is discussed as a consequence of the quantization of fields. The inclusion of quantum fields self-consistently explains some quantum paradoxes (the EPR and Hardy paradoxes). A definition of entanglement is introduced which depends on the maximum energy of the interaction of the particles. The destruction of entanglement is caused by the creation and annihilation of particles. On this basis, an algorithm for quantum particle evolution is formulated.
The Casalbuoni-Brink-Schwarz superparticle with covariant, reducible constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-04-30
This paper discusses the fermionic constraints of the massless Casalbuoni-Brink-Schwarz superparticle in d = 10, which are separated covariantly into first- and second-class constraints that are infinitely reducible. Although the reducibility conditions of the second-class constraints include the first-class ones, a consistent quantization is possible. The ghost structure needed to quantize the system using BFV-BRST methods is given, and unitarity is shown.
Superfield Hamiltonian quantization in terms of quantum antibrackets
NASA Astrophysics Data System (ADS)
Batalin, Igor A.; Lavrov, Peter M.
2016-04-01
We develop a new version of the superfield Hamiltonian quantization. The main new feature is that the BRST-BFV charge and the gauge fixing Fermion are introduced on equal footing within the sigma model approach, which provides for the actual use of the quantum/derived antibrackets. We study in detail the generating equations for the quantum antibrackets and their primed counterparts. We discuss the finite quantum anticanonical transformations generated by the quantum antibracket.
Theory of free electron vortices
Schattschneider, P.; Verbeeck, J.
2011-01-01
The recent creation of electron vortex beams and their first practical application motivates a better understanding of their properties. Here, we develop the theory of free electron vortices with quantized angular momentum, based on solutions of the Schrödinger equation for cylindrical boundary conditions. The principle of transformation of a plane wave into vortices with quantized angular momentum, their paraxial propagation through round magnetic lenses, and the effect of partial coherence are discussed. PMID:21930017
NASA Technical Reports Server (NTRS)
King, J. C.
1975-01-01
The general orbit-coverage problem in a simplified physical model is investigated by application of numerical approaches derived from basic number theory. A system of basic and general properties is defined by which idealized periodic coverage patterns may be characterized, classified, and delineated. The principal common features of these coverage patterns are their longitudinal quantization, determined by the revolution number R, and their overall symmetry.
Stochastic quantization of conformally coupled scalar in AdS
NASA Astrophysics Data System (ADS)
Jatkar, Dileep P.; Oh, Jae-Hyuk
2013-10-01
We explore the relation between stochastic quantization and the holographic Wilsonian renormalization group flow further by studying a conformally coupled scalar in AdS_{d+1}. We establish a one-to-one mapping between the radial flow of its double trace deformation and the stochastic 2-point correlation function. This map is shown to be identical, up to a suitable field redefinition of the bulk scalar, to the original proposal in arXiv:1209.2242.
Hamiltonian description and quantization of dissipative systems
NASA Astrophysics Data System (ADS)
Enz, Charles P.
1994-09-01
Dissipative systems are described by a Hamiltonian combined with a “dynamical matrix” which generalizes the symplectic form of the equations of motion. Criteria for dissipation are given, and the examples of a particle with friction and of the Lotka-Volterra model are presented. Quantization is first introduced by translating generalized Poisson brackets into commutators and anticommutators. Then a generalized Schrödinger equation expressed via a dynamical matrix is constructed and discussed.
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
2011-04-15
In 1-bit compressive sensing, the quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization.
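A minimal sketch of the 1-bit measurement model (my own illustration; the paper's recovery guarantees concern binary stable embeddings, not this naive decoder):

```python
import numpy as np

def one_bit_measure(A, x):
    """Each measurement keeps only the sign of <a_i, x>: a comparator
    that tests for values above or below zero."""
    return np.sign(A @ x)

def back_projection(A, y):
    """Naive decoder: correlate the sign measurements back through A
    and normalize. Only the direction of x is recoverable, since 1-bit
    quantization discards all amplitude information."""
    z = A.T @ y
    return z / np.linalg.norm(z)
```

With a Gaussian measurement matrix and enough measurements, even this crude estimator correlates strongly with the direction of a sparse signal, which is the geometric fact the binary stable embedding makes precise.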
Holographic anyonic superfluidity
NASA Astrophysics Data System (ADS)
Jokela, Niko; Lifschytz, Gilad; Lippert, Matthew
2013-10-01
Starting with a holographic construction for a fractional quantum Hall state based on the D3-D7' system, we explore alternative quantization conditions for the bulk gauge fields. This gives a description of a quantum Hall state with various filling fractions. For a particular alternative quantization of the bulk gauge fields, we obtain a holographic anyon fluid in a vanishing background magnetic field. We show that this system is a superfluid, exhibiting the relevant gapless excitation.
Three paths toward the quantum angle operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl
2016-12-15
We examine mathematical questions around the angle (or phase) operator associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine, completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin-Klauder approaches. One method pertains to Weyl-Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories to a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, vector quantization is performed by means of the recall stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
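For context, a hedged sketch of the LBG baseline the scheme builds on (plain k-means-style codebook design plus nearest-codeword encoding; the associative-memory recall stage that replaces the search is not reproduced here):

```python
import numpy as np

def lbg_codebook(train, k, iters=20, seed=0):
    """Plain LBG/k-means codebook design: alternate nearest-codeword
    assignment and centroid update. This is the baseline codebook the
    EAM scheme starts from."""
    rng = np.random.default_rng(seed)
    code = train[rng.choice(len(train), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((train[:, None, :] - code[None]) ** 2).sum(-1)
        idx = d.argmin(1)
        for j in range(k):
            if np.any(idx == j):               # skip empty cells
                code[j] = train[idx == j].mean(0)
    return code

def vq_encode(vectors, code):
    """Nearest-codeword index for each input vector (exhaustive search,
    which is the cost the EAM recall stage is designed to avoid)."""
    d = ((vectors[:, None, :] - code[None]) ** 2).sum(-1)
    return d.argmin(1)
```

The encoder output is just the stream of class indices, matching the description above; the decoder looks each index up in the codebook.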
Magnetic quantization in monolayer bismuthene
NASA Astrophysics Data System (ADS)
Chen, Szu-Chao; Chiu, Chih-Wei; Lin, Hui-Chi; Lin, Ming-Fa
The magnetic quantization in monolayer bismuthene is investigated with the generalized tight-binding model. The quite large Hamiltonian matrix is built from the tight-binding functions of the various sublattices, atomic orbitals, and spin states. Due to the strong spin-orbit coupling and sp3 bonding, monolayer bismuthene has diverse low-lying energy bands: parabolic, linear, and oscillating. The main features of the band structure are further reflected in the rich magnetic quantization. Under a uniform perpendicular magnetic field (Bz), three groups of Landau levels (LLs) with distinct features are revealed near the Fermi level. Their Bz-dependent energy spectra display linear, square-root, and non-monotonic dependences, respectively. These LLs are dominated by combinations of the 6pz orbital and the (6px, 6py) orbitals as a result of the strong sp3 bonding. Specifically, LL anti-crossings only occur between LLs originating from the oscillating energy band.
Scalets, wavelets and (complex) turning point quantization
NASA Astrophysics Data System (ADS)
Handy, C. R.; Brooks, H. A.
2001-05-01
Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z2x2 + gx4, the quartic potential, V(x) = x4, and the very interesting and significant non-Hermitian potential V(x) = -(ix)3, recently studied by Bender and Boettcher.
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-deWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in CN, φ, ψ two positive functions on Ω such that - log ψ, - log φ are plurisubharmonic, and z∈Ω a point at which - log φ is smooth and strictly plurisubharmonic. We show that as k-->∞, the Bergman kernels with respect to the weights φkψ have an asymptotic expansion
q-Derivatives, quantization methods and q-algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Twarock, Reidun
1998-12-15
Using the example of Borel quantization on S¹, we discuss the relation between quantization methods and q-algebras. In particular, it is shown that a q-deformation of the Witt algebra with generators labeled by Z is realized by q-difference operators. This leads to a discrete quantum mechanics. Because of Z, the discretization is equidistant. As an approach to a non-equidistant discretization of quantum mechanics, one can modify the Witt algebra by using as labels not the number field Z but a quadratic extension of Z characterized by an irrational number τ. This extension is called the quasicrystal Lie algebra, because of its relation to one-dimensional quasicrystals. The q-deformation of this quasicrystal Lie algebra is discussed. It is pointed out that quasicrystal Lie algebras can also be considered as a 'deformed' Witt algebra with a 'deformation' of the labeling number field. Their application to the theory is discussed.
Quantized circular photogalvanic effect in Weyl semimetals
NASA Astrophysics Data System (ADS)
de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.
The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. We find that in a class of Weyl semimetals (e.g. SrSi2) and three-dimensional Rashba materials (e.g. doped Te) without inversion and mirror symmetries, the CPGE trace is effectively quantized in terms of the combination of fundamental constants e³/(h²cε₀), with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points near the Fermi surface, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.
Quantized circular photogalvanic effect in Weyl semimetals
NASA Astrophysics Data System (ADS)
de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.
2017-07-01
The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. Here we find that in a class of Weyl semimetals (for example, SrSi2) and three-dimensional Rashba materials (for example, doped Te) without inversion and mirror symmetries, the injection contribution to the CPGE trace is effectively quantized in terms of the fundamental constants e, h, c, and ε₀, with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.
NASA Astrophysics Data System (ADS)
Sun, Liang; McKay, Matthew R.
2014-08-01
This paper studies the sum rate performance of a low-complexity quantized CSI-based Tomlinson-Harashima (TH) precoding scheme for downlink multiuser MIMO transmission employing greedy user selection. The asymptotic distribution of the output signal to interference plus noise ratio of each selected user and the asymptotic sum rate as the number of users K grows large are derived using extreme value theory. For fixed finite signal to noise ratios and a finite number of transmit antennas n_T, we prove that as K grows large, the proposed approach can achieve the optimal sum rate scaling of the MIMO broadcast channel. We also prove that, if the precoding loss is ignored, the average sum rate of this approach converges to the average sum capacity of the MIMO broadcast channel. Our results provide insights into the effect of multiuser interference caused by quantized CSI on the multiuser diversity gain.
NASA Astrophysics Data System (ADS)
Kanai, Toshiaki; Guo, Wei; Tsubota, Makoto
2018-01-01
It is a common view that rotational motion in a superfluid can exist only in the presence of topological defects, i.e., quantized vortices. However, in our numerical studies on the merging of two concentric Bose-Einstein condensates with axial symmetry in two-dimensional space, we observe the emergence of a spiral dark soliton when one condensate has a nonzero initial angular momentum. This spiral dark soliton enables the transfer of angular momentum between the condensates and allows the merged condensate to rotate even in the absence of quantized vortices. Our examination of the flow field around the soliton strikingly reveals that its sharp endpoint can induce flow like a vortex point but with a fraction of a quantized circulation. This interesting nontopological "phase defect" may generate broad interest since rotational motion is essential in many quantum transport processes.
Correlated Light-Matter Interactions in Cavity QED
NASA Astrophysics Data System (ADS)
Flick, Johannes; Pellegrini, Camilla; Ruggenthaler, Michael; Appel, Heiko; Tokatly, Ilya; Rubio, Angel
2015-03-01
In the last decade, time-dependent density functional theory (TDDFT) has been successfully applied to a large variety of problems, such as calculations of absorption spectra, excitation energies, or dynamics in strong laser fields. Recently, we have generalized TDDFT to also describe electron-photon systems (QED-TDDFT). Here, matter and light are treated on an equal quantized footing. In this work, we present the first numerical calculations in the framework of QED-TDDFT. We show exact solutions for fully quantized prototype systems consisting of atoms or molecules placed in optical high-Q cavities and coupled to quantized electromagnetic modes. We focus on the electron-photon exchange-correlation (xc) contribution by calculating exact Kohn-Sham potentials using fixed-point inversions and present the performance of the first approximated xc-potential based on an optimized effective potential (OEP) approach. Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, and Fritz-Haber-Institut der MPG, Berlin
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which scales the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers that yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
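The per-block multiplier idea is easy to sketch. The helpers below, the flat base matrix, and the multiplier values are illustrative only, not taken from the standard (real JPEG quantization matrices are frequency dependent):

```python
# Sketch of Part-3 style adaptive quantization: each 8x8 DCT block is
# quantized with the base matrix scaled by a per-block multiplier.
# Blocks are represented as flattened lists of 64 coefficients.

def quantize_block(dct_block, base_matrix, multiplier):
    """Quantize one block: divide each coefficient by its scaled step."""
    return [round(c / (q * multiplier))
            for c, q in zip(dct_block, base_matrix)]

def dequantize_block(indices, base_matrix, multiplier):
    """Reconstruct coefficients from quantization indices."""
    return [i * q * multiplier for i, q in zip(indices, base_matrix)]

base = [16] * 64                         # toy flat base quantization matrix
block = [float(k) for k in range(64)]    # toy DCT coefficients

coarse = quantize_block(block, base, 2.0)  # multiplier > 1: coarser steps
fine = quantize_block(block, base, 0.5)    # multiplier < 1: finer steps
```

A multiplier above 1 coarsens every step in the block (fewer bits, more distortion); below 1 it refines them, which is what the perceptual optimization trades off block by block.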
Wavelet/scalar quantization compression standard for fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
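A minimal sketch of the kind of uniform scalar quantizer with a central dead zone used in wavelet/scalar quantization; the step and dead-zone widths are illustrative, and the actual standard assigns each subband its own parameters:

```python
import math

def deadzone_quantize(c, step, deadzone):
    """Uniform scalar quantizer with a central dead zone: coefficients
    near zero map to index 0, the rest to uniformly spaced bins."""
    if abs(c) <= deadzone / 2:
        return 0
    return int(math.copysign(math.floor((abs(c) - deadzone / 2) / step) + 1, c))

def deadzone_dequantize(idx, step, deadzone):
    """Reconstruct at each bin's midpoint."""
    if idx == 0:
        return 0.0
    return math.copysign(deadzone / 2 + (abs(idx) - 0.5) * step, idx)
```

The dead zone zeroes out the many small wavelet coefficients in fingerprint subbands, which is where most of the compression gain comes from.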
Polymer quantization, stability and higher-order time derivative terms
NASA Astrophysics Data System (ADS)
Cumsille, Patricio; Reyes, Carlos M.; Ossandon, Sebastian; Reyes, Camilo
2016-03-01
The possibility that the fundamental discreteness implicit in a quantum gravity theory may act as a natural regulator for ultraviolet singularities arising in quantum field theory has been intensively studied. Here, along the same lines, we investigate whether a nonstandard representation called the polymer representation can smooth away the large amount of negative energy that afflicts the Hamiltonians of higher-order time-derivative theories, rendering the theory unstable when interactions come into play. We focus on the fourth-order Pais-Uhlenbeck model, which can be reexpressed as the sum of two decoupled harmonic oscillators, one producing positive energy and the other negative energy. As expected, the Schrödinger quantization of this model leads to the stability problem or to negative-norm states called ghosts. Within the framework of polymer quantization we show the existence of new regions where the Hamiltonian is well defined and bounded from below.
Uniform quantized electron gas
NASA Astrophysics Data System (ADS)
Høye, Johan S.; Lomba, Enrique
2016-10-01
In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions, where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well-known RPA (random phase approximation). To improve upon it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well-known results from a standard parameterization of Monte Carlo correlation energies.
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
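The multistage residual structure itself is simple to sketch (the high-order entropy-conditioning stage, which is the paper's contribution, is omitted here). The toy two-stage codebooks are illustrative:

```python
def nearest(codebook, vec):
    """Index of the codevector minimizing squared error to vec."""
    def sqerr(cv):
        return sum((a - b) ** 2 for a, b in zip(cv, vec))
    return min(range(len(codebook)), key=lambda i: sqerr(codebook[i]))

def residual_vq_encode(stages, vec):
    """Each stage quantizes the residual left by the previous stages."""
    indices, residual = [], list(vec)
    for cb in stages:
        i = nearest(cb, residual)
        indices.append(i)
        residual = [r - c for r, c in zip(residual, cb[i])]
    return indices

def residual_vq_decode(stages, indices):
    """Reconstruction is the sum of the selected stage codevectors."""
    out = [0.0] * len(stages[0][0])
    for cb, i in zip(stages, indices):
        out = [o + c for o, c in zip(out, cb[i])]
    return out

stages = [
    [[0.0, 0.0], [4.0, 4.0]],                # stage-1 codebook
    [[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]],  # stage-2 (residual) codebook
]
indices = residual_vq_encode(stages, [5.0, 5.0])  # -> [1, 1]
```

Because each stage only refines the previous residual, decoding the stages one at a time naturally yields the progressive transmission mentioned in the abstract.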
Fractional-calculus diffusion equation
2010-01-01
Background This work is a sequel to the quantization of nonconservative systems using fractional calculus and the quantization of a system with Brownian motion, which aim to include dissipation effects in the quantum-mechanical description of microscale systems. Results The canonical quantization of a system represented classically by the one-dimensional Fick's law and the diffusion equation is carried out according to the Dirac method. A suitable Lagrangian and Hamiltonian describing the diffusive system are constructed, and the Hamiltonian is transformed into a Schrödinger equation, which is solved. As an application, the developed mathematical method is applied to the analysis of osmosis, a biological instance of the diffusion process. Conclusions The plot of the probability function clearly represents the dissipative and drift forces, and hence the osmosis, in full agreement with the macroscale (classical) view of osmosis. PMID:20492677
The behavior of quantization spectra as a function of signal-to-noise ratio
NASA Technical Reports Server (NTRS)
Flanagan, M. J.
1991-01-01
An expression for the spectrum of quantization error in a discrete-time system whose input is a sinusoid plus white Gaussian noise is derived. This quantization spectrum consists of two components: a white-noise floor and spurious harmonics. The dithering effect of the input Gaussian noise in both components of the spectrum is considered. Quantitative results in a discrete Fourier transform (DFT) example show the behavior of spurious harmonics as a function of the signal-to-noise ratio (SNR). These results have strong implications for digital reception and signal analysis systems. At low SNRs, spurious harmonics decay exponentially on a log-log scale, and the resulting spectrum is white. As the SNR increases, the spurious harmonics figure prominently in the output spectrum. A useful expression is given that roughly bounds the magnitude of a spurious harmonic as a function of the SNR.
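The setup described above (quantizing a sinusoid plus Gaussian dither and examining the error spectrum) can be reproduced numerically. This is a sketch under assumed parameters, not the paper's derivation; the direct DFT keeps it dependency-free:

```python
import cmath
import math
import random

def quantize(x, step):
    """Mid-tread uniform quantizer."""
    return step * round(x / step)

def error_spectrum(n=256, amp=1.0, sigma=0.1, step=0.25, freq=5):
    """Magnitude spectrum of the quantization error for a sinusoid plus
    white Gaussian noise.  All parameter values are illustrative."""
    rng = random.Random(0)  # fixed seed for reproducibility
    err = []
    for k in range(n):
        x = amp * math.sin(2 * math.pi * freq * k / n) + rng.gauss(0, sigma)
        err.append(quantize(x, step) - x)
    # Direct DFT of the error sequence (O(n^2), fine for a sketch).
    return [abs(sum(e * cmath.exp(-2j * math.pi * m * k / n)
                    for k, e in enumerate(err)))
            for m in range(n)]
```

Sweeping `sigma` down toward zero (raising the SNR) makes spurious harmonics of `freq` stand out in the returned spectrum, which is the qualitative behavior the abstract describes.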
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
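The O(log N) tree search can be sketched as a binary descent; the node layout and test vectors below are illustrative, not the chip's actual data path, and the weighting of the weighted least-squares criterion is omitted:

```python
def sqdist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def tree_vq_search(node, vec):
    """Binary tree-structured codebook search: O(log N) node visits
    instead of the O(N) full search.  Internal nodes are
    (test_left, test_right, left_child, right_child);
    leaves are ('leaf', index, codevector)."""
    while node[0] != 'leaf':
        test_left, test_right, left, right = node
        node = left if sqdist(vec, test_left) <= sqdist(vec, test_right) else right
    return node[1], node[2]

# Toy depth-2 tree over four codevectors.
leaves = [('leaf', i, cv) for i, cv in
          enumerate([[0, 0], [0, 2], [2, 0], [2, 2]])]
tree = ([0, 1], [2, 1],
        ([0, 0], [0, 2], leaves[0], leaves[1]),
        ([2, 0], [2, 2], leaves[2], leaves[3]))
```

The trade-off is the usual one for tree-searched VQ: the descent is not guaranteed to find the globally nearest codevector, which is why the chip set also supports the full O(N) search.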
NASA Technical Reports Server (NTRS)
Kelly, J. R.
1983-01-01
A simulator investigation was conducted to determine the effect of the lead-aircraft ground-speed quantization level on self-spacing performance using a Cockpit Display of Traffic Information (CDTI). The study utilized a simulator employing cathode-ray tubes for the primary flight and navigation displays and highly augmented flight control modes. The pilot's task was to follow, and self-space on, a lead aircraft which was performing an idle-thrust profile descent to an instrument landing system (ILS) approach and landing. The spacing requirement was specified in terms of both a minimum distance and a time interval. The results indicate that the ground-speed quantization level, lead-aircraft scenario, and pilot technique had a significant effect on self-spacing performance. However, the ground-speed quantization level only had a significant effect on the performance when the lead aircraft flew a fast final approach.
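The displayed-speed quantization studied above amounts to rounding the lead aircraft's ground speed to the display's resolution; the 10-knot level in this sketch is an arbitrary example, not necessarily one of the study's conditions:

```python
def displayed_ground_speed(true_speed_kt, quantization_level_kt):
    """Ground speed as shown on a CDTI, rounded to the display's
    quantization level (both in knots)."""
    return quantization_level_kt * round(true_speed_kt / quantization_level_kt)
```

The coarser the level, the longer the displayed speed lags a decelerating lead aircraft, which is one mechanism by which quantization can degrade self-spacing on a fast final approach.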
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, algorithms used, and the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
A Conformational Transition in the Myosin VI Converter Contributes to the Variable Step Size
Ovchinnikov, V.; Cecchini, M.; Vanden-Eijnden, E.; Karplus, M.
2011-01-01
Myosin VI (MVI) is a dimeric molecular motor that translocates backwards on actin filaments with a surprisingly large and variable step size, given its short lever arm. A recent x-ray structure of MVI indicates that the large step size can be explained in part by a novel conformation of the converter subdomain in the prepowerstroke state, in which a 53-residue insert, unique to MVI, reorients the lever arm nearly parallel to the actin filament. To determine whether the existence of the novel converter conformation could contribute to the step-size variability, we used a path-based free-energy simulation tool, the string method, to show that there is a small free-energy difference between the novel converter conformation and the conventional conformation found in other myosins. This result suggests that MVI can bind to actin with the converter in either conformation. Models of MVI/MV chimeric dimers show that the variability in the tilting angle of the lever arm that results from the two converter conformations can lead to step-size variations of ∼12 nm. These variations, in combination with other proposed mechanisms, could explain the experimentally determined step-size variability of ∼25 nm for wild-type MVI. Mutations to test the findings by experiment are suggested. PMID:22098742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhanying; Department of Applied Science, University of Québec at Chicoutimi, Saguenay, QC G7H 2B1; Zhao, Gang
2016-04-15
The effects of two homogenization treatments applied to the direct chill (DC) cast billet on the recrystallization behavior of 7150 aluminum alloy during post-rolling annealing have been investigated using the electron backscatter diffraction (EBSD) technique. Following hot and cold rolling to sheet, measured orientation maps, the recrystallization fraction and grain size, the misorientation angle, and the subgrain size were used to characterize the recovery and recrystallization processes at different annealing temperatures. The results were compared between the conventional one-step homogenization and the new two-step homogenization, with the first step being a pretreatment at 250 °C. Al₃Zr dispersoids with higher densities and smaller sizes were obtained after the two-step homogenization, which strongly retarded subgrain/grain boundary mobility and inhibited recrystallization. Compared with the conventional one-step homogenized samples, a significantly lower recrystallized fraction and a smaller recrystallized grain size were obtained under all annealing conditions after cold rolling in the two-step homogenized samples. Highlights: • Effects of two homogenization treatments on recrystallization in 7150 Al sheets • Quantitative study of the recrystallization evolution during post-rolling annealing • Al₃Zr dispersoids with higher densities and smaller sizes after the two-step treatment • Higher recrystallization resistance of 7150 sheets with two-step homogenization
Zender, Charles S.
2016-09-19
Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 % and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing.
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
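The alternating shave/set idea can be sketched for single-precision floats. The mapping from requested decimal digits to retained mantissa bits below is an assumption for illustration, not the reference implementation:

```python
import math
import struct

def bit_groom(values, decimal_digits):
    """Bit Grooming sketch: alternately shave (zero) and set (one) the
    least-significant mantissa bits of IEEE-754 binary32 values, keeping
    enough bits for roughly `decimal_digits` significant decimal digits.
    The bit budget (log2(10) bits per digit plus one guard bit) is an
    assumed heuristic."""
    keep = math.ceil(decimal_digits * math.log2(10)) + 1  # mantissa bits kept
    drop = max(0, 23 - keep)                              # binary32 has 23
    shave_mask = (0xFFFFFFFF >> drop) << drop             # zero low bits
    set_mask = 0xFFFFFFFF >> (32 - drop) if drop else 0   # one low bits
    out = []
    for n, v in enumerate(values):
        (bits,) = struct.unpack('<I', struct.pack('<f', v))
        # Alternate shaving and setting so the errors roughly cancel in means.
        bits = bits & shave_mask if n % 2 == 0 else bits | set_mask
        (groomed,) = struct.unpack('<f', struct.pack('<I', bits))
        out.append(groomed)
    return out
```

Shaving biases each value slightly toward zero and setting biases it away, which is why alternating the two preserves mean statistics better than shaving alone.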
Schlenstedt, Christian; Mancini, Martina; Horak, Fay; Peterson, Daniel
2017-07-01
To characterize anticipatory postural adjustments (APAs) across a variety of step initiation tasks in people with Parkinson disease (PD) and healthy subjects. Cross-sectional study. Step initiation was analyzed during self-initiated gait, perceptual cued gait, and compensatory forward stepping after platform perturbation. People with PD were assessed on and off levodopa. University research laboratory. People (N=31) with PD (n=19) and healthy age-matched subjects (n=12). Not applicable. Mediolateral (ML) size of APAs (calculated from center of pressure recordings), step kinematics, and body alignment. With respect to self-initiated gait, the ML size of APAs was significantly larger during the cued condition and significantly smaller during the compensatory condition (P<.001). Healthy subjects and patients with PD did not differ in body alignment during the stance phase prior to stepping. No significant group effect was found for ML size of APAs between healthy subjects and patients with PD. However, the reduction in APA size from cued to compensatory stepping was significantly less pronounced in PD off medication compared with healthy subjects, as indicated by a significant group by condition interaction effect (P<.01). No significant differences were found comparing patients with PD on and off medications. Specific stepping conditions had a significant effect on the preparation and execution of step initiation. Therefore, APA size should be interpreted with respect to the specific stepping condition. Across-task changes in people with PD were less pronounced compared with healthy subjects. Antiparkinsonian medication did not significantly improve step initiation in this mildly affected PD cohort. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Critical Motor Number for Fractional Steps of Cytoskeletal Filaments in Gliding Assays
Li, Xin; Lipowsky, Reinhard; Kierfeld, Jan
2012-01-01
In gliding assays, filaments are pulled by molecular motors that are immobilized on a solid surface. By varying the motor density on the surface, one can control the number of motors that pull simultaneously on a single filament. Here, such gliding assays are studied theoretically using Brownian (or Langevin) dynamics simulations, taking into account the local force balance between motors and filaments as well as the force-dependent velocity of the motors. We focus on the filament stepping dynamics and investigate how single-motor properties such as stalk elasticity and step size determine the presence or absence of fractional steps of the filaments. We show that each gliding assay can be characterized by a critical motor number. Because of thermal fluctuations, fractional filament steps are only detectable as long as the number of pulling motors does not exceed this critical value, and the corresponding fractional filament step size is the single-motor step size divided by the motor number. We first apply our computational approach to microtubules pulled by kinesin-1 motors. For elastic motor stalks that behave as linear springs with a zero rest length, we determine the critical motor number, and the corresponding distributions of the filament step sizes are in good agreement with the available experimental data. In general, the critical motor number depends on the elastic stalk properties and is reduced for linear springs with a nonzero rest length. Furthermore, the critical motor number is shown to depend quadratically on the motor step size; gliding assays consisting of actin filaments and myosin-V motors, with their much larger step size, are therefore predicted to exhibit fractional filament steps up to a correspondingly larger critical motor number. Finally, we show that fractional filament steps are also detectable for a fixed average motor number as determined by the surface density (or coverage) of the motors on the substrate surface. PMID:22927953
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Xu, Xue-Xiang; Hu, Li-Yun
2010-06-01
By virtue of the generalized Hellmann-Feynman theorem for the ensemble average, we obtain the internal energy and the average energy consumed by the resistance R in a quantized resistance-inductance-capacitance (RLC) electric circuit. We also derive the relation between the entropy and R, and show graphically that the entropy increases with increasing R.
Weyl Exceptional Rings in a Three-Dimensional Dissipative Cold Atomic Gas (Author’s Manuscript)
2017-01-27
Yong Xu,* Sheng-Tao Wang, and L.-M. Duan, Department of Physics, University… …atomic gas trapped in an optical lattice. Recently, condensed matter systems have proven to be a powerful platform to study low-energy gapless… …possess a nonzero quantized Chern number. This leads to a natural question of whether there exists a topological ring exhibiting both a quantized Chern…