Sample records for optimized vector quantization

  1. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to preserve the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as reference algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the visual quality of the image. PMID:23049544
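
    The variable-block-size partitioning step can be sketched as a quadtree split driven by a local complexity score. In this hedged sketch, local variance stands in for the paper's local fractal dimension (LFD), and the threshold, minimum block size, and toy "subband" are illustrative assumptions:

```python
# Quadtree partitioning sketch: split a region while its complexity
# (here, variance; the paper uses LFD) exceeds a threshold.

def variance(block):
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def quadtree_partition(img, x, y, size, threshold, min_size, out):
    """Recursively split an image region into sub-blocks until each
    block is smooth enough or the minimum block size is reached."""
    block = [row[x:x + size] for row in img[y:y + size]]
    if size <= min_size or variance(block) <= threshold:
        out.append((x, y, size))          # keep this block whole
        return
    half = size // 2
    for dy in (0, half):                  # split into four quadrants
        for dx in (0, half):
            quadtree_partition(img, x + dx, y + dy, half,
                               threshold, min_size, out)

# toy 4x4 "subband": smooth top half, busy bottom half
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [9, 1, 8, 2],
       [1, 9, 2, 8]]
blocks = []
quadtree_partition(img, 0, 0, 4, threshold=1.0, min_size=1, out=blocks)
print(blocks)   # two 2x2 blocks (smooth) plus eight 1x1 blocks (busy)
```

    The smooth top half survives as two large blocks while the busy bottom half is split down to single coefficients, which is the intended effect of variable block sizes.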

  2. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to preserve the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as reference algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the visual quality of the image.

  3. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques reported in the literature.
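
    The Lagrangian idea behind entropy-constrained design can be illustrated with a toy encoder that picks the codeword minimizing distortion plus a rate penalty, rather than distortion alone. The codebook and code lengths below are made-up values for illustration, not a designed EC-RVQ:

```python
# Entropy-constrained encoding sketch: minimize d(x, c_i) + lambda * r_i,
# where r_i is the entropy-code length (in bits) of codeword i.

def ec_encode(x, codebook, lengths, lam):
    """Return the index minimizing squared error + lam * bit cost."""
    def cost(i):
        d = sum((a - b) ** 2 for a, b in zip(x, codebook[i]))
        return d + lam * lengths[i]
    return min(range(len(codebook)), key=cost)

codebook = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]
lengths  = [1, 2, 4]          # shorter codes for more probable codewords
x = (1.2, 0.9)

print(ec_encode(x, codebook, lengths, lam=0.0))  # pure nearest neighbor
print(ec_encode(x, codebook, lengths, lam=5.0))  # rate pressure favors short codes
```

    With lambda = 0 the encoder reduces to nearest-neighbor VQ; increasing lambda trades distortion for lower entropy, which is the trade-off the iterative descent algorithm optimizes.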

  4. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
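
    The forward-adaptive scheme described above can be sketched in a few lines: estimate a gain, normalize the input vector, VQ-code it against a gain-normalized codebook, and rescale at the decoder. The RMS gain estimator and the tiny shape codebook are illustrative assumptions, not the paper's optimized design:

```python
# Gain-adaptive VQ sketch: normalize out the signal level before coding.
import math

def rms_gain(x):
    # falsy 0.0 guard avoids division by zero for an all-zero vector
    return math.sqrt(sum(v * v for v in x) / len(x)) or 1.0

def nearest(x, codebook):
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))

codebook = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]  # unit-gain shapes

def encode(x):
    g = rms_gain(x)
    xn = [v / g for v in x]           # normalized: reduced dynamic range
    return g, nearest(xn, codebook)   # gain travels as side information

def decode(g, idx):
    return [g * v for v in codebook[idx]]   # decoder rescales by the gain

g, idx = encode([5.0, -4.8])
print(decode(g, idx))
```

    Backward adaptation would instead derive the gain from previously decoded samples, so no side information is needed; that variant is not shown here.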

  5. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary aim of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are supplied to genetic-algorithm-based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set for the codebooks, and IP-HMM performs the recognition. Novelty is introduced through the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  6. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
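
    The residual (multistage) VQ structure used above can be sketched as two cascaded quantizers whose codevectors are direct-summed at the decoder. The two toy stage codebooks are illustrative, and no entropy coding is shown:

```python
# Two-stage residual VQ sketch: stage 1 quantizes coarsely, stage 2
# quantizes the leftover residual, and decoding sums the codevectors.

def nearest(x, codebook):
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))

stage1 = [(0.0, 0.0), (10.0, 10.0)]             # coarse codebook
stage2 = [(-1.0, 0.0), (0.0, 0.0), (1.0, 2.0)]  # fine corrections

def rvq_encode(x):
    i1 = nearest(x, stage1)
    r = [a - b for a, b in zip(x, stage1[i1])]   # residual after stage 1
    i2 = nearest(r, stage2)
    return i1, i2

def rvq_decode(i1, i2):
    # direct-sum reconstruction: stage1 codevector + stage2 codevector
    return [a + b for a, b in zip(stage1[i1], stage2[i2])]

i1, i2 = rvq_encode([11.2, 11.8])
print(rvq_decode(i1, i2))
```

    The memory advantage is that two small codebooks (here 2 + 3 entries) span 6 effective reconstruction points, instead of storing one 6-entry codebook per stage combination.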

  7. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to variations in input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for the excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.

  8. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  9. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  10. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  11. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.

  12. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.

  13. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.

  14. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm provides an efficient method to reach such an approximation. In the paper, comparisons to Support Vector Machines are considered.

  15. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing each centroid as a recursive moving average, so the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to the batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the minimum-distance centroid, which is the one selected by the encoder. Since the quantizer changes definition, or state, after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
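
    The recursive moving-average centroid update described above can be sketched as follows: after each encoded vector, nudge the selected centroid toward that vector by a count-weighted step, which exactly reproduces the running mean of the vectors assigned so far:

```python
# Recursive (incremental) centroid update: c_new = c + (x - c) / n,
# where n counts the vectors assigned to this centroid so far.

def update_centroid(centroid, count, x):
    """Incremental mean of the vectors seen so far."""
    count += 1
    centroid = [c + (v - c) / count for c, v in zip(centroid, x)]
    return centroid, count

c, n = [0.0, 0.0], 0
for x in ([2.0, 0.0], [4.0, 2.0], [0.0, 4.0]):
    c, n = update_centroid(c, n, x)
print(c)   # identical to the batch mean of the three vectors
```

    In an adaptive VQ loop, the same update would be applied only to the minimum-distance centroid chosen by the encoder for each input vector.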

  16. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  17. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  18. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
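
    For intuition, the prototype-update idea behind learning vector quantization can be sketched with the classical supervised LVQ1 rule. Note this is a simplification: the paper's fuzzy FALVQ algorithms update all prototypes of the competitive network, which is not reproduced here, and the prototypes, labels, and learning rate below are illustrative:

```python
# LVQ1 sketch: the winning prototype moves toward a sample of its own
# class and away from a sample of a different class.

def lvq1_step(prototypes, labels, x, y, lr=0.2):
    # find the winning (nearest) prototype
    w = min(range(len(prototypes)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(prototypes[i], x)))
    sign = 1.0 if labels[w] == y else -1.0   # attract same class, repel other
    prototypes[w] = [p + sign * lr * (v - p) for p, v in zip(prototypes[w], x)]
    return w

protos = [[0.0, 0.0], [5.0, 5.0]]                  # hypothetical feature space
labels = ["white_matter", "tumor"]                 # illustrative tissue classes
w = lvq1_step(protos, labels, x=[1.0, 1.0], y="white_matter")
print(protos[0])   # the winning prototype moved toward the sample
```

    In the segmentation setting, the feature vectors would be the local relaxation-parameter values, and each trained prototype would represent one tissue class.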

  19. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  20. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. To overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at about 1/2 to 1/3 the bit rate of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix for selecting the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.

  1. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
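
    The basic idea surveyed above can be sketched in a few lines: the encoder sends only the index of the nearest codeword, and the decoder reconstructs by table lookup. The tiny codebook is an illustrative assumption:

```python
# Minimal VQ sketch: nearest-neighbor encoding, table-lookup decoding.

def vq_encode(x, codebook):
    """Index of the codeword with minimum squared error to x."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, codebook[i])))

def vq_decode(index, codebook):
    return codebook[index]          # decoding is just a table lookup

codebook = [(0, 0), (0, 8), (8, 0), (8, 8)]   # 4 codewords: 2 bits per 2-D vector
idx = vq_encode((7, 1), codebook)
print(idx, vq_decode(idx, codebook))
```

    The rate here is log2(4)/2 = 1 bit per sample; larger codebooks and higher vector dimensions trade encoding complexity for better rate-distortion performance, which is the central theme of the survey.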

  2. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant of transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment for the vector quantizer codewords without increasing the transmission rate. It is shown that a gain of about 4.5 dB over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system on a noisy channel, with a small loss of clean-channel performance.

  3. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose detail and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.

  4. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of this lower bound for each case. Simulation examples are also presented to illustrate the established results.
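
    The bounded-input idea behind the quantizer/scaling pairing can be illustrated with a simple sketch: a mid-tread uniform quantizer over [-1, 1] with a fixed level count, fed through a scaling factor that the decoder multiplies back. The level count and scale below are illustrative assumptions, not the paper's construction:

```python
# Uniform quantization with a scaling function keeping the quantizer
# input bounded, so the number of quantization levels stays fixed.

def uniform_quantize(u, levels=5):
    """Map u in [-1, 1] to the nearest of `levels` evenly spaced points."""
    u = max(-1.0, min(1.0, u))            # saturate to the bounded range
    step = 2.0 / (levels - 1)
    return round(u / step) * step

def scaled_quantize(state, scale, levels=5):
    # the scale bounds the quantizer input; the decoder multiplies it back
    return scale * uniform_quantize(state / scale, levels)

print(scaled_quantize(3.7, scale=4.0))    # quantized state estimate
```

    In the synchronization setting, the scaling function would shrink over time as the node states converge, so a fixed, finite set of levels keeps resolving ever-smaller state differences within the bandwidth budget.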

  5. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  6. Evaluation of Raman spectra of human brain tumor tissue using the learning vector quantization neural network

    NASA Astrophysics Data System (ADS)

    Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong

    2016-05-01

    The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal-tissue diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to extracting information from Raman spectra. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
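
    As a sketch of the LVQ1 training rule underlying the classifier described above (the prototype initialization, learning rate, and epoch count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """LVQ1: pull the winning prototype toward a same-class sample,
    push it away from a sample of a different class."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = int(np.argmin(np.linalg.norm(P - X[i], axis=1)))
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            P[w] += sign * lr * (X[i] - P[w])
    return P

def lvq1_predict(X, prototypes, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```

    In the paper's setting, the inputs would be Raman spectra (or features derived from them) labeled as normal or tumor tissue.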

  7. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  8. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigation of shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors, as well as to optimally merge several polarization channels of a given instrument, are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to range from 2.5 to 4.5 km (standard deviation for the components of the vectors) depending on the sensor, without significant bias. We discuss the relative contributions of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
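
    The plain MCC step that the algorithm builds on can be sketched as follows; because the best offset is an integer number of pixels, short displacements are dominated by quantization noise, which is what the paper's continuous optimization step addresses (patch and search sizes below are arbitrary assumptions):

```python
import numpy as np

def mcc_displacement(img0, img1, patch_center, patch_size, search_radius):
    """Maximum Cross-Correlation: find the integer pixel offset of a patch
    from img0 inside img1 that maximizes normalized cross-correlation."""
    r, c = patch_center
    h = patch_size // 2
    tmpl = img0[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    tmpl = tmpl - tmpl.mean()
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            cand = img1[r + dr - h:r + dr + h + 1,
                        c + dc - h:c + dc + h + 1].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((tmpl ** 2).sum() * (cand ** 2).sum())
            if denom == 0:
                continue  # flat candidate patch: correlation undefined
            score = (tmpl * cand).sum() / denom
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off, best
```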

  9. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.

  10. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
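
    A minimal sketch of Gabor-based code construction with a fragility mask, in the spirit of the two-step approach described above (the 1-D kernel, its parameters, and the single threshold tau are illustrative assumptions; the paper optimizes separate thresholds per source of instability):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, freq=0.25):
    """Complex 1-D Gabor kernel (illustrative parameters)."""
    x = np.arange(size) - size // 2
    return np.exp(-x ** 2 / (2 * sigma ** 2)) * np.exp(2j * np.pi * freq * x)

def iris_code_row(signal, kernel, tau=1e-3):
    """Quantize complex Gabor responses to a 2-bit phase code (signs of the
    real and imaginary parts) plus a fragility mask: low-magnitude responses
    flip easily between captures, so they are flagged as unstable."""
    resp = np.convolve(signal, kernel, mode='valid')
    bits = np.stack([(resp.real >= 0), (resp.imag >= 0)], axis=1).astype(np.uint8)
    stable = np.abs(resp) > tau
    return bits, stable
```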

  11. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  12. Image segmentation using fuzzy LVQ clustering networks

    NASA Technical Reports Server (NTRS)

    Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.

    1992-01-01

    In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
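
    The fuzzy c-means iteration that the fuzzy LVQ network integrates can be sketched as follows (deterministic initialization from the first c samples is an illustrative choice, not the paper's scheme):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50):
    """Fuzzy c-means: alternate fuzzy membership and centroid updates."""
    V = X[:c].astype(float).copy()  # illustrative init from first c samples
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per sample
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted centroids
    return V, U
```

    For image segmentation, X would hold the feature vectors extracted from the raw image, and each pixel would be assigned to the cluster with the largest membership.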

  13. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  14. Application of a VLSI vector quantization processor to real-time speech coding

    NASA Technical Reports Server (NTRS)

    Davidson, G.; Gersho, A.

    1986-01-01

    Attention is given to a working vector quantization processor for speech coding based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.

  15. Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Mitra, Sunanda; Meadows, Steven

    1997-10-01

    Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rate while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained down to approximately 0.48 bpp (0.16 bpp for each color plane or monochrome, CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ, and the addition of an entropy coder module after the VQ stage, yields extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.

  16. Memory-efficient decoding of LDPC codes

    NASA Technical Reports Server (NTRS)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown by simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
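
    The objective being optimized, mutual information between the source and the quantized messages, can be estimated from samples. The sketch below quantizes an LLR-like observation with a uniform 7-level quantizer; the channel model and step size are illustrative assumptions, not the paper's optimized design:

```python
import numpy as np

def mutual_information_bits(x, q):
    """Plug-in estimate of I(X;Q) in bits from paired discrete samples."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(q):
            pxy = np.mean((x == a) & (q == b))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(q == b)))
    return mi

# Hypothetical setup: BPSK symbols over AWGN, with the noisy observations
# quantized by a uniform midtread quantizer clipped to 7 levels (3 bits).
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 50_000)
obs = (1 - 2 * bits) * 2.0 + rng.normal(0.0, 1.0, bits.size)
q3 = np.clip(np.round(obs), -3, 3)  # 7-level quantizer, unit step
mi3 = mutual_information_bits(bits, q3)
```

    The paper's design would search over quantizer thresholds to maximize this quantity at each decoder iteration.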

  17. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  18. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    NASA Astrophysics Data System (ADS)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, comprising a driving pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.

  19. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reductions in data storage and computational effort. One of the most recent VQ techniques for handling the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we are therefore motivated to propose a justification of an FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. The recognition accuracies of the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde-Buzo-Gray) vector quantization based HMM (LBG-HMM) are evaluated and compared. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using hypothesis t-tests. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  20. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061

  1. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.
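
    One of the classic efficient nearest neighbor search techniques usable in such codebook design is the Partial Distance Search, sketched below (the paper's specific accelerations are not reproduced here):

```python
import numpy as np

def pds_nearest(x, codebook):
    """Partial Distance Search: abandon a codevector as soon as its partial
    squared distance already exceeds the best full distance found so far."""
    best_i, best_d = -1, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:
                break  # this codevector cannot win; skip remaining dims
        else:
            best_i, best_d = i, d
    return best_i, best_d
```

    The result is exactly the brute-force nearest neighbor, but with fewer multiply-accumulates on average, which shortens each fuzzy K-means iteration.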

  2. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
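
    The role a quantization matrix plays in such a scheme can be sketched as follows: each DCT coefficient of an 8x8 block is divided by its matrix entry and rounded, so larger entries quantize (and hence distort) that frequency more coarsely. The orthonormal DCT construction and the all-ones matrix in the test are illustrative, not a perceptually optimized design:

```python
import numpy as np

def dct2_matrix(n=8):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def quantize_block(block, qmatrix):
    """Forward 2-D DCT, divide elementwise by the quantization matrix, round."""
    C = dct2_matrix(block.shape[0])
    return np.round((C @ block @ C.T) / qmatrix).astype(int)

def dequantize_block(qcoeffs, qmatrix):
    """Rescale the quantized coefficients and invert the 2-D DCT."""
    C = dct2_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * qmatrix) @ C
```

    A perceptual optimization of the kind described above would choose the entries of `qmatrix` from a visibility model of the DCT basis functions rather than leave them uniform.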

  3. Quantized Vector Potential and the Photon Wave-function

    NASA Astrophysics Data System (ADS)

    Meis, C.; Dahoo, P. R.

    2017-12-01

    The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a $k$-mode and $\lambda$-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian $\tilde{H} = -i\hbar c\,\vec{\nabla}$ …

  4. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAMs). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories to a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.

  5. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best code vector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
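
    A tree-structured codebook search of the kind the chip set accelerates can be sketched as follows; at each level only two distances are computed, giving O(log N) work for N leaf codevectors (the toy tree in the test is illustrative):

```python
import numpy as np

class Node:
    """Binary codeword-tree node; leaves carry the final codebook index."""
    def __init__(self, codeword, left=None, right=None, index=None):
        self.codeword = np.asarray(codeword, dtype=float)
        self.left, self.right, self.index = left, right, index

def tsvq_encode(x, root):
    """Descend the tree, choosing the nearer child codeword at each level."""
    node = root
    while node.left is not None:
        if np.linalg.norm(x - node.left.codeword) <= np.linalg.norm(x - node.right.codeword):
            node = node.left
        else:
            node = node.right
    return node.index, node.codeword
```

    Unlike a full search, the greedy descent is not guaranteed to find the globally nearest codevector, which is the usual trade-off for the O(log N) cost.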

  6. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can simultaneously display only a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
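
    For illustration, standard Floyd-Steinberg error diffusion against a fixed palette looks as follows (a 1-D grayscale palette stands in for the color palette, and the SSQ palette design itself is not reproduced):

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion against a fixed 1-D palette."""
    work = img.astype(float).copy()
    out = np.empty_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = palette[np.argmin(np.abs(palette - old))]  # nearest palette entry
            out[y, x] = new
            err = old - new
            # Push the quantization error onto unvisited neighbors.
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```

    In the color case, "nearest palette entry" would be a vector search in the palette's color space, which is where the SSQ lookup tables come in.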

  7. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  8. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    Bag-of-Words (BoW) models based on the Scale Invariant Feature Transform (SIFT) have been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, this paper proposes a novel feature quantization scheme that efficiently quantizes each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
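
    The codebook-free idea can be illustrated with a much simpler binarization than the actual BSIFT scheme: threshold each descriptor dimension at the descriptor's own median and compare signatures by Hamming distance (purely a sketch; BSIFT's bit derivation differs):

```python
import numpy as np

def binary_signature(desc):
    """Data-independent binarization: threshold each dimension at the
    descriptor's own median -- no codebook training, roughly balanced bits."""
    return (desc > np.median(desc)).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two bit-vectors."""
    return int(np.count_nonzero(a != b))
```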

  9. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
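
    A generic single-threshold subsampling sketch conveys the flavor of quantizing a large set to a small representative subset (this is not the paper's DQS, whose single shrinkage threshold adapts the output to input data density):

```python
import numpy as np

def density_quantize(X, tau):
    """Greedy threshold-based sketch: keep a sample only if it lies farther
    than tau from every sample already kept, so dense regions collapse to
    a few representatives and the sample size shrinks."""
    kept = []
    for x in X:
        if all(np.linalg.norm(x - k) > tau for k in kept):
            kept.append(x)
    return np.array(kept)
```

    The retained subset could then serve as the landmark set for a Nyström feature approximation, as the abstract describes.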

  10. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  11. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which lends itself to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.

  12. Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize a gauge theory in which vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a ghosts-of-ghosts procedure like that of the BFV method is applied, but in terms of Lagrange multipliers. Our final results agree with those found in the literature using the Dirac method.

  13. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  14. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster; thus the algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  15. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  16. Radial quantization of the 3d CFT and the higher spin/vector model duality

    NASA Astrophysics Data System (ADS)

    Hu, Shan; Li, Tianjun

    2014-10-01

    We study the radial quantization of the 3d O(N) vector model. We calculate the higher spin charges whose commutation relations give the higher spin algebra. The Fock states of higher spin gravity in AdS4 are realized as the states in the 3d CFT. The dynamical information is encoded in their inner products. This serves as the simplest explicit demonstration of the CFT definition of the quantum gravity.

  17. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms in that work indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  18. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
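
    A minimal sketch of the select-then-binarize idea, using plain per-dimension variance as a simple stand-in for the paper's importance scores (the scoring choice, names, and data are our assumptions):

```python
import random

def select_and_binarize(vectors, keep):
    """Dimension selection plus 1-bit quantization sketch: rank dimensions
    by variance (a toy proxy for importance sorting), keep the top `keep`,
    and quantize each kept value to a single bit around its mean."""
    n, d = len(vectors), len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(d)]
    var = [sum((v[j] - means[j]) ** 2 for v in vectors) / n for j in range(d)]
    order = sorted(range(d), key=lambda j: -var[j])[:keep]   # informative dims
    codes = [[1 if v[j] >= means[j] else 0 for j in order] for v in vectors]
    return order, codes

rng = random.Random(2)
# dims 0-1 carry signal (high variance); dims 2-4 are near-constant noise
X = [[rng.gauss(0, 5), rng.gauss(0, 5), rng.gauss(0, 0.1),
      rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(100)]
order, codes = select_and_binarize(X, keep=2)
```

    The noisy dimensions are dropped entirely rather than being compressed along with the useful ones, which is the paper's central argument against product-quantization-style compression.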

  19. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining it with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that can occur with pre-trained VQ codebooks, and it eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.

  20. On the Problem of Bandwidth Partitioning in FDD Block-Fading Single-User MISO/SIMO Systems

    NASA Astrophysics Data System (ADS)

    Ivrlač, Michel T.; Nossek, Josef A.

    2008-12-01

    We report on our research on the problem of how to optimally partition the available bandwidth of frequency division duplex, multi-input single-output communication systems into subbands for the uplink, the downlink, and the feedback. In the downlink, the transmitter applies coherent beamforming based on quantized channel information which is obtained by feedback from the receiver. As feedback takes away resources from the uplink, which could otherwise be used to transfer payload data, it is highly desirable to reserve the "right" amount of uplink resources for the feedback. Under the assumption of random vector quantization, and a frequency flat, independent and identically distributed block-fading channel, we derive closed-form expressions for both the feedback quantization and bandwidth partitioning which jointly maximize the sum of the average payload data rates of the downlink and the uplink. While we do introduce some approximations to facilitate mathematical tractability, the analytical solution is asymptotically exact as the number of antennas approaches infinity, while for systems with few antennas it turns out to be a fairly accurate approximation. In this way, the obtained results are meaningful for practical communication systems, which usually can only employ a few antennas.

  1. Research on conceptual/innovative design for the life cycle

    NASA Technical Reports Server (NTRS)

    Cagan, Jonathan; Agogino, Alice M.

    1990-01-01

    The goal of this research is to develop and integrate qualitative and quantitative methods for life cycle design. The problem is motivated by three observations: formal computer-based methods are limited to the final detailing stages of design; CAD databases do not capture design intent or design history; and life cycle issues are ignored during the early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; a multistart vector quantization optimization algorithm; intelligent manufacturing, IDES (Influence Diagram Architecture); and 1st PRINCE (FIRST PRINciple Computational Evaluator).

  2. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse coding modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM for the same number of quantization bits.

  3. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and on its response to light stimulus. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is organized in six sections. The first briefly highlights the importance of newer and better compression algorithms, and explains the most relevant characteristics of the HVS, including the advantages and disadvantages related to the behavior of our vision in response to ocular stimulus. The second section reviews vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor constructed in the fifth section. The third section concentrates the most important data gathered on brightness models, addressing the construction of so-called brightness maps (a quantification of human perception of the reflectance of visible objects) in a bi-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. 
    As we have learned from previous works,1 the Hermite transform has been shown to be a useful and practical solution to efficiently code the energy within an image block, deciding which kind of quantization (scalar or vector) is to be used upon it. It is also a unique tool to structurally classify the image block within a given lattice; this particular operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of the three main topics addressed in the preceding sections in order to propose an image compression model that takes advantage of vector quantizers inside the brightness transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows some results obtained while testing the coding-decoding model. The guidelines used to evaluate image compression performance were the compression ratio, SNR, and psycho-visual quality. Conclusions derived from the research and possible unexplored paths are given in this section as well.

  4. Progressive Vector Quantization on a massively parallel SIMD machine with application to multispectral image data

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
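
    The level-by-level decomposition with a losslessly coded final level can be illustrated with toy scalar codebooks; the codebooks, data, and function names below are our illustration, not the paper's full-search VQ:

```python
def progressive_encode(x, codebooks):
    """Progressive VQ sketch: each level quantizes the residual left by the
    previous level; transmitting the final residual losslessly makes the
    whole reconstruction lossless, while stopping early gives a coarser,
    lossy preview."""
    indices, residual = [], list(x)
    for cb in codebooks:
        level = [min(range(len(cb)), key=lambda i: abs(cb[i] - r))
                 for r in residual]
        residual = [r - cb[i] for r, i in zip(residual, level)]
        indices.append(level)
    return indices, residual          # residual is sent with lossless coding

def progressive_decode(indices, final_residual, codebooks):
    """Add the chosen codewords of every level back onto the residual."""
    recon = list(final_residual)
    for level, cb in zip(indices, codebooks):
        recon = [r + cb[i] for r, i in zip(recon, level)]
    return recon

cbs = [[-1.0, 0.0, 1.0], [-0.25, 0.0, 0.25]]   # coarse then fine level
x = [0.9, -1.2, 0.3, 0.05]
idx, res = progressive_encode(x, cbs)
xhat = progressive_decode(idx, res, cbs)
```

    Decoding with the final residual reproduces the input exactly; decoding with a zero residual yields the lossy multi-level approximation.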

  5. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we have shown how a well-known data compression algorithm called entropy-constrained vector quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
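
    The encoding rule at the heart of entropy-constrained VQ maps each input to the codeword minimizing distortion plus λ times its ideal code length; the codebook, probabilities, and λ values below are toy choices of ours:

```python
import math

def ecvq_encode(x, codebook, probs, lam):
    """Entropy-constrained VQ sketch: choose the codeword minimizing a
    Lagrangian of squared error plus lam times the codeword's ideal code
    length (-log2 of its usage probability), trading distortion for rate."""
    def cost(i):
        dist = sum((a - b) ** 2 for a, b in zip(codebook[i], x))
        rate = -math.log2(probs[i])          # ideal code length in bits
        return dist + lam * rate
    return min(range(len(codebook)), key=cost)

codebook = [[0.0], [1.0]]
probs = [0.9, 0.1]                           # frequent vs. rare codeword
# with lam = 0 the nearest codeword wins; with a larger lam the cheap
# (high-probability) codeword can win even though it is farther away
near = ecvq_encode([0.6], codebook, probs, lam=0.0)
cheap = ecvq_encode([0.6], codebook, probs, lam=0.2)
```

    Sweeping λ traces out the rate-distortion trade-off that makes the reduced data sets discussed above so compact.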

  6. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.

  7. Detecting double compressed MPEG videos with the same quantization matrix and synchronized group of pictures structure

    NASA Astrophysics Data System (ADS)

    Aghamaleki, Javad Abbasi; Behrad, Alireza

    2018-01-01

    Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used to reduce dimensionality and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.

  8. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization reduces bandwidth cost, the information lost also degrades tracking performance. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme that improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only a little: tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Our scheme also depends only weakly on the number of participating sensors and can work well in sparse sensor networks; going from a 4 × 4 × 4 to a 6 × 6 × 6 network increases the number of participating sensors by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves the data efficiency that fits the requirements of low-bandwidth UWSNs.
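
    The "additional covariance caused by quantization" that the scheme minimizes can be illustrated with a generic uniform b-bit quantizer, whose error variance is approximately step²/12; this textbook model is our illustration, not the paper's optimal quantizer:

```python
import random

def uniform_quantize(x, lo, hi, bits):
    """b-bit uniform quantizer sketch: the extra covariance a fusion center
    must absorb from quantized measurements is roughly step**2 / 12 each."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    q = min(levels - 1, max(0, int((x - lo) / step)))   # clamp to a valid cell
    return lo + (q + 0.5) * step                        # mid-cell reconstruction

rng = random.Random(3)
xs = [rng.uniform(0.0, 1.0) for _ in range(20000)]
errs = [(x - uniform_quantize(x, 0.0, 1.0, 3)) ** 2 for x in xs]
emp_var = sum(errs) / len(errs)
step = 1.0 / 2 ** 3
predicted = step ** 2 / 12          # quantization-noise variance model
```

    Halving the step (one extra bit) cuts this added variance by a factor of four, which is why the paper's performance depends only weakly on the bit count once the quantizer is chosen well.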

  9. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. 
    Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is thus suitable for automated vector selection settings and high-throughput computation.

  10. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
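
    A greedy marginal-gain allocation illustrates the first stage of assigning quantization bits across sub-bands to minimize total distortion; it is a simple stand-in for the paper's Lagrange/KKT solution, with the classic sigma^2 * 2^(-2b) distortion model and all names assumed by us:

```python
import heapq

def allocate_bits(variances, total_bits):
    """Greedy bit allocation sketch: repeatedly give the next bit to the
    sub-band whose modeled distortion var * 2**(-2b) it reduces the most.
    For this convex model the greedy result matches the integer optimum."""
    bits = [0] * len(variances)
    def gain(i):  # distortion drop from adding one bit to sub-band i
        d = variances[i] * 2 ** (-2 * bits[i])
        return d - d / 4
    heap = [(-gain(i), i) for i in range(len(variances))]
    heapq.heapify(heap)
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (-gain(i), i))   # only band i's gain changed
    return bits

# high-variance (low-frequency) sub-bands should receive more bits
alloc = allocate_bits([16.0, 4.0, 1.0, 1.0], total_bits=8)
```

    The second stage of the paper, allocating transmit energy across bit layers by error sensitivity, follows the same marginal-benefit logic with a power budget instead of a bit budget.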

  11. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.

  12. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragradient Verma modules) of the quantized enveloping algebras, which are natural from the geometrical point of view, are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces differential operators on the corresponding q-deformed flag manifold (considered as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  13. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    PubMed

    Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC suffer either increased bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, in which the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both the coding bitrate consumption and visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework achieves a 21%-42% reduction in BD-BR compared with the traditional approach of FRUC cascaded with coding.

  14. Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.

    PubMed

    Demartines, P; Herault, J

    1997-01-01

    We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.

  15. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.

  16. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
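
    The prototype-based intuition behind these LVQ schemes can be seen in a minimal LVQ1 update loop; the learning rate, toy data, and function names are illustrative, and the paper's matrix-adaptation variants generalize the Euclidean distance used here:

```python
def lvq1_train(data, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1 sketch: pull the nearest prototype toward a sample of its own
    class and push it away from samples of other classes."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            i = min(range(len(prototypes)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(prototypes[j], x)))
            sign = 1.0 if proto_labels[i] == y else -1.0   # attract or repel
            prototypes[i] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[i], x)]
    return prototypes

data = [[0.0], [0.2], [1.0], [1.2]]
labels = [0, 0, 1, 1]
protos = lvq1_train(data, labels, [[0.4], [0.8]], [0, 1])
pred = [0 if abs(x[0] - protos[0][0]) < abs(x[0] - protos[1][0]) else 1
        for x in data]
```

    Relevance and matrix learning replace the squared Euclidean distance above with a learned metric, which is exactly the extension the letter studies.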

  17. Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins

    USGS Publications Warehouse

    Heiweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.

    2006-01-01

    The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.

  18. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
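
    The online quantization step of QKLMS, reusing the closest existing center instead of growing the network, can be sketched as follows; the kernel width, quantization threshold, step size, and data are toy choices of ours:

```python
import math

def qklms(xs, ys, step=0.5, sigma=0.5, eps=0.3):
    """QKLMS sketch (1-D inputs, Gaussian kernel): if a new input lies
    within eps of an existing center, update that center's coefficient
    ("redundant" data); otherwise admit it as a new center."""
    centers, coeffs = [], []
    def predict(x):
        return sum(a * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
                   for a, c in zip(coeffs, centers))
    for x, y in zip(xs, ys):
        err = y - predict(x)
        if not centers:
            centers.append(x)
            coeffs.append(step * err)
            continue
        j = min(range(len(centers)), key=lambda i: abs(centers[i] - x))
        if abs(centers[j] - x) > eps:
            centers.append(x)                 # grow the network
            coeffs.append(step * err)
        else:
            coeffs[j] += step * err           # quantize: reuse closest center
    return centers, coeffs, predict

xs = [(i / 50) % 2.0 for i in range(200)]     # inputs revisit [0, 2)
ys = [math.sin(math.pi * x) for x in xs]
centers, coeffs, f = qklms(xs, ys)
```

    Because admitted centers are always more than eps apart, the network size is bounded by the covering of the input range, no matter how long the data stream runs.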

  19. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. These include a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample can be solved. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  20. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
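The residual quantization idea underlying this work can be sketched in a deliberately simplified scalar form (real RVQ operates on vectors and trains the stage codebooks jointly); the function names and toy codebooks below are ours:

```python
def rvq_encode(x, stage_codebooks):
    """Successive residual quantization: each stage quantizes the
    residual left over by the previous stage."""
    indices, residual = [], x
    for codebook in stage_codebooks:
        # nearest codeword in this stage
        i = min(range(len(codebook)),
                key=lambda k: abs(residual - codebook[k]))
        indices.append(i)
        residual -= codebook[i]
    return indices

def rvq_decode(indices, stage_codebooks):
    """Reconstruction is the sum of the selected stage codewords."""
    return sum(cb[i] for i, cb in zip(indices, stage_codebooks))
```

Because each stage only refines the previous one, small per-stage codebooks compose into a fine overall quantizer, which is what makes variable-rate operation cheap.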

  1. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

    A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.

  2. A fingerprint key binding algorithm based on vector quantization and error correction

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g., fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. To accommodate the intrinsic fuzziness of fingerprint samples, vector quantization and error-correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the fingerprint's global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.

  3. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and an image compression rate array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the image compression rate array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
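The rate-distortion trade-off optimized here can be illustrated with a hedged sketch: below, each DCT frequency's quantization value is chosen by minimizing a Lagrangian cost D + λ·R over precomputed per-value distortion and rate arrays. This is a simplification of the patented dynamic programming mechanism, not a reproduction of it; all names and data shapes are hypothetical:

```python
def choose_quant_table(dist, rate, lam):
    """For each DCT frequency, pick the quantization value q that
    minimizes the Lagrangian cost dist[freq][q] + lam * rate[freq][q].
    dist and rate map each frequency to {q: value} dictionaries
    gathered from the image's DCT statistics."""
    table = {}
    for freq in dist:
        table[freq] = min(dist[freq],
                          key=lambda q: dist[freq][q] + lam * rate[freq][q])
    return table
```

Sweeping the multiplier λ from 0 (distortion only) upward traces out quantization tables at progressively lower bit rates.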

  4. Application of Classification Models to Pharyngeal High-Resolution Manometry

    ERIC Educational Resources Information Center

    Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.

    2012-01-01

    Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…

  5. Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications

    NASA Astrophysics Data System (ADS)

    Choi, Jinseok; Evans, Brian L.; Gatherer, Alan

    2017-12-01

    In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
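The stated closed-form result, that the optimal ADC resolution grows with the logarithm of each RF chain's SNR raised to the 1/3 power, can be illustrated with a toy allocation rule. This is our simplification, not the paper's BA algorithm; the budget-balancing offset and the rounding step are assumptions:

```python
import math

def allocate_bits(snrs, total_bits):
    """Illustrative ADC bit allocation: each chain's preferred
    resolution is log2(SNR^(1/3)); a common offset is added so the
    allocations sum (before rounding) to the total bit budget."""
    prefs = [math.log2(max(s, 1e-12)) / 3.0 for s in snrs]
    offset = (total_bits - sum(prefs)) / len(snrs)
    return [max(1, round(p + offset)) for p in prefs]
```

Chains with stronger SNR receive finer ADCs, matching the intuition that quantization noise matters most where the signal is strongest.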

  6. Automatic detection of voice impairments by means of short-term cepstral parameters and neural network based detectors.

    PubMed

    Godino-Llorente, J I; Gómez-Vilda, P

    2004-02-01

    It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases, complementary to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known Mel-frequency cepstral coefficient (MFCC) parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.

  7. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.

  8. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    The wavelet transform, a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, an image can be decomposed into octave-divided frequency bands with specific orientations, which match well with the properties of the human visual system. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.

  9. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics is used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
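A minimal sketch of PM-based quantization, assuming the standard permutation-modulation construction: the codebook consists of all permutations of a fixed seed vector, and the nearest codeword to x (in Euclidean distance) assigns the i-th largest seed value to the i-th largest sample. The function name and example seed are ours:

```python
def pm_quantize(x, seed):
    """Quantize vector x onto the permutation-modulation codebook
    generated by `seed`: the i-th largest component of x receives
    the i-th largest seed value (the classical nearest-codeword
    rule for PM, found by rank ordering alone)."""
    assert len(x) == len(seed)
    seed_sorted = sorted(seed, reverse=True)
    # indices of x in descending order of magnitude of the samples
    order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
    q = [0.0] * len(x)
    for rank, idx in enumerate(order):
        q[idx] = seed_sorted[rank]
    return q
```

Because the nearest codeword is found by a sort rather than an exhaustive codebook search, the cost stays modest even for large dimension d, which is the property the paper exploits.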

  10. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  11. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding-surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition provides stronger fault-tolerance ability and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean λ, computed from a sample of the input image. A Laplacian distribution with mean λ is generated with a uniform random number generator. These random numbers are grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
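The codebook seeding step, drawing Laplacian-distributed values from a uniform random number generator, can be sketched via the standard inverse-CDF transform (the DCT-domain perceptual filtering that follows is omitted; the function name and seed handling are our assumptions):

```python
import math
import random

def laplacian_sample(lam, n, seed=0):
    """Draw n zero-mean Laplacian variates with scale lam using only
    a uniform random number generator (inverse-CDF transform)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
        sign = 1.0 if u >= 0 else -1.0
        out.append(-lam * sign * math.log(1.0 - 2.0 * abs(u)))
    return out
```

Because both encoder and decoder seed the generator identically and share λ, they reproduce the same codebook without ever transmitting it, which is the point of MVQ.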

  13. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.

  14. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.

  15. Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors

    DTIC Science & Technology

    2011-04-15

    Funded in part by Mitsubishi Electric Research Laboratories. ICTEAM Institute, ELEN Department, Université catholique de Louvain (UCL), B-1348 Louvain-la-Neuve. In 1-bit compressive sensing, the quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization.

  16. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements result from representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool for future proteomics research.

  17. VLSI realization of learning vector quantization with hardware/software co-design for different applications

    NASA Astrophysics Data System (ADS)

    An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans

    2015-04-01

    This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time-consuming nearest Euclidean distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the neuron number in the competition layer, the weight values, and the input and output numbers are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.

  18. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  19. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  20. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.

  1. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method makes it possible to project simultaneously the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and with the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  2. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and data hiding is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves as low a bit rate as the original BTC algorithm.
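A minimal sketch of the 3-bit LSB substitution step, assuming the secret bits simply replace the three least significant bits of each BTC mean value; the paper's dynamic-programming search for the optimal bijective bit mapping is not shown, and the function names are ours:

```python
def embed_bits(mean_value, secret_bits):
    """Embed three secret bits into the low bits of one BTC mean
    value by plain LSB substitution (the paper additionally permutes
    the bit-to-value assignment via a dynamic-programming search)."""
    assert len(secret_bits) == 3
    payload = (secret_bits[0] << 2) | (secret_bits[1] << 1) | secret_bits[2]
    return (mean_value & ~0b111) | payload

def extract_bits(stego_value):
    """Recover the three embedded bits from a stego mean value."""
    v = stego_value & 0b111
    return [(v >> 2) & 1, (v >> 1) & 1, v & 1]
```

Each substitution perturbs a mean value by at most 7 gray levels, which is why the capacity gain costs so little visual quality.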

  3. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.

  4. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general-case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that there indeed exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.

  5. Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.

    1982-03-01

    This report describes the development, study, and experimental results of a 2.4 kb/s speech coder, called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.

  6. Feature weighting using particle swarm optimization for learning vector quantization classifier

    NASA Astrophysics Data System (ADS)

    Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias

    2018-03-01

    This paper discusses and proposes a feature weighting method for classification tasks with the competitive-learning artificial neural network LVQ. The feature weighting method uses PSO to search for attribute weights that control each attribute's influence on the resulting output. This method is then applied to the LVQ classifier and tested on 3 datasets obtained from the UCI Machine Learning Repository. Accuracy is then analyzed for two approaches: the first, using LVQ1, is referred to as LVQ-Classifier; the second, the proposed model, is referred to as PSOFW-LVQ. The results show that the PSO algorithm is capable of finding attribute weights that increase LVQ-Classifier accuracy.
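
    The PSO-based feature weighting described above can be sketched in a few lines. The sketch below is illustrative only: a weighted nearest-neighbor classifier stands in for the LVQ1 classifier, and the particle count, inertia (0.7), and acceleration constants (1.5) are assumed values, not the authors' settings.

```python
import random

def accuracy(weights, X, y):
    """Leave-one-out weighted 1-NN accuracy (a simple stand-in for LVQ1)."""
    correct = 0
    for i, xi in enumerate(X):
        best, best_d = None, float("inf")
        for j, xj in enumerate(X):
            if i == j:
                continue
            d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, xi, xj))
            if d < best_d:
                best_d, best = d, y[j]
        correct += best == y[i]
    return correct / len(X)

def pso_feature_weights(X, y, n_particles=8, iters=20, seed=0):
    """Search for per-feature weights in [0, 1] that maximize accuracy."""
    rng = random.Random(seed)
    dim = len(X[0])
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [accuracy(p, X, y) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = accuracy(pos[i], X, y)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

    Each particle is a candidate weight vector; fitness is classification accuracy, and the best weight vector found over all iterations is returned.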

  7. Coherent states for the relativistic harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Aldaya, Victor; Guerrero, J.

    1995-01-01

    Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the Relativistic Harmonic Oscillator as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations, [ẑ, ẑ†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying [â, â†] = 1, the eigenstates of which are 'true' coherent states.

  8. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method.

  9. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions.
In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.
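
    The quantizer error discussed above can be estimated numerically for any point configuration. As a minimal, hedged illustration (not taken from the paper), the sketch below Monte Carlo-estimates the scaled quantizer error (1/d)·E‖x − q(x)‖² for the integer lattice Z^d, whose exact value is 1/12 in every dimension.

```python
import random

def zd_quantizer_error(d, n_samples=100_000, seed=1):
    """Monte Carlo estimate of the scaled quantizer error for the integer
    lattice Z^d (unit density): sample x uniformly in a lattice cell; the
    nearest lattice point is round(x), so the per-coordinate offset is
    uniform on [-0.5, 0.5). The exact value of (1/d) E||x - q(x)||^2 is 1/12."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sum((rng.random() - 0.5) ** 2 for _ in range(d))
    return total / (n_samples * d)
```

    The same estimator applies to any quantizer by replacing round-to-nearest with a nearest-point search over the candidate point set.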

  10. Reformulation of the covering and quantizer problems as ground states of interacting particles

    NASA Astrophysics Data System (ADS)

    Torquato, S.

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space Rd interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in Rd that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the “void” nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their “dual” solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4, depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions.
In the case of the quantizer problem, we derive improved upper bounds on the quantizer error using sphere-packing solutions, which are generally substantially sharper than an existing upper bound in low to moderately large dimensions. We also demonstrate that disordered saturated sphere packings yield relatively good quantizers. Finally, we remark on possible applications of our results for the detection of gravitational waves.

  11. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the... 3.4 A General Vector Compression Algorithm Based on Frames... 3.4.1 Design Considerations... §3.3. Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N. A general

  12. Associative Pattern Recognition In Analog VLSI Circuits

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

    Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal-oxide-semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.

  13. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless channel; the optimization governs the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for no channel errors.

  14. Two generalizations of Kohonen clustering

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution; these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
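
    For contrast with GLVQ/FLVQ, the winning-prototype-only update that the passage criticizes is the classical LVQ1 rule. The sketch below is a generic textbook LVQ1 with illustrative learning rate and epoch count, not the authors' implementation.

```python
def lvq1_train(X, y, prototypes, labels, lr=0.3, epochs=10):
    """LVQ1: for each input, find the nearest prototype (the winner) and move
    it toward the input if their classes match, away from it otherwise.
    Only the winner is updated -- the behavior GLVQ/FLVQ generalize."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, cls in zip(X, y):
            w = min(range(len(protos)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i], x)))
            sign = 1.0 if labels[w] == cls else -1.0
            protos[w] = [p + sign * lr * (a - p) for p, a in zip(protos[w], x)]
    return protos

def lvq_classify(x, protos, labels):
    """Classify by the label of the nearest prototype."""
    w = min(range(len(protos)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(protos[i], x)))
    return labels[w]
```

    The initialization sensitivity noted in the abstract is visible here: a prototype placed far outside the data may never win and is then never updated.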

  15. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods the ones concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using an unsupervised Vector Quantization algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
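
    The LBG step that forms the local cells can be sketched as a splitting-plus-Lloyd procedure. This is a generic LBG outline with assumed parameters (perturbation eps, iteration count), not the authors' code; it also assumes the codebook size is a power of two and that centroid coordinates are nonzero so multiplicative splitting perturbs them.

```python
def lbg_codebook(data, size=4, eps=0.01, iters=20):
    """LBG vector quantizer design: start from the global centroid, repeatedly
    split each codevector into a (1+eps)/(1-eps) pair, then refine with Lloyd
    iterations (nearest-neighbor partition + centroid update)."""
    dim = len(data[0])
    codebook = [[sum(v[d] for v in data) / len(data) for d in range(dim)]]
    while len(codebook) < size:
        codebook = [[c[d] * (1 + s) for d in range(dim)]
                    for c in codebook for s in (eps, -eps)]
        for _ in range(iters):  # Lloyd refinement
            cells = [[] for _ in codebook]
            for v in data:
                w = min(range(len(codebook)),
                        key=lambda i: sum((codebook[i][d] - v[d]) ** 2
                                          for d in range(dim)))
                cells[w].append(v)
            for i, cell in enumerate(cells):
                if cell:  # keep old codevector if its cell is empty
                    codebook[i] = [sum(v[d] for v in cell) / len(cell)
                                   for d in range(dim)]
    return codebook
```

    In the proposed scheme, each final codebook cell would then be handled by its own local Bayes-inspired classifier.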

  16. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  17. Quantization of high dimensional Gaussian vector using permutation modulation with application to information reconciliation in continuous variable QKD

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
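
    The core PM quantization step can be illustrated compactly: the codebook is the set of all permutations of a fixed template vector, and the nearest codeword (by the rearrangement inequality) places the template values in the rank order of the input. This simplified sketch ignores the sign/magnitude split used in the paper, and the template values are arbitrary placeholders.

```python
def pm_quantize(x, template):
    """Permutation-modulation quantization: return the permutation of
    `template` that best matches x. Sorting assigns the largest template
    value to the largest sample, which maximizes the inner product <x, code>
    over all permutations (rearrangement inequality); since all codewords
    share the same norm, this also minimizes Euclidean distortion."""
    order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
    tmpl = sorted(template, reverse=True)
    code = [0.0] * len(x)
    for rank, i in enumerate(order):
        code[i] = tmpl[rank]
    return code
```

    The transmitted label is then just the permutation (the sort order), which is why PM scales to vectors with thousands of samples.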

  18. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistics-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized, and the technical papers are included in the appendices.

  19. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Master's thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either via taking differences, context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  20. Optical systolic array processor using residue arithmetic

    NASA Technical Reports Server (NTRS)

    Jackson, J.; Casasent, D.

    1983-01-01

    The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
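
    The residue-arithmetic idea can be illustrated in software, though the paper's processor is optical: each matrix-vector product is computed independently modulo several pairwise-coprime moduli (each channel needing only a small dynamic range), and the result is reconstructed with the Chinese Remainder Theorem. The moduli below are illustrative, and nonnegative integer data is assumed.

```python
from math import prod

def to_residues(n, moduli):
    """Represent an integer by its residues modulo each modulus."""
    return [n % m for m in moduli]

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction (moduli pairwise coprime;
    result is correct when the true value is below the product of moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

def residue_matvec(A, v, moduli):
    """Matrix-vector product computed independently in each residue channel,
    then reconstructed; no channel ever handles a value >= its modulus."""
    results = []
    for row in A:
        chans = [sum(a % m * (b % m) for a, b in zip(row, v)) % m
                 for m in moduli]
        results.append(crt(chans, moduli))
    return results
```

    The channels are independent, which is what makes the arithmetic carry-free and well suited to parallel (here, optical) hardware.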

  1. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    DTIC Science & Technology

    2002-01-01

    their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested

  2. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  3. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1], which was proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm can improve both the bit rate and the image quality, it can still be further improved to obtain a much lower bit rate for the LASIS interference pattern, which has special optical characteristics arising from the pushing and sweeping in the LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked for whether it has the same mean value as the current processing block. Experiments show the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for the LASIS interference hyper-spectral sequences.

  4. Quantization, Frobenius and Bi algebras from the Categorical Framework of Quantum Mechanics to Natural Language Semantics

    NASA Astrophysics Data System (ADS)

    Sadrzadeh, Mehrnoosh

    2017-07-01

    Compact Closed categories and Frobenius and Bi algebras have been applied to model and reason about Quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name: ``categorical distributional compositional'' semantics, or in short, the ``DisCoCat'' model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the Quantization functor of Quantum Field Theory. The original DisCoCat model only used compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.

  5. Learning vector quantization neural networks improve accuracy of transcranial color-coded duplex sonography in detection of middle cerebral artery spasm--preliminary report.

    PubMed

    Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw

    2008-01-01

    To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.

  6. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
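
    The role of the quantization table can be made concrete with a toy 8x8 DCT coder. The sketch below is generic JPEG-style scalar quantization, not the authors' optimization scheme: each transform coefficient is divided by its (possibly customized) table entry and rounded, so each coefficient error is bounded by half its table entry, and larger entries trade quality for compression.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, qtable):
    """Forward 2-D DCT, then divide by the table and round to integers."""
    C = dct_matrix(block.shape[0])
    coefs = C @ block @ C.T
    return np.round(coefs / qtable).astype(int)

def dequantize_block(qcoefs, qtable):
    """Multiply back by the table and invert the 2-D DCT."""
    C = dct_matrix(qcoefs.shape[0])
    return C.T @ (qcoefs * qtable) @ C
```

    Customizing `qtable` per data source, as the abstract proposes, amounts to choosing where in the frequency plane those half-entry error bounds should be loose or tight.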

  7. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
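
    The threshold-scaling and nonlinear pooling step can be sketched as a Minkowski sum. The exponent beta below is an assumed value for illustration; the paper's fitted pooling exponent and its within-block threshold adjustments (light adaptation, contrast masking) are omitted.

```python
def perceptual_error(dct_errors, thresholds, beta=4.0):
    """Scale each DCT quantization error by its visual threshold, then pool
    nonlinearly over all coefficients with a Minkowski (beta-norm) sum.
    A ratio of 1.0 means an error exactly at the visibility threshold."""
    ratios = [abs(e) / t for e, t in zip(dct_errors, thresholds)]
    return sum(r ** beta for r in ratios) ** (1.0 / beta)
```

    Because the beta-norm is at least as large as the biggest single ratio, keeping the pooled value below 1.0 keeps every individual error below its visibility threshold.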

  8. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
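    The level-to-frequency relation quoted above is simple to compute; a minimal sketch (the resolution value of 32 pixels/degree is an arbitrary example):

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of DWT level `level` at
    display visual resolution r (pixels/degree): f = r * 2**(-level)."""
    return r * 2.0 ** (-level)

# Each coarser wavelet level halves the spatial frequency.
for lam in range(1, 6):
    print(lam, wavelet_spatial_frequency(32.0, lam))
```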

  9. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
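    The codebook search at the heart of such a coder picks the index of the code vector with minimum distortion; a minimal squared-error sketch (the tiny codebook and input are invented for illustration):

```python
def nearest_codevector(x, codebook):
    """Exhaustive codebook search: return the index of the code
    vector minimizing squared-error distortion to input vector x."""
    def dist(c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
idx = nearest_codevector([0.9, 1.2], codebook)
print(idx)  # only this index needs to be transmitted to the decoder
```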

  10. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).

  11. Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition.

    PubMed

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-08-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance.

  12. Physics-based Detection of Subpixel Targets in Hyperspectral Imagery

    DTIC Science & Technology

    2007-01-01

    Learning Vector Quantization, LWIR ... Long-Wave Infrared (LWIR) from 7.0 to 15.0 microns regions as well. At these wavelengths, emissivity dominates the spectral signature. Emissivity is ... object emits instead of reflects. Initial work has already been finished applying the hybrid detectors to LWIR sensors [13]. However, target

  13. Finite-time H∞ control for a class of discrete-time switched time-delay systems with quantized feedback

    NASA Astrophysics Data System (ADS)

    Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An

    2012-12-01

    This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.

  14. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  15. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  16. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
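    The entropy constraint in ECVQ can be sketched by augmenting the nearest-centroid rule with a code-length penalty, so that frequently used clusters (short codes) are favored. This is a schematic sketch of the assignment step only, not the NASA program; the centroids, probabilities, and Lagrange weight `lam` are invented:

```python
import math

def ecvq_assign(x, centroids, probs, lam):
    """Entropy-constrained assignment: minimize squared distortion
    plus lam times the codeword length (-log2 of cluster probability)."""
    def cost(i):
        d = sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[i]))
        return d + lam * (-math.log2(probs[i]))
    return min(range(len(centroids)), key=cost)

centroids = [[0.0], [1.0]]
probs = [0.9, 0.1]                      # frequent cluster gets a short code
print(ecvq_assign([0.6], centroids, probs, lam=0.0))  # pure distortion
print(ecvq_assign([0.6], centroids, probs, lam=0.2))  # entropy-weighted
```

Sweeping `lam` traces out exactly the kind of rate-versus-distortion compromise that the DE optimization explores when generating the Pareto front.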

  17. Application of heterogeneous pulse coupled neural network in image quantization

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun

    2016-11-01

    On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models, which have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, an HPCNN also accords with human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results are optimal and more suitable for human observation. Experimental results on natural images from the standard image library show the validity and efficiency of the proposed quantization method.

  18. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. This paper explains the performance of these coders and compares them both with conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.

  19. Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime

    NASA Astrophysics Data System (ADS)

    Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata

    2014-04-01

    The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.

  20. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
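    USDQ, the baseline scheme that 2SDQ is shown to match, can be sketched as follows. This is a minimal illustration, not the paper's 2SDQ: the second, coarser step size is omitted, and the step value and midpoint reconstruction are conventional choices.

```python
def usdq_index(x, step):
    """Uniform scalar deadzone quantization: sign-magnitude index;
    all |x| < step fall into the central deadzone (index 0)."""
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) // step)

def usdq_reconstruct(q, step):
    """Midpoint reconstruction of a deadzone-quantized value."""
    if q == 0:
        return 0.0
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + 0.5) * step

for x in (-3.7, -0.4, 0.2, 1.6, 5.1):
    q = usdq_index(x, 1.0)
    print(x, q, usdq_reconstruct(q, 1.0))
```

Note that the deadzone around zero is twice as wide as the other cells, which is what makes the quantizer embeddable in a bitplane coder.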

  1. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized

  2. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  3. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
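    The dithering idea can be sketched with a seeded pseudo-random dither shared between compressor and decompressor (subtractive dithering). This is a schematic illustration, not fpack itself; fpack has its own dither scheme and follows quantization with Rice coding.

```python
import random

def dither_quantize(pixels, scale, seed=0):
    """Quantize to scaled integers after adding a reproducible
    uniform dither in [-0.5, 0.5)."""
    rng = random.Random(seed)
    return [round(p / scale + rng.uniform(-0.5, 0.5)) for p in pixels]

def dither_restore(codes, scale, seed=0):
    """Subtract the same dither sequence before rescaling, so the
    quantization error stays bounded by scale/2 and is decorrelated."""
    rng = random.Random(seed)
    return [(c - rng.uniform(-0.5, 0.5)) * scale for c in codes]

pixels = [10.3, 11.7, 250.2, 0.9]
codes = dither_quantize(pixels, scale=2.0)
restored = dither_restore(codes, scale=2.0)
print(codes, [round(v, 3) for v in restored])
```

Because the dither averages out over many pixels, aggregate measurements (stellar magnitudes, centroids) retain their precision even at coarse `scale`, which is the effect the experiments above quantify.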

  4. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    PubMed

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
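    Plain equi-depth binning, the starting point for QED, can be sketched as follows; the query-dependent weighting that makes QED dynamic is omitted, and the bin count and data are illustrative:

```python
def equi_depth_boundaries(values, bins):
    """Equi-depth quantization boundaries: each bin receives roughly
    len(values)/bins points, regardless of the value distribution."""
    s = sorted(values)
    return [s[(i * len(s)) // bins] for i in range(1, bins)]

vals = list(range(100))
print(equi_depth_boundaries(vals, 4))  # quartile cut points
```

In a bit-sliced index, each value is then stored as its bin index, so range and similarity predicates reduce to fast bitwise operations over the slices.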

  5. Issues in the digital implementation of control compensators. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Moroney, P.

    1979-01-01

    Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.

  6. Selecting Representative Points in Normal Populations.

    DTIC Science & Technology

    1983-01-14

    where values can be compared. A rather early paper on quantization is by Steinhaus [1956]. In that paper, he demonstrates the two necessary (but not ... already noted that Steinhaus lists these two conditions. These two necessary conditions for an optimal quantization suggest an iterative ... Stanford, California. Steinhaus, H. (1956). Sur la division des corps materiels en parties. Bulletin De L’Academie Polonaise Des Sciences, Cl. III- Vol
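    The two necessary conditions the snippet alludes to (each representative point is the centroid of its cell; each sample maps to its nearest representative point) do suggest an iterative scheme, now commonly known as Lloyd's algorithm. A minimal 1-D sketch with invented data:

```python
def lloyd_1d(data, codepoints, iters=20):
    """Alternate the two necessary optimality conditions:
    nearest-codepoint partition, then centroid update."""
    pts = list(codepoints)
    for _ in range(iters):
        cells = [[] for _ in pts]
        for x in data:
            i = min(range(len(pts)), key=lambda j: (x - pts[j]) ** 2)
            cells[i].append(x)
        # Empty cells keep their old codepoint.
        pts = [sum(c) / len(c) if c else p for c, p in zip(cells, pts)]
    return pts

data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
print(lloyd_1d(data, [0.0, 1.0]))  # converges to the two cluster means
```

Each iteration can only lower the mean squared error, but as noted in the literature the conditions are necessary rather than sufficient, so the result may be a local optimum.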

  7. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of ½

    PubMed Central

    Maryasov, Alexander G.

    2012-01-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or ‘powder’ sample when g tensor anisotropy is significant. PMID:22743542

  8. Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of 1/2

    NASA Astrophysics Data System (ADS)

    Maryasov, Alexander G.; Bowman, Michael K.

    2012-08-01

    The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or 'powder' sample when g tensor anisotropy is significant.

  9. New adaptive color quantization method based on self-organizing maps.

    PubMed

    Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai

    2005-01-01

    Color quantization (CQ) is an image processing task popularly used to convert true color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging the existing encoder in an overall lossy compression scheme.
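    The frequency-sensitive idea can be sketched as competitive learning in which a neuron's win count inflates its effective distance, discouraging dead units. This is a schematic reduction of FS-SOM, not the paper's algorithm: neighborhood adaptation, the butterfly permutation, and dead-neuron reinitialization are omitted, and the learning rate and epoch count are arbitrary.

```python
def fscl_palette(pixels, k, lr=0.5, epochs=5):
    """Frequency-sensitive competitive learning: win counts bias the
    distance so rarely-winning neurons still attract updates."""
    neurons = [list(pixels[i]) for i in range(k)]   # seed from the data
    wins = [1] * k
    for _ in range(epochs):
        for p in pixels:
            def biased(i):
                d = sum((a - b) ** 2 for a, b in zip(neurons[i], p))
                return wins[i] * d                  # frequency penalty
            w = min(range(k), key=biased)
            wins[w] += 1
            neurons[w] = [a + lr * (b - a) for a, b in zip(neurons[w], p)]
    return neurons

pixels = [(250, 10, 10), (245, 12, 8), (10, 10, 240), (12, 8, 250)]
print(fscl_palette(pixels, k=2))   # two palette colors, one per cluster
```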

  10. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    Object Segmentation, Texture Segmentation, Image Compression, Image Halftoning, Neural Network, Parallel Algorithms, 2D and 3D ... Vector Quantization of Wavelet Transform Coefficients ... Adaptive Image Halftoning based on Wavelet ... application has been directed to the adaptive image halftoning. The gray information at a pixel, including its gray value and gradient, is represented by

  11. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
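    The point that the optimal linear decoder is an estimator rather than the adjoint of the analysis transform can be illustrated in the noiseless, complete case, where the optimal linear estimate reduces to the matrix inverse. A toy 2x2 sketch (the transform matrix is invented):

```python
def mat2_inv(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def apply(m, v):
    """2x2 matrix times 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

A = [[1.0, 0.5], [0.0, 1.0]]      # nonorthogonal analysis transform
x = [2.0, 3.0]
y = apply(A, x)                    # transform coefficients

x_inv = apply(mat2_inv(A), y)      # inverse (optimal linear decoder): exact
A_T = [[1.0, 0.0], [0.5, 1.0]]
x_tr = apply(A_T, y)               # transpose synthesis: wrong unless A is orthogonal
print(x_inv, x_tr)
```

With quantization noise on the coefficients, the optimal decoder becomes a Wiener-type estimator rather than the plain inverse, which is the more general statement made in the paper.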

  12. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
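    The binned Debye summation can be sketched as follows. The histogram values are invented for illustration, and per-atom-type scattering factors are assumed to be folded into the pair counts rather than carried separately as in the paper:

```python
import math

def debye_binned(hist, q):
    """Debye sum over a pair-distance histogram:
    I(q) = sum over bins of n * sin(q*r) / (q*r), with the r -> 0
    (or q -> 0) limit taken as 1 per pair."""
    total = 0.0
    for r, n in hist:
        total += n if q * r == 0 else n * math.sin(q * r) / (q * r)
    return total

# (bin-center distance, pair count) -- hypothetical histogram
hist = [(0.0, 4), (1.5, 6), (3.0, 2)]
print(debye_binned(hist, 0.8))
```

Summing over a few hundred bins per atom-type pair instead of all N^2 pair distances is what yields the reported speedup.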

  13. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. This algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.

  14. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.

  15. Quaternionic Kähler Detour Complexes and {mathcal{N} = 2} Supersymmetric Black Holes

    NASA Astrophysics Data System (ADS)

    Cherney, D.; Latini, E.; Waldron, A.

    2011-03-01

    We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four dimensional {{mathcal N} = 2} supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories with fields equaling sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston’s complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.

  16. Master equation for open two-band systems and its applications to Hall conductance

    NASA Astrophysics Data System (ADS)

    Shen, H. Z.; Zhang, S. S.; Dai, C. M.; Yi, X. X.

    2018-02-01

    Hall conductivity in the presence of a dephasing environment has recently been investigated with a dissipative term introduced phenomenologically. In this paper, we study the dissipative topological insulator (TI) and its topological transition in the presence of quantized electromagnetic environments. A Lindblad-type equation is derived to determine the dynamics of a two-band system. When the two-band model describes TIs, the environment may be the fluctuations of the radiation field that surrounds the TIs. We find that the decay rates in the master equation depend on the Bloch vectors of the two-band system, which leads to a mixing of the band occupations. Hence the induced current is in general not perfectly topological in the presence of coupling to the environment, although the deviations are small in the weak-coupling limit. As an illustration, we apply the Bloch-vector-dependent master equation to TIs and calculate the Hall conductance of tight-binding electrons on a two-dimensional lattice. The influence of the environment on the Hall conductance is presented and discussed. The calculations show that the phase transition points of the TIs are robust against the quantized electromagnetic environment. The results might bridge the gap between quantum optics and topological photonic materials.

  17. Musical sound analysis/synthesis using vector-quantized time-varying spectra

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Beauchamp, James W.

    2002-11-01

    A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
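    The codebook-training step can be sketched with a plain Lloyd/k-means pass over spectral frames. This is an illustrative toy, with hypothetical array shapes; the paper's method additionally flattens spectral tilt before clustering and smooths the index track afterwards.

```python
import numpy as np

def train_codebook(frames, k, iters=50):
    """Toy VQ codebook trainer: cluster spectral frames (n_frames, n_bins)
    with Lloyd's algorithm and return the codebook plus the per-frame
    index track used for resynthesis."""
    frames = np.asarray(frames, dtype=float)
    # deterministic initialization: codewords spread evenly over the frames
    codebook = frames[:: max(1, len(frames) // k)][:k].copy()
    idx = np.zeros(len(frames), dtype=int)
    for _ in range(iters):
        # best least-squares match between each frame and each codeword
        d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        for j in range(k):
            members = frames[idx == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, idx
```

    Resynthesis then reads `codebook[idx]` frame by frame, optionally interpolating between successive codewords.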

  18. Electron-electron interaction and spin-orbit coupling in InAs/AlSb heterostructures with a two-dimensional electron gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavrilenko, V. I.; Krishtopenko, S. S., E-mail: ds_a-teens@mail.ru; Goiran, M.

    2011-01-15

    The effect of electron-electron interaction on the spectrum of two-dimensional electron states is studied in InAs/AlSb (001) heterostructures with a GaSb cap layer and one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction, by decreasing the electron energy in the subbands, increases the energy gap between subbands and the spin-orbit splitting of the spectrum over the entire range of electron concentrations at which only the lower size-quantization subband is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensional electrons is demonstrated.

  19. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing a deep neural network to hardware implementation, as it directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm, called supervised iterative quantization, to reduce the bit resolution of the learned network weights. In the training stage, supervised iterative quantization is conducted in two steps on the server: apply k-means-based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until a convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated on two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network achieves the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
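    The server-side quantization step can be illustrated with a small stand-alone sketch; the retraining half of the alternation is omitted, and the function name and linspace initialization are assumptions of this illustration, not the paper's code.

```python
import numpy as np

def kmeans_quantize_weights(w, bits, iters=25):
    """Toy k-means-based adaptive weight quantization: cluster a weight
    vector into 2**bits levels and replace each weight with its cluster
    centroid. (The paper alternates this step with network retraining.)"""
    k = 2 ** bits
    levels = np.linspace(w.min(), w.max(), k)   # initial centroids
    for _ in range(iters):
        # assign each weight to the nearest level, then recentre the levels
        idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                levels[j] = w[idx == j].mean()
    return levels[idx], levels
```

    Only the level table and per-weight indices need to be shipped to the client, which is what makes the low-bit representation cheap on-chip.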

  20. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.

  1. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that achieves high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back into the bus line to achieve an additional N stages of spectral compression, so that single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber lengths in the architecture are numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment for the case N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  2. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.

  3. The Coulomb problem on a 3-sphere and Heun polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan

    2013-08-15

    The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we obtain the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.

  4. Deformation quantization with separation of variables of an endomorphism bundle

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2014-01-01

    Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.

  5. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initializing this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise ratios as high as 47 dB are achieved. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm presented here is close to giving the best performance obtainable from a linear model of the chosen order with the chosen number of bits for the codebook.

  6. Radiation and matter: Electrodynamics postulates and Lorenz gauge

    NASA Astrophysics Data System (ADS)

    Bobrov, V. B.; Trigger, S. A.; van Heijst, G. J.; Schram, P. P.

    2016-11-01

    In general terms, we consider matter as a system of charged particles and a quantized electromagnetic field. For a consistent description of the thermodynamic properties of matter, especially in an extreme state, the problem of quantizing the longitudinal and scalar potentials should be solved. In this connection, we point out that the tension with the traditional postulate of electrodynamics, which claims that only the electric and magnetic fields are observable, is resolved by denying the validity of the Maxwell equations for microscopic fields. The Maxwell equations, as a generalization of experimental data, are valid only for averaged values. We show that microscopic electrodynamics may be based on postulating the d'Alembert equations for the four-vector potential of the electromagnetic field. The Lorenz gauge holds for the averaged potentials (and ensures that the Maxwell equations hold for the averages). The suggested concept overcomes difficulties in the electromagnetic field quantization procedure while remaining in accordance with the results of quantum electrodynamics. As a result, longitudinal and scalar photons become real rather than virtual and may in principle be observed. The longitudinal and scalar photons not only provide the Coulomb interaction of charged particles but also allow the electrical Aharonov-Bohm effect.

  7. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises if the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.

  8. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  9. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    PubMed

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of smooth quality and smooth bit rate during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (QP) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal QP clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed QP clip scheme achieves excellent performance in quality smoothness and buffer regulation.
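    The clip itself reduces to bounding each frame's quantization parameter around its predecessor. A generic sketch follows, with a fixed `max_step` standing in for the adaptive range that the paper derives from frame complexity and buffer state:

```python
def clip_qp(qp_candidate, prev_qp, max_step=2, qp_min=0, qp_max=51):
    """Generic QP clip: bound the frame-to-frame QP change to limit quality
    fluctuation. max_step=2 is an illustrative fixed bound; H.264/AVC QP
    spans 0-51."""
    lo = max(qp_min, prev_qp - max_step)
    hi = min(qp_max, prev_qp + max_step)
    return min(max(qp_candidate, lo), hi)
```

    A rate controller would call this on the QP it computes for each frame, trading a little bit-rate accuracy for smoother quality.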

  10. Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Fetzer, Eric

    2007-01-01

    Empirical probability distributions derived from the data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data that preserve information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.

  11. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. 
(5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We have concluded that DCTune optimized quantization matrices provide better visual quality than standard JPEG. Meaningful quality levels may be specified by means of the DCTune metric. Optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.

  12. Direct Volume Rendering with Shading via Three-Dimensional Textures

    NASA Technical Reports Server (NTRS)

    VanGelder, Allen; Kim, Kwansik

    1996-01-01

    A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.

  13. Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.

    2017-01-01

    Natural rubber is an important export commodity in Indonesia and a major contributor to national economic development. One type of rubber exported as raw material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of RSS. RSS rubber quality is specified in SNI 06-001-1987 and in the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by visually inspecting the level of air bubbles on the surface of the rubber sheet, so the result is subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.
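    An LVQ1 training loop of the kind used for such two-grade classification can be sketched as follows; the zoning feature extraction is not shown, and the prototype initialization and learning rate here are illustrative choices, not the paper's settings.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=15):
    """LVQ1 sketch for two-class data (e.g. RSS1 vs RSS3): the winning
    prototype moves toward a sample of its own class and away from a
    sample of the other class."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            w = np.linalg.norm(P - x, axis=1).argmin()   # winning prototype
            sign = 1.0 if proto_labels[w] == label else -1.0
            P[w] += sign * lr * (x - P[w])
    return P

def lvq1_predict(X, P, proto_labels):
    return [proto_labels[np.linalg.norm(P - x, axis=1).argmin()] for x in X]
```

    Classification is then a nearest-prototype lookup, which is what makes LVQ cheap enough for routine sorting.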

  14. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
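    The entropy terms driving the design can be computed directly from quantized records. A minimal sketch of the three objectives named above (joint entropy to maximize, total correlation to minimize, and conditional entropy for the cross-network step), for discrete/quantized series:

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a discrete, i.e. quantized, series."""
    n = len(symbols)
    return -sum(c / n * np.log2(c / n) for c in Counter(symbols).values())

def joint_entropy(*series):
    """Entropy of the tuple-valued series formed by zipping the inputs."""
    return entropy(list(zip(*series)))

def total_correlation(*series):
    """Redundancy measure minimized in the network design:
    sum of marginal entropies minus the joint entropy."""
    return sum(entropy(s) for s in series) - joint_entropy(*series)

def conditional_entropy(target, given):
    """H(target | given) = H(target, given) - H(given); the quantity
    maximized when designing the streamflow network given precipitation."""
    return joint_entropy(target, given) - entropy(given)
```

    As the abstract notes, the quantization (binning) applied to the raw series before these calculations can change the station rankings, so it must be chosen per design problem.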

  15. Wigner functions on non-standard symplectic vector spaces

    NASA Astrophysics Data System (ADS)

    Dias, Nuno Costa; Prata, João Nuno

    2018-01-01

    We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.

  16. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  17. Observation of Landau levels in potassium-intercalated graphite under a zero magnetic field

    PubMed Central

    Guo, Donghui; Kondo, Takahiro; Machida, Takahiro; Iwatake, Keigo; Okada, Susumu; Nakamura, Junji

    2012-01-01

    The charge carriers in graphene are massless Dirac fermions and exhibit a relativistic Landau-level quantization in a magnetic field. Recently, it has been reported that quantized energy levels have also been observed, without any external magnetic field, from strained graphene nanobubbles on a platinum surface; these were attributed to the Landau levels of massless Dirac fermions in graphene formed by a strain-induced pseudomagnetic field. Here we show the generation of the Landau levels of massless Dirac fermions on a partially potassium-intercalated graphite surface without applying an external magnetic field. Landau levels of massless Dirac fermions indicate the graphene character in partially potassium-intercalated graphite. The generation of the Landau levels is ascribed to a vector potential induced by the perturbation of nearest-neighbour hopping, which may originate from a strain or a gradient of on-site potentials at the perimeters of potassium-free domains. PMID:22990864

  18. Diffractive charmonium spectrum in high energy collisions in the basis light-front quantization approach

    DOE PAGES

    Chen, Guangyao; Li, Yang; Maris, Pieter; ...

    2017-04-14

    Using the charmonium light-front wavefunctions obtained by diagonalizing an effective Hamiltonian with the one-gluon exchange interaction and a confining potential inspired by light-front holography in the basis light-front quantization formalism, we compute production of charmonium states in diffractive deep inelastic scattering and ultra-peripheral heavy ion collisions within the dipole picture. Our method allows us to predict yields of all vector charmonium states below the open flavor thresholds in high-energy deep inelastic scattering, proton-nucleus and ultra-peripheral heavy ion collisions, without introducing any new parameters in the light-front wavefunctions. The obtained charmonium cross section is in reasonable agreement with experimental data at HERA, RHIC and LHC. We observe that the cross-section ratio σ_ψ(2S)/σ_J/ψ shows little dependence on the model parameters.

  19. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence follows closely that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the strategy that utilizes a constant single-photon threshold considered in previous work, the proposed scheme attains orders of magnitude improvement in sensor dynamic range.
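    The flavor of such a threshold update can be conveyed with a deliberately simplified random-walk variant. This is not the paper's exact rule (their scheme is a more elaborate parametric Markov chain): here the threshold index steps up after a firing and down otherwise, so the visited states concentrate near the threshold matching the incident intensity.

```python
def adapt_thresholds(photon_counts, thresholds, start_state=0):
    """Toy time-sequential threshold update: after each one-bit reading,
    step the threshold index up if the pixel fired (count >= threshold)
    and down otherwise. Returns the sequence of visited states."""
    state, visits = start_state, []
    for c in photon_counts:
        fired = c >= thresholds[state]
        state = min(state + 1, len(thresholds) - 1) if fired else max(state - 1, 0)
        visits.append(state)
    return visits
```

    With a geometric ladder of thresholds, the walk hovers around the rung closest to the mean photon count, which is the adaptivity that extends the dynamic range.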

  20. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
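    The vector-quantization half of such an algorithm can be illustrated with the classic Lloyd (k-means) codebook training loop: alternate nearest-codeword assignment with centroid updates. This is a generic sketch on toy 2-D data, not Reif's MPP implementation; the cluster layout, seeds, and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training vectors: 2-D samples drawn around three cluster centers (toy data).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
train = np.concatenate([c + rng.normal(0, 0.3, (200, 2)) for c in centers])

def train_codebook(data, init_idx, iters=20):
    """Plain Lloyd (k-means) iteration: the classic VQ codebook design loop."""
    codebook = data[init_idx].copy()
    for _ in range(iters):
        # Nearest-codeword assignment (the encoding step).
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Centroid update (the codebook refinement step).
        for j in range(len(codebook)):
            if (idx == j).any():
                codebook[j] = data[idx == j].mean(0)
    return codebook

# One seed point per cluster, chosen deterministically for a stable demo.
codebook = train_codebook(train, [0, 200, 400])

# Encoding replaces each vector by the index of its nearest codeword; the
# mean squared distance to that codeword is the quantization distortion.
dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
distortion = dist.min(1).mean()
```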

  1. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.

  2. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

    Contents 1 Introduction 1.1 The concept of particle statistics 1.2 Statistical mechanics and the many-body problem 1.3 Experimental physics in two dimensions 1.4 The algebraic approach: Heisenberg quantization 1.5 More general quantizations 2 The configuration space 2.1 The Euclidean relative space for two particles 2.2 Dimensions d=1,2,3 2.3 Homotopy 2.4 The braid group 3 Schroedinger quantization in one dimension 4 Heisenberg quantization in one dimension 4.1 The coordinate representation 5 Schroedinger quantization in dimension d ≥ 2 5.1 Scalar wave functions 5.2 Homotopy 5.3 Interchange phases 5.4 The statistics vector potential 5.5 The N-particle case 5.6 Chern-Simons theory 6 The Feynman path integral for anyons 6.1 Eigenstates for position and momentum 6.2 The path integral 6.3 Conjugation classes in SN 6.4 The non-interacting case 6.5 Duality of Feynman and Schroedinger quantization 7 The harmonic oscillator 7.1 The two-dimensional harmonic oscillator 7.2 Two anyons in a harmonic oscillator potential 7.3 More than two anyons 7.4 The three-anyon problem 8 The anyon gas 8.1 The cluster and virial expansions 8.2 First and second order perturbative results 8.3 Regularization by periodic boundary conditions 8.4 Regularization by a harmonic oscillator potential 8.5 Bosons and fermions 8.6 Two anyons 8.7 Three anyons 8.8 The Monte Carlo method 8.9 The path integral representation of the coefficients GP 8.10 Exact and approximate polynomials 8.11 The fourth virial coefficient of anyons 8.12 Two polynomial theorems 9 Charged particles in a constant magnetic field 9.1 One particle in a magnetic field 9.2 Two anyons in a magnetic field 9.3 The anyon gas in a magnetic field 10 Interchange phases and geometric phases 10.1 Introduction to geometric phases 10.2 One particle in a magnetic field 10.3 Two particles in a magnetic field 10.4 Interchange of two anyons in potential wells 10.5 Laughlin's theory of the fractional quantum Hall effect

  3. Value Driven Information Processing and Fusion

    DTIC Science & Technology

    2016-03-01

    The objective of the project is to develop a general framework for value-driven decentralized information processing, including: optimal data reduction in a network setting for decentralized inference under quantization constraints; interactive fusion that allows queries; and a consensus approach that allows a decentralized scheme to achieve the optimal error exponent of its centralized counterpart.

  4. Tomlinson-Harashima Precoding for Multiuser MIMO Systems With Quantized CSI Feedback and User Scheduling

    NASA Astrophysics Data System (ADS)

    Sun, Liang; McKay, Matthew R.

    2014-08-01

    This paper studies the sum rate performance of a low complexity quantized CSI-based Tomlinson-Harashima (TH) precoding scheme for downlink multiuser MIMO transmission, employing greedy user selection. The asymptotic distribution of the output signal to interference plus noise ratio of each selected user and the asymptotic sum rate as the number of users K grows large are derived by using extreme value theory. For fixed finite signal to noise ratios and a finite number of transmit antennas $n_T$, we prove that as K grows large, the proposed approach can achieve the optimal sum rate scaling of the MIMO broadcast channel. We also prove that, if we ignore the precoding loss, the average sum rate of this approach converges to the average sum capacity of the MIMO broadcast channel. Our results provide insights into the effect of multiuser interference caused by quantized CSI on the multiuser diversity gain.

  5. Correlated Light-Matter Interactions in Cavity QED

    NASA Astrophysics Data System (ADS)

    Flick, Johannes; Pellegrini, Camilla; Ruggenthaler, Michael; Appel, Heiko; Tokatly, Ilya; Rubio, Angel

    2015-03-01

    In the last decade, time-dependent density functional theory (TDDFT) has been successfully applied to a large variety of problems, such as calculations of absorption spectra, excitation energies, or dynamics in strong laser fields. Recently, we have generalized TDDFT to also describe electron-photon systems (QED-TDDFT). Here, matter and light are treated on an equal quantized footing. In this work, we present the first numerical calculations in the framework of QED-TDDFT. We show exact solutions for fully quantized prototype systems consisting of atoms or molecules placed in optical high-Q cavities and coupled to quantized electromagnetic modes. We focus on the electron-photon exchange-correlation (xc) contribution by calculating exact Kohn-Sham potentials using fixed-point inversions and present the performance of the first approximated xc-potential based on an optimized effective potential (OEP) approach. Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, and Fritz-Haber-Institut der MPG, Berlin

  6. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which scales the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
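    The per-block multiplier mechanism can be sketched directly: scale a base quantization matrix by the block's multiplier, quantize the block's DCT coefficients with the scaled matrix, and measure the resulting reconstruction error. The flat base matrix and the multipliers below are illustrative only, not the JPEG default tables or the paper's perceptually optimized values.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, the transform used on JPEG's 8x8 blocks."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

D = dct_matrix()
rng = np.random.default_rng(3)
block = rng.integers(0, 256, (8, 8)).astype(float)   # a toy pixel block

# A flat base quantization matrix (illustrative; real tables vary by frequency).
Q = np.full((8, 8), 16.0)

def code_block(block, multiplier):
    """Quantize DCT coefficients with the scaled matrix, then reconstruct."""
    coeffs = D @ block @ D.T
    step = Q * multiplier                 # per-block scaling of the base matrix
    quantized = np.round(coeffs / step) * step
    return D.T @ quantized @ D

err_fine = ((code_block(block, 0.5) - block) ** 2).mean()    # small multiplier
err_coarse = ((code_block(block, 2.0) - block) ** 2).mean()  # large multiplier
```

    A smaller multiplier means finer quantization and lower error at a higher bitrate; the perceptual optimization in the abstract chooses these multipliers so perceptual (rather than squared) error is flat across blocks.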

  7. On families of differential equations on two-torus with all phase-lock areas

    NASA Astrophysics Data System (ADS)

    Glutsyuk, Alexey; Rybnikov, Leonid

    2017-01-01

    We consider two-parametric families of non-autonomous ordinary differential equations on the two-torus with coordinates (x, t), of the type ẋ = v(x) + A + B f(t). We study the rotation number as a function of the parameters (A, B). The phase-lock areas are those level sets of the rotation number function ρ = ρ(A, B) that have non-empty interiors. Buchstaber, Karpov and Tertychnyi studied the case when v(x) = sin x in their joint paper. They observed the quantization effect: for every smooth periodic function f(t) the family of equations may have phase-lock areas only for integer rotation numbers. Another proof of this quantization statement was later obtained in a joint paper by Ilyashenko, Filimonov and Ryzhov. This implies a similar quantization effect for every v(x) = a sin(mx) + b cos(mx) + c and rotation numbers that are multiples of 1/m. We show that for every other analytic vector field v(x) (i.e. having at least two Fourier harmonics with non-zero non-opposite degrees and nonzero coefficients) there exists an analytic periodic function f(t) such that the corresponding family of equations has phase-lock areas for all the rational values of the rotation number.
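    The rotation number can be estimated numerically by integrating the equation over a long time window and dividing the total advance of x by the elapsed time. The sketch below uses the classical case v(x) = sin x with B = 0, where ρ = 0 for |A| ≤ 1 (the flow is trapped at a fixed point of sin x + A) and ρ = √(A² − 1) for A > 1 — a standard fact about this family. The integration time and step size are arbitrary.

```python
import math

def rotation_number(A, B=0.0, T=1000.0, dt=0.01):
    """Estimate rho = lim x(t)/t for dx/dt = sin(x) + A + B*cos(t), via RK4."""
    def f(x, t):
        return math.sin(x) + A + B * math.cos(t)
    x, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        k1 = f(x, t)
        k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(x + dt * k3, t + dt)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return x / T

rho_locked = rotation_number(A=0.5)   # inside the rho = 0 phase-lock area
rho_free = rotation_number(A=2.0)     # unlocked: rho = sqrt(3) analytically
```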

  8. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.

  9. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model overcomes the legacy "chicken-and-egg" dilemma in video coding. Second, a cooperative bargaining game based on the mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  10. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
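    The noise-shaping idea — feeding each step's quantization residual back into the next gradient evaluation — can be seen on a toy quadratic. The quantizer resolution, step size, and objective below are invented for illustration; the point is that plain quantized descent stalls once the gradient falls below half a quantization step, while the ΣΔ variant keeps integrating the residual until it fires another step.

```python
def grad(w):
    """Gradient of the toy objective f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def quantize(g):
    """Coarse unit-step quantizer standing in for finite DAC resolution."""
    return float(round(g))

lr, iters = 0.1, 300

# Conventional quantized gradient descent: stalls once |gradient| < 0.5,
# because the quantizer then outputs zero forever.
w_plain = 0.0
for _ in range(iters):
    w_plain -= lr * quantize(grad(w_plain))

# Noise-shaping (sigma-delta) version: the quantization residual is fed
# forward into the next gradient evaluation, so small gradients still
# accumulate and eventually produce a quantizer step.
w_sd, err = 0.0, 0.0
for _ in range(iters):
    g = grad(w_sd) + err
    q = quantize(g)
    err = g - q              # first-order error feedback
    w_sd -= lr * q
```

    In this toy run the error-feedback version settles essentially at the optimum, while the plain quantized iteration stalls a fixed distance away — the same qualitative behavior the paper reports for calibration DACs.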

  11. Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)

    1993-01-01

    Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, an edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, an automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, an optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, the wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on the WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.

  12. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
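    The reported MSE reductions and dB figures are two views of the same quantity: reducing MSE by a fraction r is a PSNR gain of −10·log₁₀(1 − r) dB. A quick check that the abstract's pairs are consistent:

```python
import math

def mse_reduction_to_db(fraction):
    """PSNR gain (dB) equivalent to reducing MSE by the given fraction."""
    return -10 * math.log10(1 - fraction)

gain_sat = mse_reduction_to_db(0.3378)    # satellite images: ~1.79 dB
gain_fbi = mse_reduction_to_db(0.4988)    # FBI fingerprint set: ~3.00 dB
gain_photo = mse_reduction_to_db(0.4235)  # digital photographs: ~2.39 dB
```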

  13. Lightweight Biometric Sensing for Walker Classification Using Narrowband RF Links

    PubMed Central

    Liang, Zhuo-qian

    2017-01-01

    This article proposes a lightweight biometric sensing system using ubiquitous narrowband radio frequency (RF) links for path-dependent walker classification. The fluctuated received signal strength (RSS) sequence generated by human motion is used for feature representation. To capture the most discriminative characteristics of individuals, a three-layer RF sensing network is organized for building multiple sampling links at the most common heights of upper limbs, thighs, and lower legs. The optimal parameters of sensing configuration, such as the height of link location and number of fused links, are investigated to improve sensory data distinctions among subjects, and the experimental results suggest that the synergistic sensing by using multiple links can contribute a better performance. This is the new consideration of using RF links in building a biometric sensing system. In addition, two types of classification methods involving vector quantization (VQ) and hidden Markov models (HMMs) are developed and compared for closed-set walker recognition and verification. Experimental studies in indoor line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios are conducted to validate the proposed method. PMID:29206188

  14. Lightweight Biometric Sensing for Walker Classification Using Narrowband RF Links.

    PubMed

    Liu, Tong; Liang, Zhuo-Qian

    2017-12-05

    This article proposes a lightweight biometric sensing system using ubiquitous narrowband radio frequency (RF) links for path-dependent walker classification. The fluctuated received signal strength (RSS) sequence generated by human motion is used for feature representation. To capture the most discriminative characteristics of individuals, a three-layer RF sensing network is organized for building multiple sampling links at the most common heights of upper limbs, thighs, and lower legs. The optimal parameters of sensing configuration, such as the height of link location and number of fused links, are investigated to improve sensory data distinctions among subjects, and the experimental results suggest that the synergistic sensing by using multiple links can contribute a better performance. This is the new consideration of using RF links in building a biometric sensing system. In addition, two types of classification methods involving vector quantization (VQ) and hidden Markov models (HMMs) are developed and compared for closed-set walker recognition and verification. Experimental studies in indoor line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios are conducted to validate the proposed method.

  15. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.

    2008-12-05

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.

  16. Quantization of Motor Activity into Primitives and Time-Frequency Atoms Using Independent Component Analysis and Matching Pursuit Algorithms

    DTIC Science & Technology

    2001-10-25

    …form: (1) A is a scaling factor, t is time, and r is a coordinate vector describing the limb configuration. We … combination of limb state and EMG. In our early examination of EMG, we detected underlying groups of muscles and phases of activity by inspection and … representations of EEG or other biological signals has been thoroughly explored. Such components might be used as a basis for neuroprosthetic control.

  17. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other not yet fully understood factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be proved.

  18. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian random variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization of the Gaussian samples in the very low SNR regime from an information-theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximal, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error-correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits.
Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding the MSB and LSB jointly can we hope to get close to this 75.8% limit, so non-binary codes are essential to achieve acceptable performance.
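    The mutual-information accounting above can be reproduced in outline: draw correlated Gaussians at −3 dB SNR, quantize each side to two bits (a sign bit plus a magnitude bit), and estimate the mutual information from the joint histogram. The thresholds below are simple per-side choices, not the optimized thresholds derived in the paper, so the resulting figure only roughly tracks the quoted 0.2217 bits.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Correlated Gaussians at SNR = -3 dB: Bob sees Alice's value plus noise
# whose variance is 10**0.3, about twice the signal variance.
snr = 10 ** (-3 / 10)
x = rng.normal(0, 1, n)                      # Alice's samples
y = x + rng.normal(0, np.sqrt(1 / snr), n)   # Bob's noisy observations

def two_bit(v, t):
    """MSB = sign bit, LSB = magnitude bit with threshold t (a free choice)."""
    return 2 * (v > 0).astype(int) + (np.abs(v) > t).astype(int)

# Thresholds scaled to each side's standard deviation (illustrative, not optimal).
qx = two_bit(x, 0.65)
qy = two_bit(y, 0.65 * np.sqrt(1 + 1 / snr))

def mutual_information(a, b, k=4):
    """Plug-in MI estimate (bits) from the joint histogram of two k-ary symbols."""
    joint = np.zeros((k, k))
    np.add.at(joint, (a, b), 1.0)
    joint /= joint.sum()
    pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum()

mi = mutual_information(qx, qy)
```

    For reference, the unquantized mutual information at this SNR is 0.5·log₂(1 + SNR) ≈ 0.29 bits, so any two-bit scheme must land below that ceiling.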

  19. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2002-08-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  20. Methods of Contemporary Gauge Theory

    NASA Astrophysics Data System (ADS)

    Makeenko, Yuri

    2005-11-01

    Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.

  1. Vector/Matrix Quantization for Narrow-Bandwidth Digital Speech Compression.

    DTIC Science & Technology

    1982-09-01

    [Scanned report; the sampled body text is not recoverable from OCR.] Legible fragments cite: "…Prediction of the Speech Wave," JASA, Vol. 50, pp. 637-655, April 1971, and I. Itakura and S. Saito, "Analysis Synthesis Telephony Based Upon the Maximum…"

  2. Modulated error diffusion CGHs for neural nets

    NASA Astrophysics Data System (ADS)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
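    Error diffusion itself — the quantization engine behind these CGHs — is easy to sketch in one dimension: binarize each sample and push the residual onto the next one, so the local average of the dots tracks the continuous signal. A minimal sketch (the test signal is arbitrary):

```python
import numpy as np

def error_diffuse_1d(signal):
    """Binarize a [0, 1] signal, pushing each sample's quantization error
    onto the next sample (the 1-D skeleton of error-diffusion encoding)."""
    out = np.zeros_like(signal)
    err = 0.0
    for i, s in enumerate(signal):
        v = s + err
        out[i] = 1.0 if v >= 0.5 else 0.0
        err = v - out[i]          # diffuse the residual forward
    return out

sig = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, 400))
dots = error_diffuse_1d(sig)
```

    Because the residual is carried forward rather than discarded, the quantization noise is pushed to high spatial frequencies — the property the modulated variants in the paper further shape for the optical system.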

  3. Quantum angular momentum diffusion of rigid bodies

    NASA Astrophysics Data System (ADS)

    Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus

    2017-12-01

    We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.

  4. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of orders 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation using a pseudoentropy code is proposed to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
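    The Walsh-Hadamard building block is why such schemes are multiplierless: the transform reduces to a butterfly of additions and subtractions, and Parseval's relation (up to a factor of n) lets distortion be read off in the transform domain. A generic radix-2 fast WHT sketch, not the paper's specific 4×4/8×8 correction matrices:

```python
def fwht(vec):
    """In-place radix-2 fast Walsh-Hadamard transform: only additions and
    subtractions, which is what makes WHT-based distortion estimation cheap."""
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of a pair of entries.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

coeffs = fwht([1, 2, 3, 4])
```

    Applying the transform twice returns n times the input (H² = nI), and the coefficient energy is n times the sample energy — the Parseval property that allows distortion estimation without an inverse transform.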

  5. Becchi-Rouet-Stora-Tyutin formalism and zero locus reduction

    NASA Astrophysics Data System (ADS)

    Grigoriev, M. A.; Semikhatov, A. M.; Tipunin, I. Yu.

    2001-08-01

    In the Becchi-Rouet-Stora-Tyutin (BRST) quantization of gauge theories, the zero locus ZQ of the BRST differential Q carries an (anti)bracket whose parity is opposite to that of the fundamental bracket. Observables of the BRST theory are in a 1:1 correspondence with Casimir functions of the bracket on ZQ. For any constrained dynamical system with the phase space N0 and the constraint surface Σ, we prove its equivalence to the constrained system on the BFV-extended phase space with the constraint surface given by ZQ. Reduction to the zero locus of the differential gives rise to relations between bracket operations and differentials arising in different complexes (the Gerstenhaber, Schouten, Berezin-Kirillov, and Sklyanin brackets); the equation ensuring the existence of a nilpotent vector field on the reduced manifold can be the classical Yang-Baxter equation. We also generalize our constructions to the bi-QP manifolds which from the BRST theory viewpoint correspond to the BRST-anti-BRST-symmetric quantization.

  6. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    NASA Astrophysics Data System (ADS)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

    In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been applied reliably to human face recognition, is utilized as the feature vector of the VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as dramas, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
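As a rough illustration of the APIDQ idea, the sketch below quantizes each pixel's intensity differences to its right and lower neighbours and accumulates a joint histogram; the thresholds and the two-neighbour scheme are illustrative assumptions, not the authors' exact quantizer:

```python
def apidq_histogram(img, thresholds=(4, 16, 48)):
    """Sketch of an adjacent-pixel intensity-difference quantization
    (APIDQ) histogram.  For each pixel, the differences to its right
    and lower neighbours are quantized into signed levels, and a joint
    histogram of the quantized (dx, dy) pairs is accumulated.
    `img` is a 2-D list of grayscale values; thresholds are illustrative."""
    def quantize(d):
        sign = 1 if d >= 0 else -1
        mag = abs(d)
        for level, t in enumerate(thresholds):
            if mag < t:
                return sign * level
        return sign * len(thresholds)

    levels = 2 * len(thresholds) + 1          # signed levels, e.g. -3 .. +3
    hist = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            dx = quantize(img[y][x + 1] - img[y][x])
            dy = quantize(img[y + 1][x] - img[y][x])
            hist[dx + len(thresholds)][dy + len(thresholds)] += 1
    return hist
```

The flattened histogram serves as a compact, illumination-robust feature vector for frame matching.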

  7. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  8. Passive forensics for copy-move image forgery using a method based on DCT and SVD.

    PubMed

    Zhao, Jie; Guo, Jichang

    2013-12-10

    As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness against common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted, reducing the dimension of each block using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a predefined shift-frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image was distorted by Gaussian blurring, AWGN, JPEG compression, and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
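The sort-and-match core of such block-based detectors can be sketched as follows; a coarsely quantized raw block stands in for the paper's DCT-plus-SVD feature, and the block size, quantization step, and shift threshold are illustrative:

```python
def find_duplicate_blocks(img, block=4, q=8, min_shift=2):
    """Skeleton of copy-move matching by lexicographic feature sorting.
    A coarsely quantized block (a stand-in for DCT+SVD features) is
    extracted at every overlapping position; identical features found
    at sufficiently distant offsets flag candidate duplicated regions."""
    rows, cols = len(img), len(img[0])
    feats = []
    for y in range(rows - block + 1):
        for x in range(cols - block + 1):
            f = tuple(img[y + i][x + j] // q
                      for i in range(block) for j in range(block))
            feats.append((f, y, x))
    feats.sort()                               # lexicographic sort
    matches = []
    for (f1, y1, x1), (f2, y2, x2) in zip(feats, feats[1:]):
        if f1 == f2 and abs(y1 - y2) + abs(x1 - x2) >= min_shift:
            matches.append(((y1, x1), (y2, x2)))
    return matches
```

Sorting makes identical features adjacent, so matching costs O(n log n) instead of comparing all block pairs.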

  9. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. Besides, the discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which helps ensure robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility with a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  10. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. Besides, the discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which helps ensure robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility with a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  11. Identifying images of handwritten digits using deep learning in H2O

    NASA Astrophysics Data System (ADS)

    Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.

    2017-11-01

    Automatic digit recognition is of wide interest today, and deep learning techniques make object recognition in image data possible. Recognizing digits has become a fundamental component of many real-world applications. Because digits are written in various styles, identifying a digit requires recognizing and classifying it with the help of machine learning methods. This work is based on a supervised learning vector quantization (LVQ) neural network, a form of artificial neural network. Images of digits are acquired, trained, and tested: after the network is built, digits are trained using training dataset vectors, and testing is applied to digit images that are separated from one another by segmenting the image and resizing the digit image accordingly for better accuracy.

  12. Co-occurrence of viruses and mosquitoes at the vectors' optimal climate range: An underestimated risk to temperate regions?

    PubMed

    Blagrove, Marcus S C; Caminade, Cyril; Waldmann, Elisabeth; Sutton, Elizabeth R; Wardeh, Maya; Baylis, Matthew

    2017-06-01

    Mosquito-borne viruses have been estimated to cause over 100 million cases of human disease annually. Many methodologies have been developed to help identify areas most at risk from transmission of these viruses. However, generally, these methodologies focus predominantly on the effects of climate on either the vectors or the pathogens they spread, and do not consider the dynamic interaction between the optimal conditions for both vector and virus. Here, we use a new approach that considers the complex interplay between the optimal temperature for virus transmission, and the optimal climate for the mosquito vectors. Using published geolocated data we identified temperature and rainfall ranges in which a number of mosquito vectors have been observed to co-occur with West Nile virus, dengue virus or chikungunya virus. We then investigated whether the optimal climate for co-occurrence of vector and virus varies between "warmer" and "cooler" adapted vectors for the same virus. We found that different mosquito vectors co-occur with the same virus at different temperatures, despite significant overlap in vector temperature ranges. Specifically, we found that co-occurrence correlates with the optimal climatic conditions for the respective vector; cooler-adapted mosquitoes tend to co-occur with the same virus in cooler conditions than their warmer-adapted counterparts. We conclude that mosquitoes appear to be most able to transmit virus in the mosquitoes' optimal climate range, and hypothesise that this may be due to proportionally over-extended vector longevity, and other increased fitness attributes, within this optimal range. These results suggest that the threat posed by vector-competent mosquito species indigenous to temperate regions may have been underestimated, whilst the threat arising from invasive tropical vectors moving to cooler temperate regions may be overestimated.

  13. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and an interpretation of these quantized values for the sub-band conductances are made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. 
The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W

    Purpose: During the past decade, the quantization of coupled/forced electromagnetic circuits, with or without Ohm's resistance, has become the subject of fundamental studies, since even problems of quantum electrodynamics can be solved in an elegant manner, e.g. the creation of quantized electromagnetic fields. In this communication, we use these principles to describe optimization procedures in the design of klystrons, synchrotron irradiation, and high-energy bremsstrahlung. Methods: The basis is the Hamiltonian of an electromagnetic circuit and its extension to coupled circuits, which allows the study of symmetries and perturbed symmetries in a very apparent way (SU2, SU3, SU4). The introduction of resistance and forced oscillators for emission and absorption in such coupled systems provides characteristic resonance conditions, and atomic orbitals can be described in this way. The extension to virtual orbitals leads to the creation of bremsstrahlung, if the incident electron (velocity v nearly c) is described by a current, which is associated with its inductance, and the virtual orbital by a charge distribution (capacitance). Coupled systems with forced oscillators can be used to drastically amplify the resonance frequencies to describe klystrons and synchrotron radiation. Results: The cross-section formula for bremsstrahlung given by the propagator method of Feynman can readily be derived. The design of klystrons and synchrotrons, including the radiation outcome, can be described and optimized by determining the mutual magnetic couplings between the oscillators induced by the currents. Conclusions: The presented methods of quantization of circuits, including resistance, provide a rather straightforward way to understand complex technical processes such as the creation of bremsstrahlung or the creation of radiation by klystrons and synchrotrons. 
They can be used both for optimization procedures and, not least, for pedagogical purposes with regard to a qualified understanding of radiation physics for students.

  15. Classification of postural profiles among mouth-breathing children by learning vector quantization.

    PubMed

    Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T

    2011-01-01

    Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ was compared with that of other classification models: self-organizing maps, back-propagation applied to multilayer perceptrons, Bayesian networks, naive Bayes, J48 decision trees, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under the ROC curve (AUC), and inter-rater agreement (Kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, when using the training dataset, and 0.95 and 0.90, respectively, when using the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.

  16. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of differences in coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by parsing only the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  17. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ₊, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we recover a finite-dimensional matrix propagator. We present this connection explicitly for several examples.

  18. The research of "blind" spot in the LVQ network

    NASA Astrophysics Data System (ADS)

    Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli

    2017-04-01

    Competitive neural networks are now widely used in pattern recognition, classification, and other applications, and show great advantages over traditional clustering methods. Nevertheless, they are still inadequate in several respects and need further improvement. Based on the learning vector quantization network proposed by Kohonen [1], this paper addresses the issue of large training error when there are "blind" spots in a network, through the introduction of threshold-value learning rules, and implements the result in Matlab.
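For reference, plain LVQ1 (Kohonen's basic learning rule) can be sketched as below; a prototype that never wins the nearest-neighbour competition is never updated, which is exactly the kind of "blind" spot that threshold rules target. Function names and parameters are illustrative:

```python
import random

def train_lvq1(samples, labels, prototypes, proto_labels,
               lr=0.1, epochs=20, seed=0):
    """Plain LVQ1: move the winning prototype toward a sample of the
    same class and away from a sample of a different class.  Prototypes
    that never win ("blind" spots) are simply never updated."""
    rng = random.Random(seed)
    protos = [list(p) for p in prototypes]
    idx = list(range(len(samples)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            x = samples[i]
            # winner = nearest prototype by squared Euclidean distance
            w = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(protos[k], x)))
            sign = 1.0 if proto_labels[w] == labels[i] else -1.0
            protos[w] = [p + sign * lr * (a - p)
                         for p, a in zip(protos[w], x)]
    return protos

def classify(x, protos, proto_labels):
    w = min(range(len(protos)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))
    return proto_labels[w]
```

A conscience or threshold mechanism would additionally penalize frequent winners so that unused prototypes get a chance to be updated.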

  19. Modeling shape and topology of low-resolution density maps of biological macromolecules.

    PubMed Central

    De-Alarcón, Pedro A; Pascual-Montano, Alberto; Gupta, Amarnath; Carazo, Jose M

    2002-01-01

    In the present work we develop an efficient way of representing the geometry and topology of volumetric datasets of biological structures from medium to low resolution, aiming at storing and querying them in a database framework. We make use of a new vector quantization algorithm to select the points within the macromolecule that best approximate the probability density function of the original volume data. Connectivity among points is obtained with the use of the alpha-shapes theory. This novel data representation has a number of interesting characteristics, such as 1) it allows us to automatically segment and quantify a number of important structural features from low-resolution maps, such as cavities and channels, opening the possibility of querying large collections of maps on the basis of these quantitative structural features; 2) it provides a compact representation in terms of size; 3) it contains a subset of three-dimensional points that optimally quantify the densities of medium-resolution data; and 4) a general model of the geometry and topology of the macromolecule (as opposed to a spatially unrelated collection of voxels) is easily obtained by the use of the alpha-shapes theory. PMID:12124252
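A generic vector quantizer of the kind used to pick density-representative points can be sketched with Lloyd's algorithm (a plain k-means stand-in, not the authors' specific algorithm):

```python
import random

def lloyd_vq(points, k, iters=25, seed=0):
    """Lloyd's algorithm as a generic vector quantizer: each codevector
    moves to the centroid of the points it quantizes, so dense regions
    of the input end up represented by nearby codevectors."""
    rng = random.Random(seed)
    code = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(code[c], p)))
            cells[j].append(p)
        for j, cell in enumerate(cells):
            if cell:                       # empty cells keep their position
                dim = len(cell[0])
                code[j] = [sum(p[d] for p in cell) / len(cell)
                           for d in range(dim)]
    return code
```

The returned codevectors are the compact point set; connectivity (as in the alpha-shapes step) would be computed on top of them.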

  20. A variable rate speech compressor for mobile applications

    NASA Technical Reports Server (NTRS)

    Yeldener, S.; Kondoz, A. M.; Evans, B. G.

    1990-01-01

    One of the most promising speech coders in the 9.6 to 4.8 kbit/s range is Code Excited Linear Prediction (CELP), which has dominated this region for the past 3 to 4 years. Its setback, however, is its expensive implementation. As an alternative to CELP, Base-Band CELP (CELP-BB) was developed, which produces good quality speech comparable to CELP at a complexity implementable on a single chip, as reported previously. Its robustness was also improved to tolerate errors of up to 1.0 pct. and maintain intelligibility up to 5.0 pct. and more. Although CELP-BB produces good quality speech at around 4.8 kbit/s, it has a fundamental problem when updating the pitch filter memory, for which a sub-optimal solution is proposed. Below 4.8 kbit/s, however, CELP-BB suffers from noticeable quantization noise as a result of the large vector dimensions used. Efficient representation of speech below 4.8 kbit/s is reported by introducing Sinusoidal Transform Coding (STC) to represent the LPC excitation, which is called Sine Wave Excited LPC (SWELP). In this case, natural-sounding good quality synthetic speech is obtained at around 2.4 kbit/s.

  1. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix, and are thus already infeasible for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype-based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
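The Nyström idea, approximating an n×n kernel or similarity matrix from a small set of m landmark rows, can be sketched as follows (a generic illustration with plain lists, not the article's relational-LVQ formulation):

```python
def invert(A):
    """Gauss-Jordan inverse of a small well-conditioned matrix."""
    m = len(A)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
           for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(m):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[m:] for row in aug]

def nystrom(K, landmarks):
    """Nystrom approximation of a symmetric kernel/similarity matrix:
    K is approximated by C * inv(W) * C^T, where C = K[:, landmarks]
    and W = K[landmarks][:, landmarks].  Only the n x m slice C and the
    m x m block W are needed, instead of the full n x n matrix."""
    n, m = len(K), len(landmarks)
    C = [[K[i][j] for j in landmarks] for i in range(n)]
    W = [[K[i][j] for j in landmarks] for i in landmarks]
    Winv = invert(W)
    return [[sum(C[i][a] * Winv[a][b] * C[j][b]
                 for a in range(m) for b in range(m))
             for j in range(n)] for i in range(n)]
```

For a matrix of rank at most m, the approximation with m well-chosen landmarks is exact; otherwise it is a low-rank surrogate that cuts storage and time from O(n²) to O(nm).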

  2. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.

  3. A novel computer-aided detection system for pulmonary nodule identification in CT images

    NASA Astrophysics Data System (ADS)

    Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong

    2014-03-01

    Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists to identify lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious FPs from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available database - Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The two-stage VQ only missed 2 out of the 207 nodules at agreement level 1, and the INC detection for each scan took about 30 seconds on average. Expert filtering reduced FPs by a factor of more than 18, while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training different SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.

  4. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained dynamics given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x,t) = exp(iS(x,t)) in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x,t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
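The substitution step can be sketched in one dimension with ℏ = 1 and a generic potential V (a schematic illustration, not the paper's exact Hamiltonian):

```latex
% Schrodinger equation with \hbar = 1 and the ansatz \Psi = e^{iS}:
i\,\partial_t \Psi = -\tfrac{1}{2}\,\partial_x^2 \Psi + V(x)\,\Psi ,
\qquad \Psi(x,t) = e^{iS(x,t)} .
% Using \partial_t\Psi = i(\partial_t S)\Psi and
% \partial_x^2\Psi = \bigl(i\,\partial_x^2 S - (\partial_x S)^2\bigr)\Psi ,
% dividing by \Psi gives the non-linear equation for S:
\partial_t S + \tfrac{1}{2}(\partial_x S)^2 + V(x)
  = \tfrac{i}{2}\,\partial_x^2 S .
% Dropping the imaginary second-derivative quantum term leaves a
% Hamilton-Jacobi-type equation; identifying S with the optimal value
% function gives the Hamilton-Jacobi-Bellman form
% \partial_t S + \min_u H(x, u, \partial_x S) = 0 .
```

The quantum correction term on the right-hand side is what distinguishes the Schrödinger-derived equation from the purely classical Hamilton-Jacobi-Bellman equation.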

  5. Quantization and superselection sectors III: Multiply connected spaces and indistinguishable particles

    NASA Astrophysics Data System (ADS)

    Landsman, N. P. Klaas

    2016-09-01

    We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which renders the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).

  6. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map, which are then compressed. Every block selects its coding mode from the two new modes and the previous intra modes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
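The BCIM mode is essentially a per-block palette coder; a toy sketch (most-frequent-value base colors for grayscale blocks, an illustrative simplification of the paper's adaptive color quantization):

```python
from collections import Counter

def bcim_encode(block, n_base=4):
    """Toy base-colors-and-index-map (BCIM) coding of one image block:
    the n_base most frequent pixel values serve as base colors, and
    every pixel is replaced by the index of its nearest base color."""
    pixels = [v for row in block for v in row]
    base = [c for c, _ in Counter(pixels).most_common(n_base)]
    index_map = [[min(range(len(base)), key=lambda k: abs(base[k] - v))
                  for v in row] for row in block]
    return base, index_map

def bcim_decode(base, index_map):
    """Reconstruct the block by looking up each index's base color."""
    return [[base[i] for i in row] for row in index_map]
```

Blocks with few distinct values, typical of text and graphics regions, reconstruct exactly, which is why this mode suits compound images.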

  7. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical magnetic resonance (MR) images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure performs better than image-restoration-based techniques and closely matches the performance of HMGMM on clean images in terms of both visual segmentation results and error rate.

  8. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  9. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  10. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates have been proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
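    The quantization step can be sketched minimally, assuming the simplest reliable-component scheme: threshold each real-valued feature at its population mean and keep only components flagged as reliable. The HDS itself additionally applies error-correcting codes and hashing, which are omitted here; `binarize_features` and its arguments are hypothetical names for this sketch:

    ```python
    def binarize_features(x, mean, reliability=None):
        """Quantize a real-valued feature vector to bits by thresholding each
        component at its population mean; optionally keep only components
        flagged as reliable (a simplified sketch of the HDS quantization step)."""
        bits = [int(xi > mi) for xi, mi in zip(x, mean)]
        if reliability is not None:            # indices of reliable components
            bits = [bits[i] for i in reliability]
        return bits

    probe = [0.9, -0.2, 1.4, 0.05]             # features from one face image
    means = [0.5,  0.0, 1.0, 0.00]             # population means per component
    code  = binarize_features(probe, means, reliability=[0, 2])
    ```

    Components whose values sit far from the population mean flip rarely between enrollment and verification, which is the intuition behind selecting them as "reliable".
    
    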

  11. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to higher bitrates, since high-bitrate files have higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
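    The feature extraction can be sketched directly: compute the empirical distribution of the first significant decimal digits of the nonzero quantized MDCT coefficients (a Benford-style statistic). The coefficient selection details and SVM training are omitted; the helper name below is illustrative:

    ```python
    from collections import Counter

    def first_digit_features(coeffs):
        """9-bin distribution of the first (most significant) decimal digits
        of the nonzero integer-quantized coefficients -- the kind of feature
        vector fed to an SVM in double-compression detection."""
        digits = [int(str(abs(c))[0]) for c in coeffs if c != 0]
        counts = Counter(digits)
        total = len(digits) or 1
        return [counts.get(d, 0) / total for d in range(1, 10)]

    feats = first_digit_features([12, -3, 140, 0, 9, -27, 31])
    ```

    Single compression leaves first digits close to Benford's law; recompression perturbs the distribution, which is what the classifier picks up.
    
    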

  12. Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition

    ERIC Educational Resources Information Center

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-01-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The…

  13. Three learning phases for radial-basis-function networks.

    PubMed

    Schwenker, F; Kestler, H A; Palm, G

    2001-05-01

    In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or pseudoinverse solution). Results from numerical experiments of RBF classifiers trained by two-phase learning are presented in three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of handwritten digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This, we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility to use unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach.
SV learning can be considered, in this context of learning, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
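    One of the compared classifier schemes, learning vector quantization, is simple enough to sketch in full. The LVQ1 rule below moves the winning prototype toward same-class samples and away from other-class samples; the parameter values and toy data are illustrative, not from the paper's experiments:

    ```python
    def lvq1_train(data, labels, prototypes, proto_labels, lr=0.1, epochs=20):
        """LVQ1: attract the nearest prototype to same-class samples,
        repel it from different-class samples."""
        for _ in range(epochs):
            for x, y in zip(data, labels):
                # index of the nearest prototype (squared Euclidean distance)
                j = min(range(len(prototypes)),
                        key=lambda k: sum((a - b) ** 2
                                          for a, b in zip(x, prototypes[k])))
                sign = 1.0 if proto_labels[j] == y else -1.0
                prototypes[j] = [p + sign * lr * (a - p)
                                 for a, p in zip(x, prototypes[j])]
        return prototypes

    def lvq_classify(x, prototypes, proto_labels):
        j = min(range(len(prototypes)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(x, prototypes[k])))
        return proto_labels[j]

    # Two well-separated 2-D classes, one prototype per class:
    data   = [[0.0, 0.1], [0.2, -0.1], [1.0, 1.1], [0.9, 0.8]]
    labels = [0, 0, 1, 1]
    protos = lvq1_train(data, labels, [[0.3, 0.3], [0.7, 0.7]], [0, 1])
    pred   = lvq_classify([0.1, 0.0], protos, [0, 1])
    ```

    The trained prototypes play the same role as RBF centers obtained by clustering, which is why LVQ appears naturally among the compared first-phase methods.
    
    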

  14. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by a factor of 2/π (−1.96 dB), compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
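    To see why the threshold matters, consider the simplest setting this line of work builds on: estimating a constant signal level from 1-bit samples when the threshold τ is known, by inverting the Gaussian CDF. This is only a sketch of the known-threshold case, not the paper's channel estimator; when τ is unknown it must be treated as a nuisance parameter, as the paper does:

    ```python
    import random
    from statistics import NormalDist

    def one_bit_mean_estimate(bits, tau, sigma):
        """Estimate a constant level theta from 1-bit measurements
        y = 1{theta + noise > tau}, assuming Gaussian noise with known sigma
        and a *known* threshold tau: invert P(y=1) = Phi((theta - tau)/sigma)."""
        p = sum(bits) / len(bits)
        p = min(max(p, 1e-6), 1 - 1e-6)        # guard against p in {0, 1}
        return tau + sigma * NormalDist().inv_cdf(p)

    random.seed(0)
    theta, tau, sigma = 0.5, 0.2, 1.0
    bits = [int(theta + random.gauss(0.0, sigma) > tau) for _ in range(200000)]
    theta_hat = one_bit_mean_estimate(bits, tau, sigma)
    ```

    With τ misspecified, the inversion is biased by exactly the threshold error, which is the loss the paper quantifies when the offset must be estimated jointly.
    
    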

  15. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.
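    The role a DCT quantization matrix plays can be illustrated with JPEG-style quantize/dequantize of an 8×8 coefficient block. The matrix below is a made-up placeholder with coarser steps at higher spatial frequencies (where the eye is less sensitive); it is not the perceptually derived matrix computed by the formula described above:

    ```python
    # Hypothetical quantization matrix: step size grows with spatial frequency.
    Q = [[8 + 4 * (u + v) for v in range(8)] for u in range(8)]

    def quantize(dct_block, Q):
        """JPEG-style quantization: divide each DCT coefficient by its matrix
        entry and round to the nearest integer."""
        return [[round(dct_block[u][v] / Q[u][v]) for v in range(8)]
                for u in range(8)]

    def dequantize(q_block, Q):
        return [[q_block[u][v] * Q[u][v] for v in range(8)] for u in range(8)]

    # A block with a large DC term and small uniform AC energy:
    dct = [[600 if (u, v) == (0, 0) else 10 for v in range(8)] for u in range(8)]
    rec = dequantize(quantize(dct, Q), Q)
    ```

    Small high-frequency coefficients are rounded to zero while the DC term survives, which is exactly the perceptually motivated loss the quantization matrix controls.
    
    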

  16. Fast Quaternion Attitude Estimation from Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
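    For reference, the classic TRIAD solution from two vector observations can be sketched directly. This is the textbook construction (orthonormal triads in both frames), not the paper's faster reformulation:

    ```python
    import math

    def _cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    def _unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    def triad(b1, b2, r1, r2):
        """Classic TRIAD: build orthonormal triads from body-frame measurements
        (b1, b2) and reference vectors (r1, r2); the attitude matrix
        A = sum_k t_kb t_kr^T maps reference-frame vectors into the body frame."""
        t1b, t1r = _unit(b1), _unit(r1)
        t2b, t2r = _unit(_cross(b1, b2)), _unit(_cross(r1, r2))
        t3b, t3r = _cross(t1b, t2b), _cross(t1r, t2r)
        body, ref = [t1b, t2b, t3b], [t1r, t2r, t3r]
        return [[sum(body[k][i] * ref[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # A 90-degree rotation about z: the body x-axis sees the reference y-axis.
    A = triad(b1=[0, -1, 0], b2=[0, 0, 1], r1=[1, 0, 0], r2=[0, 0, 1])
    ```

    Note the asymmetry: the first observation is matched exactly, while the second contributes only its component orthogonal to the first, which is why TRIAD is suboptimal compared with Wahba-criterion estimators.
    
    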

  17. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

    In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  18. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

    Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit and achieved great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization problems of elliptical orbit rendezvous with arbitrary eccentricity. Based on linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions, and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a combined method of a multiplier penalty function with the simplex search method is used for local optimization.
The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values supplied by primer vector theory, the local optimization algorithm improves the rendezvous accuracy effectively with fast convergence, because the optimal results obtained by primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  19. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated arc therapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed, using the structural and physiologic features extracted from Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 S&S IMRT, 30 1-arc VMAT, and 30 2-arc VMAT cases, each with first and final optimizations and with/without vector-model-supported optimization, were compared using the two-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable.
Vector-model-supported optimization was shown to offer much shortened planning time and iteration number without compromising the plan quality.

  20. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be chosen carefully, because the rankings and optimal networks change accordingly.
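    The entropy objectives are straightforward to compute once station records are quantized to discrete values. A minimal sketch of Shannon entropy and total correlation for candidate stations follows (the multiobjective search over station subsets is omitted):

    ```python
    import math
    from collections import Counter

    def entropy(samples):
        """Shannon entropy (bits) of a discrete sample sequence."""
        n = len(samples)
        return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

    def total_correlation(series):
        """Total correlation C = sum_i H(X_i) - H(X_1, ..., X_n): the
        redundancy shared among stations.  Joint samples are tuples of the
        per-station quantized values at each time step."""
        joint = list(zip(*series))
        return sum(entropy(s) for s in series) - entropy(joint)

    # Two quantized 'station' records; station B duplicates A, so all of its
    # information is redundant and total correlation equals H(A):
    a = [0, 1, 0, 1, 1, 0, 1, 0]
    b = a[:]
    c_dup = total_correlation([a, b])
    ```

    A design that maximizes joint entropy while minimizing total correlation therefore prefers station sets whose records overlap as little as possible, which is exactly the redundancy-avoidance criterion described above.
    
    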

  1. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for the minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be chosen carefully, because the rankings and optimal networks change accordingly.

  2. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to diagnostic problems in complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the hidden Markov model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a hidden Markov model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the HMM classifier. Introducing dynamic time warping solves the problem of feature extraction from the dynamic process vectors of complex systems such as aeroengines, and makes it feasible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
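    The DTW preprocessing step can be sketched with the classic dynamic-programming recurrence (a generic implementation, not the paper's exact variant):

    ```python
    def dtw_distance(s, t, dist=lambda a, b: abs(a - b)):
        """Classic dynamic time warping: minimum cumulative alignment cost
        between two sequences, allowing local stretching/compression of the
        time axis."""
        n, m = len(s), len(t)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = dist(s[i - 1], t[j - 1]) + min(D[i - 1][j],      # insertion
                                                         D[i][j - 1],      # deletion
                                                         D[i - 1][j - 1])  # match
        return D[n][m]

    # The second series is a time-stretched copy of the first, so DTW sees
    # them as identical while a pointwise comparison would not:
    d = dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0])
    ```

    This invariance to local time-axis distortion is what lets the classifier compare dynamic process vectors of different speeds before SOFM quantization.
    
    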

  3. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
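    The matrix-embedding special case mentioned above is easy to make concrete with the [7,4] Hamming code: any 3-bit message can be embedded in 7 cover bits by changing at most one of them. The paper's contribution replaces this block code with a convolutional code whose optimal quantizer is the Viterbi algorithm; the sketch below shows only the simple block-code case:

    ```python
    H = [[0, 0, 0, 1, 1, 1, 1],      # [7,4] Hamming parity-check matrix;
         [0, 1, 1, 0, 0, 1, 1],      # column i is the 3-bit binary
         [1, 0, 1, 0, 1, 0, 1]]      # expansion of i + 1 (MSB first)

    def syndrome(x):
        return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

    def embed(cover, message):
        """Flip at most one cover bit so that syndrome(stego) == message."""
        s = [a ^ b for a, b in zip(syndrome(cover), message)]
        stego = cover[:]
        pos = s[0] * 4 + s[1] * 2 + s[2]      # 0 means no change is needed
        if pos:
            stego[pos - 1] ^= 1
        return stego

    cover   = [1, 0, 1, 1, 0, 0, 1]
    message = [0, 1, 1]
    stego   = embed(cover, message)
    changes = sum(a != b for a, b in zip(cover, stego))
    ```

    The recipient recovers the message as the syndrome of the stego bits, with no shared key beyond the code itself; the trellis construction generalizes this so the single flipped position can also be chosen to minimize a per-element distortion.
    
    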

  4. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    To make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The standard SVM is extended to a nonlinear, multi-class SVM, a PSO-optimized multi-class SVM model is established, and transformer fault diagnosis is carried out in combination with the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
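    The PSO component can be sketched generically. In the paper the objective would be cross-validated SVM accuracy over the penalty and kernel parameters; the self-contained sketch below substitutes a stand-in quadratic objective so it runs without any SVM library:

    ```python
    import random

    def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimization over box-bounded variables."""
        random.seed(1)
        dim = len(bounds)
        pos = [[random.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest, pbest_val = [p[:] for p in pos], [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                        bounds[d][0]), bounds[d][1])
                val = f(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Stand-in objective with its minimum at (3, -2), e.g. (log2 C, log2 gamma):
    best, val = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                             bounds=[(-5, 10), (-10, 5)])
    ```

    Swapping the stand-in objective for a cross-validation score turns this directly into SVM hyperparameter tuning of the kind the paper describes.
    
    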

  5. Generalized Hamming networks and applications.

    PubMed

    Koutroumbas, Konstantinos; Kalouptsidis, Nicholas

    2005-09-01

    In this paper the classical Hamming network is generalized in various ways. First, a generalized model of the Hamming maxnet is proposed, which covers under its umbrella most existing versions of the Hamming maxnet. The network dynamics are time varying, and the commonly used ramp function may be replaced by a much more general nonlinear function. Also, the weight parameters of the network are time varying. A detailed convergence analysis is provided. A bound on the number of iterations required for convergence is derived, and its distribution functions are given for the cases where the initial values of the nodes of the Hamming maxnet stem from the uniform and the peak distributions. Stabilization mechanisms aiming to prevent the node(s) with the maximum initial value from diverging to infinity or decaying to zero are described. Simulations demonstrate the advantages of the proposed extension. Also, a rough comparison between the proposed generalized scheme and the original Hamming maxnet and its variants is carried out in terms of the time required for convergence in hardware implementations. Finally, the other two parts of the Hamming network, namely the competitor-generating module and the decoding module, are briefly considered in the framework of various applications such as classification/clustering, vector quantization, and function optimization.
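    The classical (non-generalized) Hamming maxnet iteration that the paper extends can be sketched as follows; the fixed `eps` and plain ramp are exactly what the generalization replaces with time-varying counterparts:

    ```python
    def maxnet(values, eps=None, ramp=lambda v: max(v, 0.0)):
        """Classical Hamming maxnet winner-take-all: each node repeatedly
        subtracts eps times the sum of all other activations, clipped by the
        ramp function, until only one node remains positive."""
        x = list(values)
        n = len(x)
        if eps is None:
            eps = 1.0 / n                      # satisfies 0 < eps < 1/(n - 1)
        iterations = 0
        while sum(v > 0.0 for v in x) > 1:
            total = sum(x)
            x = [ramp(v - eps * (total - v)) for v in x]
            iterations += 1
        return x, iterations

    x, steps = maxnet([0.5, 0.9, 0.7, 0.2])
    winner = max(range(len(x)), key=lambda i: x[i])
    ```

    With distinct positive initial values and eps below 1/(n − 1), all nodes except the maximum decay to zero in finitely many iterations; the paper's bound concerns exactly this iteration count.
    
    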

  6. A discrete mechanics approach to dislocation dynamics in BCC crystals

    NASA Astrophysics Data System (ADS)

    Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.

    2007-03-01

    A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.

  7. An Alternative to the Gauge Theoretic Setting

    NASA Astrophysics Data System (ADS)

    Schroer, Bert

    2011-10-01

    The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical access in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict creating pointlike generation with the tightest consistent localization: semiinfinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.

  8. Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge

    NASA Astrophysics Data System (ADS)

    Lee, Taejin

    2017-12-01

    We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian Yang-Mills gauge theory in the zero-slope limit. Making use of the first-quantized open bosonic string in the proper-time gauge, we convert the string amplitudes, given by Polyakov path integrals on string world sheets, into those of the second-quantized theory. The world-sheet diagrams generated by the constructed open string field theory are planar, in contrast to those of Witten's cubic string field theory. The constructed string field theory is nevertheless equivalent to Witten's cubic string field theory. Having obtained planar diagrams, we may adopt light-cone string field theory techniques to calculate multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely with those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.

  9. Metaplectic-c Quantomorphisms

    NASA Astrophysics Data System (ADS)

    Vaughan, Jennifer

    2015-03-01

    In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).

  10. Classical Field Theory and the Stress-Energy Tensor

    NASA Astrophysics Data System (ADS)

    Swanson, Mark S.

    2015-09-01

    This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).

  11. A low complexity, low spur digital IF conversion circuit for high-fidelity GNSS signal playback

    NASA Astrophysics Data System (ADS)

    Su, Fei; Ying, Rendong

    2016-01-01

    A low-complexity, high-efficiency, and low-spur digital intermediate frequency (IF) conversion circuit is discussed in this paper. The circuit is a key element of a high-fidelity GNSS signal playback instrument. We analyze the spur performance of a finite-state-machine (FSM) based numerically controlled oscillator (NCO); by optimizing the control algorithm, an FSM-based NCO with 3 quantization stages achieves 65 dB SFDR up to the seventh harmonic. Compared with a traditional lookup-table-based NCO design with the same spurious-free dynamic range (SFDR) performance, the logic resources required to implement the NCO are reduced to 1/3. The proposed design method can be extended to IF conversion systems requiring good SFDR over higher harmonic components by increasing the number of quantization stages.
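
    As a point of comparison, the conventional lookup-table NCO that the FSM design is measured against can be sketched behaviorally as follows. The bit widths here are illustrative choices, not the paper's; the paper's FSM-based design avoids storing the sine table at all.

```python
import math

def nco_samples(freq_word, n, acc_bits=32, table_bits=8, amp_bits=10):
    """Lookup-table NCO: a phase accumulator wraps modulo 2**acc_bits each
    sample; its top table_bits bits index a sine table quantized to amp_bits.
    Phase truncation and amplitude quantization are what produce the spurs."""
    table_size = 1 << table_bits
    scale = (1 << (amp_bits - 1)) - 1
    table = [round(scale * math.sin(2 * math.pi * k / table_size))
             for k in range(table_size)]
    acc, out = 0, []
    for _ in range(n):
        out.append(table[acc >> (acc_bits - table_bits)])  # phase truncation
        acc = (acc + freq_word) & ((1 << acc_bits) - 1)    # modulo-2**acc_bits wrap
    return out

# Frequency word for f_out = f_s / 8: one full sine period every 8 samples.
samples = nco_samples(freq_word=1 << 29, n=8)
```

    An SFDR analysis like the paper's would take the FFT of a long run of such samples and compare the carrier bin with the largest spur.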

  12. Strapdown system performance optimization test evaluations (SPOT), volume 1

    NASA Technical Reports Server (NTRS)

    Blaha, R. J.; Gilmore, J. P.

    1973-01-01

    A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.

  13. Interactive optimization approach for optimal impulsive rendezvous using primer vector and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin

    2010-08-01

    In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. Whether to add a midcourse impulse is determined by an interactive method, i.e. by observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated on three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design, and that it can guarantee solutions which satisfy Lawden's necessary optimality conditions.

  14. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem of dengue fever transmission. We classify the population in the model into human and vector (mosquito) classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two optimal control variables in the model: fogging and vaccination. The objective of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We consider vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We use the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.

  15. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.

  16. Mathematics of Quantization and Quantum Fields

    NASA Astrophysics Data System (ADS)

    Dereziński, Jan; Gérard, Christian

    2013-03-01

    Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.

  17. Perceptual distortion analysis of color image VQ-based coding

    NASA Astrophysics Data System (ADS)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that a RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer, to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color space.

  18. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
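
    As background for the adaptive scheme above, a generic fixed-block VQ round-trip might look like the following LBG/k-means sketch. This is a baseline stand-in: the paper's single-pass algorithm instead grows a codebook of variable size and shape entries online, with no separate training phase. The toy 2-D data below is illustrative.

```python
import random

random.seed(0)

def train_codebook(vectors, k, iters=20):
    """Plain k-means (Lloyd/LBG-style) codebook training for vector quantization."""
    codebook = random.sample(vectors, k)
    for _ in range(iters):
        # Assignment step: map each training vector to its nearest codeword.
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            clusters[i].append(v)
        # Update step: move each codeword to the centroid of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                codebook[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return codebook

def quantize(v, codebook):
    """Encode a vector as the index of its nearest codeword."""
    return min(range(len(codebook)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))

# Toy training set: 2-D "blocks" clustered near (0, 0) and (10, 10).
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] + \
       [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(50)]
cb = train_codebook(data, k=2)
```

    In image coding, the vectors would be pixel blocks, and only the codeword indices (plus the codebook) are transmitted.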

  19. Theoretical nuclear physics

    NASA Astrophysics Data System (ADS)

    Rost, E.; Shephard, J. R.

    1992-08-01

    This report discusses the following topics: exact 1-loop vacuum polarization effects in 1 + 1 dimensional QHD; exact 1-fermion-loop contributions in 1 + 1 dimensional solitons; exact scalar 1-loop contributions in 1 + 3 dimensions; exact vacuum calculations in a hyper-spherical basis; relativistic nuclear matter with self-consistent correlation energy; consistent RHA-RPA for finite nuclei; transverse response functions in the Δ-resonance region; hadronic matter in a nontopological soliton model; scalar and vector contributions to the p̄p → Λ̄Λ reaction; 0+ and 2+ strengths in pion double-charge exchange to double giant-dipole resonances; and nucleons in a hybrid sigma model including a quantized pion field.

  20. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  1. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer, which is often translated as the transfer requiring minimum propellant consumption. To ensure that the selected trajectory meets this requirement, the optimality of the transfer should first be analyzed, either by directly calculating the ΔV of the candidate trajectories and selecting the one that gives the minimum value, or by evaluating the trajectory according to certain criteria of optimality. The second method analyzes the profile of the modulus of the thrust direction vector, which is known as the primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify whether the result from the direct method is truly optimal, or whether the ΔV can be reduced further by applying a correction maneuver to the reference trajectory. In addition to its capability to evaluate transfer optimality without the need to calculate the transfer ΔV, the primer vector also enables us to identify the time and position at which to apply a correction maneuver in order to improve a non-optimal transfer. This paper presents the analytical approach to the fuel-optimal impulsive transfer using the primer vector method. The validity of the method is confirmed by comparing its results to those from the numerical method. An investigation of the optimality of direct transfers is used as an example application. The case under study is prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.

  2. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  3. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  4. Path integral solution for a Klein-Gordon particle in vector and scalar deformed radial Rosen-Morse-type potentials

    NASA Astrophysics Data System (ADS)

    Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.

    2017-12-01

    The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q ≤ −1 and (1/2α) ln|q| < r < ∞, it is shown that the quantization conditions for the bound state energy levels E_{nr} are transcendental equations which can be solved numerically. Three special cases, namely the standard radial Manning-Rosen potential (|q| = 1), the standard radial Rosen-Morse potential (V2 → −V2, q = 1) and the radial Eckart potential (V1 → −V1, q = 1), are also briefly discussed.

  5. Detection of laryngeal function using speech and electroglottographic data.

    PubMed

    Childers, D G; Bae, K S

    1992-01-01

    The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ) and 2) analysis of the EGG signal using time interval and amplitude difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity wave-form by inverse filtering techniques. The EGG procedure was to evaluate data that was "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for providing an excellent assessment of laryngeal function. A threshold based procedure gave 75.9 and 69.0% probability of pathological detection using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.

  6. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
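
    The global normality profile idea can be sketched as follows, assuming a codebook has already been trained on normal-condition data (by SOM, WTA, FSCL, or NGA). The codewords and state vectors below are toy stand-ins, not network data:

```python
def quant_error(v, codebook):
    """Quantization error of a state vector: Euclidean distance to the
    nearest codeword."""
    return min(sum((a - b) ** 2 for a, b in zip(v, c)) ** 0.5 for c in codebook)

def build_profile(normal_vectors, codebook, percentile=95):
    """Global normality profile: an upper percentile of the quantization
    errors observed on normal-condition training vectors."""
    errs = sorted(quant_error(v, codebook) for v in normal_vectors)
    idx = min(len(errs) - 1, int(len(errs) * percentile / 100))
    return errs[idx]

def is_abnormal(v, codebook, threshold):
    """Flag a state vector whose quantization error exceeds the profile."""
    return quant_error(v, codebook) > threshold

# Toy 2-D stand-ins: in the paper, codewords come from competitive training on
# simulated CDMA2000 state vectors, and local profiles repeat this per component.
codebook = [(0.0, 0.0), (1.0, 1.0)]
normal = [((0.1 * i) % 1.0, (0.1 * i) % 1.0) for i in range(100)]
thr = build_profile(normal, codebook)
```

    The component-wise local profiles work the same way, but on the per-dimension error |v_d − c_d| of the winning codeword, which localizes which state variable is abnormal.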

  7. Meson effective mass in the isospin medium in hard-wall AdS/QCD model

    NASA Astrophysics Data System (ADS)

    Mamedov, Shahin

    2016-02-01

    We study the mass splitting of the light vector, axial-vector, and pseudoscalar mesons in the isospin medium in the framework of the hard-wall model. We write an effective mass definition for the interacting gauge fields and scalar field introduced in the gauge field theory in the bulk of AdS space-time. Relying on holographic duality, we obtain a formula for the effective mass of a boundary meson in terms of a derivative operator over the extra bulk coordinate. The effective mass found in this way coincides with the one obtained from the poles of the two-point correlation function. In order to avoid introducing distinct infrared boundaries in the quantization formula for the different mesons of the same isotriplet, we introduce extra action terms at this boundary, which reduce the distinct values of this boundary to a common value. Profile function solutions and effective mass expressions are found for the in-medium ρ, a_1, and π mesons.

  8. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (tiny YOLO) and YOLO architectures, the proposed method achieves comparable accuracy to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
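
    A per-tensor symmetric uniform quantizer illustrates the basic weight-quantization step. Note this is a simplified stand-in: the paper adapts quantization intervals and step sizes per layer from estimated piecewise Gaussian models, whereas this sketch just derives the step from the max-absolute range of a toy weight list.

```python
def quantize_uniform(weights, n_bits):
    """Symmetric uniform quantization of a weight tensor: derive a step size
    from the max-absolute range, round to n_bits signed integer levels, then
    dequantize to simulate fixed-point inference."""
    qmax = (1 << (n_bits - 1)) - 1            # e.g. 7 positive levels for 4 bits
    step = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / step))) for w in weights]
    deq = [qi * step for qi in q]              # reconstructed values
    return q, deq, step

# Toy weights standing in for one convolution layer's tensor.
w = [0.31, -0.77, 0.05, 0.52, -0.11]
q4, deq4, step = quantize_uniform(w, n_bits=4)
```

    With this scheme the rounding error per weight is bounded by half the step size, which is why fitting the step to the actual weight distribution (as the paper does) matters for accuracy at 4 bits.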

  9. Phase Quantization Study of Spatial Light Modulator for Extreme High-contrast Imaging

    NASA Astrophysics Data System (ADS)

    Dou, Jiangpei; Ren, Deqing

    2016-11-01

    Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problems that is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10⁻¹⁰. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10⁻¹⁰ in comparison to that by using a deformable mirror.

  10. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows the bit rate to be reduced to the target one. If mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. To select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
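
    The Lagrangian selection rule can be sketched directly: for each candidate QP, measure the resulting distortion and rate, and keep the QP minimizing J = D + λR. The per-QP numbers below are hypothetical, standing in for measurements produced by re-running the transform/quantization path on the same residuals:

```python
def select_qp(candidates, lam):
    """Rate-distortion selection: choose the quantization parameter (QP)
    minimizing the Lagrangian cost J = D + lam * R."""
    return min(candidates,
               key=lambda qp: candidates[qp][0] + lam * candidates[qp][1])

# Hypothetical (distortion, rate-in-bits) pairs for one block at three QPs.
rd = {22: (10.0, 900), 27: (40.0, 400), 32: (160.0, 150)}
low_rate_qp = select_qp(rd, lam=1.0)       # a large lambda favors fewer bits
high_quality_qp = select_qp(rd, lam=0.01)  # a small lambda favors low distortion
```

    The Lagrange multiplier is what trades quality against rate: sweeping it moves the selected QP along the rate-distortion curve, which is why the hardware must be able to evaluate several QPs per block.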

  11. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  12. Vector-model-supported optimization in volumetric-modulated arc stereotactic radiotherapy planning for brain metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    Long planning time in volumetric-modulated arc stereotactic radiotherapy (VMA-SRT) cases can limit its clinical efficiency and use. A vector model could retrieve previously successful radiotherapy cases that share various common anatomic features with the current case. The present study aimed to develop a vector model that could reduce planning time by applying the optimization parameters from those retrieved reference cases. Thirty-six VMA-SRT cases of brain metastasis (gender, male [n = 23], female [n = 13]; age range, 32 to 81 years old) were collected and used as a reference database. Another 10 VMA-SRT cases were planned with both conventional optimization and vector-model-supported optimization, following the oncologists' clinical dose prescriptions. Planning time and plan quality measures were compared using the 2-sided paired Wilcoxon signed rank test with a significance level of 0.05, with positive false discovery rate (pFDR) of less than 0.05. With vector-model-supported optimization, there was a significant reduction in the median planning time, a 40% reduction from 3.7 to 2.2 hours (p = 0.002, pFDR = 0.032), and for the number of iterations, a 30% reduction from 8.5 to 6.0 (p = 0.006, pFDR = 0.047). The quality of plans from both approaches was comparable. From these preliminary results, vector-model-supported optimization can expedite the optimization of VMA-SRT for brain metastasis while maintaining plan quality.

  13. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    Full Spectrum Conversion Using Traveling Pulse Wave Quantization. Michael S. Kappes, Mikko E. Waltari, IQ-Analog Corporation, San Diego, California ... temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete ... pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a ...

  14. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.

  15. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
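    The uniform white-sequence model for quantizer error predicts an error variance of step²/12, which fixes the effective SNR. A quick numerical check, assuming a mid-tread uniform quantizer and a uniform input (both assumptions mine, not the paper's setup):

```python
import math
import random

def quantize(x, step):
    """Mid-tread uniform quantizer: round to the nearest multiple of step."""
    return step * round(x / step)

random.seed(0)
step = 0.1
samples = [random.uniform(-1, 1) for _ in range(100_000)]
errors = [x - quantize(x, step) for x in samples]

emp_var = sum(e * e for e in errors) / len(errors)
model_var = step ** 2 / 12                      # white-sequence model: step^2/12
signal_var = sum(x * x for x in samples) / len(samples)
snr_db = 10 * math.log10(signal_var / emp_var)  # effective SNR in dB
print(f"error var {emp_var:.3e} vs model {model_var:.3e}, SNR {snr_db:.1f} dB")
```

    The empirical error variance matches step²/12 closely, which is why the effective SNR can stand in for the quantizer when predicting loop performance from infinitely fine quantized results.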

  16. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT:CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  17. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product) are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star product of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization are treated. It turns out that all three are equivalent, but different.

  18. Dielectric properties of classical and quantized ionic fluids.

    PubMed

    Høye, Johan S

    2010-06-01

    We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we will focus upon the presence of a time-dependent electromagnetic pair interaction where the electromagnetic vector potential that depends upon currents, will be present. Thus both density and current correlations are needed to evaluate the influence of this interaction. Then we utilize that densities and currents can be expressed by polarizations by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode for the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules where now retardation effects also are taken into account.

  19. First integrals of motion in a gauge covariant framework, Killing-Maxwell system and quantum anomalies

    NASA Astrophysics Data System (ADS)

    Visinescu, M.

    2012-10-01

    Hidden symmetries in a covariant Hamiltonian framework are investigated. The special role of the Stäckel-Killing and Killing-Yano tensors is pointed out. The covariant phase space is extended to include external gauge fields and scalar potentials. We investigate the possibility for a higher-order symmetry to survive when electromagnetic interactions are taken into account. A concrete realization of this possibility is given by the Killing-Maxwell system. The classical conserved quantities do not generally transfer to the quantized systems, producing quantum gravitational anomalies. As a rule, the conformal extension of the Killing vectors and tensors does not produce symmetry operators for the Klein-Gordon operator.

  20. Asymptotics of the evolution semigroup associated with a scalar field in the presence of a non-linear electromagnetic field

    NASA Astrophysics Data System (ADS)

    Albeverio, Sergio; Tamura, Hiroshi

    2018-04-01

    We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).

  1. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
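    A heavily simplified sketch of the single-pass adaptive idea: grow the codebook when no existing codeword is near the input, otherwise nudge the winner toward it. The threshold and learning rate are invented here, and this is not the authors' algorithm:

```python
import random

def online_vq(data, threshold=1.0, lr=0.1):
    """Single-pass adaptive VQ sketch: add a codeword when the nearest
    one is farther than `threshold`, else adapt the winner toward the
    input vector."""
    codebook = [list(data[0])]
    for vec in data[1:]:
        dists = [sum((a - b) ** 2 for a, b in zip(vec, c)) for c in codebook]
        i = min(range(len(codebook)), key=dists.__getitem__)
        if dists[i] > threshold ** 2:
            codebook.append(list(vec))
        else:
            codebook[i] = [c + lr * (a - c) for a, c in zip(vec, codebook[i])]
    return codebook

random.seed(1)
# two well-separated clusters of 2-D vectors
data = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)] + \
       [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(50)]
cb = online_vq(data)
print(len(cb))   # one codeword per cluster, learned with no training phase
```

    The variable-size/shape entries of the actual algorithm go beyond this sketch, but the no-prior-training property is the same.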

  2. Coherent distributions for the rigid rotator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigorescu, Marius

    2016-06-15

    Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.

  3. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  4. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  5. Quantum theory of structured monochromatic light

    NASA Astrophysics Data System (ADS)

    Punnoose, Alexander; Tu, J. J.

    2017-08-01

    Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work form a natural basis to study structured monochromatic light at the single photon level.

  6. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that plays a role of an efficient tool for treating the current quantization effects.

  7. Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    NASA Astrophysics Data System (ADS)

    Ahn, Chang-Jun

    In MIMO systems, the deployment of a multiple antenna technique can enhance system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmit antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmit antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques, where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing due to the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5-10 wavelengths to keep the correlation coefficient below 0.7 and achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To reduce these problems, in this paper we propose link correlation based transmit sector antenna selection for Alamouti coded OFDM without FBI.

  8. Effective Clipart Image Vectorization through Direct Optimization of Bezigons.

    PubMed

    Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian

    2016-02-01

    Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable because they were traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.

  9. Detection of potential mosquito breeding sites based on community sourced geotagged images

    NASA Astrophysics Data System (ADS)

    Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna

    2014-06-01

    Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better and informed data-driven decisions. Our work shows how geotagged images collected from the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions like blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag of visual words model. We train an SVM classifier on a histogram-based feature vector, obtained after the vector quantization of SIFT features, to discriminate images containing either a small stagnant water body like a puddle, or open containers, tires, bushes, etc., from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using feature-level fusion or the max approach presented in the paper. The heat map thus generated can be used by concerned health authorities to take appropriate action and to promote civic awareness.
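    The vector-quantization step that turns local descriptors into a bag-of-visual-words histogram can be sketched as follows (toy 2-D vectors stand in for SIFT descriptors, and the codebook is hypothetical):

```python
def bow_histogram(descriptors, codebook):
    """Vector-quantize local descriptors (e.g. SIFT) against a visual-word
    codebook and return a normalized histogram of word counts."""
    hist = [0] * len(codebook)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, w)) for w in codebook]
        hist[min(range(len(codebook)), key=dists.__getitem__)] += 1
    total = sum(hist)
    return [h / total for h in hist]

# toy 2-D "descriptors" and a 3-word codebook (illustrative stand-ins)
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descs = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.05, 0.0)]
print(bow_histogram(descs, codebook))
```

    The resulting fixed-length histogram is what the SVM classifier consumes, regardless of how many descriptors each image produced.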

  10. Velocity-tunable slow beams of cold O2 in a single spin-rovibronic state with full angular-momentum orientation by multistage Zeeman deceleration

    NASA Astrophysics Data System (ADS)

    Wiederkehr, A. W.; Schmutz, H.; Motsch, M.; Merkt, F.

    2012-08-01

    Cold samples of oxygen molecules in supersonic beams have been decelerated from initial velocities of 390 and 450 m s-1 to final velocities in the range between 150 and 280 m s-1 using a 90-stage Zeeman decelerator. (2 + 1) resonance-enhanced-multiphoton-ionization (REMPI) spectra of the 3sσg ³Πg (C) ? two-photon transition of O2 have been recorded to characterize the state selectivity of the deceleration process. The decelerated molecular sample was found to consist exclusively of molecules in the J″ = 2 spin-rotational component of the X ? ground state of O2. Measurements of the REMPI spectra using linearly polarized laser radiation with polarization vector parallel to the decelerator axis, and thus to the magnetic-field vector of the deceleration solenoids, further showed that only the ? magnetic sublevel of the N″ = 1, J″ = 2 spin-rotational level is populated in the decelerated sample, which therefore is characterized by a fully oriented total-angular-momentum vector. By maintaining a weak quantization magnetic field beyond the decelerator, the polarization of the sample could be maintained over the 5 cm distance separating the last deceleration solenoid and the detection region.

  11. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
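    The core LVQ update behind such data reduction can be sketched as a single LVQ1 step: the winning prototype moves toward a correctly-labeled input and away from a mislabeled one. Prototype positions, labels, and the learning rate below are illustrative, not SSME data:

```python
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: move the winning prototype toward x if its label
    matches y, away from x otherwise. Mutates `prototypes` in place."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
    i = min(range(len(prototypes)), key=dists.__getitem__)
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = [p + sign * lr * (a - p) for a, p in zip(x, prototypes[i])]
    return i

protos = [[0.0, 0.0], [1.0, 1.0]]
labels = ["nominal", "fault"]
winner = lvq1_step(protos, labels, x=[0.2, 0.0], y="nominal")
print(winner, protos[0])
```

    After training, the prototype set is a small codebook that can replace the full set of training vectors, which is the compression effect exploited above.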

  12. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching between vector to single track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting, presenting here its initial results.

  13. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to a lot of online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for a general purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced because of the quantization process, where the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction" like process for NDR, which extends the concept of collocations to the visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by a factor of 1,000, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by a color-moment feature, which reduces the time cost by 9,202% while maintaining comparable performance to the state-of-the-art methods.
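    The binary-quadratic-programming formulation can be illustrated on a toy instance, brute-forced here because real instances need a proper BQP solver; the unary and pairwise (collocation) scores are invented for the example:

```python
from itertools import product

def select_words(unary, pair):
    """Brute-force the tiny binary quadratic program
    max_z  sum_i unary[i]*z_i + sum_{i<j} pair[i][j]*z_i*z_j,  z_i in {0,1},
    a toy stand-in for enforcing contextual consistency among candidate
    visual words."""
    n = len(unary)
    def score(z):
        s = sum(u * zi for u, zi in zip(unary, z))
        s += sum(pair[i][j] * z[i] * z[j]
                 for i in range(n) for j in range(i + 1, n))
        return s
    return max(product((0, 1), repeat=n), key=score)

# words 0 and 1 collocate strongly; word 2 conflicts with both
unary = [0.2, 0.2, 0.3]
pair = [[0, 1.0, -2.0],
        [0, 0, -2.0],
        [0, 0, 0]]
print(select_words(unary, pair))
```

    Word 2 has the best individual score but is rejected because its negative pairwise terms make any assignment containing it score worse than the consistent pair {0, 1}.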

  14. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
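    The mechanics of per-coefficient quantization with a step matrix, as in JPEG, can be sketched as follows. The step values here are arbitrary; in the approach above they would be set so each coefficient's quantization error sits at the visibility threshold:

```python
def quantize_block(coeffs, qmatrix):
    """Quantize DCT coefficients with a per-frequency step matrix:
    q = round(c / Q); reconstruction is c' = q * Q."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

coeffs = [[235.0, -22.0], [-11.0, 4.0]]       # toy 2x2 "block" of DCT coefficients
qmat = [[16, 11], [12, 10]]                    # steps a visual model would supply
q = quantize_block(coeffs, qmat)
recon = [[qi * s for qi, s in zip(qrow, srow)] for qrow, srow in zip(q, qmat)]
print(q, recon)
```

    The difference between `coeffs` and `recon` is the quantization error the detection model is designed to keep invisible for given display parameters.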

  15. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
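    The particle-in-a-box model invoked above gives levels E_n = n²π²ħ²/(2mL²), so the number and spacing of quantized VB states depend on the width L of the confining region. A quick numerical illustration (free-electron mass assumed for simplicity, whereas the real system has an effective mass; well widths are hypothetical):

```python
import math

HBAR = 1.054_571_817e-34     # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31     # free-electron mass, kg (effective mass differs)
EV = 1.602_176_634e-19       # J per eV

def box_levels(width_m, n_max=3):
    """Particle-in-a-box energies E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV."""
    return [(n ** 2) * math.pi ** 2 * HBAR ** 2 / (2 * M_E * width_m ** 2) / EV
            for n in range(1, n_max + 1)]

# a deeper dopant layer means a wider well and hence smaller level spacing
for L in (2e-9, 4e-9):
    print(f"L = {L:.0e} m:", [f"{e:.3f} eV" for e in box_levels(L)])
```

    Doubling the well width quarters every level, mirroring the reported dependence of the VB state energies on the dopant layer depth.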

  16. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  17. Research on intrusion detection based on Kohonen network and support vector machine

    NASA Astrophysics Data System (ADS)

    Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi

    2018-05-01

    A support vector machine applied directly to network intrusion detection suffers from low detection accuracy and long detection time. Optimizing the SVM parameters can greatly improve the detection accuracy, but the long optimization time makes the approach unsuitable for high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter-optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are computed by a Kohonen network and features are selected by weight. Then, after feature selection, a genetic algorithm (GA) and grid search are used for parameter optimization to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter-optimization time with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems while reducing the miss rate.
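    The select-features-first-then-tune pipeline can be sketched with simple stand-ins: a Fisher score replaces the Kohonen-derived weights, and a k-NN classifier with leave-one-out scoring replaces the SVM/GA machinery, so the example stays dependency-free. All data and parameters are synthetic:

```python
import random

def fisher_scores(X, y):
    """Per-feature class-separation score (m0 - m1)^2 / (v0 + v1 + eps),
    a stand-in for the Kohonen-derived feature weights."""
    scores = []
    for j in range(len(X[0])):
        a = [x[j] for x, c in zip(X, y) if c == 0]
        b = [x[j] for x, c in zip(X, y) if c == 1]
        m0, m1 = sum(a) / len(a), sum(b) / len(b)
        v0 = sum((v - m0) ** 2 for v in a) / len(a)
        v1 = sum((v - m1) ** 2 for v in b) / len(b)
        scores.append((m0 - m1) ** 2 / (v0 + v1 + 1e-12))
    return scores

def knn_loo_accuracy(X, y, k):
    """Leave-one-out accuracy of k-NN: the inner objective a grid search
    evaluates for each hyper-parameter setting."""
    hits = 0
    for i in range(len(X)):
        d = sorted((sum((a - b) ** 2 for a, b in zip(X[i], X[j])), y[j])
                   for j in range(len(X)) if j != i)
        votes = [lab for _, lab in d[:k]]
        hits += (votes.count(1) > k // 2) == (y[i] == 1)
    return hits / len(X)

random.seed(2)
# feature 0 is informative, feature 1 is pure noise
X = [[random.gauss(c, 0.3), random.gauss(0, 1)] for c in (0, 1) for _ in range(20)]
y = [0] * 20 + [1] * 20
keep = max(range(2), key=fisher_scores(X, y).__getitem__)   # feature selection
Xs = [[x[keep]] for x in X]
best_k = max((1, 3, 5), key=lambda k: knn_loo_accuracy(Xs, y, k))
print("kept feature", keep, "best k", best_k)
```

    Because the grid search runs on the reduced feature set, each inner evaluation is cheaper, which is the time saving the abstract reports.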

  18. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.

  19. Physiological Evidence for Isopotential Tunneling in the Electron Transport Chain of Methane-Producing Archaea

    PubMed Central

    Duszenko, Nikolas

    2017-01-01

Many, but not all, organisms use quinones to conserve energy in their electron transport chains. Fermentative bacteria and methane-producing archaea (methanogens) do not produce quinones but have devised other ways to generate ATP. Methanophenazine (MPh) is a unique membrane electron carrier found in Methanosarcina species that plays the same role as quinones in the electron transport chain. To extend the analogy between quinones and MPh, we compared the MPh pool sizes of two well-studied Methanosarcina species, Methanosarcina acetivorans C2A and Methanosarcina barkeri Fusaro, to the quinone pool size in the bacterium Escherichia coli. We found the quantity of MPh per cell increases as cultures transition from exponential growth to stationary phase, and absolute quantities of MPh were 3-fold higher in M. acetivorans than in M. barkeri. The concentration of MPh suggests the cell membrane of M. acetivorans, but not of M. barkeri, is electrically quantized as if it were a single conductive metal sheet and near optimal for the rate of electron transport. Similarly, stationary (but not exponentially growing) E. coli cells also have electrically quantized membranes on the basis of quinone content. Consistent with our hypothesis, we demonstrated that the exogenous addition of phenazine increases the growth rate of M. barkeri to three times that of M. acetivorans. Our work suggests electron flux through MPh is naturally higher in M. acetivorans than in M. barkeri and that hydrogen cycling is less efficient at conserving energy than scalar proton translocation using MPh. IMPORTANCE Can we grow more from less? The ability to optimize and manipulate metabolic efficiency in cells is the difference between commercially viable and nonviable renewable technologies. Much can be learned from methane-producing archaea (methanogens), which evolved a successful metabolic lifestyle under extreme thermodynamic constraints.
Methanogens use highly efficient electron transport systems and supramolecular complexes to optimize electron and carbon flow to control biomass synthesis and the production of methane. Worldwide, methanogens are used to generate renewable methane for heat, electricity, and transportation. Our observations suggest Methanosarcina acetivorans, but not Methanosarcina barkeri, has electrically quantized membranes. Escherichia coli, a model facultative anaerobe, has optimal electron transport at the stationary phase but not during exponential growth. This study also suggests the metabolic efficiency of bacteria and archaea can be improved using exogenously supplied lipophilic electron carriers. The enhancement of methanogen electron transport through methanophenazine has the potential to increase renewable methane production at an industrial scale. PMID:28710268

  20. Physiological Evidence for Isopotential Tunneling in the Electron Transport Chain of Methane-Producing Archaea.

    PubMed

    Duszenko, Nikolas; Buan, Nicole R

    2017-09-15

Many, but not all, organisms use quinones to conserve energy in their electron transport chains. Fermentative bacteria and methane-producing archaea (methanogens) do not produce quinones but have devised other ways to generate ATP. Methanophenazine (MPh) is a unique membrane electron carrier found in Methanosarcina species that plays the same role as quinones in the electron transport chain. To extend the analogy between quinones and MPh, we compared the MPh pool sizes of two well-studied Methanosarcina species, Methanosarcina acetivorans C2A and Methanosarcina barkeri Fusaro, to the quinone pool size in the bacterium Escherichia coli. We found the quantity of MPh per cell increases as cultures transition from exponential growth to stationary phase, and absolute quantities of MPh were 3-fold higher in M. acetivorans than in M. barkeri. The concentration of MPh suggests the cell membrane of M. acetivorans, but not of M. barkeri, is electrically quantized as if it were a single conductive metal sheet and near optimal for the rate of electron transport. Similarly, stationary (but not exponentially growing) E. coli cells also have electrically quantized membranes on the basis of quinone content. Consistent with our hypothesis, we demonstrated that the exogenous addition of phenazine increases the growth rate of M. barkeri to three times that of M. acetivorans. Our work suggests electron flux through MPh is naturally higher in M. acetivorans than in M. barkeri and that hydrogen cycling is less efficient at conserving energy than scalar proton translocation using MPh. IMPORTANCE Can we grow more from less? The ability to optimize and manipulate metabolic efficiency in cells is the difference between commercially viable and nonviable renewable technologies. Much can be learned from methane-producing archaea (methanogens), which evolved a successful metabolic lifestyle under extreme thermodynamic constraints.
Methanogens use highly efficient electron transport systems and supramolecular complexes to optimize electron and carbon flow to control biomass synthesis and the production of methane. Worldwide, methanogens are used to generate renewable methane for heat, electricity, and transportation. Our observations suggest Methanosarcina acetivorans, but not Methanosarcina barkeri, has electrically quantized membranes. Escherichia coli, a model facultative anaerobe, has optimal electron transport at the stationary phase but not during exponential growth. This study also suggests the metabolic efficiency of bacteria and archaea can be improved using exogenously supplied lipophilic electron carriers. The enhancement of methanogen electron transport through methanophenazine has the potential to increase renewable methane production at an industrial scale. Copyright © 2017 American Society for Microbiology.

  1. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, and various parallel computers such as symmetric multiprocessors, workstation clusters, and massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng

DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on a DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  3. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

The problem of investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution, and the risk of the portfolio is measured using the Value-at-Risk (VaR). The optimization of the portfolio is thus based on the Mean-VaR portfolio optimization model, which is solved using a matrix-algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is a weighting-vector equation that depends on the mean return vector of the assets, the identity vector, and the covariance matrix of the asset returns, as well as on a risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight composition vector and the efficient-surface graph of the portfolio are obtained. The weight composition and the efficient-surface graph can serve as a guide for investors when making investment decisions.
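The structure of such a weighting-vector equation can be illustrated with the classical mean-variance analogue: maximizing τ·μᵀw − ½wᵀΣw subject to the weights summing to one gives w = Σ⁻¹(τμ + λ1), with the Lagrange multiplier λ fixed by the budget constraint. The sketch below uses this simplified model with hypothetical numbers; it is not the paper's exact Mean-VaR formulation or its Indonesian stock data.

```python
import numpy as np

def mean_variance_weights(mu, cov, tau):
    """Closed-form weights maximizing tau*mu'w - 0.5*w'Sigma*w s.t. sum(w) = 1."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    # Lagrange multiplier chosen so the weights sum to one
    lam = (1.0 - tau * ones @ inv @ mu) / (ones @ inv @ ones)
    return inv @ (tau * mu + lam * ones)

mu = np.array([0.10, 0.07, 0.12])          # hypothetical mean returns
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.025, 0.004],
                [0.010, 0.004, 0.050]])    # hypothetical covariance matrix
w = mean_variance_weights(mu, cov, tau=0.5)
# w sums to 1 by construction; sweeping tau traces out the efficient surface
```

Sweeping `tau` from 0 upward and recording the resulting portfolio mean and risk is one way to generate an efficient-frontier plot like the one described in the abstract.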

  4. Dimensional quantization effects in the thermodynamics of conductive filaments

    NASA Astrophysics Data System (ADS)

    Niraula, D.; Grice, C. R.; Karpov, V. G.

    2018-06-01

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  5. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  6. Dimensional quantization effects in the thermodynamics of conductive filaments.

    PubMed

    Niraula, D; Grice, C R; Karpov, V G

    2018-06-29

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  7. Optimized AAV rh.10 Vectors That Partially Evade Neutralizing Antibodies during Hepatic Gene Transfer

    PubMed Central

    Selot, Ruchita; Arumugam, Sathyathithan; Mary, Bertin; Cheemadan, Sabna; Jayandharan, Giridhara R.

    2017-01-01

Of the 12 common serotypes used for gene delivery applications, the adeno-associated virus (AAV) rh.10 serotype has shown sustained hepatic transduction and has the lowest seropositivity in humans. We have evaluated whether further modifications to AAVrh.10 at its phosphodegron-like regions or predicted immunogenic epitopes could improve its hepatic gene transfer and immune evasion potential. Mutant AAVrh.10 vectors were generated by site-directed mutagenesis of the predicted targets. These mutant vectors were first tested for their transduction efficiency in HeLa and HEK293T cells. The optimal vector was further evaluated for its cellular uptake, entry, and intracellular trafficking by quantitative PCR and time-lapse confocal microscopy. To evaluate its potential during hepatic gene therapy, C57BL/6 mice were administered wild-type or optimal mutant AAVrh.10, and luciferase transgene expression was documented by serial bioluminescence imaging at 14, 30, 45, and 72 days post-gene transfer. Hepatic transduction was further verified by quantitative PCR analysis of AAV copy number in liver tissue. The optimal AAVrh.10 vector was then evaluated for its immune escape potential in animals pre-immunized with human intravenous immunoglobulin. Our results demonstrate that a modified AAVrh.10 S671A vector had enhanced cellular entry (3.6-fold) and migrated rapidly to the perinuclear region (1 vs. >2 h for wild-type vectors) in vitro, which translates to a modest increase in hepatic gene transfer efficiency in vivo. More importantly, the mutant AAVrh.10 vector was able to partially evade neutralizing antibodies (~27–64 fold) in pre-immunized animals. The development of an AAV vector system that can escape the circulating neutralizing antibodies in the host will substantially widen the scope of gene therapy applications in humans. PMID:28769791

  8. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    NASA Astrophysics Data System (ADS)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, a local feature, rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process efficient, we extract SIFT keypoints in a hexagonal-sampling image. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed against the constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
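The quantize-then-histogram pipeline described above can be sketched with hard nearest-neighbor assignment in place of the paper's sparse-coding step. The dictionary and descriptors below are random stand-ins (real SIFT extraction and the hexagonal resampling are omitted), so this is only a structural illustration of how a Bag-of-Words vector is formed.

```python
import numpy as np

def bow_histogram(descriptors, dictionary):
    """Quantize each local descriptor to its nearest dictionary atom and
    return the normalized Bag-of-Words histogram for one image."""
    # squared Euclidean distance from every descriptor to every atom
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                       # hard vector quantization
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(64, 128))    # 64 visual words, 128-D (SIFT-sized)
descriptors = rng.normal(size=(200, 128))  # hypothetical keypoint descriptors
h = bow_histogram(descriptors, dictionary)
```

In the paper's variant, each keypoint contributes a sparse coefficient vector over the dictionary rather than a single nearest word, but the resulting per-image histogram is fed to the SVM in the same way.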

  9. Dynamics of entropy and nonclassical properties of the state of a Λ-type three-level atom interacting with a single-mode cavity field with intensity-dependent coupling in a Kerr medium

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2012-02-01

    In this paper, we study the interaction between a three-level atom and a quantized single-mode field with ‘intensity-dependent coupling’ in a ‘Kerr medium’. The three-level atom is considered to be in a Λ-type configuration. Under particular initial conditions, which may be prepared for the atom and the field, the dynamical state vector of the entire system will be explicitly obtained, for the arbitrary nonlinearity function f(n) associated with any physical system. Then, after evaluating the variation of the field entropy against time, we will investigate the quantum statistics as well as some of the nonclassical properties of the introduced state. During our calculations we investigate the effects of intensity-dependent coupling, Kerr medium and detuning parameters on the depth and domain of the nonclassicality features of the atom-field state vector. Finally, we compare our obtained results with those of V-type three-level atoms.

  10. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  11. Measuring and Modeling Shared Visual Attention

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Gontar, Patrick

    2016-01-01

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
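The one-hot encoding and time averaging described above are straightforward to sketch. The fixation sequences and window length below are hypothetical, and the similarity step follows the first of the two comparison methods (time-varying correlation of the averaged vectors).

```python
import numpy as np

def aoi_proportions(fixations, n_aoi, win):
    """One-hot encode a fixation sequence over N AOIs, then average over
    non-overlapping windows to get per-window gaze-proportion vectors."""
    onehot = np.eye(n_aoi)[fixations]                  # T x N indicator matrix
    T = (len(fixations) // win) * win                  # drop the ragged tail
    return onehot[:T].reshape(-1, win, n_aoi).mean(axis=1)

# hypothetical gaze samples for two crew members over 4 AOIs
a = np.array([0, 0, 1, 1, 2, 2, 3, 3, 0, 0, 1, 1])
b = np.array([0, 1, 1, 1, 2, 3, 3, 3, 0, 1, 1, 1])
pa, pb = aoi_proportions(a, 4, 4), aoi_proportions(b, 4, 4)
# time-varying similarity: correlation of the two proportion vectors per window
r = [np.corrcoef(x, y)[0, 1] for x, y in zip(pa, pb)]
```

Each row of `pa` and `pb` sums to one (it is a distribution over AOIs for that window), which is also the form needed for the chi-square comparison mentioned in the abstract.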

  12. Measuring and Modeling Shared Visual Attention

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2016-01-01

Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.

  13. Robust quantum optimizer with full connectivity

    PubMed Central

    Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.

    2017-01-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880

  14. Topological quantization in units of the fine structure constant.

    PubMed

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng

    2010-10-15

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  15. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  16. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. Using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar versions are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.

  17. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the built-in camera on the Android device, and face detection is then implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features: a binary Vector Quantization (VQ) histogram computed from DCT coefficients in the low-frequency domain, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
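One simple reading of a binary VQ histogram over low-frequency DCT coefficients is: take a few AC coefficients from each 8x8 block, binarize them by sign, and let the resulting bit pattern index a histogram bin. The specific coefficient selection and the sign-binarization rule below are assumptions for illustration, not the paper's actual codebook design.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def binary_vq_histogram(image, n_coef=4):
    """Sign-binarize a few low-frequency AC coefficients of each 8x8 block;
    the resulting bit pattern indexes a 2**n_coef-bin histogram."""
    D = dct_matrix()
    h, w = image.shape
    hist = np.zeros(2 ** n_coef)
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            c = D @ image[y:y + 8, x:x + 8] @ D.T       # 2-D DCT of the block
            low = np.array([c[0, 1], c[1, 0], c[1, 1], c[0, 2]])[:n_coef]
            code = int("".join("1" if v > 0 else "0" for v in low), 2)
            hist[code] += 1
    return hist / hist.sum()

rng = np.random.default_rng(1)
image = rng.random((32, 32))      # hypothetical grayscale face crop
h = binary_vq_histogram(image)    # 16-bin normalized feature vector
```

Two such histograms can then be compared (e.g. by histogram intersection or a learned threshold) and fused with the LBP-histogram score by weighted averaging, as the abstract describes.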

  18. Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication

    ERIC Educational Resources Information Center

    Wolf, Michael Maclean

    2009-01-01

    Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…

  19. A Hamiltonian approach to the planar optimization of mid-course corrections

    NASA Astrophysics Data System (ADS)

    Iorfida, E.; Palmer, P. L.; Roberts, M.

    2016-04-01

Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed in which, through a polar coordinate transformation, the primer vector components decouple. Furthermore, the case in which the transfer, departure and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector. Moreover, the solution for the circular transfer case is proven to be Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case of a fixed transfer trajectory with variable initial and final thrust impulses is studied. The resulting optimality maps, which express the likelihood of a set of trajectories being optimal, are presented and analyzed. Finally, the requirements that a set of departure and arrival orbits must fulfill in order to share the same primer vector profile are presented.

  20. Quantum Computing and Second Quantization

    DOE PAGES

    Makaruk, Hanna Ewa

    2017-02-10

Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.

  1. Quantum Computing and Second Quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makaruk, Hanna Ewa

Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.

  2. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study.

    PubMed

    Ortiz-Ramón, Rafael; Larroza, Andrés; Ruiz-España, Silvia; Arana, Estanislao; Moratal, David

    2018-05-14

To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. Sixty-seven untreated brain metastases (BM) were found in 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different numbers of gray levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under the receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray levels). Classification of breast cancer and melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). Volumetric MRI texture features can be useful for differentiating brain metastases from different primary cancers after quantizing the images with the proper number of gray levels. • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identify the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
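The gray-level requantization step the study tunes (e.g. to 32 levels) can be sketched as a uniform rebinning of image intensities, the usual preprocessing before computing texture features such as gray-level co-occurrence matrices. This is a generic implementation, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def quantize_gray_levels(img, levels=32):
    """Uniformly requantize an intensity image to a fixed number of gray
    levels, as done before computing texture (e.g. GLCM) features."""
    lo, hi = img.min(), img.max()
    q = np.floor((img - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.clip(q, 0, levels - 1)       # integer labels 0 .. levels-1

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # hypothetical intensity image
q = quantize_gray_levels(img, levels=32)
```

Repeating the downstream feature extraction at several `levels` settings (the study reports 32 working best) is how the influence of quantization was tested.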

  3. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix is computed using visual masking by luminance and contrast techniques and an error-pooling technique, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
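
    The core operation (quantizing each DCT coefficient by an entry of a quantization matrix) can be sketched as follows; the matrix Q here is a simple illustrative ramp that coarsens high frequencies, not the visually weighted matrix of the invention:

    ```python
    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II matrix: rows are frequencies, columns are samples."""
        k = np.arange(n)[:, None]
        x = np.arange(n)[None, :]
        c = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
        c[0, :] *= np.sqrt(1 / 2)
        return c * np.sqrt(2 / n)

    C = dct_matrix(8)
    # Hypothetical quantization matrix: coarser steps at higher frequencies.
    Q = 8.0 + 4.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])

    block = np.random.default_rng(1).integers(0, 256, size=(8, 8)).astype(float)
    coeffs = C @ block @ C.T            # forward 2-D DCT
    quantized = np.round(coeffs / Q)    # quantize each coefficient by its matrix entry
    recon = C.T @ (quantized * Q) @ C   # dequantize and invert the DCT
    ```

    Larger entries of Q discard more of the corresponding frequency, trading bit rate against perceptual error, which is exactly the trade-off the adapted matrix controls.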

  4. Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes

    NASA Astrophysics Data System (ADS)

    Saini, Sahil; Singh, Parampreet

    2018-03-01

    We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization, whereas in the ‘K’ quantization these results can be obtained without inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.

  5. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772
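
    A minimal sketch of the statistical-feature and PCA stages, with simulated signals standing in for the IMFs (EEMD, PSO weighting, and Isomap are omitted; the fault model is an assumption for illustration):

    ```python
    import numpy as np

    def stat_features(x):
        """Damage-sensitive statistics for one signal: RMS, kurtosis, crest factor.
        (Simplified stand-in for the per-IMF features; EEMD itself is omitted.)"""
        rms = np.sqrt(np.mean(x ** 2))
        kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
        crest = np.max(np.abs(x)) / rms
        return np.array([rms, kurt, crest])

    def pca_project(features, n_components=2):
        """Project feature vectors onto their leading principal components via SVD."""
        centered = features - features.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T

    rng = np.random.default_rng(2)
    healthy = [rng.normal(0.0, 1.0, 1024) for _ in range(10)]
    # Simulated bearing fault: periodic impulses on top of the baseline vibration
    impulses = 5.0 * (np.arange(1024) % 128 == 0)
    faulty = [rng.normal(0.0, 1.0, 1024) + impulses for _ in range(10)]
    X = np.array([stat_features(s) for s in healthy + faulty])
    scores = pca_project(X, n_components=2)
    ```

    The impulsive fault raises the kurtosis and crest factor, so the two groups separate in the projected space, which is the effect the PSO weighting is tuned to amplify.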

  6. Pseudo-Kähler Quantization on Flag Manifolds

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.

  7. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences in the instant-form and light-front quantization at appropriate places.

  8. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
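
    The discrete-time mechanism can be illustrated with a micro-chaos-style map in which the feedback acts on the quantized (floored) state; the trajectory stays in a bounded band whose size scales with the quantization step q, even though the fixed point at 0 is locally repelling. All parameter values below are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    # Open-loop growth a > 1 (unstable), feedback gain b, quantization step q.
    a, b, q = 1.2, 1.1, 0.1

    def step(x):
        # The feedback sees only the quantized state floor(x/q)*q.
        return a * x - b * q * np.floor(x / q)

    x = 0.37
    traj = np.empty(5000)
    for k in range(5000):
        x = step(x)
        traj[k] = x
    ```

    For |x| < q the feedback term vanishes and the state grows (locally repelling), while for large |x| the map contracts, so the motion settles into small bounded oscillations of order q.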

  9. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to maximal (minimal) value of number of ghosts and antighosts in the Schrodinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is generalized then to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  10. Quantization of geometric phase with integer and fractional topological characterization in a quantum Ising chain with long-range interaction.

    PubMed

    Sarkar, Sujit

    2018-04-12

    An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterization for this model Hamiltonian with different quantization conditions and also different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to the topological quantization are presented here. The symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of systems with long-range interaction. We also present quite a few exact solutions with physical explanation. Finally, we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.

  11. Spacetime algebra as a powerful tool for electromagnetism

    NASA Astrophysics Data System (ADS)

    Dressel, Justin; Bliokh, Konstantin Y.; Nori, Franco

    2015-08-01

    We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.

  12. Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.; Nickerson, J.

    1989-01-01

    The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
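
    The LQR half of such an LQG design can be sketched by iterating the discrete-time Riccati recursion to a steady-state gain; the double-integrator servo model below is a hypothetical stand-in, not the actual 70-m antenna model:

    ```python
    import numpy as np

    def dlqr_gain(A, B, Q, R, iters=500):
        """Discrete-time LQR gain K via backward Riccati iteration."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)   # Riccati update with current gain
        return K

    # Hypothetical servo axis as a double integrator (position, rate), dt = 0.1 s
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt ** 2], [dt]])
    Q = np.diag([10.0, 1.0])   # state weights (the tuning knobs varied in the paper)
    R = np.array([[1.0]])      # control weight
    K = dlqr_gain(A, B, Q, R)
    eigs = np.linalg.eigvals(A - B @ K)
    ```

    Sweeping the entries of Q and R generates the families of gain vectors described above; a Kalman filter gain is obtained from the dual recursion.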

  13. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by simulation results. The results of the proposed SVM-FFA model have been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
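
    A minimal firefly algorithm can be sketched as below; the quadratic objective is a stand-in for the spot-size merit function, which in practice would come from ray tracing, and all parameter values are illustrative:

    ```python
    import numpy as np

    def firefly_minimize(f, dim=2, n=15, iters=200,
                         beta0=1.0, gamma=0.1, alpha=0.2, seed=3):
        """Minimal firefly algorithm: each firefly moves toward brighter
        (lower-cost) ones, with a slowly damped random walk."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-4.0, 4.0, size=(n, dim))
        cost = np.array([f(xi) for xi in x])
        for t in range(iters):
            a = alpha * 0.99 ** t                   # damp the random-walk term
            for i in range(n):
                for j in range(n):
                    if cost[j] < cost[i]:           # j is brighter; i is attracted
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        x[i] = (x[i] + beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                                + a * rng.normal(size=dim))
                        cost[i] = f(x[i])
        best = int(np.argmin(cost))
        return x[best], float(cost[best])

    # Hypothetical stand-in for the spot-radius objective; minimum at (1, 2).
    spot = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
    best_x, best_cost = firefly_minimize(spot)
    ```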

  14. Robust resolution enhancement optimization methods to process variations based on vector imaging model

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong

    2012-03-01

    Optical proximity correction (OPC) and phase shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; these are designed only for the nominal imaging parameters, without giving sufficient attention to the process variations due to aberrations, defocus, and dose variation. However, the effects of process variations existing in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA>0.6) are now extensively used, rendering the scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.

  15. Fabric wrinkle characterization and classification using modified wavelet coefficients and optimized support-vector-machine classifier

    USDA-ARS?s Scientific Manuscript database

    This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...

  16. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

    Research on multiobjective optimization problems has become one of the most active topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, comparison on 15 multiobjective benchmark problems with several existing well-known algorithms (nondominated sorting genetic algorithm II; decomposition-based multiobjective evolutionary algorithm; decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes; multiobjective optimization by LA; and multiobjective immune algorithm with nondominated neighbor-based selection) shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.

  17. Vectorial mask optimization methods for robust optical lithography

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.

    2012-10-01

    Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
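
    The gradient-based mask optimization loop can be sketched with a scalar Gaussian aerial-image model standing in for the vector imaging model; the sigmoid resist model, the target pattern, and the step size are all assumptions for illustration:

    ```python
    import numpy as np

    def gaussian_kernel(size=7, sigma=1.5):
        ax = np.arange(size) - size // 2
        g = np.exp(-ax ** 2 / (2 * sigma ** 2))
        k = np.outer(g, g)
        return k / k.sum()

    def convolve_same(img, ker):
        """Naive 'same' convolution; the symmetric Gaussian kernel stands in
        for the optical system (it is also its own adjoint, used below)."""
        p = ker.shape[0] // 2
        padded = np.pad(img, p)
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(padded[i:i + 2 * p + 1, j:j + 2 * p + 1] * ker)
        return out

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    target = np.zeros((24, 24)); target[8:16, 8:16] = 1.0
    theta = np.zeros_like(target)    # unconstrained mask parameters
    ker = gaussian_kernel()
    a = 8.0                          # resist-model sigmoid steepness
    losses = []
    for it in range(60):
        mask = sigmoid(theta)                    # gray mask values in (0, 1)
        aerial = convolve_same(mask, ker)        # stand-in aerial image
        resist = sigmoid(a * (aerial - 0.5))     # thresholded resist image
        err = resist - target
        losses.append(float(np.sum(err ** 2)))
        # Chain rule for L = sum(err^2); symmetric kernel is its own adjoint.
        g = convolve_same(2.0 * err * a * resist * (1.0 - resist), ker)
        theta -= 1.0 * g * mask * (1.0 - mask)   # steepest-descent update
    ```

    Robust variants would sum this loss over several defocus/dose conditions before taking the gradient step.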

  18. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

    A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.

  19. Applicability of initial optimal maternal and fetal electrocardiogram combination vectors to subsequent recordings

    NASA Astrophysics Data System (ADS)

    Yan, Hua-Wen; Huang, Xiao-Lin; Zhao, Ying; Si, Jun-Feng; Liu, Tie-Bing; Liu, Hong-Xing

    2014-11-01

    A series of experiments are conducted to confirm whether the vectors calculated for an early section of a continuous non-invasive fetal electrocardiogram (fECG) recording can be directly applied to subsequent sections in order to reduce the computation required for real-time monitoring. Our results suggest that it is generally feasible to apply the initial optimal maternal and fetal ECG combination vectors to extract the fECG and maternal ECG in subsequent recorded sections.

  20. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  1. Quantized discrete space oscillators

    NASA Technical Reports Server (NTRS)

    Uzes, C. A.; Kapuscik, Edward

    1993-01-01

    A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.
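
    The convergence of finite-dimensional quantizations to canonical results can be illustrated with a generic finite-difference discretization of the harmonic oscillator (not the paper's quasi-canonical sequence): its low eigenvalues approach the canonical values n + 1/2 as the grid is refined.

    ```python
    import numpy as np

    def oscillator_eigs(n_points=400, x_max=10.0, n_eigs=4):
        """Low eigenvalues of H = -(1/2) d^2/dx^2 + (1/2) x^2 (hbar = m = w = 1)
        on a uniform grid with Dirichlet boundaries."""
        x = np.linspace(-x_max, x_max, n_points)
        h = x[1] - x[0]
        # Central-difference second-derivative matrix
        d2 = (np.diag(np.full(n_points - 1, 1.0), -1)
              - 2.0 * np.eye(n_points)
              + np.diag(np.full(n_points - 1, 1.0), 1)) / h ** 2
        H = -0.5 * d2 + np.diag(0.5 * x ** 2)
        return np.linalg.eigvalsh(H)[:n_eigs]

    eigs = oscillator_eigs()   # close to 0.5, 1.5, 2.5, 3.5
    ```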

  2. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on how the integration time is distributed among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
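
    The Lagrange-multiplier allocation can be sketched under an assumed variance model in which the i-th measurement contributes c_i/t_i to the estimator variance (an assumption for illustration; the paper's exact expressions involve the polarimetric measurement matrix):

    ```python
    import numpy as np

    def optimal_times(c, T=1.0):
        """Minimize V(t) = sum_i c_i/t_i subject to sum_i t_i = T.
        Setting d/dt_i [sum_j c_j/t_j + lam * sum_j t_j] = 0 gives
        t_i = sqrt(c_i/lam), i.e. t_i proportional to sqrt(c_i)."""
        s = np.sqrt(np.asarray(c, dtype=float))
        return T * s / s.sum()

    c = np.array([1.0, 2.0, 0.5, 1.5])   # illustrative per-measurement constants
    T = 1.0
    t_opt = optimal_times(c, T)
    V_opt = float(np.sum(c / t_opt))             # equals (sum_i sqrt(c_i))^2 / T
    V_uniform = float(np.sum(c / (T / len(c))))  # equal-time baseline
    ```

    The optimized allocation never does worse than the uniform one, and the gap grows as the constants c_i become more unequal.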

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grosso, G.; Nardin, G.; Morier-Genoud, F.

    Exciton polaritons have been shown to be an optimal system in order to investigate the properties of bosonic quantum fluids. We report here on the observation of dark solitons in the wake of engineered circular obstacles and their decay into streets of quantized vortices. Our experiments provide a time-resolved access to the polariton phase and density, which allows for a quantitative study of instabilities of freely evolving polaritons. The decay of solitons is quantified and identified as an effect of disorder-induced transverse perturbations in the dissipative polariton gas.

  4. Optimal Pitch Thrust-Vector Angle and Benefits for all Flight Regimes

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Bolonkin, Alexander

    2000-01-01

    The NASA Dryden Flight Research Center is exploring the optimum thrust-vector angle on aircraft. Simple aerodynamic performance models for various phases of aircraft flight are developed and optimization equations and algorithms are presented in this report. Results of optimal angles of thrust vectors and associated benefits for various flight regimes of aircraft (takeoff, climb, cruise, descent, final approach, and landing) are given. Results for a typical wide-body transport aircraft are also given. The benefits accruable for this class of aircraft are small, but the technique can be applied to other conventionally configured aircraft. The lower L/D aerodynamic characteristics of fighters generally would produce larger benefits than those produced for transport aircraft.

  5. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the colinearity detection problem to a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations; only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization must be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
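
    A minimal straight-line Hough Transform with normal parameterization and a quantized accumulator makes the role of the quantization intervals concrete (grid sizes below are illustrative):

    ```python
    import numpy as np

    def hough_line(points, n_theta=180, n_rho=200):
        """Vote in a quantized (theta, rho) accumulator using the normal form
        rho = x*cos(theta) + y*sin(theta); return the winning (theta, rho)."""
        pts = np.asarray(points, dtype=float)
        rho_max = np.abs(pts).max() * np.sqrt(2) + 1.0
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_theta, n_rho), dtype=int)
        for x, y in pts:
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            acc[np.arange(n_theta), idx] += 1
        t, r = np.unravel_index(np.argmax(acc), acc.shape)
        return thetas[t], 2 * rho_max * r / (n_rho - 1) - rho_max

    # Points on the line with normal angle 45 degrees and distance 5, plus outliers
    theta0, rho0 = np.pi / 4, 5.0
    normal = np.array([np.cos(theta0), np.sin(theta0)])
    tangent = np.array([-np.sin(theta0), np.cos(theta0)])
    pts = [rho0 * normal + t * tangent for t in np.linspace(-4, 4, 15)]
    pts += [np.array([9.0, -7.0]), np.array([-6.0, 8.0])]
    th, rh = hough_line(pts)
    ```

    If the (theta, rho) cells are made too coarse, votes from distinct lines merge; too fine, and edge-location errors scatter votes from one line across cells, which is exactly the trade-off the convergence conditions quantify.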

  6. Thermal field theory and generalized light front quantization

    NASA Astrophysics Data System (ADS)

    Weldon, H. Arthur

    2003-04-01

    The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡμν is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount -i/(T ḡ00). Light-front quantization fails since ḡ00 = 0; however, various related quantizations are possible.

  7. Attitude Determination Using Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1998-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. TRIAD, the earliest published algorithm for determining spacecraft attitude from two vector measurements, has been widely used in both ground-based and onboard attitude determination. Later attitude determination methods have been based on Wahba's optimality criterion for n arbitrarily weighted observations. The solution of Wahba's problem is somewhat difficult in the general case, but there is a simple closed-form solution in the two-observation case. This solution reduces to the TRIAD solution for certain choices of measurement weights. This paper presents and compares these algorithms as well as sub-optimal algorithms proposed by Bar-Itzhack, Harman, and Reynolds. Some new results will be presented, but the paper is primarily a review and tutorial.
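
    The TRIAD algorithm itself is short enough to state in full; the rotation and reference vectors below are illustrative stand-ins for the Sun and magnetic-field directions:

    ```python
    import numpy as np

    def triad(b1, b2, r1, r2):
        """TRIAD attitude solution: returns A with A @ r ~ b for the two
        observation pairs; the (b1, r1) pair is the more trusted one and is
        matched exactly."""
        b1, b2, r1, r2 = (v / np.linalg.norm(v) for v in (b1, b2, r1, r2))

        def make_triad(v1, v2):
            t1 = v1
            t2 = np.cross(v1, v2)
            t2 = t2 / np.linalg.norm(t2)
            return np.column_stack((t1, t2, np.cross(t1, t2)))

        return make_triad(b1, b2) @ make_triad(r1, r2).T

    # Noise-free check against a known attitude: 90-degree rotation about z
    A_true = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    r1 = np.array([1.0, 0.0, 0.0])   # e.g. Sun direction in the reference frame
    r2 = np.array([0.0, 0.6, 0.8])   # e.g. field direction in the reference frame
    b1, b2 = A_true @ r1, A_true @ r2
    A_est = triad(b1, b2, r1, r2)
    ```

    With noisy measurements the first vector is satisfied exactly and all error is pushed into the second, which is why the optimal (Wahba-weighted) two-observation solution can outperform TRIAD when the weights differ.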

  8. Expression of chicken parvovirus VP2 in chicken embryo fibroblasts requires codon optimization for production of naked DNA and vectored meleagrid herpesvirus type 1 vaccines.

    PubMed

    Spatz, Stephen J; Volkening, Jeremy D; Mullis, Robert; Li, Fenglan; Mercado, John; Zsak, Laszlo

    2013-10-01

    Meleagrid herpesvirus type 1 (MeHV-1) is an ideal vector for the expression of antigens from pathogenic avian organisms in order to generate vaccines. Chicken parvovirus (ChPV) is a widespread infectious virus that causes serious disease in chickens. It is one of the etiological agents largely suspected in causing Runting Stunting Syndrome (RSS) in chickens. Initial attempts to express the wild-type gene encoding the capsid protein VP2 of ChPV by insertion into the thymidine kinase gene of MeHV-1 were unsuccessful. However, transient expression of a codon-optimized synthetic VP2 gene cloned into the bicistronic vector pIRES2-Ds-Red2, could be demonstrated by immunocytochemical staining of transfected chicken embryo fibroblasts (CEFs). Red fluorescence could also be detected in these transfected cells since the red fluorescent protein gene is downstream from the internal ribosome entry site (IRES). Strikingly, fluorescence could not be demonstrated in cells transiently transfected with the bicistronic vector containing the wild-type or non-codon-optimized VP2 gene. Immunocytochemical staining of these cells also failed to demonstrate expression of wild-type VP2, indicating that the lack of expression was at the RNA level and the VP2 protein was not toxic to CEFs. Chickens vaccinated with a DNA vaccine consisting of the bicistronic vector containing the codon-optimized VP2 elicited a humoral immune response as measured by a VP2-specific ELISA. This VP2 codon-optimized bicistronic cassette was rescued into the MeHV-1 genome generating a vectored vaccine against ChPV disease.

  9. The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
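
    The closed-form in-plane Clohessy-Wiltshire solution underlying this kind of analysis can be written directly (standard textbook form, with x radial, y along-track, and n the mean motion; the numerical values below are illustrative):

    ```python
    import numpy as np

    def cw_state(t, s0, n):
        """Closed-form in-plane Clohessy-Wiltshire solution.
        State s0 = [x, y, xdot, ydot] for xddot = 3n^2 x + 2n ydot,
        yddot = -2n xdot."""
        x0, y0, vx0, vy0 = s0
        s, c = np.sin(n * t), np.cos(n * t)
        x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
        y = (6 * (s - n * t) * x0 + y0
             - (2 / n) * (1 - c) * vx0 + (4 * s - 3 * n * t) / n * vy0)
        vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
        vy = -6 * n * (1 - c) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
        return np.array([x, y, vx, vy])

    n = 0.00113                                   # mean motion of a ~90-min orbit, rad/s
    s0 = np.array([100.0, 200.0, 0.1, -0.05])     # relative position (m), velocity (m/s)
    s_t = cw_state(600.0, s0, n)
    ```

    Because the dynamics are linear with constant coefficients, the state transition and primer-vector relations are all built from these same elementary sine/cosine terms.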

  10. Optimizing the method for generation of integration-free induced pluripotent stem cells from human peripheral blood.

    PubMed

    Gu, Haihui; Huang, Xia; Xu, Jing; Song, Lili; Liu, Shuping; Zhang, Xiao-Bing; Yuan, Weiping; Li, Yanxin

    2018-06-15

    Generation of induced pluripotent stem cells (iPSCs) from human peripheral blood provides a convenient and low-invasive way to obtain patient-specific iPSCs. The episomal vector is one of the best approaches for reprogramming somatic cells to pluripotent status because of its simplicity and affordability. However, the efficiency of episomal vector reprogramming of adult peripheral blood cells is relatively low compared with cord blood and bone marrow cells. In the present study, integration-free human iPSCs derived from peripheral blood were established via episomal technology. We improved the generation efficiency of integration-free iPSCs from human peripheral blood mononuclear cells by optimizing the method of isolating and cultivating mononuclear cells from peripheral blood, by modifying the culture medium, by adjusting the duration of culture, and by testing episomal vector promoters and combinations of different episomal vectors and transcription factors. With this optimized protocol, a valuable asset for banking patient-specific iPSCs has been established.

  11. Generalized radiation-field quantization method and the Petermann excess-noise factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305

    2003-10-01

We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.

  12. BFV approach to geometric quantization

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1994-12-01

A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.

  13. Deformation quantization of fermi fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.

    2008-04-15

Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to the fermionic systems of infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates considerably the procedure. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally, the Dirac propagator have been found with the use of these variables.

  14. Polymer-Fourier quantization of the scalar field revisited

    NASA Astrophysics Data System (ADS)

    Garcia-Chung, Angel; Vergara, J. David

    2016-10-01

The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré invariant quantization by a singular one, constructed to mimic the singular limit of the complex structure of the Poincaré invariant Fock quantization. The resulting symmetry group of this polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume-preserving diffeomorphisms. As a consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré invariant Fock vacuum with the polymer Fourier vacuum.

  15. Quantized Rabi oscillations and circular dichroism in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Cooper, N. R.; Goldman, N.

    2018-06-01

    The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.

16. [The preparation of recombinant adenovirus Ad-Rad50-GFP and detection of the optimal multiplicity of infection in CNE1 transfected by Ad-Rad50-GFP].

    PubMed

    Yan, Ruicheng; Huang, Jiancong; Zhu, Ling; Chang, Lihong; Li, Jingjia; Wu, Xifu; Ye, Jin; Zhang, Gehua

    2015-12-01

The optimal multiplicity of infection (MOI) of the recombinant adenovirus Ad-Rad50-GFP, carrying a mutant Rad50 gene expression region, on the cell growth of nasopharyngeal carcinoma and the viral amplification efficiency of CNE1 cells infected by this adenovirus were studied. The biological titer of Ad-Rad50-GFP was measured by the end-point dilution method. The impact of recombinant adenoviral vector transfection on the growth of CNE1 cells was observed by cell growth curve. Transfection efficacy of the recombinant adenoviral vector was observed and calculated under a fluorescence microscope. The expression of mutant Rad50 in Ad-Rad50-GFP-transfected CNE1 cells at the optimal MOI was detected by Western blot after transfection. The biological titer of Ad-Rad50-GFP was 1.26 x 10¹¹ pfu/ml. CNE1 cell growth was not significantly influenced when the cells were transfected by the recombinant adenoviral vector at an MOI of less than 50. Transfection efficacy of the recombinant adenoviral vector was most evident at 24 hours after transfection, with high expression of mutant Rad50, and the efficiency still remained about 70% after 72 hours. The recombinant adenoviral vector Ad-Rad50-GFP could transfect CNE1 cells and effectively drive the expression of mutant Rad50 in CNE1 cells. MOI = 50 was the optimal multiplicity of infection for CNE1 cells transfected by the recombinant adenoviral vector Ad-Rad50-GFP.

  17. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability is treated in which only one quantizer is in operation.
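A toy illustration of how a quantizer inside an otherwise stable loop destroys the ideal behavior. This sketch shows a fixed-step deadband effect rather than the floating-point analysis of the paper itself, and the decay factor 0.9 and step 0.1 are arbitrary choices for the illustration:

```python
def quantize(v, step=0.1):
    """Round v to the nearest multiple of step (mid-tread quantizer)."""
    return round(v / step) * step

def simulate(x0=1.0, a=0.9, steps=200, quantized=True):
    """Iterate the stable first-order loop x <- a*x, optionally quantizing each output."""
    x = x0
    for _ in range(steps):
        x = quantize(a * x) if quantized else a * x
    return x

ideal = simulate(quantized=False)   # decays toward 0, as linear theory predicts
looped = simulate(quantized=True)   # stalls at a nonzero level: the quantizer's deadband
```

Without the quantizer the state decays geometrically to zero; with it, the state gets trapped at a level where one decay step no longer crosses a quantization boundary, so the loop never reaches the origin.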

  18. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
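A minimal sketch of a logarithmic quantizer of the kind described, with levels ±ρⁱ·u₀ (the density parameter ρ = 0.8 and u₀ = 1 are illustrative values, not taken from the paper). Its defining property, a multiplicative norm-bounded error, is exactly what allows the quantization error to be treated as a bounded uncertainty in the filter design:

```python
import math

def log_quantize(v, rho=0.8, u0=1.0):
    """Map v to the nearest quantization level +/- rho**i * u0 (log-spaced levels)."""
    if v == 0.0:
        return 0.0
    i = round(math.log(abs(v) / u0) / math.log(rho))  # nearest level index in the log domain
    return math.copysign(u0 * rho ** i, v)

# The relative error is bounded: |q(v) - v| <= delta * |v|, with
# delta = 1/sqrt(rho) - 1 for rounding in the log domain.
delta = 1 / math.sqrt(0.8) - 1
```

Because the error bound scales with |v|, small signals are quantized finely and large signals coarsely, which is why this quantizer model fits naturally into the norm-bounded-uncertainty framework of the abstract.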

  19. Direct comparison of fractional and integer quantized Hall resistance

    NASA Astrophysics Data System (ADS)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.
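The factor 6 in the compared ratio follows directly from the quantized Hall relation R[ν] = h/(νe²): R[1/3] = 3h/e² and R[2] = h/(2e²), so R[1/3]/R[2] = 6 exactly. A one-line numeric check (h and e are exact constants in the revised SI):

```python
h = 6.62607015e-34   # Planck constant, J s (exact in the revised SI)
e = 1.602176634e-19  # elementary charge, C (exact in the revised SI)

r_third = h / ((1 / 3) * e**2)  # R[1/3] = 3 h / e^2
r_two = h / (2 * e**2)          # R[2]   = h / (2 e^2)
ratio = r_third / (6 * r_two)   # ideally exactly 1, which is what the experiment tests
```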

  20. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

The paper presents an invariant quantization procedure of classical mechanics on the phase space over a flat configuration space, and then derives the passage to an operator representation of quantum mechanics in a Hilbert space over the configuration space. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: • An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. • The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. • Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. • The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. • The extension of presented formalism onto non-flat case and related ambiguities of the quantization process are discussed.

  1. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

The amount of quantization noise generated in a digital-to-analog converter depends on the number of bits, or quantization levels, used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
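The compression-amplifier/expansion-network pair described is the classic companding idea. A sketch using the standard μ-law characteristic (μ = 255 is the common telephony choice, an assumption here, not a value from this thesis):

```python
import math

MU = 255.0

def compress(x, mu=MU):
    """mu-law compressor: boosts low amplitudes before quantization (|x| <= 1)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def expand(y, mu=MU):
    """Inverse expander, applied after the digital-to-analog converter."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def quantize(v, levels=8):
    """Uniform mid-rise quantizer on [-1, 1] with the given number of levels."""
    step = 2.0 / levels
    return (math.floor(v / step) + 0.5) * step

def codec(x, levels=8):
    """Companded quantization: quantize in the compressed domain, then expand."""
    return expand(quantize(compress(x), levels))
```

With eight levels, a low-amplitude sample such as 0.02 survives the companded path with far smaller error than direct uniform quantization, which is why companding protects the weak consonant sounds the abstract singles out.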

  2. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
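Method (1), merging adjacent elements, is easy to make concrete. A sketch of the trade-off the abstract describes (the factor-of-2 coarsening is an arbitrary illustration): aggregating averages away fine structure in the state vector (aggregation error), while keeping the native grid would leave weakly constrained elements (smoothing error).

```python
def coarsen(x, factor=2):
    """Merge each group of `factor` adjacent state-vector elements into their mean."""
    assert len(x) % factor == 0
    return [sum(x[i:i + factor]) / factor for i in range(0, len(x), factor)]

def prolong(xc, factor=2):
    """Map a coarse state vector back onto the native grid (piecewise constant)."""
    return [v for v in xc for _ in range(factor)]

native = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
reduced = coarsen(native)   # 4 elements instead of 8: cheaper, better constrained
back = prolong(reduced)     # fine-scale variation within each pair is lost
```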

  3. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  4. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  5. Detection of Road Surface States from Tire Noise Using Neural Network Analysis

    NASA Astrophysics Data System (ADS)

    Kongrattanaprasert, Wuttiwat; Nomura, Hideyuki; Kamakura, Tomoo; Ueda, Koji

This report proposes a new processing method for automatically detecting the states of road surfaces from the tire noise of passing vehicles. In addition to multiple indicators of the signal features in the frequency domain, we propose a few feature indicators in the time domain to successfully classify the road states into four categories: snowy, slushy, wet, and dry states. The method is based on artificial neural networks. The proposed classification is carried out in multiple neural networks using learning vector quantization. The outcomes of the networks are then integrated by a voting decision-making scheme. Experimental results obtained from signals recorded over ten days in the snowy season demonstrated that an accuracy of approximately 90% can be attained for predicting road surface states using only tire noise data.
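Learning vector quantization, the classifier used above, can be sketched in a few lines with the LVQ1 update rule. The toy 2-D data, prototypes, and learning rate are illustrative assumptions, not the paper's acoustic features:

```python
def lvq1_train(X, y, protos, labels, lr=0.2, epochs=20):
    """LVQ1: pull the nearest prototype toward same-class samples, push it away otherwise."""
    for _ in range(epochs):
        for x, cls in zip(X, y):
            k = min(range(len(protos)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(protos[j], x)))
            sign = 1.0 if labels[k] == cls else -1.0
            protos[k] = [p + sign * lr * (a - p) for p, a in zip(protos[k], x)]
    return protos

def lvq_predict(x, protos, labels):
    """Classify by the label of the nearest prototype."""
    k = min(range(len(protos)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(protos[j], x)))
    return labels[k]

# Two well-separated toy classes with one prototype each.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]
y = [0, 0, 0, 1, 1, 1]
protos = [[0.5, 0.5], [5.5, 5.5]]
labels = [0, 1]
protos = lvq1_train(X, y, protos, labels)
```

In the paper's setting, X would hold the time- and frequency-domain feature indicators extracted from tire noise, and the labels would be the four road states.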

  6. Conductance dips and spin precession in a nonuniform waveguide with spin–orbit coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, A. I., E-mail: malyshev@phys.unn.ru; Kozulin, A. S.

An infinite waveguide with a nonuniformity, a segment of finite length with spin–orbit coupling, is considered in the case when the Rashba and Dresselhaus parameters are identical. Analytical expressions have been derived in the single-mode approximation for the conductance of the system for an arbitrary initial spin state. Based on numerical calculations with several size quantization modes, we have detected and described the conductance dips arising when the waves are localized in the nonuniformity due to the formation of an effective potential well in it. We show that allowance for the evanescent modes under carrier spin precession in an effective magnetic field does not lead to a change in the direction of the average spin vector at the output of the system.

  7. Heavy and Heavy-Light Mesons in the Covariant Spectator Theory

    NASA Astrophysics Data System (ADS)

    Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.

    2018-05-01

    The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum were calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.

  8. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). Multiimages in various formats have a multivariate pixel value associated with each pixel location, which has been scaled and quantized into a gray-level vector, together with a bivariate measure of the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveal its intercomponent dependencies when some off-diagonal elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.

  9. Application of two neural network paradigms to the study of voluntary employee turnover.

    PubMed

    Somers, M J

    1999-04-01

    Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The 1st was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The 2nd was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.

  10. Coherent state quantization of quaternions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com

Parallel to the quantization of the complex plane, the quaternion field of quaternionic quantum mechanics is quantized using the canonical coherent states of a right quaternionic Hilbert space. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  11. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-sized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
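The bit-allocation idea can be sketched with the standard greedy marginal-analysis algorithm, under the usual high-rate distortion model σ²·2^(−2b) per coefficient. This is a generic textbook version, not the dissertation's exact algorithm, and the variances below are made up for illustration:

```python
def allocate_bits(variances, total_bits):
    """Greedy marginal analysis: give each successive bit to the coefficient whose
    distortion sigma^2 * 2**(-2b) is currently largest (each bit cuts it by 4x)."""
    bits = [0] * len(variances)
    dist = list(variances)
    for _ in range(total_bits):
        k = dist.index(max(dist))   # coefficient with the largest remaining distortion
        bits[k] += 1
        dist[k] /= 4.0
    return bits, dist

# Four transform coefficients with unequal variances, four bits to spend.
bits, dist = allocate_bits([16.0, 4.0, 1.0, 1.0], total_bits=4)
```

The high-variance coefficient receives most of the bits, which mirrors the abstract's strategy of spending coder complexity where the local coding difficulty is greatest.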

  12. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  13. BFV quantization on hermitian symmetric spaces

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1995-02-01

    Gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/ H. In particular, gauge invariant quantization on the Lobachevski plane and sphere is carried out. Due to the presence of symmetry, master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. BFV-BRST operator defines a flat G-connection in the Fock bundle over G/ H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes geometric, deformation and Berezin quantization approaches.

  14. SSE-based Thomas algorithm for quasi-block-tridiagonal linear equation systems, optimized for small dense blocks

    NASA Astrophysics Data System (ADS)

    Barnaś, Dawid; Bieniasz, Lesław K.

    2017-07-01

    We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
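For context, a sketch of the scalar Thomas algorithm the block version generalizes. The block variant replaces these scalar divisions with small dense-block solves, which is where the SSE/AVX vectorization applies; the 3×3 example system is fabricated for illustration:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = subdiagonal (a[0] unused), b = diagonal,
    c = superdiagonal (c[-1] unused), d = right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The 3x3 system [[2,1,0],[1,2,1],[0,1,2]] x = [4, 8, 8] has solution x = [1, 2, 3].
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```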

  15. P and M gene junction is the optimal insertion site in Newcastle disease virus vaccine vector for foreign gene expression

    USDA-ARS?s Scientific Manuscript database

    Newcastle disease virus (NDV) has been developed as a vector for vaccine and gene therapy purposes. However, the optimal insertion site for foreign gene expression remained to be determined. In the present study, we inserted the green fluorescence protein (GFP) gene into five different intergenic ...

  16. Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks.

    PubMed

    Demirhan, Ayşe; Toru, Mustafa; Guler, Inan

    2015-07-01

Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using a self-organizing map (SOM) that is trained with an unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. The input feature vector is constructed with features obtained from stationary wavelet transform (SWT) coefficients. The results showed that the average Dice similarity indices are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.

  17. Spin wave modes in out-of-plane magnetized nanorings

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Tartakovskaya, E. V.; Kakazei, G. N.; Adeyeye, A. O.

    2017-07-01

We investigated the spin wave modes in flat circular permalloy rings with a canted external bias field using ferromagnetic resonance spectroscopy. The external magnetic field H was large enough to saturate the samples. For θ =0∘ (perpendicular geometry), three distinct resonance peaks were observed experimentally. When the cylindrical symmetry was violated by inclining H from the normal to the ring plane (the inclination angle θ was varied in the 0∘-6∘ range), all of the initial peaks split. The distance between neighboring split peaks increased as θ increased. Unexpectedly, the biggest splitting was observed for the mode with the smallest radial wave vector. This special feature of the splitting behavior is determined by the topology of the ring shape. The analytical theory we developed revealed that in perpendicular geometry, each observed peak is a combination of signals from a set of radially quantized spin wave excitations with almost the same radial wave vectors, radial profiles, and frequencies, but with different azimuthal dependencies. This degeneracy is a consequence of the circular symmetry of the system and can be removed by inclining H from the normal. Our findings were further supported by micromagnetic simulations.

  18. "RCL-Pooling Assay": A Simplified Method for the Detection of Replication-Competent Lentiviruses in Vector Batches Using Sequential Pooling.

    PubMed

    Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne

    2016-02-01

Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch size and titers are continuously increasing because of the improvement of production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining the optimal sensitivity of the RCL testing. This optimized "RCL-pooling assay" improves the feasibility of the quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.
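The saving from a pooling strategy is easy to see in a sketch. The pool size and sample count here are arbitrary, and this boolean toy stands in for the real assay's p24 readout on cultured indicator cells:

```python
def pooled_screen(samples, pool_size):
    """Round 1: test pools. Round 2: retest members of positive pools individually.
    Returns (indices of positive samples, total number of tests used)."""
    tests = 0
    positives = []
    for start in range(0, len(samples), pool_size):
        pool = samples[start:start + pool_size]
        tests += 1                       # one test covers the whole pool
        if any(pool):                    # positive pool: retest each member
            for j, s in enumerate(pool, start):
                tests += 1
                if s:
                    positives.append(j)
    return positives, tests

samples = [False] * 16
samples[11] = True                        # one contaminated fraction out of 16
hits, n_tests = pooled_screen(samples, pool_size=4)
```

Here 16 fractions are cleared with 8 tests instead of 16; when positives are rare, most pools are cleared in a single test, which is where the handling and operator-time savings come from.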

  19. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
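    The sparse vector in such models is typically obtained by iterating a gradient step with a soft-thresholding (shrinkage) step. The sketch below uses ISTA, a simpler relative of the alternating direction augmented Lagrangian method used in the paper; the problem and data here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_vector_ista(X, y, lam=0.1, step=None, n_iter=200):
    """ISTA for min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

    The shrinkage step zeroes out small coefficients, which is what produces the sparse ontology vector from which the ontology function is read off.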

  20. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
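    The core of each optimization level, as described, is a least-squares solve followed by discarding basis vectors with non-positive weights. A minimal sketch of that loop; the beamlet matrix here is a toy stand-in for the pre-calculated dose distributions.

```python
import numpy as np

def optimize_fluence(beamlets, target, n_levels=7):
    """Multi-level least-squares weighting of beamlet dose vectors.

    Each level solves min_w ||B w - target||^2 analytically, then keeps
    only the basis vectors whose weight is positive for the next level.
    """
    idx = np.arange(beamlets.shape[1])
    for _ in range(n_levels):
        B = beamlets[:, idx]
        w, *_ = np.linalg.lstsq(B, target, rcond=None)
        keep = w > 0
        if keep.all():
            break                    # no negative weights left to prune
        idx = idx[keep]
    w, *_ = np.linalg.lstsq(beamlets[:, idx], target, rcond=None)
    return idx, w
```

    Dropping negatively weighted beamlets and re-solving is what makes the final fluence physically deliverable (no negative fluence).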

  1. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time", or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization.
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.

  2. Optimal impulsive manoeuvres and aerodynamic braking

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1985-01-01

    A method developed for obtaining solutions to the aerodynamic braking problem, using impulses in the exoatmospheric phases is discussed. The solution combines primer vector theory and the results of a suboptimal atmospheric guidance program. For a specified initial and final orbit, the solution determines: (1) the minimum impulsive cost using a maximum of four impulses, (2) the optimal atmospheric entry and exit-state vectors subject to equality and inequality constraints, and (3) the optimal coast times. Numerical solutions which illustrate the characteristics of the solution are presented.

  3. The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principle

    NASA Astrophysics Data System (ADS)

    Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi

    2017-06-01

    Behaviour analysis of the host-vector model of dengue disease without control is based on the value of the basic reproduction number obtained using next-generation matrices. The model is then further developed to involve a preventive control that minimizes the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality system is then solved numerically to investigate the control effort needed to reduce the infected class.

  4. Optimal multiguidance integration in insect navigation.

    PubMed

    Hoinville, Thierry; Wehner, Rüdiger

    2018-03-13

    In the last decades, desert ants have become model organisms for the study of insect navigation. In finding their way, they use two major navigational routines: path integration using a celestial compass and landmark guidance based on sets of panoramic views of the terrestrial environment. It has been claimed that this information would enable the insect to acquire and use a centralized cognitive map of its foraging terrain. Here, we present a decentralized architecture, in which the concurrently operating path integration and landmark guidance routines contribute optimally to the directions to be steered, with "optimal" meaning maximizing the certainty (reliability) of the combined information. At any one time during its journey, the animal computes a path integration (global) vector and landmark guidance (local) vector, in which the length of each vector is proportional to the certainty of the individual estimates. Hence, these vectors represent the limited knowledge that the navigator has at any one place about the direction of the goal. The sum of the global and local vectors indicates the navigator's optimal directional estimate. Wherever applied, this decentralized model architecture is sufficient to simulate the results of quite a number of diverse cue-conflict experiments, which have recently been performed in various behavioral contexts by different authors in both desert ants and honeybees. They include even those experiments that have deliberately been designed by former authors to strengthen the evidence for a metric cognitive map in bees.
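    The central computation in the model is a sum of two direction vectors whose lengths are proportional to cue certainty. A minimal sketch, with directions in radians and certainties as arbitrary positive weights:

```python
import math

def combine_cues(pi_dir, pi_certainty, lm_dir, lm_certainty):
    """Sum a global (path-integration) and a local (landmark) unit vector,
    each scaled by its certainty; return the combined heading in radians."""
    x = pi_certainty * math.cos(pi_dir) + lm_certainty * math.cos(lm_dir)
    y = pi_certainty * math.sin(pi_dir) + lm_certainty * math.sin(lm_dir)
    return math.atan2(y, x)
```

    With equal certainties the combined heading bisects the two cue directions; as one cue's certainty grows, the estimate is pulled toward that cue, which is the behavior the cue-conflict experiments probe.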

  5. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
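    The round trip a block transform codec performs (transform, quantize, dequantize, inverse transform) can be sketched for one block as follows; the orthonormal DCT-II and the single scalar step size `q` are simplifications of JPEG's per-coefficient quantization matrix.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def quantize_block(block, q):
    """Transform an NxN block, quantize coefficients with step size q,
    then reconstruct: the lossy round trip of a block transform codec."""
    C = dct_matrix(block.shape[0])
    coeff = C @ block @ C.T           # forward 2-D DCT
    quantized = np.round(coeff / q)   # quantization: the lossy, entropy-reducing step
    return C.T @ (quantized * q) @ C  # dequantize + inverse DCT
```

    The quantized integers `quantized` are what the entropy coder actually encodes; most of them are zero for smooth blocks, which is where the compression comes from.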

  6. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan

    2015-11-15

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator to coincide with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results show that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  7. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    NASA Astrophysics Data System (ADS)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping

    2015-11-01

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor, with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can bring the resonant frequency of the oscillator to coincide with the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results show that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  8. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on average, which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, discuss that our algorithm converges to a global optimum due to the convexity of the cost function, and show that it generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance as compared with typical designs and the novel design techniques previously published.
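    For reference, the cyclic generalized Lloyd framework the authors build on alternates a nearest-codeword partition with a centroid update. The sketch below shows the classical squared-error version for a scalar quantizer; the paper replaces this cost with the KL divergence between posterior distributions.

```python
def lloyd_quantizer(samples, k, n_iter=50):
    """Generalized Lloyd iteration for a k-level scalar quantizer
    under squared-error cost (the classical special case)."""
    levels = sorted(samples[:k])  # crude initialization from the data
    for _ in range(n_iter):
        cells = [[] for _ in range(k)]
        for s in samples:  # partition step: assign to the nearest level
            cells[min(range(k), key=lambda i: (s - levels[i]) ** 2)].append(s)
        # centroid step: move each level to the mean of its cell
        levels = [sum(c) / len(c) if c else levels[i] for i, c in enumerate(cells)]
    return levels
```

    Each alternation can only decrease the distortion, so the iteration converges; for the squared-error cost the optimum is in general only local, which is why the paper's convex KL-based cost is notable.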

  9. Quantization and Superselection Sectors I: Transformation Group C*-Algebras

    NASA Astrophysics Data System (ADS)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra 𝒜 to a given configuration space Q, along with a prescription mapping self-adjoint elements of 𝒜 into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q = G/H. Here 𝒜 is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of 𝒜. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of 𝒜.

  10. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
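    A minimal autoassociative (Hopfield-style) network illustrates the behavior described above: Hebbian weights store a pattern, and synchronous updates move a corrupted binary state vector back to it. This is a generic sketch with ±1 states, not the bit-weight nexus design itself.

```python
def hebbian_weights(patterns):
    """Hebbian training: W[i][j] = sum over stored patterns of x_i * x_j (i != j)."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(state, W, n_steps=5):
    """Synchronous ±1 updates (inner product of state with each weight row)
    until the state vector stops changing."""
    n = len(state)
    for _ in range(n_steps):
        new = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == state:
            break
        state = new
    return state
```

    Each time step maps the current binary state vector to a new one, so stored patterns act as attractors over the "hyperspace landscape" the abstract describes.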

  11. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.

  12. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x^- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one obtains selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.

  13. Relational symplectic groupoid quantization for constant Poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  14. Entanglement Dynamics of Linear and Nonlinear Interaction of Two Two-Level Atoms with a Quantized Phase-Damped Field in the Dispersive Regime

    NASA Astrophysics Data System (ADS)

    Tavassoly, M. K.; Daneshmand, R.; Rustaee, N.

    2018-06-01

    In this paper we study the linear and nonlinear (intensity-dependent) interactions of two two-level atoms with a single-mode quantized field far from resonance, while the phase-damping effect is also taken into account. To find the analytical solution of the atom-field state vector corresponding to the considered model, after deducing the effective Hamiltonian we evaluate the time-dependent elements of the density operator using the master equation approach and the superoperator method. Consequently, we are able to study the influences of the special nonlinearity function f(n) = √n, the intensity of the initial coherent state field, and the phase-damping parameter on the degree of entanglement of the whole system as well as of the field and atoms. It is shown that in the presence of damping, as time passes, the amount of entanglement of each subsystem with the rest of the system asymptotically reaches its stationary and maximum value. Also, the nonlinear interaction does not have any effect on the entanglement of one of the atoms with the rest of the system, but it changes the amplitude and time period of the entanglement oscillations of the field and of the other atom. Moreover, introducing the intensity-dependent function into the atom-field coupling may cause the degree of entanglement, which is low (high) at some moments of time, to become high (low).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindstrom, P; Cohen, J D

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barone, Fiorella; Graffi, Sandro

    We consider on L²(T²) the Schrödinger operator family L_ε, ε ∈ ℝ, with domain and action defined as D(L_ε) = H²(T²), L_ε u = −(1/2)ℏ²(α₁∂²_{φ₁} + α₂∂²_{φ₂})u − iℏ(γ₁∂_{φ₁} + γ₂∂_{φ₂})u + εVu. Here ε ∈ ℝ, α = (α₁, α₂) and γ = (γ₁, γ₂) are vectors of complex non-real frequencies, and V is a pseudodifferential operator of order zero. L_ε represents the Weyl quantization of the Hamiltonian family L_ε(ξ, x) = (1/2)(α₁ξ₁² + α₂ξ₂²) + γ₁ξ₁ + γ₂ξ₂ + εV(ξ, x) defined on the phase space ℝ²×T², where V(ξ, x) ∈ C²(ℝ²×T²; ℝ). We prove the uniform convergence with respect to ℏ ∈ [0, 1] of the quantum normal form, which reduces to the classical one for ℏ = 0. This result simultaneously entails an exact quantization formula for the quantum spectrum as well as a convergence criterion for the classical Birkhoff normal form, generalizing a well-known theorem of Cherry.

  17. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.

  18. Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.

    2006-09-15

    Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.

  19. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  20. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
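    The encode/decode loop described (the sender quantizes against a mirrored estimate, and the receiver reconstructs an estimate of the sender's state from the symbols alone) can be sketched for a scalar state; the uniform step size and the innovation coding are illustrative assumptions, not the brief's exact scheme.

```python
def make_codec(step):
    """Symmetric encoder/decoder pair: the sender quantizes the innovation
    (state minus the decoder's estimate) to an integer symbol; the receiver
    reconstructs an estimate of the sender's state from the symbols alone."""
    est_enc = [0.0]   # encoder's copy of the decoder's estimate
    est_dec = [0.0]   # decoder's estimate

    def encode(x):
        symbol = round((x - est_enc[0]) / step)
        est_enc[0] += symbol * step   # mirror the decoder exactly
        return symbol

    def decode(symbol):
        est_dec[0] += symbol * step
        return est_dec[0]

    return encode, decode
```

    Because the encoder mirrors the decoder's state, the reconstruction error stays bounded by half the quantization step, which is the property consensus-tracking analyses rely on.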

  1. Initialization of Formation Flying Using Primer Vector Theory

    NASA Technical Reports Server (NTRS)

    Mailhe, Laurie; Schiff, Conrad; Folta, David

    2002-01-01

    In this paper, we extend primer vector analysis to formation flying. Optimization of the classical rendezvous or free-time transfer problem between two orbits using primer vector theory has been extensively studied for one spacecraft. However, an increasing number of missions are now considering flying a set of spacecraft in close formation. Missions such as the Magnetospheric MultiScale (MMS) and Leonardo-BRDF (Bidirectional Reflectance Distribution Function) need to determine strategies to transfer each spacecraft from the common launch orbit to their respective operational orbits. In addition, all the spacecraft must synchronize their states so that they achieve the same desired formation geometry over each orbit. This periodicity requirement imposes constraints on the boundary conditions that can be used for the primer vector algorithm. In this work we explore the impact of the periodicity requirement in optimizing each spacecraft transfer trajectory using primer vector theory. We first present our adaptation of primer vector theory to formation flying. Using this method, we then compute the ΔV budget for each spacecraft subject to different formation endpoint constraints.

  2. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
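    The optimal attitude matrix for Wahba's loss has a well-known closed form via the singular value decomposition of the weighted profile matrix B = Σᵢ aᵢ bᵢ rᵢᵀ. A minimal sketch, assuming the convention that the attitude maps reference vectors rᵢ to body vectors bᵢ:

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """Attitude matrix minimizing Wahba's loss via the SVD solution:
    A = U diag(1, 1, det(U)det(V)) V^T, where B = U S V^T."""
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

    The determinant correction is what guarantees the result is a proper rotation (det = +1) rather than a reflection; two non-collinear vector observations already determine the attitude uniquely, which is the TRIAD special case analyzed in the abstract.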

  3. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  4. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time in the course of parameter optimization; its identification accuracy also depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm so as to realize global optimization of parameters rapidly. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983
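The inertia-weight mechanism the abstract compares can be illustrated with a minimal PSO sketch using a linearly decreasing inertia weight. The function name and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import random

def pso_minimize(f, dim, iters=200, swarm=20, w_max=0.9, w_min=0.4,
                 c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Minimize f with PSO using a linearly decreasing inertia weight."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [xi[:] for xi in x]                 # per-particle best positions
    pval = [f(xi) for xi in x]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for t in range(iters):
        # Linear inertia weight: w decreases from w_max to w_min over the run.
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
                if fx < gval:
                    gval, gbest = fx, x[i][:]
    return gbest, gval
```

A nonlinear inertia weight, as studied in the paper, would replace the linear schedule above with, e.g., a concave or convex function of t.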

  5. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time in the course of parameter optimization; its identification accuracy also depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm into an MKL-SVM-PSO algorithm so as to realize global optimization of parameters rapidly. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages of 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified.

  6. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  7. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  8. 4D Sommerfeld quantization of the complex extended charge

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, Igor E.

    2017-12-01

    Gravitational fields and accelerations cannot change quantized magnetic flux in closed line contours due to flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can introduce quantitatively the fine structure constant and the Plank length.

  9. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

This paper presents a new unified analysis of estimate errors in model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.

  10. Primer Vector Optimization: Survey of Theory, New Analysis and Applications

    NASA Technical Reports Server (NTRS)

    Guzman, J. J.; Mailhe, L. M.; Schiff, C.; Hughes, S. P.; Folta, D. C.

    2002-01-01

In this paper, a summary of primer vector theory is presented. The applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. For example, since the Calculus of Variations is based on "small" variations, singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ small variations. Two examples, the initialization of an orbit and a line-of-apsides rotation, are presented. Recommendations, future work, and the possible addition of other optimization techniques are also discussed.

  11. The coordinate coherent states approach revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn

    2013-02-15

We revisit the coordinate coherent states approach through two different quantization procedures in the quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between the annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. However, we argue against this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles is still valid when we deal with the noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is unchanged. However, under the second quantization procedure, the Unruh temperature and Hawking temperature are unchanged but both spectra are modified by an effective greybody (deformed) factor. Highlights: (i) Suggest a canonical quantization in the coordinate coherent states approach. (ii) Prove the validity of the concept of point particles. (iii) Apply the canonical quantization to the Unruh effect and Hawking radiation. (iv) Find no deformations in the Unruh temperature and Hawking temperature. (v) Provide the modified spectra of the Unruh effect and Hawking radiation.

  12. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocates more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performance with smaller memory usage.
Index Terms: Cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.

  13. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without the Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. Through analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove the wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
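The thresholded Landweber iteration that QBCS-EPL builds on can be sketched generically. This is a plain iterative-shrinkage illustration under assumed notation (A for the measurement matrix, tau for the sparsity threshold), not the authors' entropy-aware thresholding model:

```python
import math

def soft(u, tau):
    """Soft-thresholding, the proximal operator of tau * ||.||_1."""
    return [math.copysign(max(abs(v) - tau, 0.0), v) for v in u]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def landweber_ist(A, y, tau, iters=300, step=0.5):
    """Thresholded Landweber: x <- soft(x + step * A^T (y - A x), step * tau)."""
    n = len(A[0])
    At = [list(col) for col in zip(*A)]       # A^T
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]   # residual y - A x
        g = matvec(At, r)                                   # gradient direction A^T r
        # The proximal step scales the threshold by the step size.
        x = soft([xi + step * gi for xi, gi in zip(x, g)], step * tau)
    return x
```

The entropy-aware model of the paper would replace the fixed tau with a threshold derived from the entropy-based bitrate of each quantization method.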

  14. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, where two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding-mode stability analysis, and the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller and a quantization parameter adjustment strategy is then proposed. Through the H∞ control analytical technique, once the system is in the sliding mode, disturbance attenuation and fault tolerance from the initial time can be established without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  15. Soliton instabilities and vortex street formation in a polariton quantum fluid.

    PubMed

    Grosso, G; Nardin, G; Morier-Genoud, F; Léger, Y; Deveaud-Plédran, B

    2011-12-09

Exciton polaritons have been shown to be an optimal system for investigating the properties of bosonic quantum fluids. We report here on the observation of dark solitons in the wake of engineered circular obstacles and their decay into streets of quantized vortices. Our experiments provide time-resolved access to the polariton phase and density, which allows for a quantitative study of instabilities of freely evolving polaritons. The decay of solitons is quantified and identified as an effect of disorder-induced transverse perturbations in the dissipative polariton gas.

  16. Application of Frame Theory in Intelligent Packet-Based Communication Networks

    NASA Astrophysics Data System (ADS)

    Escobar-Moreira, León A.

    2007-09-01

Frames are a stable and redundant representation of signals in a Hilbert space that have been used in signal processing because of their resilience to additive noise and quantization error, and their robustness to losses in packet-based networks [1,2]. Depending on the number of erasures (losses), some considerations must be taken into account in order to optimize the frame design. Further discussion explains how the innate characteristics of frames can be used to add intelligence to packet-based communication devices (routers), increasing their performance under different channel behaviors.

  17. Phase space localization for anti-de Sitter quantum mechanics and its zero curvature limit

    NASA Technical Reports Server (NTRS)

    Elgradechi, Amine M.

    1993-01-01

    Using techniques of geometric quantization and SO(sub 0)(3,2)-coherent states, a notion of optimal localization on phase space is defined for the quantum theory of a massive and spinning particle in anti-de Sitter space time. It is shown that this notion disappears in the zero curvature limit, providing one with a concrete example of the regularizing character of the constant (nonzero) curvature of the anti-de Sitter space time. As a byproduct a geometric characterization of masslessness is obtained.

  18. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality when compared with the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum-time-cost problem are tested as illustrations, with the uniform-refinement control vector parameterization method adopted as the comparative base. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
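The core of control vector parameterization is to make the control piecewise constant on a time grid and evaluate the resulting cost by simulation. A minimal sketch, assuming a toy first-order dynamics dx/dt = u and a quadratic tracking cost (both illustrative stand-ins for the paper's flight-level model):

```python
def simulate_cost(u_segments, t_grid, x0, target, dt=0.01):
    """Cost of a piecewise-constant control: u(t) = u_segments[k] on [t_grid[k], t_grid[k+1])."""
    x, t = x0, t_grid[0]
    cost = 0.0
    for k, u in enumerate(u_segments):
        while t < t_grid[k + 1] - 1e-12:
            x += dt * u                     # Euler step of the toy dynamics dx/dt = u
            cost += dt * (x - target) ** 2  # accumulated quadratic tracking error
            t += dt
    return cost
```

An outer optimizer then adjusts u_segments, and grid refinement (HHT-guided in the paper) inserts extra breakpoints into t_grid where the control activity warrants finer resolution.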

  19. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

In view of the necessity of a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design solution is proposed, built on support vector regression (SVR), the least squares support vector machine (LSSVM), and a new optimization algorithm, the gravitational search algorithm (GSA), through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and endows the developed model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the need of direct thrust control of aircraft engines.

  20. Ellipsoidal fuzzy learning for smart car platoons

    NASA Astrophysics Data System (ADS)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids, or more certain rules. Sparse data gave larger ellipsoids, or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.
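The ellipsoid-from-cluster idea can be illustrated directly: the sample covariance of a cluster defines the rule ellipsoid, so tight clusters yield small ellipsoids (more certain rules). A minimal 2-D sketch of that statistic, not the paper's learning law:

```python
def cluster_ellipsoid(points):
    """Centroid and sample covariance of a 2-D cluster; the covariance defines the rule ellipsoid."""
    n = float(len(points))
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    return (cx, cy), ((sxx, sxy), (sxy, syy))
```

The ellipsoid's axes are the covariance eigenvectors and its area scales with the square root of the determinant, so a tight cluster gives a smaller (more certain) rule patch than a sparse one.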

  1. Adaptive filtering with the self-organizing map: a performance comparison.

    PubMed

    Barreto, Guilherme A; Souza, Luís Gustavo M

    2006-01-01

    In this paper we provide an in-depth evaluation of the SOM as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out and the application of these architectures to nonlinear adaptive filtering is formulated. Then, we introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters in nonlinear channel equalization and inverse modeling tasks. The obtained results in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.

  2. Analysis of the Westland Data Set

    NASA Technical Reports Server (NTRS)

    Wen, Fang; Willett, Peter; Deb, Somnath

    2001-01-01

The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; it has also been found that augmenting these with harmonic and other parameters can improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers, and decision trees. A problem with these approaches, in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
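Extracting AR coefficients as condition-monitoring features can be sketched with a least-squares AR(2) fit via the normal equations. The model order and estimator here are illustrative assumptions; the paper does not specify them:

```python
def ar2_coeffs(x):
    """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2]; (a1, a2) is the feature vector."""
    # Entries of the 2x2 normal equations.
    r11 = sum(x[t-1] * x[t-1] for t in range(2, len(x)))
    r12 = sum(x[t-1] * x[t-2] for t in range(2, len(x)))
    r22 = sum(x[t-2] * x[t-2] for t in range(2, len(x)))
    b1 = sum(x[t] * x[t-1] for t in range(2, len(x)))
    b2 = sum(x[t] * x[t-2] for t in range(2, len(x)))
    # Solve by Cramer's rule.
    det = r11 * r22 - r12 * r12
    a1 = (b1 * r22 - b2 * r12) / det
    a2 = (b2 * r11 - b1 * r12) / det
    return a1, a2
```

The resulting low-dimensional coefficient vector is what a classifier such as LVQ or a Gaussian mixture model would then consume, possibly augmented with harmonic parameters as the abstract describes.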

  3. Medical Image Retrieval Using Multi-Texton Assignment.

    PubMed

    Tang, Qiling; Yang, Jirong; Xia, Xianfu

    2018-02-01

In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes a locality constraint to encode each filter bank response within its local coordinate system consisting of the k nearest neighbors in the texton dictionary, and subsequently employs the spatial pyramid matching technique to implement the feature vector representation. Compared with the traditional nearest-neighbor assignment followed by texton histogram statistics, our strategy reduces the quantization errors in the mapping process and adds information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose the appropriate ones for our datasets and carry out experiments on the IRMA-2009 medical collection and the mammographic patch dataset. The extensive experimental results demonstrate that the proposed method has superior performance.

  4. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
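The DCT-plus-quantization-matrix step described above can be sketched generically. This shows the standard mechanism (orthonormal 2-D DCT-II followed by per-coefficient division and rounding), not the invention's perceptual-masking adaptation of the matrix:

```python
import math

N = 8  # standard 8x8 block size

def c(k):
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """Orthonormal N x N 2-D DCT-II of a pixel block."""
    return [[c(u) * c(v) * sum(block[i][j]
                               * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                               * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                               for i in range(N) for j in range(N))
             for v in range(N)]
            for u in range(N)]

def quantize(coeffs, Q):
    """Divide each DCT coefficient by its quantization-matrix entry and round."""
    return [[round(coeffs[u][v] / Q[u][v]) for v in range(N)] for u in range(N)]
```

Larger entries in Q discard more of the corresponding frequency; the patent's contribution is choosing Q per image from luminance/contrast masking and error pooling rather than using a fixed table.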

  5. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry allowing a reformulated Immirzi parameter, necessary for loop quantization, to behave like an arbitrary group parameter that requires no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.

  6. Tribology of the lubricant quantized sliding state.

    PubMed

    Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio

    2009-11-07

    In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.

  7. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the rate of imaging. Because of the slight variations of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed of the results of both the coarse and fine quantizations. This equivalent operation comparably reduces the total number of bits required for quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumptions of slices of the two versions are 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
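The coarse/fine code composition can be illustrated with plain bit operations. The 4-high/6-low split of the 10-bit code below is an assumed example for illustration, not necessarily the chip's actual partition:

```python
def compose(coarse_code, fine_code, fine_bits=6):
    """Recombine a coarse (high-bit) and fine (low-bit) quantization into one code."""
    return (coarse_code << fine_bits) | fine_code

def split(code, fine_bits=6):
    """Split a full code into the high bits (coarse group) and low bits (fine group)."""
    coarse = code >> fine_bits              # high bits, from the coarse-quantization pixels
    fine = code & ((1 << fine_bits) - 1)    # low bits, from the fine-quantization pixels
    return coarse, fine
```

Because the coarse group resolves only the high bits and the fine group only the low bits, neither ADC pass needs the full 10-bit resolution, which is where the power saving comes from.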

  8. Minimum impulse three-body trajectories.

    NASA Technical Reports Server (NTRS)

    D'Amario, L.; Edelbaum, T. N.

    1973-01-01

A rapid and accurate method of calculating optimal impulsive transfers in the restricted problem of three bodies has been developed. The technique combines a multi-conic method of trajectory integration with primer vector theory and an accelerated gradient method of trajectory optimization. A unique feature is that the state transition matrix and the primer vector are found analytically, without additional integrations or differentiations. The method has been applied to the determination of optimal two- and three-impulse transfers between the L2 libration point and circular orbits about both the earth and the moon.

  9. Identification and characterization of highly versatile peptide-vectors that bind non-competitively to the low-density lipoprotein receptor for in vivo targeting and delivery of small molecules and protein cargos

    PubMed Central

    David, Marion; Lécorché, Pascaline; Masse, Maxime; Faucon, Aude; Abouzid, Karima; Gaudin, Nicolas; Varini, Karine; Gassiot, Fanny; Ferracci, Géraldine; Jacquot, Guillaume; Vlieghe, Patrick

    2018-01-01

Insufficient membrane penetration of drugs, in particular biotherapeutics, and/or low target specificity remain a major drawback in their efficacy. We propose here the rational characterization and optimization of peptides to be developed as vectors that target cells expressing specific receptors involved in endocytosis or transcytosis. Among receptors involved in receptor-mediated transport is the LDL receptor. Screening complex phage-displayed peptide libraries on the human LDLR (hLDLR) stably expressed in cell lines led to the characterization of a family of cyclic and linear peptides that specifically bind the hLDLR. The VH411 lead cyclic peptide allowed endocytosis of payloads such as the S-Tag peptide or antibodies into cells expressing the hLDLR. Size reduction and chemical optimization of this lead peptide-vector led to improved receptor affinity. The optimized peptide-vectors were successfully conjugated to cargos of different nature and size, including small organic molecules, siRNAs, peptides, or a protein moiety such as an Fc fragment. We show that in all cases the peptide-vectors retain their binding affinity to the hLDLR and potential for endocytosis. Following i.v. administration in wild-type or ldlr-/- mice, an Fc fragment chemically conjugated or fused in C-terminal to peptide-vectors showed significant biodistribution in LDLR-enriched organs. We have thus developed highly versatile peptide-vectors endowed with good affinity for the LDLR as a target receptor. These peptide-vectors have the potential to be further developed for efficient transport of therapeutic or imaging agents into cells (including pathological cells) or organs that express the LDLR. PMID:29485998

10. Electro-gravity via geometric chronon field

    NASA Astrophysics Data System (ADS)

    Suchard, Eytan H.

    2017-05-01

    In De Sitter / Anti-De Sitter space-time and in other geometries, reference sub-manifolds, from which proper time is measured along integral curves, are described as events. We introduce here a foliation with the help of a scalar field. The scalar field need not be unique, but from the gradient of the scalar field an intrinsic Reeb vector of the foliations perpendicular to the gradient vector is calculated. The Reeb vector describes the acceleration of a physical particle that moves along the integral curves formed by the gradient of the scalar field. The Reeb vector appears as a component of an anti-symmetric matrix which is part of a rank-2 2-form. The 2-form is extended into a non-degenerate 4-form and into a rank-4 matrix of a 2-form which, when multiplied by the velocity of a particle, becomes the acceleration of the particle. The matrix has one U(1) degree of freedom and additional SU(2) degrees of freedom in two vectors that span the plane perpendicular to the gradient of the scalar field and to the Reeb vector. In total, there are U(1) x SU(2) degrees of freedom. SU(3) degrees of freedom arise from three-dimensional foliations but require an additional symmetry in order to have a valid covariant meaning. Matter in the Einstein-Grossmann equation is replaced by the action of the acceleration field, i.e. by a geometric action which is not anticipated by the metric alone. This idea leads to a new formalism that replaces the conventional stress-energy-momentum tensor. The formalism is mainly developed for classical physics but is also discussed for quantized physics based on events instead of particles. The result is that a positive charge manifests small attracting gravity and a stronger but still small repelling acceleration field that repels even uncharged particles that have rest mass. Negative charge manifests repelling anti-gravity but also a stronger acceleration field that attracts even uncharged particles that have rest mass. Preliminary version: http://sciencedomain.org/abstract/9858

  11. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  12. Quantized Majorana conductance

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.

    2018-04-01

    Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  13. Quantized Majorana conductance.

    PubMed

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D S; de Moor, Michiel W A; Car, Diana; Op Het Veld, Roy L M; van Veldhoven, Petrus J; Koelling, Sebastian; Verheijen, Marcel A; Pendharkar, Mihir; Pennachio, Daniel J; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J; Bakkers, Erik P A M; Sarma, S Das; Kouwenhoven, Leo P

    2018-04-05

    Majorana zero-modes (a type of localized quasiparticle) hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e2/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e2/h, with a recent observation of a peak height close to 2e2/h. Here we report a quantized conductance plateau at 2e2/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  14. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  15. Luminescence and efficiency optimization of InGaN/GaN core-shell nanowire LEDs by numerical modelling

    NASA Astrophysics Data System (ADS)

    Römer, Friedhard; Deppner, Marcus; Andreev, Zhelio; Kölper, Christopher; Sabathil, Matthias; Strassburg, Martin; Ledig, Johannes; Li, Shunfeng; Waag, Andreas; Witzigmann, Bernd

    2012-02-01

    We present a computational study on the anisotropic luminescence and the efficiency of a core-shell type nanowire LED based on GaN with InGaN active quantum wells. The physical simulator used for analyzing this device integrates a multidimensional drift-diffusion transport solver and a k·p Schrödinger problem solver for quantization effects and luminescence. The solutions of the two problems are coupled to achieve self-consistency. Using this solver we investigate the effect of dimensions, design of quantum wells, and current injection on the efficiency and luminescence of the core-shell nanowire LED. The anisotropy of the luminescence and re-absorption is analyzed with respect to the external efficiency of the LED. From the results we derive strategies for design optimization.

  16. Primer vector theory applied to the linear relative-motion equations. [for N-impulse space trajectory optimization

    NASA Technical Reports Server (NTRS)

    Jezewski, D.

    1980-01-01

    Primer vector theory is used in analyzing a set of linear relative-motion equations, the Clohessy-Wiltshire (C/W) equations, to determine the criteria and necessary conditions for an optimal N-impulse trajectory. The analysis develops the analytical criteria for improving a solution by: (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of: (1) a fixed-end-conditions, two-impulse, time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem.

  17. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals, and the results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, storage, and processing results.
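
The level-count versus sampling-rate trade-off evaluated in this study can be illustrated with a minimal uniform quantizer (a generic sketch with hypothetical names, not the NASA design):

```python
import numpy as np

def uniform_quantize(signal, n_levels, lo=-1.0, hi=1.0):
    """Map each sample to the nearest of n_levels evenly spaced values in [lo, hi]."""
    step = (hi - lo) / (n_levels - 1)
    idx = np.round((np.clip(signal, lo, hi) - lo) / step)
    return lo + idx * step

# Quantize a synthesized 5 Hz tone to 8 levels (3 bits per sample)
t = np.linspace(0.0, 1.0, 200, endpoint=False)
tone = np.sin(2 * np.pi * 5 * t)
quantized = uniform_quantize(tone, n_levels=8)
```

Doubling the number of levels halves the worst-case rounding error (step/2), which is the quantity traded against sampling rate in such a design.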

  18. Attacking the mosquito on multiple fronts: Insights from the Vector Control Optimization Model (VCOM) for malaria elimination.

    PubMed

    Kiware, Samson S; Chitnis, Nakul; Tatarsky, Allison; Wu, Sean; Castellanos, Héctor Manuel Sánchez; Gosling, Roly; Smith, David; Marshall, John M

    2017-01-01

    Despite great achievements by insecticide-treated nets (ITNs) and indoor residual spraying (IRS) in reducing malaria transmission, it is unlikely that these tools will be sufficient on their own to eliminate malaria transmission in many settings today. Fortunately, field experiments indicate that there are many promising vector control interventions that can be used to complement ITNs and/or IRS by targeting a wide range of biological and environmental mosquito resources. The majority of these experiments tested a single vector control intervention in isolation; however, there is growing evidence and consensus that effective vector control with the goal of malaria elimination will require a combination of interventions. We have developed a model of mosquito population dynamics to describe the mosquito life and feeding cycles and to optimize the impact of combinations of vector control interventions at suppressing mosquito populations. The model simulations were performed for the three main malaria vectors in sub-Saharan Africa, Anopheles gambiae s.s., An. arabiensis and An. funestus. We considered areas having low, moderate and high malaria transmission, corresponding to entomological inoculation rates of 10, 50 and 100 infective bites per person per year, respectively. In all settings, we considered baseline ITN coverage of 50% or 80% in addition to a range of other vector control tools to interrupt malaria transmission. The model was used to sweep through parameter space to select optimal intervention packages. Sample model simulations indicate that starting with ITNs at a coverage of 50% (An. gambiae s.s. and An. funestus) or 80% (An. arabiensis) and adding interventions that do not require human participation (e.g. larviciding at 80% coverage, endectocide-treated cattle at 50% coverage and attractive toxic sugar baits at 50% coverage) may be sufficient to suppress all three species to the extent required to achieve local malaria elimination. The Vector Control Optimization Model (VCOM) is a computational tool to predict the impact of combined vector control interventions at the mosquito population level in a range of eco-epidemiological settings. The model predicts specific combinations of vector control tools to achieve local malaria elimination and can assist researchers and program decision-makers in the design of experimental or operational research to test vector control interventions. A corresponding graphical user interface is available for national malaria control programs and other end users.

  19. Cascade Error Projection: A Learning Algorithm for Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1996-01-01

    In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm trains only one layer, whereas the other set of weights can be calculated deterministically. Using the dynamical step-size change concept to convert the weight update from an infinite space into a finite space, the relation between the current step size and the previous energy level is also given, and the estimation procedure for the optimal step size is used to validate the proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme; simplicity in hardware implementation is thereby also obtained. Furthermore, this analysis allows us to select among other methods (such as conjugate gradient descent or Newton's second-order method) one that is a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis presents the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or more weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower-bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.
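
The limited-precision weight behaviour studied above can be mimicked by symmetric uniform weight quantization; this is a generic sketch (not the CEP update itself), with all names hypothetical:

```python
import numpy as np

def quantize_weights(w, n_bits):
    """Quantize a weight vector to signed n_bits precision and back
    (symmetric uniform quantization around zero)."""
    n_levels = 2 ** (n_bits - 1) - 1          # e.g. 7 levels per sign for 4 bits
    scale = max(float(np.max(np.abs(w))), 1e-12)
    return np.round(w / scale * n_levels) / n_levels * scale

w = np.array([0.83, -0.41, 0.05, -0.97])
w4 = quantize_weights(w, n_bits=4)            # coarse 4-bit version of the weights
w8 = quantize_weights(w, n_bits=8)            # finer 8-bit version
```

The rounding error of the 4-bit version is bounded by half a quantization step, and the 8-bit version is at least as accurate, mirroring the bit-resolution trade-off discussed in the abstract.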

  20. Modular protein expression by RNA trans-splicing enables flexible expression of antibody formats in mammalian cells from a dual-host phage display vector.

    PubMed

    Shang, Yonglei; Tesar, Devin; Hötzel, Isidro

    2015-10-01

    A recently described dual-host phage display vector that allows expression of immunoglobulin G (IgG) in mammalian cells bypasses the need for subcloning of phage display clone inserts to mammalian vectors for IgG expression in large antibody discovery and optimization campaigns. However, antibody discovery and optimization campaigns usually need different antibody formats for screening, requiring reformatting of the clones in the dual-host phage display vector to an alternative vector. We developed a modular protein expression system mediated by RNA trans-splicing to enable the expression of different antibody formats from the same phage display vector. The heavy-chain region encoded by the phage display vector is directly and precisely fused to different downstream heavy-chain sequences encoded by complementing plasmids simply by joining exons in different pre-mRNAs by trans-splicing. The modular expression system can be used to efficiently express structurally correct IgG and Fab fragments or other antibody formats from the same phage display clone in mammalian cells without clone reformatting.

  1. Methods, systems and apparatus for optimization of third harmonic current injection in a multi-phase machine

    DOEpatents

    Gallegos-Lopez, Gabriel

    2012-10-02

    Methods, systems and apparatus are provided for increasing voltage utilization in a five-phase vector-controlled machine drive system that employs third harmonic current injection to increase the torque and power output of a five-phase machine. To do so, the fundamental current angle of the fundamental current vector is optimized for each torque-speed operating point of the five-phase machine.

  2. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltage (CMV). The developed method directly finds optimal vectors without repeatedly evaluating a cost function. To regulate the output currents while keeping the CMV within the range -Vdc/6 to +Vdc/6, the method uses, as its finite control set, the voltage vectors excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In model-based predictive control (MPC), excluding the zero voltage vectors increases the output current ripple and the current errors. To alleviate these problems, the developed method applies two non-zero voltage vectors in one sampling step. In addition, the voltage vectors to be used are directly selected at every sampling step once the method calculates the future reference voltage vector, avoiding repeated calculation of a cost function. The two non-zero voltage vectors are optimally allocated to bring the output current as close as possible to the reference current. Thus, low CMV, fast current tracking and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
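
The CMV ranges quoted above follow directly from the pole voltages of a two-level VSI; a small check (Vdc normalized to 1, helper name hypothetical):

```python
from itertools import product

def common_mode_voltage(state, vdc=1.0):
    """CMV of a two-level VSI switching state (sa, sb, sc), each leg 0 or 1.
    Each pole voltage is +/-Vdc/2; the CMV is the average of the three."""
    return sum((s - 0.5) * vdc for s in state) / 3.0

# Enumerate all eight switching states: the zero vectors (0,0,0)/(1,1,1)
# give CMV = -/+Vdc/2, while the six active vectors give +/-Vdc/6.
for state in product((0, 1), repeat=3):
    print(state, round(common_mode_voltage(state), 4))
```

This makes concrete why dropping the two zero vectors confines the CMV to ±Vdc/6, at the cost of a smaller control set.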

  3. Application of three controls optimally in a vector-borne disease - a mathematical study

    NASA Astrophysics Data System (ADS)

    Kar, T. K.; Jana, Soovoojeet

    2013-10-01

    We have proposed and analyzed a vector-borne disease model with three types of controls for the eradication of the disease. Four classes for the human population, namely susceptible, infected, recovered and vaccinated, and two classes for the vector population, namely susceptible and infected, are considered. In the first part of our analysis the disease dynamics are described for fixed controls, and some inferences are drawn regarding the spread of the disease. Next, the optimal control problem is formulated and solved with the control parameters treated as time dependent. Different possible combinations of controls are used, and their effectiveness is compared by numerical simulation.

  4. Optimal integer resolution for attitude determination using global positioning system signals

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn

    1998-01-01

    In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a-priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.

  5. Quantum games of opinion formation based on the Marinatto-Weber quantum game scheme

    NASA Astrophysics Data System (ADS)

    Deng, Xinyang; Deng, Yong; Liu, Qi; Shi, Lei; Wang, Zhen

    2016-06-01

    Quantization has become a new way to investigate classical game theory since quantum strategies and quantum games were proposed. In the existing studies, many typical game models, such as the prisoner's dilemma, the battle of the sexes and the Hawk-Dove game, have been extensively explored using a quantization approach. Following a similar method, here several game models of opinion formation are quantized on the basis of the Marinatto-Weber quantum game scheme, a frequently used scheme for converting classical games to quantum versions. Our results show that quantization can strikingly change the properties of some classical opinion formation game models so as to generate win-win outcomes.

  6. Exploratory research session on the quantization of the gravitational field. At the Institute for Theoretical Physics, Copenhagen, Denmark, June-July 1957

    NASA Astrophysics Data System (ADS)

    DeWitt, Bryce S.

    2017-06-01

    During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.

  7. Topologies on quantum topoi induced by quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach to quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  8. Bulk-edge correspondence in topological transport and pumping

    NASA Astrophysics Data System (ADS)

    Imura, Ken-Ichiro; Yoshimura, Yukinori; Fukui, Takahiro; Hatsugai, Yasuhiro

    2018-03-01

    The bulk-edge correspondence (BEC) refers to a one-to-one relation between bulk and edge properties that is ubiquitous in topologically nontrivial systems. Depending on the setup, the BEC manifests in different forms and governs the spectral and transport properties of topological insulators and semimetals. Although the topological pump is an old theoretical concept, the BEC in the pump has been established only recently [1], motivated by state-of-the-art experiments using cold atoms [2, 3]. The center of mass (CM) of a system with boundaries shows a sequence of quantized jumps in the adiabatic limit associated with the edge states. Although the bulk is adiabatic, the edge is inevitably non-adiabatic in the experimental setup or in any numerical simulation. Still, the pumped charge is quantized and carried by the bulk; its quantization is guaranteed by a compensation between the bulk and the edges. We show that in the presence of disorder the pumped charge continues to be quantized despite the appearance of non-quantized jumps.

  9. Characterizing the Nash equilibria of three-player Bayesian quantum games

    NASA Astrophysics Data System (ADS)

    Solmeyer, Neal; Balu, Radhakrishnan

    2017-05-01

    Quantum games with incomplete information can be studied within a Bayesian framework. We analyze games quantized within the EWL framework [Eisert, Wilkens, and Lewenstein, Phys Rev. Lett. 83, 3077 (1999)]. We solve for the Nash equilibria of a variety of two-player quantum games and compare the results to the solutions of the corresponding classical games. We then analyze Bayesian games where there is uncertainty about the player types in two-player conflicting interest games. The solutions to the Bayesian games are found to have a phase diagram-like structure where different equilibria exist in different parameter regions, depending both on the amount of uncertainty and the degree of entanglement. We find that in games where a Pareto-optimal solution is not a Nash equilibrium, it is possible for the quantized game to have an advantage over the classical version. In addition, we analyze the behavior of the solutions as the strategy choices approach an unrestricted operation. We find that some games have a continuum of solutions, bounded by the solutions of a simpler restricted game. A deeper understanding of Bayesian quantum game theory could lead to novel quantum applications in a multi-agent setting.

  10. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
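
The quantized-scaled-integer step with subtractive dithering can be sketched as follows (a simplified illustration of the idea, not the FITS tiled-image convention itself; function names and the sigma/16 step choice are hypothetical):

```python
import numpy as np

def quantize(data, step, dither):
    """Float pixels -> scaled integers, with a per-pixel dither offset in [-0.5, 0.5)."""
    return np.round(data / step + dither).astype(np.int64)

def dequantize(ints, step, dither):
    """Subtractive-dither reconstruction: the same offsets are removed."""
    return (ints - dither) * step

rng = np.random.default_rng(42)
image = rng.normal(100.0, 1.0, size=(64, 64))     # synthetic float pixels, sigma = 1
step = 1.0 / 16                                    # quantization step of sigma/16
dither = rng.random(image.shape) - 0.5             # reproducible from a stored seed
restored = dequantize(quantize(image, step, dither), step, dither)
```

The reconstruction error is bounded by half a quantization step, and the integer array then compresses well with an entropy coder such as Rice.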

  11. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
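
One of the parameters the study varies, the number of gray levels, corresponds to re-binning the ADC map before computing co-occurrence statistics. A minimal equal-width quantization sketch (one of several possible quantization methods; names and the synthetic ADC range are hypothetical):

```python
import numpy as np

def quantize_gray_levels(img, n_levels):
    """Equal-width quantization of an image to labels 0..n_levels-1."""
    edges = np.linspace(img.min(), img.max(), n_levels + 1)[1:-1]  # inner bin edges
    return np.digitize(img, edges)

rng = np.random.default_rng(1)
adc_map = rng.uniform(0.4e-3, 3.0e-3, size=(32, 32))  # synthetic ADC values (mm^2/s)
labels = quantize_gray_levels(adc_map, n_levels=16)    # 16-gray-level version
```

Changing n_levels (or switching to an equal-probability method) changes the label image, which is why the study recommends fixing the quantization method and level count across subjects.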

  12. From black holes to white holes: a quantum gravitational, symmetric bounce

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier; Saini, Sahil; Singh, Parampreet

    2017-11-01

    Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.

  13. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for the generalized Gaussian distribution. This source model consists of rate-quantization (R-Q), distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate (VBR) coding. Based on our D-Q and D-R models, the proposed method is highly stable, has low complexity, and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) a significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
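
    A generic dead-zone plus uniform threshold scalar quantizer of the kind analyzed here can be sketched as follows; the rounding offset `f` is illustrative, not the H.264/JM reference software's own offset:

```python
def dz_utq_quantize(x, step, f=1.0 / 6.0):
    """Dead-zone plus uniform threshold scalar quantization.

    f in [0, 0.5] sets the decision threshold; the dead zone around
    zero has total width 2 * (1 - f) * step, wider than the other bins."""
    level = int(abs(x) / step + f)
    return level if x >= 0 else -level

def dz_utq_dequantize(level, step):
    """Nearly uniform reconstruction: reproduce level * step."""
    return level * step
```

    The enlarged zero bin is what makes the rate-distortion behavior differ from a plain uniform quantizer, and it is the feature the source model has to capture.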

  14. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics, yet existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, for both planar domains and manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. Key-frame design and field deformation are also introduced to support other user design scenarios; accordingly, a spatial-temporal constrained optimization and a time-varying transformation are employed to generate the desired fields for these two scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.

  15. Ant Navigation: Fractional Use of the Home Vector

    PubMed Central

    Cheung, Allen; Hiby, Lex; Narendra, Ajay

    2012-01-01

    Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common place for gathering of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy well known as path integration (PI). This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. Here we ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of the navigational information content in their environment. PMID:23209744

  16. From Weyl to Born-Jordan quantization: The Schrödinger representation revisited

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-03-01

    The ordering problem has been one of the long-standing and much-discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one uses short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, without affecting the main existing theory; for instance, quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner distribution has to be replaced with a new quasi-distribution that reduces interference effects.

  17. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of the estimate errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are broadly applicable. As an example of this applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates along the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.

  18. Optimization of the Brillouin operator on the KNL architecture

    NASA Astrophysics Data System (ADS)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  19. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading, and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs.

  20. A novel method based on new adaptive LVQ neural network for predicting protein-protein interactions from protein sequences.

    PubMed

    Yousef, Abdulaziz; Moghadam Charkari, Nasrollah

    2013-11-07

    Protein-protein interaction (PPI) data are among the most important for understanding cellular processes. Many interesting methods have been proposed to predict PPIs; however, methods that use only protein sequences as prior knowledge are more universal. In this paper, a sequence-based, fast, and adaptive PPI prediction method is introduced to assign a protein pair to an interaction class (yes, no). First, to improve the representation of the sequences, twelve physicochemical properties of amino acids are used, via different representation methods, to transform the sequences of protein pairs into feature vectors. Then, to speed up the learning process and reduce the effect of noisy PPI data, principal component analysis (PCA) is carried out as a suitable feature extraction algorithm. Finally, a new adaptive Learning Vector Quantization (LVQ) predictor is designed to deal with both balanced and imbalanced datasets. Accuracies of 93.88%, 90.03%, and 89.72% were obtained on the S. cerevisiae, H. pylori, and independent datasets, respectively. The results of various experiments indicate the efficiency and validity of the method.
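
    The adaptive LVQ variant itself is not spelled out in this abstract; as a baseline, the plain LVQ1 scheme it builds on can be sketched as below (a generic sketch, not the paper's predictor):

```python
def lvq1_train(samples, labels, prototypes, proto_labels,
               lr=0.1, epochs=20):
    """Basic LVQ1: move the nearest prototype toward a sample of the
    same class and away from a sample of a different class."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # index of the closest prototype (squared Euclidean distance)
            j = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(x, protos[k])))
            sign = 1.0 if proto_labels[j] == y else -1.0
            protos[j] = [p + sign * lr * (a - p)
                         for a, p in zip(x, protos[j])]
    return protos

def lvq1_predict(x, protos, proto_labels):
    """Classify x with the label of its nearest prototype."""
    j = min(range(len(protos)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, protos[k])))
    return proto_labels[j]
```

    In the paper's setting the samples would be the PCA-reduced feature vectors of protein pairs and the two classes the interaction labels.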

  1. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a key human biometric characteristic but also because there are many potential applications, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory requirements, high computational load, and retraining problems of previous methods. The proposed system is tested on several face databases, and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory, and can overcome the retraining problem.

  2. Spectral feature extraction of EEG signals and pattern recognition during mental tasks of 2-D cursor movements for BCI using SVM and ANN.

    PubMed

    Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2016-09-01

    A brain-computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activities recorded by EEG and transforms them into machine control commands. The main goal of BCI is to make assistive devices, such as computers, available to paralyzed people and thereby make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG, as an offline analysis approach. The hemispherical power density changes are computed and compared on alpha-beta frequency bands using only mental imagination of cursor movements. First, power spectral density (PSD) features of the EEG signals are extracted, and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN), and probabilistic neural network (PNN); the mental task patterns are successfully identified via the k-fold cross-validation technique.

  3. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  4. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited camera bit depth are used, only a limited number of quantization levels are available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also limit the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique; however, the principles apply equally well to other phase-measuring techniques, yielding a phase error distribution that is caused by the camera bit depth.
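
    The accuracy ceiling imposed by bit depth can be illustrated numerically: quantizing a full-scale sinusoid (a stand-in for one fringe scan line) with B bits gives a signal-to-quantization-noise ratio near the familiar 6.02 B + 1.76 dB rule of thumb. This is only a generic illustration, not the paper's Fourier fringe analysis:

```python
import math

def quantization_snr_db(bits, n_samples=4096):
    """Quantize a full-scale sine to 2**bits levels and measure the
    signal-to-quantization-noise ratio in dB."""
    levels = 2 ** bits
    step = 2.0 / levels                       # signal spans [-1, 1)
    sig, err = 0.0, 0.0
    for n in range(n_samples):
        x = math.sin(2 * math.pi * 13 * n / n_samples)   # 13 fringe cycles
        q = (math.floor(x / step) + 0.5) * step          # mid-rise quantizer
        sig += x * x
        err += (x - q) ** 2
    return 10 * math.log10(sig / err)
```

    Each extra camera bit adds roughly 6 dB of headroom, which translates directly into a tighter bound on the recoverable phase.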

  5. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has xp structure where p is conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole. Here T and S are temperature and entropy of the horizon. Imposing the hermiticity condition we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of this earlier result as the calculation is based on the direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  6. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from the stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged to a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman's approach. The inverse problem of the calculus of variation, the problem of quantization ambiguity and the quantum mechanics in the presence of friction are analyzed in detail.

  7. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. The VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture) into a discrete event (e.g., a term in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise causes unstable frequency estimates of visual cues. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.

  8. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop.

    PubMed

    Li, Lian-Hui; Mo, Rong

    2015-01-01

    The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply. A production task queue optimization method based on multi-attribute evaluation is therefore proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judge importance degree, and a trapezoid fuzzy scale-rough AHP that considers the judge importance degree is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility.
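
    The improved TOPSIS step can be sketched as below. The exact relative-entropy distance used in the paper is not given in the abstract, so a generalized KL divergence is assumed here purely as an illustration, and all criteria are treated as benefit criteria:

```python
import math

def topsis_relative_entropy(matrix, weights):
    """TOPSIS ranking with a relative-entropy (generalized KL) distance
    in place of the usual Euclidean distance. Rows are alternatives,
    columns are (positive-valued) benefit criteria."""
    ncols = len(matrix[0])
    # vector-normalize each column, then apply the indicator weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)]
         for row in matrix]
    best = [max(col) for col in zip(*v)]     # positive ideal solution
    worst = [min(col) for col in zip(*v)]    # negative ideal solution
    eps = 1e-12

    def dist(row, ref):
        # generalized KL divergence D(ref || row), nonnegative
        return sum(r * math.log((r + eps) / (x + eps)) + x - r
                   for r, x in zip(ref, row))

    scores = []
    for row in v:
        d_best, d_worst = dist(row, best), dist(row, worst)
        scores.append(d_worst / (d_best + d_worst + eps))
    return scores  # higher score = closer to the ideal solution
```

    Sorting the tasks by descending score yields the optimized queue.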

  9. Production Task Queue Optimization Based on Multi-Attribute Evaluation for Complex Product Assembly Workshop

    PubMed Central

    Li, Lian-hui; Mo, Rong

    2015-01-01

    The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply. A production task queue optimization method based on multi-attribute evaluation is therefore proposed. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judge importance degree, and a trapezoid fuzzy scale-rough AHP that considers the judge importance degree is then put forward. The balanced weight, which integrates the objective and subjective weights, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility. PMID:26414758

  10. [Extraction Optimization of Rhizome of Curcuma longa by Response Surface Methodology and Support Vector Regression].

    PubMed

    Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan

    2015-12-01

    To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. Based on single-factor experiments, the ethanol concentration, the liquid-to-solid ratio, and the microwave time were selected for further optimization. Support Vector Regression (SVR) and the Central Composite Design-Response Surface Methodology (CCD) algorithm were utilized to design and establish models, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The sum of curcumin, demethoxycurcumin, and bisdemethoxycurcumin determined by HPLC was used as the evaluation indicator. The optimal microwave-assisted extraction parameters were as follows: ethanol concentration of 69%, liquid-to-solid ratio of 21:1, and microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviations in yield were less than 1.2%.

  11. ℓp-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple kernel support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple kernel support vector regression model.

  12. Experiences in using the CYBER 203 for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1982-01-01

    In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.

  13. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer code was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for its conversion to the Cray X-MP vector supercomputer is also described.

  14. Optimal space communications techniques. [using digital and phase locked systems for signal processing

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1974-01-01

    Digital multiplication of two waveforms using delta modulation (DM) is discussed. It is shown that while conventional multiplication of two N-bit words requires complexity of order N², multiplication using DM requires complexity that increases only linearly with N. Bounds on the signal-to-quantization-noise ratio (SNR) resulting from this multiplication are determined and compared with the SNR obtained using standard multiplication techniques. The phase-locked loop (PLL) system, consisting of a phase detector, a voltage-controlled oscillator, and a linear loop filter, is discussed in terms of its design and system advantages. Areas requiring further research are identified.
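
    The 1-bit delta-modulation representation underlying the multiplication scheme can be sketched as follows; the DM multiplier itself is not reproduced here:

```python
def dm_encode(samples, step):
    """1-bit delta modulation: each output bit says whether the
    staircase approximation steps up (+step) or down (-step)."""
    approx, bits = 0.0, []
    for s in samples:
        bit = 1 if s >= approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step):
    """Rebuild the staircase approximation from the bit stream."""
    approx, out = 0.0, []
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out
```

    Because each sample is carried by a single up/down bit, per-sample arithmetic on the streams stays O(1), which is the source of the linear overall complexity claimed for DM-based multiplication.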

  15. A Scalable Architecture of a Structured LDPC Decoder

    NASA Technical Reports Server (NTRS)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n, r) protograph that is replicated Z times to produce a decoding graph for a (Z × n, Z × r) code. Using this architecture, we have implemented a decoder for a (4096, 2048) LDPC code on a Xilinx Virtex-II 2000 FPGA and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating-point decoder.
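
    A symmetric non-uniform message quantizer of this general shape can be sketched as below; the thresholds and reconstruction levels here are hypothetical placeholders, not the optimized values of the implemented decoder:

```python
def make_quantizer(thresholds, levels):
    """Symmetric non-uniform quantizer: thresholds partition |x| into
    bins with reconstruction values `levels`, and the sign is kept
    separately, giving 2 * len(levels) codewords (3 bits when
    len(levels) == 4)."""
    def q(x):
        mag = abs(x)
        for t, lv in zip(thresholds, levels):
            if mag < t:
                return lv if x >= 0 else -lv
        return levels[-1] if x >= 0 else -levels[-1]
    return q
```

    In a message-passing decoder, placing more levels near zero, where messages are least reliable, is what lets a 3-bit non-uniform design approach floating-point performance.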

  16. Hydrodynamical model of anisotropic, polarized turbulent superfluids. I: constraints for the fluxes

    NASA Astrophysics Data System (ADS)

    Mongiovì, Maria Stella; Restuccia, Liliana

    2018-02-01

    This work is the first of a series of papers devoted to the study of the influence of the anisotropy and polarization of the tangle of quantized vortex lines in superfluid turbulence. A thermodynamical model of inhomogeneous superfluid turbulence previously formulated is here extended, to take into consideration also these effects. The model chooses as thermodynamic state vector the density, the velocity, the energy density, the heat flux, and a complete vorticity tensor field, including its symmetric traceless part and its antisymmetric part. The relations which constrain the constitutive quantities are deduced from the second principle of thermodynamics using the Liu procedure. The results show that the presence of anisotropy and polarization in the vortex tangle affects in a substantial way the dynamics of the heat flux, and allow us to give a physical interpretation of the vorticity tensor here introduced, and to better describe the internal structure of a turbulent superfluid.

  17. Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II

    NASA Astrophysics Data System (ADS)

    Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.

    2014-03-01

The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the planned Spektr-UF international observatory [1]. A new modification of a rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described, based on minimizing the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the feedback parameters for the mode of precise inertial stabilization of the optical telescope axis are synthesized, taking into account structural nonrigidity, quantization of signals in time and in level, orientation-sensor errors, and the errors and saturation of the control torques of the reaction-wheel actuators. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.

  18. FPGA implementation of self organizing map with digital phase locked loops.

    PubMed

    Hikawa, Hiroomi

    2005-01-01

The self-organizing map (SOM) has found applicability in a wide range of application areas. Recently, new SOM hardware based on phase-modulated pulse signals and digital phase-locked loops (DPLLs) has been proposed (Hikawa, 2005). The system uses the DPLL as a computing element, since the operation of the DPLL is very similar to the SOM's computation. The system also uses the phase of a square waveform to hold the value of each input vector element. This paper discusses the hardware implementation of the DPLL SOM architecture. For effective hardware implementation, some components are redesigned to reduce the circuit size. The proposed SOM architecture is described in VHDL and implemented on a field programmable gate array (FPGA). Its feasibility is verified by experiments. Results show that the proposed SOM implemented on the FPGA has good quantization capability, and its circuit size is very small.
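For comparison with the hardware version, a software sketch of the SOM computation the DPLLs emulate, assuming a 1-D map, Euclidean winner search, and a rectangular neighborhood; the learning rate and radius are illustrative.

```python
import numpy as np

# Minimal software sketch of SOM adaptation. The hardware in the paper
# instead encodes values as square-wave phases and computes with DPLLs;
# this shows only the underlying algorithm.
def som_step(weights, x, lr=0.5, radius=1):
    """One SOM update: find the best-matching unit (BMU) and pull it and
    its map neighbours toward the input vector x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    for i in range(len(weights)):
        if abs(i - bmu) <= radius:
            weights[i] += lr * (x - weights[i])
    return bmu
```

Quantization capability in the paper's sense corresponds to how closely the trained weight vectors approximate the input distribution.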

  19. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  20. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult for HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
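The visual-word step of such a pipeline is plain vector quantization of local descriptors against a learned codebook. A minimal nearest-centroid sketch (the CNN feature extraction and the correlaton statistics are not reproduced here):

```python
import numpy as np

# Assign each local descriptor to its nearest codebook centre ("visual
# word"). The codebook would normally be learned, e.g. by k-means, over
# descriptors extracted from training images.
def assign_visual_words(descriptors, codebook):
    """Return the index of the nearest codebook centre for each descriptor."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

The correlograms described above then record how often pairs of these word indices co-occur at given spatial offsets.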

  1. New exact solutions of the Dirac and Klein-Gordon equations of a charged particle propagating in a strong laser field in an underdense plasma

    NASA Astrophysics Data System (ADS)

    Varró, Sándor

    2014-03-01

Exact solutions are presented of the Dirac and Klein-Gordon equations of a charged particle propagating in a classical monochromatic electromagnetic plane wave in a medium of index of refraction n_m < 1. In the Dirac case the solutions are expressed in terms of new complex polynomials, and in the Klein-Gordon case they are expressed in terms of Ince polynomials. In each case they form a doubly infinite set, labeled by two integer quantum numbers. These integers represent quantized momentum components of the charged particle along the polarization vector and along the propagation direction of the electromagnetic radiation. Since this radiation may represent a plasmon wave of arbitrarily high amplitude propagating in an underdense plasma, the solutions obtained may be relevant to describing possible quantum features of novel acceleration mechanisms.

  2. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.
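Tree-structured vector quantization accelerates the patch search by descending a binary tree of centroids instead of scanning the whole codebook, turning an O(N) search into roughly O(log N). The sketch below uses a hypothetical two-level tree and is not the paper's implementation.

```python
import numpy as np

# Binary tree-structured VQ search. An internal node is a pair
# (left, right) where each child is (centroid, subtree_or_leaf_index);
# a leaf payload is the final codeword index.
def tsvq_search(x, tree):
    """Descend the centroid tree, at each level taking the nearer child."""
    node = tree
    while isinstance(node, tuple):
        left, right = node
        dl = np.linalg.norm(x - left[0])
        dr = np.linalg.norm(x - right[0])
        node = left[1] if dl <= dr else right[1]
    return node  # leaf: codeword index
```

The trade-off is that greedy descent can miss the globally nearest codeword, which is usually acceptable for patch matching.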

  3. Competitive learning with pairwise constraints.

    PubMed

    Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep

    2013-01-01

Constrained clustering has been an active research topic over the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results, in terms of normalized mutual information (NMI), from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
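The rival penalized competitive learning idea underlying C-RPCL can be sketched in its unconstrained form: the winning prototype is attracted to the sample while the runner-up (the "rival") is pushed away at a much smaller rate. The rates here are illustrative, and the pairwise-constraint handling of C-RPCL is omitted.

```python
import numpy as np

# One on-line RPCL update (unconstrained sketch). Penalizing the rival
# drives redundant prototypes away, which is how RPCL can shed extra
# cluster centres automatically.
def rpcl_step(protos, x, lr=0.1, penal=0.01):
    d = np.linalg.norm(protos - x, axis=1)
    winner, rival = np.argsort(d)[:2]
    protos[winner] += lr * (x - protos[winner])      # attract winner
    protos[rival] -= penal * (x - protos[rival])     # repel rival
    return winner, rival
```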

  4. Automatic assessment of voice quality according to the GRBAS scale.

    PubMed

    Sáenz-Lechón, Nicolás; Godino-Llorente, Juan I; Osma-Ruiz, Víctor; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2006-01-01

Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality; it is standard in Japan, and there is increasing interest in it in both Europe and the United States. However, this technique requires well-trained experts and depends on the evaluator's expertise and even psycho-physical state. Furthermore, great variability is observed in the assessments performed by different evaluators. An objective method providing such a measurement of voice quality would therefore be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term Mel cepstral parameters (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. Results show that this approach provides acceptable results for this purpose, with accuracy around 65% at best.
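The LVQ stage can be illustrated with the classical LVQ1 update rule, assuming the MFCC front-end has already produced feature vectors; the learning rate is illustrative.

```python
import numpy as np

# One LVQ1 training step: the nearest prototype is attracted to the
# sample if their class labels agree, and repelled otherwise. At test
# time, a sample is simply assigned the label of its nearest prototype.
def lvq1_step(protos, labels, x, y, lr=0.1):
    k = np.argmin(np.linalg.norm(protos - x, axis=1))
    sign = 1.0 if labels[k] == y else -1.0
    protos[k] += sign * lr * (x - protos[k])
    return k
```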

  5. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  6. Quantify spatial relations to discover handwritten graphical symbols

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Mouchère, Harold; Viard-Gaudin, Christian

    2012-01-01

    To model a handwritten graphical language, spatial relations describe how the strokes are positioned in the 2-dimensional space. Most of existing handwriting recognition systems make use of some predefined spatial relations. However, considering a complex graphical language, it is hard to express manually all the spatial relations. Another possibility would be to use a clustering technique to discover the spatial relations. In this paper, we discuss how to create a relational graph between strokes (nodes) labeled with graphemes in a graphical language. Then we vectorize spatial relations (edges) for clustering and quantization. As the targeted application, we extract the repetitive sub-graphs (graphical symbols) composed of graphemes and learned spatial relations. On two handwriting databases, a simple mathematical expression database and a complex flowchart database, the unsupervised spatial relations outperform the predefined spatial relations. In addition, we visualize the frequent patterns on two text-lines containing Chinese characters.

  7. Magnetofermionic condensate in two dimensions

    PubMed Central

    Kulik, L. V.; Zhuravlev, A. S.; Dickmann, S.; Gorbunov, A. V.; Timofeev, V. B.; Kukushkin, I. V.; Schmult, S.

    2016-01-01

    Coherent condensate states of particles obeying either Bose or Fermi statistics are in the focus of interest in modern physics. Here we report on condensation of collective excitations with Bose statistics, cyclotron magnetoexcitons, in a high-mobility two-dimensional electron system in a magnetic field. At low temperatures, the dense non-equilibrium ensemble of long-lived triplet magnetoexcitons exhibits both a drastic reduction in the viscosity and a steep enhancement in the response to the external electromagnetic field. The observed effects are related to formation of a super-absorbing state interacting coherently with the electromagnetic field. Simultaneously, the electrons below the Fermi level form a super-emitting state. The effects are explicable from the viewpoint of a coherent condensate phase in a non-equilibrium system of two-dimensional fermions with a fully quantized energy spectrum. The condensation occurs in the space of vectors of magnetic translations, a property providing a completely new landscape for future physical investigations. PMID:27848969

  8. Neural networks to classify speaker independent isolated words recorded in radio car environments

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Simeoni, M.; Torri, V.

    1993-02-01

Many applications, in particular those requiring nonlinear signal processing, have proved Artificial Neural Networks (ANNs) to be invaluable tools for model-free estimation. The classifying abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring the implementation of compact integrated devices is considered: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. Data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished by using cepstral coefficient representations, log area ratios, and filter-bank techniques. Multilayer perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for the implementation of compact real-time neural devices.

  9. Quantum entanglement and position-momentum entropic squeezing of a moving Lambda-type three-level atom interacting with a single-mode quantized field with intensity-dependent coupling

    NASA Astrophysics Data System (ADS)

    Faghihi, M. J.; Tavassoly, M. K.

    2013-07-01

In this paper, we study the interaction between a moving Λ-type three-level atom and a single-mode cavity field in the presence of intensity-dependent atom-field coupling. After obtaining the state vector of the entire system explicitly, we study nonclassical features of the system such as quantum entanglement, position-momentum entropic squeezing, quadrature squeezing, and sub-Poissonian statistics. The numerical results illustrate that the squeezing period, the duration of entropy squeezing, and the maximal squeezing can be controlled by choosing an appropriate nonlinearity function together with entering the atomic motion effect through a suitable selection of the field-mode structure parameter. The atomic motion, as well as the nonlinearity function, also leads to oscillatory behaviour of the degree of entanglement between the atom and the field.

  10. Limited Rank Matrix Learning, discriminative dimension reduction and visualization.

    PubMed

    Bunte, Kerstin; Schneider, Petra; Hammer, Barbara; Schleif, Frank-Michael; Villmann, Thomas; Biehl, Michael

    2012-02-01

We present an extension of the recently introduced Generalized Matrix Learning Vector Quantization algorithm. In the original scheme, adaptive square matrices of relevance factors parameterize a discriminative distance measure. We extend the scheme to matrices of limited rank, corresponding to low-dimensional representations of the data. This makes it possible to incorporate prior knowledge of the intrinsic dimension and to reduce the number of adaptive parameters efficiently. In particular, for very high-dimensional data, limiting the rank can reduce computation time and memory requirements significantly. Furthermore, two- or three-dimensional representations constitute an efficient visualization method for labeled data sets. The identification of a suitable projection is not treated as a pre-processing step but as an integral part of the supervised training. Several real-world data sets serve as an illustration and demonstrate the usefulness of the suggested method. Copyright © 2011 Elsevier Ltd. All rights reserved.
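The core of the limited-rank scheme is the distance d(w, x) = ||Ω(x − w)||² with Ω a rectangular M×N matrix, so Ω doubles as a learned projection to an M-dimensional space usable for visualization. A minimal sketch; Ω is fixed here purely for illustration, whereas in the algorithm it is adapted during training.

```python
import numpy as np

# Limited-rank matrix distance: project the difference vector through a
# rectangular matrix omega (M x N, M < N) and take its squared norm.
def gmlvq_distance(omega, w, x):
    diff = omega @ (x - w)
    return float(diff @ diff)
```

With M = 2 or 3, plotting `omega @ x` for all samples gives exactly the discriminative visualization the abstract describes.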

  11. Analytical study of bound states in graphene nanoribbons and carbon nanotubes: The variable phase method and the relativistic Levinson theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miserev, D. S., E-mail: d.miserev@student.unsw.edu.au, E-mail: erazorheader@gmail.com

    2016-06-15

The problem of localized states in 1D systems with a relativistic spectrum, namely, graphene stripes and carbon nanotubes, is studied analytically. The bound state as a superposition of two chiral states is completely described by their relative phase, which is the foundation of the variable phase method (VPM) developed herein. Based on our VPM, we formulate and prove the relativistic Levinson theorem. The problem of bound states can be reduced to the analysis of closed trajectories of some vector field. Remarkably, the Levinson theorem appears as the Poincaré index theorem for these closed trajectories. The VPM equation is also reduced to the nonrelativistic and semiclassical limits. The limit of a small transverse-quantization momentum p_y is applicable to an arbitrary integrable potential. In this case, a single confined mode is predicted.

  12. [Effects of plant viruses on vector and non-vector herbivorous arthropods and their natural enemies: a mini review].

    PubMed

    He, Xiao-Chan; Xu, Hong-Xing; Zhou, Xiao-Jun; Zheng, Xu-Song; Sun, Yu-Jian; Yang, Ya-Jun; Tian, Jun-Ce; Lü, Zhong-Xian

    2014-05-01

Plant viruses transmitted by arthropods, as an important biotic factor, may not only directly affect the yield and quality of host plants and the development, physiological characteristics, and ecological performance of their vector arthropods, but may also directly or indirectly affect non-vector herbivorous arthropods and their natural enemies in the same ecosystem, thereby influencing the whole agro-ecosystem. This paper reviews progress on the effects of plant viruses on herbivorous arthropods, both vector and non-vector, and their natural enemies, and on the underlying ecological mechanisms, to provide a reference for optimizing the management of vector and non-vector arthropod populations and the sustainable control of plant viruses in agro-ecosystems.

  13. Predictability of a Coupled Model of ENSO Using Singular Vector Analysis: Optimal Growth and Forecast Skill.

    NASA Astrophysics Data System (ADS)

    Xue, Yan

    The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model and local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth in a linear evolution operator A is governed by the eigenvalues of A^{T}A, and the square roots of eigenvalues and eigenvectors of A^{T}A are named singular values and singular vectors. One dominant growing singular vector is found, and the optimal 6 month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecasts is seasonal, attributable to the seasonal variations in the mean background, except that in the cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". One dominant growing singular vector is found, characterized by north-south and east -west dipoles, convergent winds on the equator in the eastern Pacific and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of initial states. The ENSO system is inherently unpredictable for the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO. 
However, the inherent ENSO predictability is only a secondary factor; the mismatches between the model and the data are the primary factor controlling the current forecast skill.
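The singular-vector machinery described above is standard linear algebra: for a linear propagator A, the optimal initial perturbation and its growth factor are the leading right singular vector and singular value of A (equivalently, the leading eigenpair of A^T A). A sketch with an arbitrary matrix standing in for the tangent linear model:

```python
import numpy as np

# Optimal perturbation growth for a linear propagator A: the largest
# singular value is the maximum amplification over the optimization
# interval, and the corresponding right singular vector is the initial
# perturbation that achieves it.
def optimal_growth(A):
    U, s, Vt = np.linalg.svd(A)
    return s[0], Vt[0]
```

In the study, A would be the (seasonally varying) Markov propagator over a 6-month interval, so the growth factor and optimal structure depend on the start season.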

  14. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

A non-linear dynamic compensation subsystem is added in the feedback loop of a high-precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth, optimized for speed of control system response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.

  15. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only stretch the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and foreground-background contrast. First, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance, in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds for the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments on real infrared images show that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
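The double-plateau idea can be sketched with fixed thresholds (in the paper they are selected by particle swarm optimization over a local entropy weighted histogram, which is not reproduced here): histogram counts are clipped from above and lifted from below before the cumulative mapping is built, which limits over-enhancement of dense gray levels and noise amplification in sparse ones.

```python
import numpy as np

# Plateau-limited histogram equalization with hypothetical fixed
# thresholds t_down (lower plateau) and t_up (upper plateau).
def double_plateau_equalize(img, t_down, t_up, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    hist = np.clip(hist, t_down, t_up).astype(float)  # apply both plateaus
    cdf = np.cumsum(hist)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(int)
    return lut[img]
```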

  16. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
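The decomposition underlying MOEA/D-style methods assigns each weight vector one scalar subproblem, commonly the weighted Tchebycheff aggregation g(x | w, z*) = max_i w_i |f_i(x) − z*_i|, where z* is the ideal point; EMOSA's adaptive update of the weight vectors is not shown here.

```python
# Weighted Tchebycheff aggregation: turns an objective vector f into a
# scalar for one subproblem defined by weight vector w and ideal point
# z_star. Minimizing it pushes the solution toward the Pareto front in
# the direction implied by w.
def tchebycheff(f, w, z_star):
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))
```

Different weight vectors select different Pareto-optimal points, which is why their diversity, and hence their adaptation in EMOSA, matters.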

  17. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population toward the Pareto front rapidly while maintaining population diversity with high efficiency. Two- and three-objective designs of a 1.5 MW wind turbine blade are then carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto-optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays a dramatic improvement in algorithm performance in both convergence and diversity preservation when handling complex problems with multiple variables, objectives, and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.

  18. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2014-07-01

    establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these... analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive

  19. Differential calculus on quantized simple lie groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-07-01

Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU_q(2), are introduced for the quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups is used. For our differential calculi, an equivalence is obtained between Woronowicz's enveloping algebra, generated by the dual space to the left-invariant differential forms, and the corresponding quantized universal enveloping algebra. Real forms for q ∈ ℝ are also discussed.

  20. Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.

    PubMed

    Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V

    2013-01-16

    Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.
