Combining Vector Quantization and Histogram Equalization.
ERIC Educational Resources Information Center
Cosman, Pamela C.; And Others
1992-01-01
Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squares error criterion, the engine locates at video rates the best codevector in full-search or large tree-search VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video codec can be built on a single board, permitting real-time experimentation with very large codebooks.
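The O(N) versus O(log N) contrast above can be sketched in a few lines. This is an illustrative comparison of full search against greedy binary tree search over a toy scalar codebook; the node layout and helper names are assumptions, not the chip set's actual design.

```python
# Full search vs. binary tree search over a VQ codebook (scalar codevectors
# for brevity; the same logic applies per-dimension for K-dim vectors).

def full_search(codebook, x):
    """O(N): test every codevector."""
    return min(codebook, key=lambda c: (c - x) ** 2)

def leaf(v):
    return {"c": v, "v": v}            # leaf stores its codevector

def node(left, right):
    # internal node carries the centroid of its two children
    return {"c": (left["c"] + right["c"]) / 2.0, "l": left, "r": right}

def tree_search(t, x):
    """O(log N): descend toward the closer child's centroid at each level."""
    while "l" in t:
        t = t["l"] if abs(t["l"]["c"] - x) <= abs(t["r"]["c"] - x) else t["r"]
    return t["v"]
```

The tree search is greedy and can occasionally miss the true nearest codevector; the speedup comes from visiting one root-to-leaf path instead of all N leaves.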
2002-01-01
their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for... cancerous cells. We will use the same dataset and use tree-structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.
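The constraint above (remove only nodes with no descendants) can be sketched as a toy leaf-collapsing sweep: only internal nodes whose children are both leaves may be collapsed, and each step collapses the one costing the least extra distortion. The node fields ("d" for the distortion incurred if the node becomes a leaf) are my own convention, not the authors' notation.

```python
def collapsible(t, out):
    """Collect internal nodes whose children are both leaves."""
    if "l" in t:
        if "l" not in t["l"] and "l" not in t["r"]:
            out.append(t)
        collapsible(t["l"], out)
        collapsible(t["r"], out)
    return out

def prune_once(root):
    """Collapse the candidate with the smallest distortion increase."""
    best = min(collapsible(root, []),
               key=lambda n: n["d"] - (n["l"]["d"] + n["r"]["d"]))
    del best["l"], best["r"]           # the pruned subtree becomes a leaf
    return best
```

Repeating `prune_once` until only the root remains traces out a nested family of pruned codebooks, one per step, which is what supplies the extra operating bit rates.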
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multiresolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure run from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree-search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to varying input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
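The multistage structure discussed above can be sketched minimally: each stage quantizes the residual left by the previous stage, and the reconstruction is the sum of the selected stage codewords. The scalar toy codebooks below are illustrative only, not the coder's actual tables.

```python
def nearest(codebook, x):
    """Pick the codeword minimizing squared error."""
    return min(codebook, key=lambda c: (c - x) ** 2)

def multistage_encode(stages, x):
    """Quantize x through successive residual stages."""
    picks, residual = [], x
    for codebook in stages:
        c = nearest(codebook, residual)
        picks.append(c)
        residual -= c                  # next stage sees what is left over
    return picks, sum(picks)           # stage codewords and reconstruction
```

The memory saving is the point: three stages of a few codewords each span a product codebook far larger than any single table of the same total size.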
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched VQ. For a tree-searched VQ, the special case of a Binary Tree-Searched VQ (BTSVQ) is disclosed, with identical Processing Elements (PEs) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees (SPIHT).
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables through a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation method using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
NASA Astrophysics Data System (ADS)
Binz, Ernst; Pods, Sonja
2006-01-01
In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors-sparsity prior and graph-signal smoothness prior-for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application-Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in greater robustness to channel bit errors than methods that use variable-length codes.
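A single frequency-sensitive competitive learning step can be sketched as follows: the winner is chosen by distortion scaled by its win count, so rarely used codewords stay competitive. The scalar codebook, the fairness function (count times squared error), and the learning rate here are illustrative assumptions, not the paper's exact formulation.

```python
def fscl_step(codebook, counts, x, lr=0.5):
    """One frequency-sensitive competitive learning update."""
    j = min(range(len(codebook)),
            key=lambda i: counts[i] * (codebook[i] - x) ** 2)
    codebook[j] += lr * (x - codebook[j])   # move the winner toward the input
    counts[j] += 1                          # handicap it in future contests
    return j
```

Without the count factor, one codeword can win every contest and the rest of the codebook is never trained; the scaling spreads wins across the codebook.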
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
The Bag-of-Words (BoW) model based on the Scale-Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, a novel feature quantization scheme is proposed in this paper to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
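The codebook-free idea can be illustrated with a toy binarization: threshold each descriptor dimension against the descriptor's own median to form a bit-vector, and compare signatures by Hamming distance. The actual BSIFT binarization differs in detail; this only shows why no trained visual codebook is needed.

```python
def binarize(descriptor):
    """Quantize a descriptor to bits using its own median as threshold."""
    med = sorted(descriptor)[len(descriptor) // 2]
    return [1 if v > med else 0 for v in descriptor]

def hamming(a, b):
    """Count differing bit positions between two signatures."""
    return sum(x != y for x, y in zip(a, b))
```

Because the threshold comes from the descriptor itself, no image collection is consulted at quantization time, which is the data-independence property the title refers to.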
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme for achieving low bit rates, minimizing communication channel bandwidth and digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap to using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because of its simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
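One way a quantizer can get by with only additions and subtractions, as claimed above, is nearest-codeword search under the L1 (sum of absolute differences) metric, which needs no multiplications. This is a generic sketch of that idea, not the specific architecture of the paper.

```python
def l1_nearest(codebook, x):
    """Nearest codeword under the L1 metric, using only add/sub/compare."""
    best_i, best_d = 0, None
    for i, c in enumerate(codebook):
        # |a - b| computed by subtraction and comparison alone
        d = sum(a - b if a >= b else b - a for a, b in zip(c, x))
        if best_d is None or d < best_d:
            best_i, best_d = i, d
    return best_i
```

In hardware, dropping the multipliers required by a squared-error metric is exactly what makes such a search cheap to lay out.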
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames on the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in temporal domain. The experimental results show that the proposed algorithm achieves the accuracy of more than 95%. In addition, the comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), multispectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive vector quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing each centroid as a recursive moving average, so that the centroids move after every vector is encoded. For a fixed set of vectors, this recursive calculation yields the same centroid as a batch computation. This method of centroid calculation can be easily combined with VQ encoding techniques: the quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
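The recursive moving-average update described above amounts to one line: after each newly encoded value x, the centroid drifts toward x by 1/count of the gap. Scalar values for brevity; over a fixed set this reproduces the ordinary mean, matching the equivalence claimed in the abstract.

```python
def update_centroid(centroid, count, x):
    """Running-mean update applied after x is assigned to this centroid."""
    count += 1
    centroid += (x - centroid) / count
    return centroid, count
```

A practical variant replaces 1/count with a fixed step size so the centroid keeps tracking slowly varying statistics instead of freezing as count grows.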
Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.
Karayiannis, N B; Pai, P I
1999-02-01
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
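The "update all prototypes" behavior above can be sketched with a minimal soft competitive step: every prototype moves toward the input, weighted by a fuzzy membership that decays with distance. The inverse-squared-distance membership used here is a common choice, not necessarily the exact FALVQ formulation.

```python
def soft_update(protos, x, lr=0.1, eps=1e-9):
    """Move every prototype toward x, weighted by fuzzy membership."""
    w = [1.0 / ((p - x) ** 2 + eps) for p in protos]   # closer => larger
    s = sum(w)
    return [p + lr * (wi / s) * (x - p) for p, wi in zip(protos, w)]
```

Updating all prototypes, rather than only the winner, is what distinguishes this family from hard competitive learning and reduces sensitivity to initialization.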
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low-rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidate regions to be searched locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality, improving on state-of-the-art approaches for object retrieval.
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of vector quantization to the Address Vector Quantization method are investigated. In vector quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by a table lookup technique, where the label is simply used as an address into a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different vector quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of Kohonen neural networks for codebook design. During the encoding process, the correlation of addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix for selecting the best subcodebook to encode the image, is developed.
In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. The paper is divided into six sections. The first briefly highlights the importance of newer and better compression algorithms; it also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision in response to ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in section 5. The third section concentrates on the most important data gathered on brightness models, addressing the construction of so-called brightness maps (quantifications of human perception of visible object reflectance) in a two-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As previous work has shown, the Hermite transform is a useful and practical tool for efficiently coding the energy within an image block and deciding which kind of quantization (scalar or vector) is to be applied. It is also a unique tool for structurally classifying an image block within a given lattice; this operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of the three main topics addressed in the preceding sections into an image compression model that exploits vector quantizers in the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows results obtained while testing the coding/decoding model; the criteria used to evaluate compression performance were compression ratio, SNR, and psycho-visual quality. Conclusions derived from the research and possible unexplored paths are presented in this section as well.
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
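Several of the abstracts collected here (the LBG-based coders, the fuzzy K-means codebook designs) build on the generalized Lloyd (LBG) design loop for the vector quantizers this survey describes. A minimal sketch of that loop, with illustrative names of our own choosing:

```python
import numpy as np

def lbg_codebook(training, n_codevectors, n_iters=20, seed=0):
    """Generalized Lloyd (LBG) codebook design: alternate a nearest-neighbor
    partition of the training set with a centroid update of each cell."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = training[rng.choice(len(training), n_codevectors,
                                   replace=False)].astype(float)
    for _ in range(n_iters):
        # Partition: assign each training vector to its nearest codevector.
        dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update: each codevector becomes the mean of its cell.
        for j in range(n_codevectors):
            members = training[labels == j]
            if len(members) > 0:
                codebook[j] = members.mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Full-search encoding: index of the nearest codevector per input."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)
```

Each iteration can only lower (or keep) the average distortion, which is why the loop is the standard starting point for codebook design.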
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in constructing the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
Face recognition algorithm using extended vector quantization histogram features.
Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu
2018-01-01
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
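The VQ-histogram feature the abstract starts from is simple to state: quantize each image block against a codebook and use the normalized histogram of codevector indices as the descriptor. A sketch under that reading (the MSF extension, which adds the spatial co-occurrence statistics, is not shown):

```python
import numpy as np

def vq_histogram(blocks, codebook):
    """VQ histogram feature: nearest-codevector index per block, then the
    normalized index histogram as the (order-free) facial feature vector."""
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = dists.argmin(axis=1)
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Because only index counts are kept, two images with the same blocks in different positions yield identical features, which is exactly the loss of spatial structure the paper's MSF extension is meant to repair.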
Quantization of Electromagnetic Fields in Cavities
NASA Technical Reports Server (NTRS)
Kakazu, Kiyotaka; Oshiro, Kazunori
1996-01-01
A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, as well as lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
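The core QKLMS idea described above admits a compact sketch: instead of adding every input as a new kernel center (as plain KLMS does), an input falling within a quantization radius of an existing center merely updates that center's coefficient. Parameter names and defaults below are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, centers, sigma=1.0):
    return np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2 * sigma ** 2))

class QKLMS:
    """Sketch of quantized kernel LMS: online vector quantization of the
    input space curbs the growth of the radial basis function structure."""
    def __init__(self, eta=0.5, eps_q=0.3, sigma=1.0):
        self.eta, self.eps_q, self.sigma = eta, eps_q, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        C = np.asarray(self.centers)
        return float(np.dot(self.coeffs, gaussian_kernel(x, C, self.sigma)))

    def update(self, x, y):
        e = y - self.predict(x)               # a-priori error
        if self.centers:
            C = np.asarray(self.centers)
            d = np.sqrt(((C - x) ** 2).sum(axis=1))
            j = int(d.argmin())
            if d[j] <= self.eps_q:            # "redundant" input: reuse the
                self.coeffs[j] += self.eta * e  # closest center's coefficient
                return e
        self.centers.append(np.atleast_1d(np.asarray(x, dtype=float)))
        self.coeffs.append(self.eta * e)      # otherwise grow the network
        return e
```

With a quantization radius eps_q, the number of centers is bounded by a covering of the input region rather than by the number of samples.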
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
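The original Euclidean LVQ1 rule that this letter generalizes (before relevance or matrix learning) fits in a few lines. A minimal sketch with illustrative names:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, eta=0.05, epochs=30):
    """LVQ1 sketch: the winning (nearest, Euclidean) prototype is attracted
    by a same-class sample and repelled by a different-class sample. The
    plain Euclidean distance embodies the isotropic-cluster assumption."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = ((W - x) ** 2).sum(axis=1).argmin()   # winner prototype
            if proto_labels[j] == label:
                W[j] += eta * (x - W[j])              # attract
            else:
                W[j] -= eta * (x - W[j])              # repel
    return W

def lvq1_predict(X, W, proto_labels):
    nearest = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    return np.asarray(proto_labels)[nearest]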
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases and is complementary to other methods based on direct observation of the vocal folds by laryngoscopy. This paper studies two neural-network-based classification approaches applied to the automatic detection of voice disorders. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors computed according to the well-known Mel frequency cepstral coefficient parameterization. The paper shows that these architectures allow the detection of voice disorders, including glottic cancer, under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board vector PCM implementation operating at 7-18 kbits/sec. A real-time adaptive vector predictive coder system using the CPS has also been implemented.
Resolution enhancement of low-quality videos using a high-resolution frame
NASA Astrophysics Data System (ADS)
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based super-resolution (SR) algorithm for compressed videos in the discrete cosine transform (DCT) domain. Input to the system is a low-resolution (LR) compressed video together with a high-resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization and coherence search are also key to the improved efficiency. Preliminary results on an MJPEG sequence show the promise of the DCT-domain SR synthesis approach.
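Tree-structured vector quantization, credited above with part of the speed-up, replaces the full codebook search by a descent through a binary tree of test codevectors, costing O(log N) instead of O(N) comparisons. A crude sketch built by recursive two-means splitting (illustrative only; real TSVQ designs grow and prune the tree more carefully):

```python
import numpy as np

def build_tsvq(vectors, depth):
    """Grow a binary TSVQ tree by recursive two-means splitting.
    Leaves store centroids; internal nodes store two children."""
    centroid = vectors.mean(axis=0)
    if depth == 0 or len(vectors) < 2:
        return {"centroid": centroid}
    c0, c1 = vectors[0].astype(float), vectors[-1].astype(float)
    for _ in range(10):                      # crude 2-means split
        left = ((vectors - c0) ** 2).sum(axis=1) <= \
               ((vectors - c1) ** 2).sum(axis=1)
        if left.all() or (~left).all():
            break
        c0, c1 = vectors[left].mean(axis=0), vectors[~left].mean(axis=0)
    if left.all() or (~left).all():          # degenerate split: make a leaf
        return {"centroid": centroid}
    return {"centroid": centroid,
            "children": (build_tsvq(vectors[left], depth - 1),
                         build_tsvq(vectors[~left], depth - 1))}

def tsvq_encode(x, node):
    """Descend the tree, one binary comparison per level: O(depth) work."""
    path = []
    while "children" in node:
        c0 = node["children"][0]["centroid"]
        c1 = node["children"][1]["centroid"]
        bit = int(((x - c1) ** 2).sum() < ((x - c0) ** 2).sum())
        path.append(bit)
        node = node["children"][bit]
    return path, node["centroid"]
```

The emitted path doubles as the variable-length channel codeword, which is what makes tree-structured codebooks attractive for coding as well as for search.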
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
NASA Astrophysics Data System (ADS)
Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo
2010-01-01
With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reductions in data storage and computational effort. One of the most recent VQ techniques for handling the poor estimation of vector centroids caused by biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we are therefore motivated to propose an FDVQ-based hidden Markov model (HMM) and to investigate its effectiveness and efficiency in the classification of genotype-image phenotypes. The recognition accuracy of the proposed FDVQ-based HMM (FDVQ-HMM) is evaluated and compared with that of a well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM). The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we have justified the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using hypothesis t-tests. As a result, we have validated that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
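The Lagrangian formulation behind entropy-constrained design replaces the pure nearest-neighbor rule by a cost of the form distortion + lambda * rate. A sketch of that selection rule, assuming the codeword lengths (negative log probabilities) are given as inputs:

```python
import numpy as np

def ec_encode(x, codebook, code_lengths, lam):
    """Entropy-constrained selection: pick the codevector j minimizing
    d(x, c_j) + lam * len_j, where len_j is the codeword length in bits.
    lam trades bit rate against distortion; lam = 0 recovers plain VQ."""
    dists = ((codebook - x) ** 2).sum(axis=1)
    cost = dists + lam * np.asarray(code_lengths, dtype=float)
    return int(cost.argmin())
```

Sweeping lam traces out the operational rate-distortion curve; the paper's iterative descent alternates this encoding rule with codebook and entropy-code updates.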
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By combining a novel matrix full-rank factorization technique with sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition yields stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a model of a rocket fairing structural-acoustic system. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
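The abstract does not specify which "efficient nearest neighbor search techniques" are used; one classic acceleration of the VQ codebook search is the partial distance search, sketched below as a generic illustration rather than as the authors' method:

```python
import numpy as np

def pds_nearest(x, codebook):
    """Partial distance search: accumulate the squared distance coordinate
    by coordinate and abandon a codevector as soon as the partial sum
    already exceeds the best full distance found so far."""
    best_j, best_d = 0, float("inf")
    for j, c in enumerate(codebook):
        d = 0.0
        for xi, ci in zip(x, c):
            d += (xi - ci) ** 2
            if d >= best_d:       # early abandon: cannot beat current best
                break
        else:                     # loop completed: full distance is a new best
            best_j, best_d = j, d
    return best_j, best_d
```

The result is exact (identical to the full search); only the number of multiply-adds is reduced, which is why such techniques can speed up codebook design without degrading reconstructed image quality.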
NASA Astrophysics Data System (ADS)
Sadrzadeh, Mehrnoosh
2017-07-01
Compact closed categories and Frobenius and bialgebras have been applied to model and reason about quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name "categorical distributional compositional" semantics, or, in short, the "DisCoCat" model. This model combines the statistical vector models of word meaning with the compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing and entailment of phrases and sentences. The passage from the grammatical structure to vectors is provided by a functor, similar to the quantization functor of quantum field theory. The original DisCoCat model used only compact closed categories. Later, Frobenius algebras were added to it to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to the pack to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.
Quantized Vector Potential and the Photon Wave-function
NASA Astrophysics Data System (ADS)
Meis, C.; Dahoo, P. R.
2017-12-01
The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a $k$-mode and $\lambda$-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian $\tilde{H}$.
Comparing the performance of two CBIRS indexing schemes
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Robbert, Guenter; Henrich, Andreas
2003-01-01
Content based image retrieval (CBIR) as it is known today has to deal with a number of challenges. Quickly summarized, the main challenges are firstly, to bridge the semantic gap between high-level concepts and low-level features using feedback, secondly to provide performance under adverse conditions. High-dimensional spaces, as well as a demanding machine learning task make the right way of indexing an important issue. When indexing multimedia data, most groups opt for extraction of high-dimensional feature vectors from the data, followed by dimensionality reduction like PCA (Principal Components Analysis) or LSI (Latent Semantic Indexing). The resulting vectors are indexed using spatial indexing structures such as kd-trees or R-trees, for example. Other projects, such as MARS and Viper propose the adaptation of text indexing techniques, notably the inverted file. Here, the Viper system is the most direct adaptation of text retrieval techniques to quantized vectors. However, while the Viper query engine provides decent performance together with impressive user-feedback behavior, as well as the possibility for easy integration of long-term learning algorithms, and support for potentially infinite feature vectors, there has been no comparison of vector-based methods and inverted-file-based methods under similar conditions. In this publication, we compare a CBIR query engine that uses inverted files (Bothrops, a rewrite of the Viper query engine based on a relational database), and a CBIR query engine based on LSD (Local Split Decision) trees for spatial indexing using the same feature sets. The Benchathlon initiative works on providing a set of images and ground truth for simulating image queries by example and corresponding user feedback. 
When performing the Benchathlon benchmark on a CBIR system (the System Under Test, SUT), a benchmarking harness connects over the internet to the SUT and performs a number of queries using an agreed-upon protocol, the Multimedia Retrieval Markup Language (MRML). Using this benchmark one can measure the quality of retrieval, as well as the overall (speed) performance of the benchmarked system. Our benchmarks will draw on the Benchathlon's work for documenting the retrieval performance of both inverted-file-based and LSD-tree-based techniques. In addition to these results, we will present statistics that can be obtained only inside the system under test, including the number of complex mathematical operations and the amount of data that has to be read from disk during the processing of a query.
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
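The forward gain-adaptation idea described above can be sketched in a few lines. The gain estimator here is a plain RMS estimate; the paper compares several (generally more elaborate, optimized) estimators, and also considers backward adaptation, where the gain is estimated from previously decoded vectors instead of being transmitted:

```python
import numpy as np

def gain_adaptive_encode(x, shape_codebook, eps=1e-8):
    """Estimate the gain (here: the RMS of the input vector), normalize,
    and code the gain-normalized 'shape' against a unit-RMS codebook."""
    gain = np.sqrt(np.mean(x ** 2)) + eps
    shape = x / gain
    idx = int(((shape_codebook - shape) ** 2).sum(axis=1).argmin())
    return gain, idx

def gain_adaptive_decode(gain, idx, shape_codebook):
    """Receiver side: multiply the decoded shape by the estimated gain."""
    return gain * shape_codebook[idx]
```

Normalization shrinks the dynamic range the codebook must cover, so a codebook of fixed size spends its codevectors on shape rather than on level.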
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided in two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage; the EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
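The residual (multistage) quantizer structure used here, and in the multistage watermarking and variable-rate RVQ abstracts above, can be sketched generically (a two-stage toy without the entropy constraint or joint optimization the paper describes):

```python
import numpy as np

def rvq_encode(x, stage_codebooks):
    """Residual VQ: each stage quantizes the residual left by the previous
    stages. The reconstruction is the direct sum of the chosen codevectors,
    so the effective codebook size is the product of the stage sizes at the
    search cost of only the sum of the stage sizes."""
    residual = x.astype(float)
    indices = []
    for cb in stage_codebooks:
        j = int(((cb - residual) ** 2).sum(axis=1).argmin())
        indices.append(j)
        residual = residual - cb[j]
    return indices

def rvq_decode(indices, stage_codebooks):
    """Direct-sum reconstruction from the per-stage indices."""
    return sum(cb[j] for j, cb in zip(indices, stage_codebooks))
```

For lossless operation as in the paper, the final residual image (what the direct sum fails to capture) is itself entropy coded, so the quantizer only has to make that residual cheap to code.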
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization; the initial populations are created by selecting random code vectors from the training set, and IP-HMM performs the recognition. The novelty here lies mainly in one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
Proofs for the Wave Theory of Plants
NASA Astrophysics Data System (ADS)
Wagner, Orvin E.
1997-03-01
Proofs offered include: (1) Oscillatory behavior in plants. (2) Standing waves observed coming from probes equally spaced up tree trunks and freshly cut live wood samples. (3) Beat frequencies observed while applying AC voltages to plants. (4) Plant length quantization. (5) Plant growth angle and voltage quantization with respect to the gravitational field. (6) The measurement of plant frequencies with a low-frequency spectrum analyzer, which correlate with the frequencies observed by other means, such as by measuring plant lengths (considered as half wavelengths) and beat frequencies. (7) Voltages obtained from insulated diode dies, isolated from light, placed in slits in tree trunks. Diodes become relatively low-impedance sources for voltages as high as eight volts and indicate charge-separating longitudinal standing waves sweeping up and down a tree trunk. Longitudinal waves are also indicated by plant structure. (8) The measured discrete wave velocities appear to depend on their direction of travel with respect to the gravitational field. These waves provide growth references for the plant and a waveguide effect. For references see the Wagner Research Laboratory Web Page.
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory in which vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure analogous to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results agree with those found in the literature using the Dirac method.
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed, one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed: nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster, so it has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
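The one-pass adaptive behavior described above can be sketched as follows. The threshold rule and parameter values are illustrative assumptions, not the published LAVQ algorithm: each input vector is coded by its nearest codeword, and a vector too far from every codeword becomes a new codeword on the fly.

```python
import numpy as np

def lavq_encode(vectors, threshold=1.0):
    """One-pass locally adaptive VQ (simplified sketch): code each vector
    by the index of its nearest codeword; if no codeword is close enough,
    add the vector itself to the codebook."""
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            dists = [np.sum((v - c) ** 2) for c in codebook]
            k = int(np.argmin(dists))
            if dists[k] <= threshold:
                indices.append(k)
                continue
        codebook.append(v.copy())          # new codeword learned on the fly
        indices.append(len(codebook) - 1)  # index of the freshly added entry
    return np.array(codebook), indices

rng = np.random.default_rng(1)
data = np.repeat(rng.normal(size=(4, 3)), 10, axis=0)  # clustered toy source
cb, idx = lavq_encode(data, threshold=0.5)
```

Because exact repeats always reuse an existing codeword, the learned codebook stays far smaller than the input for clustered sources.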
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
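As a hedged sketch of the recursively indexed scalar quantization (RISQ) idea mentioned above: a quantizer with a bounded index range emits its saturating index repeatedly until the residual falls inside the range. This simplified version handles only non-negative inputs; the step size and index limit below are arbitrary.

```python
def risq_encode(x, step=1.0, max_index=3):
    """RISQ sketch for a non-negative sample: emit the saturating index
    repeatedly until the residual is representable, then emit the rest."""
    indices = []
    q = round(x / step)
    while q >= max_index:
        indices.append(max_index)   # saturated: emit top index and recurse
        q -= max_index
    indices.append(q)
    return indices

def risq_decode(indices, step=1.0):
    # Reconstruction is simply the sum of the emitted levels.
    return sum(indices) * step

codes = risq_encode(7.2, step=1.0, max_index=3)   # -> [3, 3, 1]
value = risq_decode(codes)                        # -> 7.0
```

The small index alphabet is what keeps the codebook compact while still covering a wide dynamic range.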
Radial quantization of the 3d CFT and the higher spin/vector model duality
NASA Astrophysics Data System (ADS)
Hu, Shan; Li, Tianjun
2014-10-01
We study the radial quantization of the 3d O(N) vector model. We calculate the higher spin charges, whose commutation relations give the higher spin algebra. The Fock states of higher spin gravity in AdS4 are realized as states in the 3d CFT. The dynamical information is encoded in their inner products. This serves as the simplest explicit demonstration of the CFT definition of quantum gravity.
Wavelet subband coding of computer simulation output using the A++ array class library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times are better than those of the fastest sequential algorithm, and speed-up continues to grow on up to 64 threads.
Three learning phases for radial-basis-function networks.
Schwenker, F; Kestler, H A; Palm, G
2001-05-01
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLPs) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme: the two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization, and classification tree algorithms, and the output layer by supervised learning (through gradient descent or a pseudo-inverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of handwritten digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach.
SV learning can be considered, in this context, as a special type of one-phase learning, where only the output-layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization, and RBF classifiers trained through two-phase, three-phase, and support vector learning, are presented. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
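The two-phase scheme described above can be sketched as follows: an unsupervised phase places the RBF centers, then a supervised phase solves the output-layer weights. k-means for phase 1 and a pseudo-inverse solution for phase 2 are two of the options the paper names; the toy data, width, and center count are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 2D.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def kmeans(X, k, iters=20):
    """Phase 1: place RBF centers by simple k-means clustering."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

centers = kmeans(X, k=4)
width = 1.0  # scaling parameter, fixed here; phase 1 could adapt it too

def design(X):
    # Gaussian RBF activations of every sample at every center.
    d = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-d / (2 * width ** 2))

# Phase 2: solve the output-layer weights by pseudo-inverse.
W = np.linalg.pinv(design(X)) @ y
pred = (design(X) @ W > 0.5).astype(int)
accuracy = (pred == y).mean()
```

A third, backpropagation-like phase would then fine-tune centers, widths, and weights jointly.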
Classification of postural profiles among mouth-breathing children by learning vector quantization.
Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T
2011-01-01
Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ was compared with that of other classification models: self-organizing maps, back-propagation applied to multilayer perceptrons, Bayesian networks, naive Bayes, J48 decision trees, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under the ROC curve (AUC), and inter-rater agreement (Kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, when using the training dataset, and 0.95 and 0.90, respectively, when using the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.
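A minimal sketch of the LVQ1 update rule on toy data (the study's actual dataset, prototype counts, and training schedule are not reproduced here): the winning prototype moves toward a sample of its own class and away from a sample of another class.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1: attract the winning prototype to same-class samples,
    repel it from different-class samples."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            w = ((P - x) ** 2).sum(1).argmin()           # winning prototype
            sign = 1.0 if proto_labels[w] == label else -1.0
            P[w] += sign * lr * (x - P[w])
    return P

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (40, 2)), rng.normal(1, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
P = lvq1_train(X, y,
               prototypes=np.array([[-0.5, 0.0], [0.5, 0.0]]),
               proto_labels=np.array([0, 1]))
# Classify by nearest prototype; prototype index equals its class label here.
pred = np.array([((P - x) ** 2).sum(1).argmin() for x in X])
accuracy = (pred == y).mean()
```

After training, the prototypes themselves are readable summaries of each class, which is what makes LVQ attractive for characterizing postural profiles.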
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: discarding them through feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
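A minimal sketch of the unsupervised pipeline: score dimensions, keep the top ones, then binarize. The variance-based score below is a stand-in assumption; the paper's actual importance criterion differs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, keep = 500, 64, 16

# Synthetic "FV-like" vectors: a few informative dimensions plus noise.
signal = rng.normal(size=(n, keep)) * 3.0
noise = rng.normal(size=(n, d - keep)) * 0.1
X = np.hstack([signal, noise])

# Importance sorting (unsupervised case): rank dimensions by a score and
# keep the top `keep` of them.
importance = X.var(axis=0)
selected = np.argsort(importance)[::-1][:keep]

# 1-bit quantization of the selected dimensions: keep only the sign,
# shrinking each vector to `keep` bits.
X_compact = (X[:, selected] > 0).astype(np.uint8)
```

In this toy setup the noisy dimensions are discarded outright rather than being mixed into compressed codes, which is the paper's core argument.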
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system by grouping highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Moreover, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code-modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Using trees to compute approximate solutions to ordinary differential equations exactly
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
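The multi-level decomposition described above can be sketched as residual quantization, with a small k-means codebook standing in for the paper's full-search VQ. Level count, codebook size, and data are arbitrary choices here; in the paper the final-level residual is kept losslessly so exact reconstruction remains possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_codebook(vectors, k, iters=15):
    """Train a small VQ codebook by plain k-means (stand-in for full-search VQ)."""
    cb = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        labels = ((vectors[:, None] - cb[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                cb[j] = vectors[labels == j].mean(0)
    return cb

def progressive_vq(vectors, levels=3, k=8):
    """Each level quantizes the residual left by the previous level;
    storing the last residual losslessly would make the scheme exact."""
    residual = vectors.astype(float)
    recon = np.zeros_like(residual)
    errors = []
    for _ in range(levels):
        cb = kmeans_codebook(residual, k)
        labels = ((residual[:, None] - cb[None]) ** 2).sum(-1).argmin(1)
        recon += cb[labels]                 # refine the running reconstruction
        residual = vectors - recon
        errors.append((residual ** 2).mean())
    return recon, errors

data = rng.normal(size=(200, 4))
recon, errors = progressive_vq(data)
```

Decoding the first level alone already gives a coarse image; each further level is a progressively finer refinement.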
Visual data mining for quantized spatial data
NASA Technical Reports Server (NTRS)
Braverman, Amy; Kahn, Brian
2004-01-01
In previous papers we have shown how a well-known data compression algorithm called entropy-constrained vector quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
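A minimal sketch of the entropy-constrained assignment at the heart of ECVQ: the encoder minimizes distortion plus a rate penalty rather than distortion alone. The codebook, code lengths, and lambda below are made-up toy values.

```python
import numpy as np

def ecvq_assign(x, codebook, lengths, lam):
    """Pick the codeword minimizing distortion + lambda * code length,
    trading a little distortion for a cheaper (shorter) code."""
    costs = ((codebook - x) ** 2).sum(1) + lam * lengths
    return int(costs.argmin())

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
lengths = np.array([1.0, 4.0])   # short code for the common cell
x = np.array([0.6, 0.6])

nearest = int(((codebook - x) ** 2).sum(1).argmin())   # plain VQ choice
biased = ecvq_assign(x, codebook, lengths, lam=0.5)    # rate-aware choice
```

Here plain VQ picks codeword 1 (geometrically closer), while the entropy-constrained cost steers the same input to the cheaper codeword 0; sweeping lambda traces out the rate-distortion trade-off.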
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inomata, A.; Junker, G.; Wilson, R.
1993-08-01
The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.
Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge
NASA Astrophysics Data System (ADS)
Lee, Taejin
2017-12-01
We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian Yang-Mills gauge theory in the zero-slope limit. Making use of the first-quantized open bosonic string in the proper-time gauge, we convert the string amplitudes given by the Polyakov path integrals on string world sheets into those of the second-quantized theory. The world-sheet diagrams generated by the constructed open string field theory are planar, in contrast to those of Witten's cubic string field theory; the constructed theory is nevertheless equivalent to Witten's cubic string field theory. Having obtained planar diagrams, we may adopt the light-cone string field theory technique to calculate multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely as those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data in a way that preserves information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
NASA Astrophysics Data System (ADS)
Xiao, Guoqiang; Jiang, Yang; Song, Gang; Jiang, Jianmin
2010-12-01
We propose a support-vector-machine (SVM) tree to hierarchically learn from domain knowledge represented by low-level features toward automatic classification of sports videos. The proposed SVM tree adopts a binary tree structure to exploit the nature of the SVM's binary classification, where each internal node is a single SVM learning unit and each external node represents the classified output type. Such an SVM tree presents a number of advantages, including: 1. low computing cost; 2. integrated learning and classification while preserving each individual SVM's learning strength; and 3. flexibility in both structure and learning modules, where different numbers of nodes and features can be added to address specific learning requirements, and various learning models, such as neural networks, AdaBoost, hidden Markov models, and dynamic Bayesian networks, can be added as individual nodes. Experiments confirm that the proposed SVM tree achieves good performance in sports video classification.
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further quantized using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm was further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis. Supporting software to perform image processing for visual display and interpretation of the compressed/classified images was also developed.
NASA Astrophysics Data System (ADS)
Jurčo, B.; Schlieker, M.
1995-07-01
In this paper, Fock-space representations (contragredient Verma modules) of the quantized enveloping algebras that are natural from the geometrical point of view are constructed explicitly. To do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances
Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...
2016-10-14
Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
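The binning idea can be sketched with the Debye formula for a single atom type with unit scattering factors; the toy structure, bin count, and q grid below are arbitrary assumptions. Summing sinc terms over bin centers weighted by bin populations replaces the much larger sum over all individual pair distances.

```python
import numpy as np

rng = np.random.default_rng(0)
atoms = rng.uniform(0, 10, size=(50, 3))   # toy single-species structure

# All interatomic pair distances (i < j).
diff = atoms[:, None, :] - atoms[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
pairs = dist[np.triu_indices(len(atoms), k=1)]

q = np.linspace(0.1, 2.0, 20)              # scattering-vector magnitudes

def sinc(x):
    return np.where(x == 0, 1.0, np.sin(x) / np.maximum(x, 1e-12))

# Direct Debye sum: I(q) = N + 2 * sum_pairs sin(q*r)/(q*r).
I_direct = len(atoms) + 2 * sinc(np.outer(q, pairs)).sum(axis=1)

# Quantized version: bin the pair distances, then sum over bin centers
# weighted by bin populations -- far fewer terms per q value.
counts, edges = np.histogram(pairs, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
I_binned = len(atoms) + 2 * (counts * sinc(np.outer(q, centers))).sum(axis=1)

rel_err = np.abs(I_binned - I_direct).max() / np.abs(I_direct).max()
```

With atom-type-specific arrays, the per-type scattering factors multiply whole histograms outside the inner loop, which is where the reported speed-up comes from.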
Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yan; Inouye, Hideyo; Crowley, Michael
Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debyemore » formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. This algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.« less
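The atom-type-specific quantization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and data layout are assumptions, and the atomic scattering factors are held constant per atom type (in practice they depend on q).

```python
import numpy as np

def debye_quantized(q, hist_by_pair, f):
    """Debye sum over binned pair distances, grouped by atom-type pair.

    q            -- scattering-vector magnitudes, shape (M,)
    hist_by_pair -- {(a, b): (bin_centers, counts)} per atom-type pair
    f            -- {atom_type: scattering factor}, held constant here
    """
    I = np.zeros_like(q, dtype=float)
    for (a, b), (r, n) in hist_by_pair.items():
        qr = np.outer(q, r)                      # shape (M, bins)
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(qr/pi) = sin(qr)/qr
        I += f[a] * f[b] * (n * np.sinc(qr / np.pi)).sum(axis=1)
    return I
```

Because the inner sum runs over histogram bins rather than all atom pairs, and the factors f[a]·f[b] sit outside it, the per-iteration cost drops as the abstract describes.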
Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.
Demartines, P; Herault, J
1997-01-01
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
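A minimal sketch of the CCA output-space update, assuming a step-function neighborhood weight F and a fixed pivot unit for determinism; the function name, defaults, and weighting choice are illustrative, not the authors' exact formulation:

```python
import numpy as np

def cca_step(X, Y, lam=1.0, lr=0.1, i=0):
    # Pull each output point y_j so that its distance to y_i matches the
    # input-space distance, weighted by a step neighborhood F(d_y) = [d_y < lam]
    dx = np.linalg.norm(X - X[i], axis=1)   # input-space distances to pivot
    dy = np.linalg.norm(Y - Y[i], axis=1)   # output-space distances to pivot
    w = (dy < lam).astype(float)
    mask = dy > 0                           # skip the pivot itself
    coef = np.zeros_like(dy)
    coef[mask] = lr * w[mask] * (dx[mask] - dy[mask]) / dy[mask]
    return Y + coef[:, None] * (Y - Y[i])
```

When output distances already equal input distances, the step leaves Y unchanged, which is the fixed point the projection seeks.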
Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins
Helweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.
2006-01-01
The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.
Efficient enumeration of monocyclic chemical graphs with given path frequencies
2014-01-01
Background The enumeration of chemical graphs (molecular graphs) satisfying given constraints is one of the fundamental problems in chemoinformatics and bioinformatics because it leads to a variety of useful applications including structure determination and development of novel chemical compounds. Results We consider the problem of enumerating chemical graphs with monocyclic structure (a graph structure that contains exactly one cycle) from a given set of feature vectors, where a feature vector represents the frequency of the prescribed paths in a chemical compound to be constructed and the set is specified by a pair of upper and lower feature vectors. To enumerate all tree-like (acyclic) chemical graphs from a given set of feature vectors, Shimizu et al. and Suzuki et al. proposed efficient branch-and-bound algorithms based on a fast tree enumeration algorithm. In this study, we devise a novel method for extending these algorithms to enumeration of chemical graphs with monocyclic structure by designing a fast algorithm for testing uniqueness. The results of computational experiments reveal that the computational efficiency of the new algorithm is as good as those for enumeration of tree-like chemical compounds. Conclusions We succeed in expanding the class of chemical graphs that are able to be enumerated efficiently. PMID:24955135
Metrics for comparing neuronal tree shapes based on persistent homology.
Li, Yanjie; Wang, Dingkang; Ascoli, Giorgio A; Mitra, Partha; Wang, Yusu
2017-01-01
As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically, we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence-signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function will give a persistence-based signature containing strictly more information than the classical Sholl analysis.
At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which would be important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework.
Analysis of the Westland Data Set
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2001-01-01
The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, common to many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
Wigner functions on non-standard symplectic vector spaces
NASA Astrophysics Data System (ADS)
Dias, Nuno Costa; Prata, João Nuno
2018-01-01
We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce and prove several concepts and results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely, the symplectic spectrum, Williamson's theorem, and Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample resolution single tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
A fingerprint key binding algorithm based on vector quantization and error correction
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose a binding algorithm of fingerprint template and cryptographic key to protect and access the key by fingerprint verification. In order to avoid the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template and then bind it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
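The release-by-hash step can be illustrated with a toy sketch. This simplifies away the vector-quantization and error-correction stages entirely; `bind`, `release`, and the pad construction are illustrative, not the authors' scheme:

```python
import hashlib

def bind(key: bytes, template_q: bytes):
    # Lock the key with a pad derived from the quantized template;
    # store only the hash of the key for later verification.
    pad = hashlib.sha256(template_q).digest()
    locked = bytes(a ^ b for a, b in zip(key, pad))
    return hashlib.sha256(key).hexdigest(), locked

def release(stored_hash: str, locked: bytes, template_q: bytes):
    # Unlock with the presented template; release the key only if its
    # hash matches the stored value (the key itself is never stored).
    pad = hashlib.sha256(template_q).digest()
    key = bytes(a ^ b for a, b in zip(locked, pad))
    return key if hashlib.sha256(key).hexdigest() == stored_hash else None
```

In the real algorithm the presented template would first pass through VQ and error correction so that two noisy readings of the same finger yield the same `template_q`.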
Application of Classification Models to Pharyngeal High-Resolution Manometry
ERIC Educational Resources Information Center
Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.
2012-01-01
Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
Wavelet transforms, as a relatively new tool in signal processing, have gained broad recognition. Using the wavelet transform, we obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
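For context, one level of the octave-band split that such coders operate on can be sketched with a Haar transform; this is an illustrative stand-in for whatever wavelet filters the paper actually uses, and assumes even image dimensions:

```python
import numpy as np

def haar_subbands(img):
    # One octave split into LL/LH/HL/HH using the (unnormalized) Haar
    # average/difference pair, applied along rows then columns
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH
```

The LH/HL/HH bands carry the oriented detail that a classified VQ would assign to edge-specific codebooks.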
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Order statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
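The PM encoding step can be sketched as follows: the codebook is the set of permutations of a fixed seed vector, and a vector is quantized by permuting the seed to match the rank order of its samples (function and variable names are illustrative):

```python
import numpy as np

def pm_quantize(x, seed):
    # seed: representative values sorted from largest to smallest,
    # len(seed) == len(x); the codebook is all permutations of seed
    order = np.argsort(-x)            # positions of x, largest first
    xq = np.empty_like(x, dtype=float)
    xq[order] = seed                  # place seed values in rank order
    return xq
```

The label actually communicated is the permutation itself (here `order`), which is how PM achieves fractional bit rates per sample: the rate is log2 of the number of distinct permutations divided by d.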
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
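For reference, the crisp LVQ1 update that these soft, ordered-weighted variants generalize can be sketched as follows (names and learning rate are illustrative):

```python
import numpy as np

def lvq1_step(protos, labels, x, y, lr=0.05):
    # Find the nearest prototype; pull it toward x if the class labels
    # match, push it away otherwise (classical crisp LVQ1 rule).
    d = np.linalg.norm(protos - x, axis=1)
    k = int(np.argmin(d))
    sign = 1.0 if labels[k] == y else -1.0
    protos[k] += sign * lr * (x - protos[k])
    return k
```

Soft and fuzzy variants replace the winner-take-all choice of `k` with graded updates to several prototypes, which is where the aggregation operators in the abstract enter.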
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
2011-04-15
A 1-bit quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization.
Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun
2015-10-01
This paper studies the global synchronization of a complex dynamical network (CDN) under digital communication with limited bandwidth. To realize the digital communication, the so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the requirement of the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of such lower bound for each case, respectively. Simulation examples are also presented to illustrate the established results.
An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming
2016-01-01
We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.
NASA Astrophysics Data System (ADS)
An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans
2015-04-01
This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time consuming nearest Euclidean distance search in the LVQ algorithm’s competition layer is efficiently implemented as a pipeline with parallel p-word input. Since neuron number in the competition layer, weight values, input and output number are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth, and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
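The primitive common to all three schemes, full-search nearest-codevector encoding, can be sketched as follows (illustrative; the paper's VRVQ adds quadtree segmentation and variable-rate bit allocation on top of this):

```python
import numpy as np

def vq_encode(blocks, codebook):
    # Full-search VQ: index of the nearest codevector for each block,
    # under squared Euclidean distance. blocks: (N, K), codebook: (C, K).
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

Each K-dimensional subband block is then represented by log2(C) bits, the index of its chosen codevector.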
NASA Astrophysics Data System (ADS)
Vaughan, Jennifer
2015-03-01
In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown in many cases tremendous reduction by as high as 90% in both storage space and inter-frame delay.
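The temporal difference-encoding idea can be sketched at the voxel level; this is a simplification of the paper's octree-based merging, and the names and flat-array layout are assumptions:

```python
import numpy as np

def diff_encode(frames, tol=0.0):
    # Store the first frame fully; for each later frame keep only the
    # (index, value) pairs of voxels that changed beyond `tol`.
    base = frames[0]
    deltas = []
    for f in frames[1:]:
        idx = np.flatnonzero(np.abs(f - base) > tol)
        deltas.append((idx, f.flat[idx]))
        base = f
    return frames[0], deltas
```

When consecutive time steps are similar, the delta lists are short, which is the source of both the storage savings and the reduced inter-frame delay reported above.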
Clustering Tree-structured Data on Manifold
Lu, Na; Miao, Hongyu
2016-01-01
Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on manifold instead of Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on matrix manifold. We incorporate the structure constraints embedded in data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on the five different corpora compared to other state-of-the-art systems. PMID:29099838
Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!
NASA Astrophysics Data System (ADS)
Nutku, Yavuz
2003-07-01
Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.
Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms
NASA Astrophysics Data System (ADS)
Fu, Haoyu; Chi, Yuejie
2018-06-01
Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g. only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
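The "quadrant information" mentioned above amounts to one-bit quantization of the real and imaginary parts of each complex measurement. A minimal sketch of that quantizer (an illustrative reading of the measurement model, not code from the paper):

```python
def quadrant_quantize(measurements):
    """Keep only the quadrant of each complex measurement: the signs of
    the real and imaginary parts (one bit each)."""
    return [complex(1.0 if y.real >= 0 else -1.0,
                    1.0 if y.imag >= 0 else -1.0)
            for y in measurements]

# e.g. a measurement in the second quadrant maps to -1+1j
quantized = quadrant_quantize([3.2 + 0.4j, -0.1 + 2.0j, -1.5 - 0.7j])
```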
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises whether the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limit its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. Comparisons to Support Vector Machines are also considered.
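An adaptive labeled quantization architecture of the kind described can be sketched with an LVQ1-style stochastic update; the update rule, learning rate, and toy data below are illustrative assumptions, not the paper's exact error-probability gradient:

```python
import random

def lvq1_step(protos, labels, x, y, lr):
    """Move the nearest labeled prototype toward x if its label matches y,
    away from x otherwise (LVQ1-style stochastic update)."""
    i = min(range(len(protos)),
            key=lambda k: sum((p - v) ** 2 for p, v in zip(protos[k], x)))
    sign = 1.0 if labels[i] == y else -1.0
    protos[i] = [p + sign * lr * (v - p) for p, v in zip(protos[i], x)]

random.seed(0)
protos = [[-2.0], [2.0]]   # one labeled prototype per class, badly placed
labels = [0, 1]
for _ in range(2000):
    y = random.randint(0, 1)
    x = [random.gauss(-1.0 if y == 0 else 1.0, 0.5)]  # classes meet near 0
    lvq1_step(protos, labels, x, y, lr=0.05)
# the prototypes drift toward the class regions on either side of the boundary
```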
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method simultaneously projects the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and with the hierarchical approach of combining a vector quantizer, such as the self-organizing feature map (SOM) or NG, with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves better mapping quality in terms of the topology preservation measure q(m).
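The NG neighborhood relationship that the embedding tries to preserve is rank-based. A minimal sketch of the underlying Neural Gas quantizer update (the learning rate and decay constant are illustrative choices):

```python
import math
import random

def ng_step(codebook, x, eps, lam):
    """Neural Gas update: every codebook vector moves toward the input x
    with a strength that decays exponentially in its distance rank."""
    order = sorted(range(len(codebook)),
                   key=lambda k: sum((c - v) ** 2
                                     for c, v in zip(codebook[k], x)))
    for rank, k in enumerate(order):
        h = math.exp(-rank / lam)
        codebook[k] = [c + eps * h * (v - c)
                       for c, v in zip(codebook[k], x)]

random.seed(1)
codebook = [[-5.0], [0.0], [5.0]]       # poorly initialized 1-D codebook
for _ in range(2000):
    x = [random.gauss(random.choice([-1.0, 1.0]), 0.1)]  # two data clusters
    ng_step(codebook, x, eps=0.05, lam=0.5)
# all codebook vectors are drawn into the data region around the clusters
```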
NASA Astrophysics Data System (ADS)
Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong
2016-05-01
The Raman spectra of tissue from 20 brain tumor patients were recorded in vitro using a confocal microlaser Raman spectroscope with 785 nm excitation. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. In this study, however, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to excavating Raman spectra information. Moreover, it is fast and convenient, does not require the spectra peak counterpart, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
NASA Astrophysics Data System (ADS)
Thibes, Ronaldo
2017-02-01
We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at the classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, Dirac brackets, and effective action.
Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J
2011-02-26
Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings.
With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.
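The rotational matching that gives ring vectors their advantage can be illustrated in a few lines: a stored ring matches any rotated copy of itself, which a raster-ordered square vector cannot do. This toy search is illustrative only, not the SIVQ implementation:

```python
def ring_match(stored_ring, candidate_ring):
    """Best squared distance between a stored ring vector and a candidate,
    minimized over all circular rotations of the stored ring."""
    n = len(stored_ring)
    best = float("inf")
    for shift in range(n):
        d = sum((stored_ring[(i + shift) % n] - candidate_ring[i]) ** 2
                for i in range(n))
        best = min(best, d)
    return best

stored = [0, 1, 2, 3, 4, 5, 6, 7]      # intensities sampled around a ring
rotated = stored[3:] + stored[:3]      # same ring seen at another rotation
reversed_ring = stored[::-1]           # a genuinely different pattern
```

A rotated copy scores a perfect match (distance 0), so a single stored ring covers every orientation of the pattern.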
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClanahan, Richard; De Leon, Phillip L.
2014-08-20
The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and, most recently, i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
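The layered-hash idea can be sketched as a two-level search: score a small set of merged (coarse) Gaussians first, then evaluate only the mixture components beneath the best coarse node. The tree, component values, and merging here are illustrative placeholders, not Runnalls' actual reduction:

```python
import math

def gauss_logpdf(x, mean, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# leaves: the full set of mixture components (mean, variance)
leaves = [(-3.0, 1.0), (-2.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
# coarse layer: each merged Gaussian hashes to the leaves beneath it
coarse = {(-2.5, 1.5): [0, 1], (2.5, 1.5): [2, 3]}

def best_leaf(x):
    """Score the coarse layer first, then only the leaves in its subtree."""
    node = max(coarse, key=lambda m: gauss_logpdf(x, m[0], m[1]))
    return max(coarse[node],
               key=lambda i: gauss_logpdf(x, leaves[i][0], leaves[i][1]))
# best_leaf evaluates 2 coarse + 2 leaf Gaussians instead of all 4 leaves
```

With a deeper tree the saving compounds per layer, which is where the quoted 15× reduction in sufficient-statistics work comes from, at the cost of occasionally descending the wrong subtree (the EER degradation).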
Hopf-algebraic structure of combinatorial objects and differential operators
NASA Technical Reports Server (NTRS)
Grossman, Robert; Larson, Richard G.
1989-01-01
A Hopf-algebraic structure on a vector space which has as basis a family of trees is described. Some applications of this structure to combinatorics and to differential operators are surveyed. Some possible future directions for this work are indicated.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is only now being addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained up to a bit rate reduction of approximately 0.48 bpp (0.16 bpp per color plane or for monochrome; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage yield extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.
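The compression ratios quoted above follow from simple bit-rate arithmetic (assuming a 24 bpp source, as in the abstract):

```python
def compression_ratio(source_bpp, coded_bpp):
    """Ratio of source to coded bits per pixel."""
    return source_bpp / coded_bpp

cr_vq = compression_ratio(24, 0.48)      # 0.16 bpp per plane -> 50:1
cr_entropy = compression_ratio(24, 0.3)  # ~0.3 bpp matches the 80:1 figure
```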
Lahr, Eleanor C.; Krokene, Paal
2013-01-01
Bark beetles and associated fungi are among the greatest natural threats to conifers worldwide. Conifers have potent defenses, but resistance to beetles and fungal pathogens may be reduced if tree stored resources are consumed by fungi rather than used for tree defense. Here, we assessed the relationship between tree stored resources and resistance to Ceratocystis polonica, a phytopathogenic fungus vectored by the spruce bark beetle Ips typographus. We measured phloem and sapwood nitrogen, non-structural carbohydrates (NSC), and lipids before and after trees were attacked by I. typographus (vectoring C. polonica) or artificially inoculated with C. polonica alone. Tree resistance was assessed by measuring phloem lesions and the proportion of necrotic phloem around the tree's circumference following attack or inoculation. While initial resource concentrations were unrelated to tree resistance to C. polonica, over time, phloem NSC and sapwood lipids declined in the trees inoculated with C. polonica. Greater resource declines correlated with less resistant trees (trees with larger lesions or more necrotic phloem), suggesting that resource depletion may be caused by fungal consumption rather than tree resistance. Ips typographus may then benefit indirectly from reduced tree defenses caused by fungal resource uptake. Our research on tree stored resources represents a novel way of understanding bark beetle-fungal-conifer interactions. PMID:23967298
Quantum theory of structured monochromatic light
NASA Astrophysics Data System (ADS)
Punnoose, Alexander; Tu, J. J.
2017-08-01
Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work form a natural basis to study structured monochromatic light at the single photon level.
Coherent states for the relativistic harmonic oscillator
NASA Technical Reports Server (NTRS)
Aldaya, Victor; Guerrero, J.
1995-01-01
Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the Relativistic Harmonic Oscillator as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the Lie product, [ẑ, ẑ†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying the Lie product [â, â†] = 1, the eigenstates of which are 'true' coherent states.
Labeled trees and the efficient computation of derivations
NASA Technical Reports Server (NTRS)
Grossman, Robert; Larson, Richard G.
1989-01-01
The effective parallel symbolic computation of operators under composition is discussed. Examples include differential operators under composition and vector fields under the Lie bracket. Data structures consisting of formal linear combinations of rooted labeled trees are discussed. A multiplication on rooted labeled trees is defined, thereby making the set of these data structures into an associative algebra. An algebra homomorphism is defined from the original algebra of operators into this algebra of trees. An algebra homomorphism from the algebra of trees into the algebra of differential operators is then described. The cancellation which occurs when noncommuting operators are expressed in terms of commuting ones occurs naturally when the operators are represented using this data structure. This leads to an algorithm which, for operators which are derivations, speeds up the computation exponentially in the degree of the operator. It is shown that the algebra of trees leads naturally to a parallel version of the algorithm.
Topological quantization in units of the fine structure constant.
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng
2010-10-15
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms
1995-07-01
...would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the... [§3.4, A General Vector Compression Algorithm Based on Frames; §3.4.1, Design Considerations] ...Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N.
Querying and Ranking XML Documents.
ERIC Educational Resources Information Center
Schlieder, Torsten; Meuss, Holger
2002-01-01
Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…
Lin, Yi; Jiang, Miao; Pellikka, Petri; Heiskanen, Janne
2018-01-01
Mensuration of tree growth habits is of considerable importance for understanding forest ecosystem processes and forest biophysical responses to climate changes. However, the complexity of tree crown morphology that is typically formed after many years of growth tends to render this a non-trivial task, even for the state-of-the-art 3D forest mapping technology, light detection and ranging (LiDAR). Fortunately, botanists have distilled the large structural diversity of tree forms into only a limited number of tree architecture models, which can provide a priori knowledge about tree structure, growth, and other attributes for different species. This study attempted to recruit Hallé architecture models (HAMs) into LiDAR mapping to investigate tree growth habits in structure. First, following the HAM-characterized tree structure organization rules, we ran the kernel procedure of tree species classification based on the LiDAR-collected point clouds using a support vector machine classifier in the leave-one-out-for-cross-validation mode. Then, the HAM corresponding to each of the classified tree species was identified based on expert knowledge, assisted by the comparison of the LiDAR-derived feature parameters. Next, the tree growth habits in structure for each of the tree species were derived from the determined HAM. In the case of four tree species growing in the boreal environment, the tests indicated that the classification accuracy reached 85.0%, and their growth habits could be derived by qualitative and quantitative means. Overall, the strategy of recruiting conventional HAMs into LiDAR mapping for investigating tree growth habits in structure was validated, thereby paving a new way for efficiently reflecting tree growth habits and projecting forest structure dynamics. PMID:29515616
An implementation of a tree code on a SIMD, parallel computer
NASA Technical Reports Server (NTRS)
Olson, Kevin M.; Dorband, John E.
1994-01-01
We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y, and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor MasPar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communication between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) computers can be used for these simulations. The cost/performance ratio for SIMD machines like the MasPar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.
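The sort-and-split construction can be sketched serially; the actual implementation uses fast parallel sorts on the MasPar, but the recursive structure is the same (particle lists are halved along x, y, z in turn):

```python
import random

def build_tree(particles, depth=0):
    """Recursively halve a sorted particle list along x, y, z in turn,
    yielding a completely balanced binary tree, one particle per leaf."""
    if len(particles) == 1:
        return particles[0]
    axis = depth % 3
    ordered = sorted(particles, key=lambda p: p[axis])
    mid = len(ordered) // 2
    return (build_tree(ordered[:mid], depth + 1),
            build_tree(ordered[mid:], depth + 1))

def count_leaves(node):
    if isinstance(node, list):           # particles are lists, nodes tuples
        return 1
    return count_leaves(node[0]) + count_leaves(node[1])

random.seed(2)
particles = [[random.random() for _ in range(3)] for _ in range(8)]
tree = build_tree(particles)             # 8 particles -> depth-3 balanced tree
```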
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram (Inventor); Chowdhury, Indranil (Inventor)
2011-01-01
An approach that efficiently solves for a desired parameter of a system or device that can include both electrically large fast multipole method (FMM) elements, and electrically small QR elements. The system or device is setup as an oct-tree structure that can include regions of both the FMM type and the QR type. An iterative solver is then used to determine a first matrix vector product for any electrically large elements, and a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large elements and the electrically small elements are combined, and a net delta for a combination of the matrix vector products is determined. The iteration continues until a net delta is obtained that is within predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter.
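The iteration described, assembling the full matrix-vector product from two partial products and repeating until the net delta is within predefined limits, can be sketched with stand-in operators; the diagonal toy operators and the simple Richardson update below are placeholders, not the actual FMM and QR electromagnetic kernels:

```python
def matvec_large(x):
    """Stand-in for the fast multipole (FMM) partial product."""
    return [2.0 * v for v in x]

def matvec_small(x):
    """Stand-in for the QR partial product."""
    return [0.5 * v for v in x]

def solve(b, tol=1e-10, omega=0.3):
    """Iterate: combine both partial matrix-vector products, measure the
    residual delta, and repeat until it is within the tolerance."""
    x = [0.0] * len(b)
    while True:
        ax = [l + s for l, s in zip(matvec_large(x), matvec_small(x))]
        r = [bi - ai for bi, ai in zip(b, ax)]
        if max(abs(ri) for ri in r) <= tol:
            return x
        x = [xi + omega * ri for xi, ri in zip(x, r)]
```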
TreeVector: scalable, interactive, phylogenetic trees for the web.
Pethica, Ralph; Barker, Gary; Kovacs, Tim; Gough, Julian
2010-01-28
Phylogenetic trees are complex data forms that need to be graphically displayed to be human-readable. Traditional techniques of plotting phylogenetic trees focus on rendering a single static image, but increases in the production of biological data and large-scale analyses demand scalable, browsable, and interactive trees. We introduce TreeVector, a Scalable Vector Graphics- and Java-based method that allows trees to be integrated and viewed seamlessly in standard web browsers with no extra software required, and can be modified and linked using standard web technologies. There are now many bioinformatics servers and databases with a range of dynamic processes and updates to cope with the increasing volume of data. TreeVector is designed as a framework to integrate with these processes and produce user-customized phylogenies automatically. We also address the strengths of phylogenetic trees as part of a linked-in browsing process rather than as an end graphic for print. TreeVector is fast and easy to use and is available to download precompiled, but is also open source. It can also be run from the web server listed below or the user's own web server. It has already been deployed on two recognized and widely used database Web sites.
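The core idea of emitting standard SVG that browsers render natively can be illustrated with a minimal generator for a flat list of leaf labels; this is a hypothetical sketch, not TreeVector's output format:

```python
def leaf_svg(leaf_names, step=30):
    """Emit a minimal SVG document: one horizontal branch line and one
    text label per leaf name."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="240" height="{step * (len(leaf_names) + 1)}">']
    for i, name in enumerate(leaf_names):
        y = step * (i + 1)
        parts.append(f'  <line x1="20" y1="{y}" x2="140" y2="{y}" '
                     'stroke="black"/>')
        parts.append(f'  <text x="146" y="{y + 4}">{name}</text>')
    parts.append('</svg>')
    return "\n".join(parts)

doc = leaf_svg(["Homo sapiens", "Mus musculus"])
```

Because the output is plain SVG markup, it can be styled, hyperlinked, and embedded with ordinary web technologies, which is the property the abstract emphasizes.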
Associative Pattern Recognition In Analog VLSI Circuits
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1995-01-01
Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips were built and tested to demonstrate the concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal-oxide-semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), the vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. The chips exploit the parallel nature of the vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, the cells classify an input pattern in a fraction of a microsecond while consuming only a few microwatts of power.
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand global methods as those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, the Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
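The LBG algorithm used to form the local cells proceeds by perturbation-splitting of the codebook followed by Lloyd refinement; a 1-D toy sketch (the split factor and data are illustrative):

```python
def lbg(data, levels, eps=0.01, iters=20):
    """Linde-Buzo-Gray: grow a codebook by perturbation-splitting, then
    refine it with Lloyd iterations (1-D toy version)."""
    codebook = [sum(data) / len(data)]           # start from the global mean
    while len(codebook) < levels:
        # split every code value into a +eps / -eps perturbed pair
        codebook = [c * (1 + s) for c in codebook for s in (eps, -eps)]
        for _ in range(iters):
            cells = [[] for _ in codebook]       # nearest-neighbor partition
            for x in data:
                i = min(range(len(codebook)),
                        key=lambda k: abs(x - codebook[k]))
                cells[i].append(x)
            codebook = [sum(cell) / len(cell) if cell else codebook[i]
                        for i, cell in enumerate(cells)]
    return sorted(codebook)

data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
centers = lbg(data, 2)                           # one center per cluster
```

In the paper's scheme each resulting cell then gets its own, much easier, Bayes-inspired local classifier.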
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper focuses on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional-bit-per-sample accuracy, which may be needed at very low SNR conditions where the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be applied by Bob to each subset of samples with similar sign error probability to aid Alice in error correction. Doing this for different subsets of samples with similar sign error probabilities leads to an Unequal Error Protection (UEP) coding paradigm.
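The rank-ordering idea behind PM quantization of magnitudes can be illustrated as follows. This is a toy sketch only: the PM label is taken to be the permutation that sorts the magnitudes, and the template vector used for reconstruction is an illustrative stand-in (an independent sorted sample) for the optimized PM initial vector.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)      # zero-mean Gaussian samples, as in CV-QKD
mag = np.abs(x)

# PM label: the permutation that sorts the magnitudes (largest first).
perm = np.argsort(-mag)

# Illustrative template: sorted magnitudes of an independent Gaussian draw,
# standing in for the fixed, optimized PM initial vector.
template = np.sort(np.abs(rng.normal(size=1000)))[::-1]

# Reconstruction: place the k-th template value where the k-th largest was.
rec = np.empty_like(mag)
rec[perm] = template
err = np.mean((mag - rec) ** 2)
print(round(float(err), 4))    # small compared to E[|x|^2] = 1
```

Because only the permutation is transmitted, the rate per sample is log2(n!)/n bits, which is fractional and shrinks gracefully, which is exactly the property the abstract exploits at very low SNR.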
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistics-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post-processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy, and its outcome included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model, resulting in improved image decompression. These studies are summarized below, and the technical papers are included in the appendices.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area and have developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data, either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than the user desires. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.
Modeling adaptive kernels from probabilistic phylogenetic trees.
Nicotra, Luca; Micheli, Alessio
2009-01-01
Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
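Residue arithmetic trades carries for independent, digit-wise operations, which is what makes it attractive for parallel (here, optical systolic) hardware: each residue channel works in a small dynamic range, and only the final reconstruction combines them. A minimal software illustration of a multiply-accumulate done entirely in residue notation (the moduli are chosen arbitrarily for the example):

```python
from math import prod

moduli = (5, 7, 9)            # pairwise coprime; dynamic range 5*7*9 = 315

def to_residue(x):
    return tuple(x % m for m in moduli)

def crt(residues):
    """Chinese Remainder Theorem reconstruction from residue digits."""
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return total % M

# A matrix-vector style multiply-accumulate, digit-wise and carry-free:
a, b, c = 12, 17, 4
res = tuple((ra * rb + rc) % m
            for ra, rb, rc, m in zip(to_residue(a), to_residue(b),
                                     to_residue(c), moduli))
print(crt(res))  # 12*17 + 4 = 208
```

Each tuple position is computed independently, which is the parallelism the residue quantizer circuit exploits.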
Pseudo-Kähler Quantization on Flag Manifolds
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.
NASA Astrophysics Data System (ADS)
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can to a certain extent alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot cope with the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which consists of a driving pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. The multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified on a dynamometer test bench. Simulation results show that the proposed strategy achieves better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
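As a rough sketch of the LVQ component (not the paper's network; the data and parameters are invented), the classic LVQ1 update pulls the winning prototype toward a sample of its own class and pushes it away otherwise:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """LVQ1: attract the winning prototype on a class match, repel otherwise."""
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # winning prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            P[w] += sign * lr * (X[i] - P[w])
    return P

# Toy stand-in for driving-condition features: two separable 2-D clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.4, (50, 2)), rng.normal(3, 0.4, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]), np.array([0, 1]))

pred = np.array([np.argmin(((P - x) ** 2).sum(axis=1)) for x in X])
print((pred == y).mean())  # high accuracy on this separable toy data
```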
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to obtain a much lower bit rate for the LASIS interference pattern, whose special optical characteristics arise from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the current block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm on LASIS interference hyper-spectral sequences.
Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw
2008-01-01
To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.
Perturbative Quantum Gravity and its Relation to Gauge Theory.
Bern, Zvi
2002-01-01
In this review we describe a non-trivial relationship between perturbative gauge theory and gravity scattering amplitudes. At the semi-classical or tree-level, the scattering amplitudes of gravity theories in flat space can be expressed as a sum of products of well defined pieces of gauge theory amplitudes. These relationships were first discovered by Kawai, Lewellen, and Tye in the context of string theory, but hold more generally. In particular, they hold for standard Einstein gravity. A method based on D-dimensional unitarity can then be used to systematically construct all quantum loop corrections order-by-order in perturbation theory using as input the gravity tree amplitudes expressed in terms of gauge theory ones. More generally, the unitarity method provides a means for perturbatively quantizing massless gravity theories without the usual formal apparatus associated with the quantization of constrained systems. As one application, this method was used to demonstrate that maximally supersymmetric gravity is less divergent in the ultraviolet than previously thought.
Segmentation of touching mycobacterium tuberculosis from Ziehl-Neelsen stained sputum smear images
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Liu, Yunhui
2015-12-01
Touching Mycobacterium tuberculosis objects in Ziehl-Neelsen stained sputum smear images present different shapes and invisible boundaries in the adhesion areas, which increases the difficulty of object recognition and counting. In this paper, we present a segmentation method combining hierarchy tree analysis with the gradient vector flow snake to address this problem. The skeletons of the objects are used for structure analysis based on the hierarchy tree, and the gradient vector flow snake is used to estimate the object edges. Experimental results show that the single objects composing the touching objects are successfully segmented by the proposed method. This work will improve the accuracy and practicability of computer-aided diagnosis of tuberculosis.
Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan
2016-12-01
In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders are designed for each agent, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which none of the quantizers in the network are ever saturated. The convergence rate of consensus is explicitly characterized and is related to the scale of the network, the maximum degree of the nodes, the network structure, the scaling function, the quantization interval, the initial states of the agents, the control gain, and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchanged between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
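The encoder/decoder idea (transmit a few-bit quantized innovation scaled by a decaying function, so the quantizers stay unsaturated while the states agree) can be sketched as follows. This is a simplified, time-triggered toy on an undirected ring, not the paper's event-triggered protocol; the gains and the scaling function are invented for illustration.

```python
import numpy as np

def quant(v, levels=4):
    """Finite-level uniform quantizer on [-1, 1]; 2*levels + 1 output symbols."""
    return np.round(np.clip(v, -1.0, 1.0) * levels) / levels

# Symmetric ring of 4 agents; the graph contains a spanning tree.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
x = np.array([4.0, -2.0, 1.0, 5.0])
avg = x.mean()                       # 2.0; the update below preserves the sum

xhat = np.zeros_like(x)              # state estimate every receiver reconstructs
s = 8.0                              # decaying scaling function
for _ in range(300):
    q = quant((x - xhat) / s)        # the only data sent over the channel
    xhat = xhat + s * q              # identical decoder at every receiver
    x = x + 0.2 * (A @ xhat - deg * xhat)
    s *= 0.96
print(np.round(x, 3))                # all four states near the average 2.0
```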
NASA Astrophysics Data System (ADS)
Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.
2010-10-01
The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigating shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to be ranging from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
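The dampening idea (follow the integer MCC search with a continuous refinement of the correlation peak) can be illustrated with a simple parabolic sub-pixel fit. This is an illustrative stand-in for the paper's continuous optimization step, run on an invented synthetic image pair.

```python
import numpy as np

def mcc_subpixel(img0, img1, max_shift=3):
    """Integer MCC search plus a parabolic fit around the correlation peak,
    a simple continuous refinement that damps quantization noise."""
    H, W = img0.shape
    m = max_shift
    scores, best, best_c = {}, (0, 0), -np.inf
    a = img0[m:H - m, m:W - m].ravel()
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            b = img1[m + dy:H - m + dy, m + dx:W - m + dx].ravel()
            c = np.corrcoef(a, b)[0, 1]
            scores[(dy, dx)] = c
            if c > best_c:
                best, best_c = (dy, dx), c
    dy, dx = best

    def refine(cm, c0, cp):          # 1-D parabolic peak interpolation
        d = cm - 2 * c0 + cp
        return 0.0 if d >= 0 else 0.5 * (cm - cp) / d

    sy = refine(scores.get((dy - 1, dx), -1.0), best_c, scores.get((dy + 1, dx), -1.0))
    sx = refine(scores.get((dy, dx - 1), -1.0), best_c, scores.get((dy, dx + 1), -1.0))
    return dy + sy, dx + sx

# Synthetic check: a smooth blob displaced by a known integer shift.
yy, xx = np.mgrid[0:32, 0:32]
img0 = np.exp(-(((yy - 16) ** 2 + (xx - 16) ** 2) / 18.0))
img1 = np.roll(img0, (2, 1), axis=(0, 1))
vy, vx = mcc_subpixel(img0, img1)
print(round(vy), round(vx))  # 2 1
```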
Noncommutative gerbes and deformation quantization
NASA Astrophysics Data System (ADS)
Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter
2010-11-01
We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.
On the Dequantization of Fedosov's Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2003-08-01
To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.
Vector adaptive predictive coder for speech and audio
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)
1990-01-01
A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from the vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
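The encoding loop amounts to a nearest-neighbor search in the second (zero-state-response) codebook. A toy sketch follows; the codebook sizes and the one-pole synthesis filter are invented for illustration and only the index-selection step of the patent's scheme is shown.

```python
import numpy as np

rng = np.random.default_rng(4)
K, M = 8, 32                          # vector dimension, codebook size
codebook = rng.normal(size=(M, K))    # first (excitation) codebook

# Zero-state responses of a simple one-pole synthesis filter 1/(1 - 0.8 z^-1),
# truncated to K taps and applied as a lower-triangular Toeplitz matrix.
h = 0.8 ** np.arange(K)
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(K)] for i in range(K)])
zsr = codebook @ H.T                  # the "second codebook"

s = zsr[11] + 0.01 * rng.normal(size=K)   # target speech vector
idx = int(np.argmin(np.sum((zsr - s) ** 2, axis=1)))
print(idx)  # 11: only this index is transmitted to the decoder
```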
Condition Monitoring for Helicopter Data. Appendix A
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2000-01-01
In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; it has also been found that augmenting these with harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among them restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers, and decision trees. A problem with these approaches, in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, so that features may be discarded or coarsened appropriately.
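As a rough illustration of why AR coefficients make compact features, the sketch below (not from the paper; the AR order and the synthetic "accelerometer" signal are invented) recovers the parameters of a known AR(2) process by least squares:

```python
import numpy as np

def ar_coefficients(x, p):
    """Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-1-k]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

# Synthetic signal from a known, stable AR(2) process driven by white noise.
rng = np.random.default_rng(5)
a_true = np.array([1.5, -0.7])
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + 0.1 * rng.normal()

a_est = ar_coefficients(x, 2)
print(np.round(a_est, 2))  # close to [1.5, -0.7]
```

Two numbers summarize 5000 samples here, which is the dimensionality advantage the paper attributes to AR features.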
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
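The uniform white-sequence model for the quantizer error gives the familiar error variance of Δ²/12 and hence an effective SNR of roughly 6.02 dB per bit for an input spanning the quantizer range. A quick numerical check (illustrative only, with a uniform input; the paper's analysis concerns the DPLL context):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, 200_000)        # input spanning the quantizer range

for bits in (4, 8, 12):
    delta = 2.0 / (2 ** bits)          # step size over the [-1, 1] range
    e = x - delta * np.round(x / delta)
    snr_db = 10 * np.log10(np.var(x) / np.var(e))
    # White-sequence model: var(e) = delta^2 / 12, i.e. ~6.02 dB per bit.
    print(bits, round(snr_db, 1))
```

The empirical figures track the model closely here, which is the regime where quantized performance can be predicted from infinitely fine quantized results plus an effective SNR.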
Physics-based Detection of Subpixel Targets in Hyperspectral Imagery
2007-01-01
Learning Vector Quantization LWIR ...Wave Infrared (LWIR) from 7.0 to 15.0 microns regions as well. At these wavelengths, emissivity dominates the spectral signature. Emissivity is...object emits instead of reflects. Initial work has already been finished applying the hybrid detectors to LWIR sensors [13]. However, target
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
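The DTW stage aligns dynamic process vectors that evolve at different rates. A minimal textbook DTW (not the paper's implementation; the sequences are invented) shows the property being exploited: a time-warped copy of a pattern stays close, while a genuinely different dynamic pattern does not.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 2 * np.pi, 60))
warped = np.sin(np.linspace(0, 2 * np.pi, 90))   # same shape, warped in time
other = np.cos(np.linspace(0, 2 * np.pi, 90))    # a different dynamic pattern
print(dtw_distance(ref, warped) < dtw_distance(ref, other))  # True
```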
Light-cone quantization of two dimensional field theory in the path integral approach
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.
1999-05-01
A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x^- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of implementing this quantization condition at the quantum level is presented. In the case of the Thirring model one obtains selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
Relational symplectic groupoid quantization for constant poisson structures
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin
2017-09-01
As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focusing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.
A novel approach to internal crown characterization for coniferous tree species classification
NASA Astrophysics Data System (ADS)
Harikumar, A.; Bovolo, F.; Bruzzone, L.
2016-10-01
The knowledge about individual trees in a forest is highly beneficial in forest management. High density, small footprint, multi-return airborne Light Detection and Ranging (LiDAR) data can provide very accurate information about the structural properties of individual trees in forests. Every tree species has a unique set of crown structural characteristics that can be used for tree species classification. In this paper, we use both the internal and external crown structural information of a conifer tree crown, derived from a high density small footprint multi-return LiDAR acquisition, for species classification. Considering that branches are the major building blocks of a conifer tree crown, we obtain the internal crown structural information using a branch-level analysis. The structure of each conifer branch is represented by clusters in the LiDAR point cloud. We propose the joint use of k-means clustering and geometric shape fitting, on the LiDAR data projected onto a novel 3-dimensional space, to identify branch clusters. After mapping the identified clusters back to the original space, six internal geometric features are estimated using a branch-level analysis. The external crown characteristics are modeled using the six least correlated features based on cone fitting and the convex hull. Species classification is performed using a sparse Support Vector Machine (sparse SVM) classifier.
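The branch-cluster identification step rests on k-means clustering of the projected point cloud. A minimal stand-in follows (synthetic 3-D "branches", deterministic seeding with one point per cluster; the geometric shape fitting and the novel projection are omitted):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain Lloyd-style k-means on a 3-D point cloud."""
    centers = points[[0, 40, 80]].copy()   # one seed point per synthetic cluster
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return centers, labels

# Three well-separated synthetic 'branch' clusters in 3-D.
rng = np.random.default_rng(7)
branches = [rng.normal(mu, 0.2, (40, 3)) for mu in ((0, 0, 0), (4, 0, 0), (0, 4, 0))]
pts = np.vstack(branches)
centers, labels = kmeans(pts, 3)
print(len(np.unique(labels)))  # 3 recovered branch clusters
```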
An Alternative to the Gauge Theoretic Setting
NASA Astrophysics Data System (ADS)
Schroer, Bert
2011-10-01
The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum-theoretical approach in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space, whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict-creating pointlike generation for the tightest consistent localization: semi-infinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to the associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.
Classical Field Theory and the Stress-Energy Tensor
NASA Astrophysics Data System (ADS)
Swanson, Mark S.
2015-09-01
This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).
Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime
NASA Astrophysics Data System (ADS)
Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata
2014-04-01
The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.
Helicity amplitudes for QCD with massive quarks
NASA Astrophysics Data System (ADS)
Ochirov, Alexander
2018-04-01
The novel massive spinor-helicity formalism of Arkani-Hamed, Huang and Huang provides an elegant way to calculate scattering amplitudes in quantum chromodynamics for arbitrary quark spin projections. In this note we compute two families of tree-level QCD amplitudes with one massive quark pair and n - 2 gluons. The two cases include all gluons with identical helicity and one opposite-helicity gluon being color-adjacent to one of the quarks. Our results naturally incorporate the previously known amplitudes for both quark spins quantized along one of the gluonic momenta. In the all-multiplicity formulae presented here the spin quantization axes can be tuned at will, which includes the case of the definite-helicity quark states.
Spacetime algebra as a powerful tool for electromagnetism
NASA Astrophysics Data System (ADS)
Dressel, Justin; Bliokh, Konstantin Y.; Nori, Franco
2015-08-01
We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.
Image segmentation using fuzzy LVQ clustering networks
NASA Technical Reports Server (NTRS)
Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.
1992-01-01
In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of a Kohonen learning vector quantization (LVQ) which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
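The FCM half of this hybrid network can be sketched in a few lines. The function below is plain fuzzy c-means only (function names and toy data are mine, and the Kohonen-style learning-rate and updating strategies of the actual fuzzy LVQ network are omitted):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: the clustering objective that the fuzzy LVQ
    network in the abstract integrates with LVQ-style updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# toy "image features": two well-separated intensity clusters
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (50, 2)),
               np.random.default_rng(2).normal(5.0, 0.1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # hard segmentation labels from fuzzy memberships
```

Thresholding the memberships with `argmax` is the usual way to turn the fuzzy partition into a hard segmentation.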
An improved spatial contour tree construction method
NASA Astrophysics Data System (ADS)
Zheng, Yi; Zhang, Ling; Guilbert, Eric; Long, Yi
2018-05-01
Contours are important data for delineating landforms on a map. A contour tree provides an object-oriented description of landforms and can be used to enrich topological information. The traditional contour tree stores topological relationships between contours in a hierarchical structure and allows eminences and depressions to be identified as sets of nested contours. This research proposes an improved contour tree, called a spatial contour tree, that contains not only topological but also geometric information. It can be regarded as a three-dimensional terrain skeleton, and it is built from the spatial nodes of contours, which carry latitude, longitude, and elevation information. The spatial contour tree is constructed by connecting spatial nodes from low to high elevation for a positive landform, and from high to low elevation for a negative landform, to form a hierarchical structure. The connection between two spatial nodes provides the real distance and direction as a Euclidean vector in three dimensions. In this paper, the construction method is tested experimentally and the results are discussed. The proposed hierarchical structure is three-dimensional and reveals the skeleton inside a terrain. Because all of its nodes carry geographic information, the structure can be used to distinguish different landforms and can be applied to contour generalization with consideration of geographic characteristics.
Little, Eliza; Barrera, Roberto; Seto, Karen C.; Diuk-Wasser, Maria
2015-01-01
Aedes aegypti is implicated in dengue transmission in tropical and subtropical urban areas around the world. Ae. aegypti populations are controlled through integrative vector management. However, the efficacy of vector control may be undermined by the presence of alternative, competent species. In Puerto Rico, a native mosquito, Ae. mediovittatus, is a competent dengue vector in laboratory settings and spatially overlaps with Ae. aegypti. It has been proposed that Ae. mediovittatus may act as a dengue reservoir during inter-epidemic periods, perpetuating endemic dengue transmission in rural Puerto Rico. Dengue transmission dynamics may therefore be influenced by the spatial overlap of Ae. mediovittatus, Ae. aegypti, dengue viruses, and humans. We take a landscape epidemiology approach to examine the association between landscape composition and configuration and the distribution of each of these Aedes species and their co-occurrence. We used remotely sensed imagery from a newly launched satellite to map landscape features at very high spatial resolution. We found that the distribution of Ae. aegypti is positively predicted by urban density and by the number of tree patches; that Ae. mediovittatus is positively predicted by the number of tree patches but negatively predicted by large contiguous urban areas; and that the co-occurrence of both species is predicted by urban density and the number of tree patches. This analysis provides evidence that landscape composition and configuration are a surrogate for mosquito community composition, and suggests that mapping landscape structure can be used to inform vector control efforts as well as urban planning. PMID:21989642
Fine structure constant and quantized optical transparency of plasmonic nanoarrays.
Kravets, V G; Schedin, F; Grigorenko, A N
2012-01-24
Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.
NASA Astrophysics Data System (ADS)
Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.
2017-10-01
Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related to each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity.
In addition, this method makes it possible to obviate difficulties that arise in the analysis of the Purcell effect because of the divergence of the integral describing the effective volume of the mode in open systems.
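For orientation, the one-dimensional building block that the two-dimensional scheme generalizes — a layer transfer matrix converted into reflection and transmission coefficients — can be sketched as follows. This is a normal-incidence, single-layer illustration of my own, not the authors' Fourier-basis construction:

```python
import numpy as np

def layer_rt(n_layer, d, wavelength, n_out=1.0):
    """Normal-incidence transfer matrix for one homogeneous lossless layer
    embedded in a medium of index n_out; returns (r, t) amplitudes."""
    def D(n):                      # interface (field-matching) matrix
        return np.array([[1, 1], [n, -n]], dtype=complex)
    delta = 2 * np.pi * n_layer * d / wavelength          # phase thickness
    P = np.diag([np.exp(-1j * delta), np.exp(1j * delta)])  # propagation matrix
    M = (np.linalg.inv(D(n_out)) @ D(n_layer) @ P
         @ np.linalg.inv(D(n_layer)) @ D(n_out))
    t = 1 / M[0, 0]                # transmission amplitude
    r = M[1, 0] / M[0, 0]          # reflection amplitude
    return r, t

r, t = layer_rt(n_layer=2.0, d=100e-9, wavelength=550e-9)
# lossless layer between identical media: |r|^2 + |t|^2 = 1
```

Chaining more `D(n_l) P_l D(n_l)^-1` factors handles a multilayer stack; the blocks of the resulting transfer matrix give the reflection and transmission coefficients that assemble the scattering matrix, exactly as in the layer-by-layer procedure described above.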
Method for indexing and retrieving manufacturing-specific digital imagery based on image content
Ferrell, Regina K.; Karnowski, Thomas P.; Tobin, Jr., Kenneth W.
2004-06-15
A method for indexing and retrieving manufacturing-specific digital images based on image content comprises three steps. First, at least one feature vector can be extracted from a manufacturing-specific digital image stored in an image database. In particular, each extracted feature vector corresponds to a particular characteristic of the manufacturing-specific digital image, for instance, a digital image modality and overall characteristic, a substrate/background characteristic, or an anomaly/defect characteristic. Notably, the extracting step includes generating a defect mask using a detection process. Second, using an unsupervised clustering method, each extracted feature vector can be indexed in a hierarchical search tree. Third, a manufacturing-specific digital image associated with a feature vector stored in the hierarchical search tree can be retrieved, wherein the manufacturing-specific digital image has image content comparably related to the image content of the query image. More particularly, the retrieving step can include two data reductions, the first performed based upon a query vector extracted from a query image. Subsequently, a user can select relevant images resulting from the first data reduction. From the selection, a prototype vector can be calculated, from which a second-level data reduction can be performed. The second-level data reduction can result in a subset of feature vectors comparable to the prototype vector, and further comparable to the query vector. An additional fourth step can include managing the hierarchical search tree by substituting a vector average for several redundant feature vectors encapsulated by nodes in the hierarchical search tree.
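A minimal sketch of the indexing and first-level retrieval idea, with plain k-means standing in for the unspecified unsupervised clustering method and a one-level tree; all names and data are hypothetical:

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Plain Lloyd k-means, standing in for the unsupervised clustering step."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(1)
        C = np.array([X[a == j].mean(0) if np.any(a == j) else C[j]
                      for j in range(k)])
    return C, a

def build_tree(X, k=4):
    """One-level 'hierarchical search tree': centroids as internal nodes,
    member indices as leaves."""
    C, a = kmeans(X, k)
    return C, {j: np.where(a == j)[0] for j in range(k)}

def query_tree(tree, X, q):
    """First data reduction: descend to the nearest centroid, then search
    only that node's members."""
    C, leaves = tree
    ids = leaves[int(np.linalg.norm(C - q, axis=1).argmin())]
    return ids[np.linalg.norm(X[ids] - q, axis=1).argmin()]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.2, (40, 8)) for c in (0.0, 4.0, 8.0, 12.0)])
q = X[57] + 0.01                      # query near a known database image
hit = query_tree(build_tree(X), X, q)
```

The second-level reduction around a user-derived prototype vector, and the node-averaging maintenance step, would reuse the same nearest-centroid machinery.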
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering over high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
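The static equi-depth step underlying QED is quantile binning: every bin receives roughly the same number of points, so codes stay discriminative even in skewed columns. The sketch below shows that step only; the query-dependent re-derivation of bin boundaries described in the paper is not reproduced here:

```python
import numpy as np

def equi_depth_codes(col, b=8):
    """Equi-depth (quantile) quantization of one feature column: each of
    the b bins receives roughly the same number of points."""
    edges = np.quantile(col, np.linspace(0, 1, b + 1)[1:-1])  # interior cuts
    return np.searchsorted(edges, col), edges

rng = np.random.default_rng(0)
col = rng.normal(size=10_000)            # a skew-free toy feature column
codes, edges = equi_depth_codes(col, b=8)
counts = np.bincount(codes, minlength=8)  # close to 1250 points per bin
```

Equal-width binning on the same data would crowd most points into the central bins; equi-depth codes avoid that, which is what makes them useful as bit-sliced index coordinates.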
René de Cotret, Laurent P; Siwick, Bradley J
2017-07-01
The general problem of background subtraction in ultrafast electron powder diffraction (UEPD) is presented with a focus on the diffraction patterns obtained from materials of moderately complex structure, which contain many overlapping peaks and effectively no scattering-vector regions that can be considered exclusively background. We compare the performance of background subtraction algorithms based on discrete and dual-tree complex wavelet transforms (DTCWT) when applied to simulated UEPD data on the M1-R phase transition in VO2 with a time-varying background. We find that the DTCWT approach is capable of extracting intensities that are accurate to better than 2% across the whole range of scattering vector simulated, effectively independent of delay time. A Python package is available.
4D Sommerfeld quantization of the complex extended charge
NASA Astrophysics Data System (ADS)
Bulyzhenkov, Igor E.
2017-12-01
Gravitational fields and accelerations cannot change the quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which can introduce quantitatively the fine structure constant and the Planck length.
Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of ½
Maryasov, Alexander G.; Bowman, Michael K.
2012-01-01
The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or ‘powder’ sample when g tensor anisotropy is significant. PMID:22743542
Obliquely propagating ion acoustic solitary structures in the presence of quantized magnetic field
NASA Astrophysics Data System (ADS)
Iqbal Shaukat, Muzzamal
2017-10-01
The effects of linear and nonlinear propagation of electrostatic waves have been studied in degenerate magnetoplasma, taking into account the effects of electron trapping and finite temperature with a quantizing magnetic field. The formation of solitary structures has been investigated by employing the small amplitude approximation for both fully and partially degenerate quantum plasma. It is observed that the inclusion of the quantizing magnetic field significantly affects the propagation characteristics of the solitary wave. Importantly, the Zakharov-Kuznetsov equation under consideration has been found to allow the formation of compressive solitary structures only. The present investigation may be beneficial for understanding the propagation of nonlinear electrostatic structures in dense astrophysical environments such as those found in white dwarfs.
Wavelet Transforms in Parallel Image Processing
1994-01-27
Report documentation: 137 pages. Subject terms: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D. The contents include vector quantization of wavelet transform coefficients and adaptive image halftoning based on wavelets; in the halftoning application, the gray information at a pixel, including its gray value and gradient, is represented by …
Development of a brain MRI-based hidden Markov model for dementia recognition.
Chen, Ying; Pham, Tuan D
2013-01-01
Dementia is an age-related cognitive decline which is indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us understand disease development and contribute to early disease prediction and prevention. But building a model that best captures brain structural variability and remains valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization method was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate parameters for the HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as either non-demented or having mild Alzheimer's disease. Four HMMs were constructed based on cohorts of non-demented young, middle-aged, and elder subjects and of demented elder subjects, separately. Classification was carried out using a data set including both non-demented and demented individuals over a wide age range. The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved better classification accuracy than related works using different classifiers. The results show the ability of the proposed modeling to recognize early dementia. The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. The brain MRI-based HMMs developed in this research are efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia.
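The recognition pipeline (features → VQ indices → per-cohort HMM scoring) hinges on comparing sequence likelihoods across models. A hedged sketch of the scoring step only, using a discrete-observation forward algorithm with invented toy parameters (the paper's actual state counts and trained parameters are not reproduced):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a VQ index sequence under a discrete HMM
    (scaled forward algorithm); classification picks the best-scoring model."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # transition, then emission
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# two hypothetical 2-state models over a 3-symbol VQ alphabet
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
B1 = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])   # favors symbols 0 and 2
B2 = np.full((2, 3), 1 / 3)                         # uninformative emissions
obs = np.array([0, 0, 0, 2, 2, 2])
# the structured model should explain this sequence better than the flat one
```

In the paper's setting, one such model is trained per cohort and a subject is assigned to the cohort whose HMM gives the highest likelihood for that subject's VQ index sequence.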
Quaternionic Kähler Detour Complexes and {mathcal{N} = 2} Supersymmetric Black Holes
NASA Astrophysics Data System (ADS)
Cherney, D.; Latini, E.; Waldron, A.
2011-03-01
We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four dimensional {{mathcal N} = 2} supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories with fields that are sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston's complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.
The electronic structure of Au25 clusters: between discrete and continuous
NASA Astrophysics Data System (ADS)
Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E.; Kumar, Challa S. S. R.; Losovyj, Yaroslav
2016-08-01
Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.
Master equation for open two-band systems and its applications to Hall conductance
NASA Astrophysics Data System (ADS)
Shen, H. Z.; Zhang, S. S.; Dai, C. M.; Yi, X. X.
2018-02-01
Hall conductivity in the presence of a dephasing environment has recently been investigated with a dissipative term introduced phenomenologically. In this paper, we study the dissipative topological insulator (TI) and its topological transition in the presence of quantized electromagnetic environments. A Lindblad-type equation is derived to determine the dynamics of a two-band system. When the two-band model describes TIs, the environment may be the fluctuations of radiation that surround the TIs. We find the dependence of decay rates in the master equation on Bloch vectors in the two-band system, which leads to a mixing of the band occupations. Hence the environment-induced current is in general not perfectly topological in the presence of coupling to the environment, although deviations are small in the weak-coupling limit. As an illustration, we apply the Bloch-vector-dependent master equation to TIs and calculate the Hall conductance of tight-binding electrons in a two-dimensional lattice. The influence of environments on the Hall conductance is presented and discussed. The calculations show that the phase transition points of the TIs are robust against the quantized electromagnetic environment. The results might bridge the gap between quantum optics and topological photonic materials.
Musical sound analysis/synthesis using vector-quantized time-varying spectra
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Beauchamp, James W.
2002-11-01
A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter "brightness," to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
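The matching and smoothing steps can be sketched directly. The codebook below is a toy stand-in for one produced by VQ clustering of phase-vocoder frames, and the median filter is one simple reading of the index-smoothing step (the paper does not specify the smoother):

```python
import numpy as np

def codebook_indices(frames, codebook):
    """Best least-squares codebook entry for each spectral frame."""
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def smooth_indices(idx, w=3):
    """Median-smooth the Codebook-index-versus-time track."""
    pad = np.pad(idx, w // 2, mode='edge')
    return np.array([int(np.median(pad[i:i + w])) for i in range(len(idx))])

# toy codebook of 3 'spectra' and a frame track with one spurious jump
codebook = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
frames = np.array([[.9, .1, 0.], [.95, .05, 0.],
                   [0., .1, .9], [.9, .05, .05]])
idx = codebook_indices(frames, codebook)   # -> [0, 0, 2, 0]
smoothed = smooth_indices(idx)             # isolated jump to entry 2 removed
```

In the full system, interpolation between the codebook spectra addressed by the smoothed track, plus the separately stored tilt and amplitude envelopes, reconstructs the time-varying spectrum.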
Polymer-Fourier quantization of the scalar field revisited
NASA Astrophysics Data System (ADS)
Garcia-Chung, Angel; Vergara, J. David
2016-10-01
The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré-invariant quantization by a singular one. This singular positive linear functional is constructed by mimicking the singular limit of the complex structure of the Poincaré-invariant Fock quantization. The resulting symmetry group of this polymer quantization is SDiff(ℝ4), the subgroup of Diff(ℝ4) formed by spatial-volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré-invariant Fock vacuum with the polymer Fourier vacuum.
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between the BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should in general be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for the different quantization methods is also investigated.
Fourier transform inequalities for phylogenetic trees.
Matsen, Frederick A
2009-01-01
Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial ≤ 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties.
NASA Astrophysics Data System (ADS)
Catanzaro, Michael J.; Chernyak, Vladimir Y.; Klein, John R.
2016-12-01
Driven Langevin processes have appeared in a variety of fields due to the relevance of natural phenomena having both deterministic and stochastic effects. The stochastic currents and fluxes in these systems provide a convenient set of observables to describe their non-equilibrium steady states. Here we consider stochastic motion of a (k - 1) -dimensional object, which sweeps out a k-dimensional trajectory, and gives rise to a higher k-dimensional current. By employing the low-temperature (low-noise) limit, we reduce the problem to a discrete Markov chain model on a CW complex, a topological construction which generalizes the notion of a graph. This reduction allows the mean fluxes and currents of the process to be expressed in terms of solutions to the discrete Supersymmetric Fokker-Planck (SFP) equation. Taking the adiabatic limit, we show that generic driving leads to rational quantization of the generated higher dimensional current. The latter is achieved by implementing the recently developed tools, coined the higher-dimensional Kirchhoff tree and co-tree theorems. This extends the study of motion of extended objects in the continuous setting performed in the prequel (Catanzaro et al.) to this manuscript.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavrilenko, V. I.; Krishtopenko, S. S., E-mail: ds_a-teens@mail.ru; Goiran, M.
2011-01-15
The effect of electron-electron interaction on the spectrum of two-dimensional electron states is studied in InAs/AlSb (001) heterostructures with a GaSb cap layer and one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction, by decreasing the electron energy in the subbands, increases the energy gap between subbands and the spin-orbit splitting of the spectrum over the entire range of electron concentrations at which only the lower size-quantization subband is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensional electrons is demonstrated.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of an iris recognition pipeline is the iris feature vector construction procedure, which extracts the iris texture information relevant to subsequent comparison. A thorough investigation of feature vectors obtained from the iris showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The results show that the proposed method surpasses all prior-art methods in recognition accuracy on both datasets.
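As a rough sketch of the two-step construction described in this abstract (Gabor filtering followed by quantization with a fragility threshold), the toy code below filters a 1-D intensity row and phase-quantizes each complex response to 2 bits, masking fragile low-magnitude responses. All parameter values are illustrative placeholders, not the optimized values from the paper.

```python
import math

def gabor_response(signal, freq, sigma):
    """Complex Gabor filter response at each sample of a 1-D iris row
    (a toy stand-in for 2-D Gabor filtering of the unwrapped iris)."""
    half = int(3 * sigma)
    out = []
    for c in range(len(signal)):
        re = im = 0.0
        for k in range(-half, half + 1):
            if 0 <= c + k < len(signal):
                env = math.exp(-k * k / (2.0 * sigma * sigma))
                re += signal[c + k] * env * math.cos(2 * math.pi * freq * k)
                im += signal[c + k] * env * math.sin(2 * math.pi * freq * k)
        out.append(complex(re, im))
    return out

def quantize(responses, fragility_threshold):
    """Phase-quantize each response to 2 bits (sign of real and imaginary
    parts); mask fragile responses whose magnitude is too small for the
    bits to be stable across captures."""
    code, mask = [], []
    for z in responses:
        code.append((1 if z.real >= 0 else 0, 1 if z.imag >= 0 else 0))
        mask.append(abs(z) >= fragility_threshold)
    return code, mask
```

Matching would then compare only the bit positions that both masks declare stable, e.g. via a masked Hamming distance.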
Heavy and Heavy-Light Mesons in the Covariant Spectator Theory
NASA Astrophysics Data System (ADS)
Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.
2018-05-01
The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum were calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.
Wilson, Jordan L; Samaranayake, V A; Limmer, Matthew A; Schumacher, John G; Burken, Joel G
2017-12-19
Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of tree-core concentration gradients (Spearman's coefficients of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in planta and soil concentration gradients and by the proximity of soil concentration centroids to the trees. Given the existing link between soil gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor-intrusion assessment.
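The abstract does not give the exact formula for a contaminant-centroid vector, but a plausible construction is a concentration-weighted average of unit vectors pointing in the coring directions; the sketch below assumes that definition, which is an illustration rather than the paper's method.

```python
import math

def centroid_vector(directions_deg, concentrations):
    """Assumed construction of a concentration-centroid vector: weight a
    unit vector for each coring direction (degrees, measured around the
    trunk) by the concentration measured there, then average.  The result
    points toward the side of the tree with the highest concentrations."""
    sx = sy = 0.0
    total = sum(concentrations)
    for angle, conc in zip(directions_deg, concentrations):
        r = math.radians(angle)
        sx += conc * math.cos(r)
        sy += conc * math.sin(r)
    return (sx / total, sy / total)
```

A vector near the origin would indicate a uniform plume around the trunk; a long vector would indicate a strong directional gradient.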
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
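For reference, a minimal 1-D SOM of the kind analyzed here can be sketched as follows; the bubble (step) neighborhood and linearly decaying rates are common simplifications, not Kohonen's exact experimental settings.

```python
import random

def train_som_1d(n_units, samples, epochs=20, lr0=0.5, radius0=None):
    """Basic 1-D SOM: scalar model (codebook) vectors m[i] on a 1-D grid
    adapt toward each input x, with the winner and its grid neighbors
    updated together; learning rate and radius shrink over time."""
    random.seed(0)
    m = sorted(random.random() for _ in range(n_units))
    radius0 = radius0 or n_units / 2.0
    T = epochs * len(samples)
    t = 0
    for _ in range(epochs):
        for x in samples:
            frac = t / T
            lr = lr0 * (1.0 - frac)
            radius = max(1.0, radius0 * (1.0 - frac))
            c = min(range(n_units), key=lambda i: abs(x - m[i]))  # winner
            for i in range(n_units):
                h = 1.0 if abs(i - c) <= radius else 0.0  # bubble neighborhood
                m[i] += lr * h * (x - m[i])
            t += 1
    return m
```

Comparing the histogram of the trained m against the input density p(x) is what reveals the power-law point density discussed in the abstract.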
1999-01-01
distances and identities and Roger's genetic distances were clustered by the unweighted pair group method using arithmetic average (UPGMA) to produce...Seattle, WA) using the NEIGHBOR program with the UPGMA option, and a phenogram was produced with DRAWGRAM, also in PHYLIP 3.X. RAPDBOOT was used to...generate 100 pseudoreplicate distance matrices, which were collapsed to form 100 trees with UPGMA. The bootstrap consensus tree was derived from the 100
A tree-parenchyma coupled model for lung ventilation simulation.
Pozin, Nicolas; Montesantos, Spyridon; Katz, Ira; Pichelin, Marine; Vignon-Clementel, Irene; Grandmont, Céline
2017-11-01
In this article, we develop a lung ventilation model. The parenchyma is described as an elastic homogenized medium. It is irrigated by a space-filling dyadic resistive pipe network, which represents the tracheobronchial tree. In this model, the tree and the parenchyma are strongly coupled. The tree induces an extra viscous term in the system constitutive relation, which leads, in the finite element framework, to a full matrix. We consider an efficient algorithm that takes advantage of the tree structure to enable a fast matrix-vector product computation. This framework can be used to model both free and mechanically induced respiration, in health and disease. Patient-specific lung geometries acquired from computed tomography scans are considered. Realistic Dirichlet boundary conditions can be deduced from surface registration on computed tomography images. The model is compared to a more classical exit compartment approach. Results illustrate the coupling between the tree and the parenchyma, at global and regional levels, and how conditions for the purely 0D model can be inferred. Different types of boundary conditions are tested, including a nonlinear Robin model of the surrounding lung structures. Copyright © 2017 John Wiley & Sons, Ltd.
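The fast matrix-vector product enabled by the tree structure can be illustrated on a toy resistive dyadic tree: the dense matrix whose (i, j) entry is the total resistance on the shared root-to-leaf path of leaves i and j admits a linear-time product via one upward and one downward tree traversal, without ever forming the matrix. This is an illustration of the general idea, not the paper's exact operator.

```python
def tree_matvec(depth, R, x):
    """Compute y = A x where A_ij is the total resistance on the shared
    root-to-leaf path of leaves i and j of a perfect binary tree, without
    forming the dense matrix A.  R maps node index -> resistance of the
    edge above that node; nodes are indexed heap-style (root = 1)."""
    first_leaf = 2 ** depth
    n_leaves = first_leaf
    # Upward pass: sum of x over the leaves below each node.
    sub = {}
    for i in range(n_leaves):
        sub[first_leaf + i] = x[i]
    for node in range(first_leaf - 1, 0, -1):
        sub[node] = sub[2 * node] + sub[2 * node + 1]
    # Downward pass: accumulate R_e * subtree-sum along each root-leaf path.
    acc = {1: 0.0}
    for node in range(2, 2 * first_leaf):
        acc[node] = acc[node // 2] + R.get(node, 0.0) * sub[node]
    return [acc[first_leaf + i] for i in range(n_leaves)]
```

Both passes touch each node once, so the product costs O(N) per application instead of the O(N^2) of a dense full matrix.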
NASA Astrophysics Data System (ADS)
Chang, Faliang; Liu, Chunsheng
2017-09-01
The high variability of sign colors and shapes in uncontrolled environments has made the detection of traffic signs a challenging problem in computer vision. We propose a traffic sign detection (TSD) method based on a coarse-to-fine cascade and parallel support vector machine (SVM) detectors to detect Chinese warning and danger traffic signs. First, a region of interest (ROI) extraction method is proposed to extract ROIs using color contrast features in local regions. The ROI extraction reduces the scanned area and saves detection time. For multiclass TSD, we propose a structure that combines a coarse-to-fine cascaded tree with a parallel bank of histogram of oriented gradients (HOG) + SVM detectors. The cascaded tree is designed to detect different types of traffic signs in a coarse-to-fine process. The parallel HOG + SVM detectors perform fine detection of the different types of traffic signs. The experiments demonstrate that the proposed TSD method can rapidly detect multiclass traffic signs of different colors and shapes with high accuracy.
Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.
ERIC Educational Resources Information Center
Sindhu, R. S.
2002-01-01
Describes the quantized step-up model developed for the evaluation purposes of internship in teaching which is an analogous model of the atomic structure. Assesses prospective teachers' abilities in lesson delivery. (YDS)
Minimum uncertainty and squeezing in diffusion processes and stochastic quantization
NASA Technical Reports Server (NTRS)
Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe
1994-01-01
We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.
The Coulomb problem on a 3-sphere and Heun polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan
2013-08-15
The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.
Deformation quantization with separation of variables of an endomorphism bundle
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2014-01-01
Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.
NASA Technical Reports Server (NTRS)
Crouch, P. E.; Grossman, Robert
1992-01-01
This note is concerned with the explicit symbolic computation of expressions involving differential operators and their actions on functions. The derivation of specialized numerical algorithms, the explicit symbolic computation of integrals of motion, and the explicit computation of normal forms for nonlinear systems all require such computations. More precisely, if R = k(x(sub 1),...,x(sub N)), where k = R or C, F denotes a differential operator with coefficients from R, and g is a member of R, we describe data structures and algorithms for efficiently computing F(g). The basic idea is to impose a multiplicative structure on the vector space whose basis is the set of finite rooted trees with nodes labeled by the coefficients of the differential operators. Cancellation of two trees with r + 1 nodes translates into cancellation of O(N(exp r)) expressions involving the coefficient functions and their derivatives.
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for "smart grid" applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
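The inference step, standard min-sum dynamic programming over a tree with discretized nodal variables, can be sketched as below; generic node and edge cost functions stand in for the OPF objective and power-flow coupling, and the interval/CP machinery of the paper is omitted.

```python
def tree_dp(children, values, node_cost, edge_cost, root=0):
    """Min-sum message passing over a tree-structured model: each node
    takes a value from a discrete set; total cost is the sum of per-node
    costs plus pairwise costs on tree edges.  Returns the minimum total
    cost, computed by passing messages from the leaves to the root."""
    def solve(u):
        # msg[v_u] = minimum cost of the subtree rooted at u, given that
        # node u takes value v_u.
        child_msgs = [solve(c) for c in children.get(u, [])]
        msg = {}
        for vu in values:
            total = node_cost(u, vu)
            for c, cm in zip(children.get(u, []), child_msgs):
                total += min(edge_cost(u, c, vu, vc) + cm[vc] for vc in values)
            msg[vu] = total
        return msg
    return min(solve(root).values())
```

Each node is visited once and each edge contributes O(|values|^2) work, so the cost is linear in the tree size for a fixed discretization.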
NASA Astrophysics Data System (ADS)
Wuthrich, Christian
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two---and I claim separate---questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately. 
Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
The electronic structure of Au25 clusters: between discrete and continuous.
Katsiev, Khabiboulakh; Lozova, Nataliya; Wang, Lu; Sai Krishna, Katla; Li, Ruipeng; Mei, Wai-Ning; Skrabalak, Sara E; Kumar, Challa S S R; Losovyj, Yaroslav
2016-08-21
Here, an approach based on synchrotron resonant photoemission is employed to explore the transition between quantization and hybridization of the electronic structure in atomically precise ligand-stabilized nanoparticles. While the presence of ligands maintains quantization in Au25 clusters, their removal renders increased hybridization of the electronic states in the vicinity of the Fermi level. These observations are supported by DFT studies.
Compass cues used by a nocturnal bull ant, Myrmecia midas.
Freas, Cody A; Narendra, Ajay; Cheng, Ken
2017-05-01
Ants use both terrestrial landmarks and celestial cues to navigate to and from their nest location. These cues persist even as light levels drop during the twilight/night. Here, we determined the compass cues used by a nocturnal bull ant, Myrmecia midas, in which the majority of individuals begin foraging during the evening twilight period. Myrmecia midas foragers with vectors of ≤5 m when displaced to unfamiliar locations did not follow the home vector, but instead showed random heading directions. Foragers with larger home vectors (≥10 m) oriented towards the fictive nest, indicating a possible increase in cue strength with vector length. When the ants were displaced locally to create a conflict between the home direction indicated by the path integrator and terrestrial landmarks, foragers oriented using landmark information exclusively and ignored any accumulated home vector regardless of vector length. When the visual landmarks at the local displacement site were blocked, foragers were unable to orient to the nest direction and their heading directions were randomly distributed. Myrmecia midas ants typically nest at the base of a tree and some individuals forage on the same tree. Foragers collected on the nest tree during evening twilight were unable to orient towards the nest after small lateral displacements away from the nest. This suggests the possibility of high tree fidelity and an inability to extrapolate landmark compass cues from information collected on the tree and at the nest site to close displacement sites. © 2017. Published by The Company of Biologists Ltd.
Principal component analysis and the locus of the Fréchet mean in the space of phylogenetic trees.
Nye, Tom M W; Tang, Xiaoxian; Weyenberg, Grady; Yoshida, Ruriko
2017-12-01
Evolutionary relationships are represented by phylogenetic trees, and a phylogenetic analysis of gene sequences typically produces a collection of these trees, one for each gene in the analysis. Analysis of samples of trees is difficult due to the multi-dimensionality of the space of possible trees. In Euclidean spaces, principal component analysis is a popular method of reducing high-dimensional data to a low-dimensional representation that preserves much of the sample's structure. However, the space of all phylogenetic trees on a fixed set of species does not form a Euclidean vector space, and methods adapted to tree space are needed. Previous work introduced the notion of a principal geodesic in this space, analogous to the first principal component. Here we propose a geometric object for tree space similar to the kth principal component in Euclidean space: the locus of the weighted Fréchet mean of k + 1 vertex trees when the weights vary over the k-simplex. We establish some basic properties of these objects, in particular showing that they have dimension k, and propose algorithms for projection onto these surfaces and for finding the principal locus associated with a sample of trees. Simulation studies demonstrate that these algorithms perform well, and analyses of two datasets, containing Apicomplexa and African coelacanth genomes respectively, reveal important structure from the second principal components.
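In Euclidean space the weighted Fréchet mean minimizes the weighted sum of squared distances (and reduces to the weighted average); the sketch below computes it by gradient descent simply to make the defining objective explicit. In tree space the same objective would be minimized using geodesic distances instead, which is the hard part the paper's algorithms address.

```python
def frechet_mean(points, weights, steps=500, lr=0.1):
    """Weighted Fréchet mean in R^n: minimize
        F(x) = sum_i w_i * |x - p_i|^2
    by gradient descent.  In Euclidean space the minimizer is the weighted
    average of the points; the same objective defines the Fréchet mean in
    tree space with geodesic distances in place of |.|."""
    dim = len(points[0])
    x = [0.0] * dim
    for _ in range(steps):
        grad = [2.0 * sum(w * (x[d] - p[d]) for p, w in zip(points, weights))
                for d in range(dim)]
        x = [x[d] - lr * grad[d] for d in range(dim)]
    return x
```

Sweeping the weights over the k-simplex for k + 1 fixed vertex points then traces out a k-dimensional locus, the analogue of the object the paper studies.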
On the Problem of Bandwidth Partitioning in FDD Block-Fading Single-User MISO/SIMO Systems
NASA Astrophysics Data System (ADS)
Ivrlač, Michel T.; Nossek, Josef A.
2008-12-01
We report on our research activity on the problem of how to optimally partition the available bandwidth of frequency division duplex, multi-input single-output communication systems, into subbands for the uplink, the downlink, and the feedback. In the downlink, the transmitter applies coherent beamforming based on quantized channel information which is obtained by feedback from the receiver. As feedback takes away resources from the uplink, which could otherwise be used to transfer payload data, it is highly desirable to reserve the "right" amount of uplink resources for the feedback. Under the assumption of random vector quantization, and a frequency flat, independent and identically distributed block-fading channel, we derive closed-form expressions for both the feedback quantization and bandwidth partitioning which jointly maximize the sum of the average payload data rates of the downlink and the uplink. While we do introduce some approximations to facilitate mathematical tractability, the analytical solution is asymptotically exact as the number of antennas approaches infinity, while for systems with few antennas, it turns out to be a fairly accurate approximation. In this way, the obtained results are meaningful for practical communication systems, which usually can only employ a few antennas.
Radiation and matter: Electrodynamics postulates and Lorenz gauge
NASA Astrophysics Data System (ADS)
Bobrov, V. B.; Trigger, S. A.; van Heijst, G. J.; Schram, P. P.
2016-11-01
In general terms, we consider matter as a system of charged particles and a quantized electromagnetic field. For a consistent description of the thermodynamic properties of matter, especially in extreme states, the problem of quantizing the longitudinal and scalar potentials must be solved. In this connection, we point out that the tension with the traditional postulate of electrodynamics, which claims that only the electric and magnetic fields are observable, is resolved by denying the validity of the Maxwell equations for microscopic fields. The Maxwell equations, as a generalization of experimental data, are valid only for averaged values. We show that microscopic electrodynamics may instead be based on postulating the d'Alembert equations for the four-vector potential of the electromagnetic field. The Lorenz gauge holds for the averaged potentials (and ensures that the Maxwell equations hold for the averages). The suggested concept overcomes difficulties in the electromagnetic-field quantization procedure while remaining in accordance with the results of quantum electrodynamics. As a result, longitudinal and scalar photons become real rather than virtual and may in principle be observed. The longitudinal and scalar photons provide not only the Coulomb interaction of charged particles but also allow for the electrical Aharonov-Bohm effect.
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
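A minimal sketch of the WVQ idea, one level of Haar DWT followed by full-search VQ of subband blocks, is given below; the two-vector codebook is a placeholder, not a codebook produced by the paper's optimization procedure.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split a signal of
    even length into a low-pass (averages) and a high-pass (details)
    subband, each half the original length."""
    s = 2 ** 0.5
    lo = [(signal[2 * i] + signal[2 * i + 1]) / s for i in range(len(signal) // 2)]
    hi = [(signal[2 * i] - signal[2 * i + 1]) / s for i in range(len(signal) // 2)]
    return lo, hi

def blocks(seq, k):
    """Cut a subband into length-k vectors for quantization."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]

def vq_encode(vectors, codebook):
    """Map each vector to the index of its nearest codevector (full search
    under squared Euclidean distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vectors]
```

In the full scheme, the optimization procedure would assign a codebook size and vector dimension to each subband under the rate and complexity constraints; smooth signals concentrate energy in the low-pass subband, so the details quantize cheaply.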
Tree Leaf Bacterial Community Structure and Diversity Differ along a Gradient of Urban Intensity
Laforest-Lapointe, Isabelle; Messier, Christian; Kembel, Steven W.
2017-01-01
Tree leaf-associated microbiota have been studied in natural ecosystems but less so in urban settings, where anthropogenic pressures on trees could impact microbial communities and modify their interaction with their hosts. Additionally, trees act as vectors spreading bacterial cells in the air in urban environments due to the density of microbial cells on aerial plant surfaces. Characterizing tree leaf bacterial communities along an urban gradient is thus key to understand the impact of anthropogenic pressures on urban tree-bacterium interactions and on the overall urban microbiome. In this study, we aimed (i) to characterize phyllosphere bacterial communities of seven tree species in urban environments and (ii) to describe the changes in tree phyllosphere bacterial community structure and diversity along a gradient of increasing urban intensity and at two degrees of tree isolation. Our results indicate that, as anthropogenic pressures increase, urban leaf bacterial communities show a reduction in the abundance of the dominant class in the natural plant microbiome, the Alphaproteobacteria. Our work in the urban environment here reveals that the structures of leaf bacterial communities differ along the gradient of urban intensity. The diversity of phyllosphere microbial communities increases at higher urban intensity, also displaying a greater number and variety of associated indicator taxa than the low and medium urban gradient sites. In conclusion, we find that urban environments influence tree bacterial community composition, and our results suggest that feedback between human activity and plant microbiomes could shape urban microbiomes. IMPORTANCE In natural forests, tree leaf surfaces host diverse bacterial communities whose structure and composition are primarily driven by host species identity. Tree leaf bacterial diversity has also been shown to influence tree community productivity, a key function of terrestrial ecosystems. 
However, most urban microbiome studies have focused on the built environment, improving our understanding of indoor microbial communities but leaving much to be understood, especially in the nonbuilt microbiome. Here, we provide the first multiple-species comparison of tree phyllosphere bacterial structures and diversity along a gradient of urban intensity. We demonstrate that urban trees possess characteristic bacterial communities that differ from those seen with trees in nonurban environments, with microbial community structure on trees influenced by host species identity but also by the gradient of urban intensity and by the degree of isolation from other trees. Our results suggest that feedback between human activity and plant microbiomes could shape urban microbiomes. PMID:29238751
Noncommutative Line Bundles and Gerbes
NASA Astrophysics Data System (ADS)
Jurčo, B.
We introduce noncommutative line bundles and gerbes within the framework of deformation quantization. The Seiberg-Witten map is used to construct the corresponding noncommutative Čech cocycles. Morita equivalence of star products and quantization of twisted Poisson structures are discussed from this point of view.
Direct Volume Rendering with Shading via Three-Dimensional Textures
NASA Technical Reports Server (NTRS)
VanGelder, Allen; Kim, Kwansik
1996-01-01
A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model, and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.
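The normal-vector quantization with a shading lookup table described above can be sketched as follows. This is a minimal illustration using a simple latitude/longitude tessellation of the sphere; the paper describes its own, different tessellation, and all names and parameters here are assumptions for illustration only.

```python
import numpy as np

def build_normal_codebook(n_theta=8, n_phi=16):
    """Quantize the unit sphere into a small set of representative normals.
    A plain latitude/longitude grid is used here as a stand-in for the
    paper's custom tessellation."""
    thetas = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phis = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    normals = []
    for t in thetas:
        for p in phis:
            normals.append([np.sin(t) * np.cos(p),
                            np.sin(t) * np.sin(p),
                            np.cos(t)])
    return np.array(normals)

def quantize_normal(codebook, v):
    """Return the index of the codebook normal closest to direction v."""
    v = v / np.linalg.norm(v)
    # maximum dot product = nearest direction on the unit sphere
    return int(np.argmax(codebook @ v))

def shading_lut(codebook, light_dir):
    """Precompute a diffuse intensity for every quantized normal, so shading
    in the rendering phase is a single table lookup per voxel."""
    l = light_dir / np.linalg.norm(light_dir)
    return np.clip(codebook @ l, 0.0, 1.0)
```

At render time each voxel stores only a codebook index; moving the light source requires recomputing the small lookup table rather than re-shading the volume.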
Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.
2017-01-01
Natural rubber is an important export commodity in Indonesia, which can be a major contributor to national economic development. One type of rubber used as rubber material for export is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of RSS. RSS rubber quality has been assigned in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by visually inspecting the levels of air bubbles on the surface of the rubber sheet, so the result is subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for the pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.
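The LVQ classification stage can be sketched generically as follows. This is a minimal LVQ1 sketch, not the paper's implementation: prototype initialization at class means, the learning rate, and the epoch count are all illustrative assumptions.

```python
import numpy as np

def train_lvq1(X, y, n_classes=2, lr=0.1, epochs=15, seed=0):
    """LVQ1 sketch: one prototype per class, initialized to the class mean.
    The nearest prototype is pulled toward a sample of the same class and
    pushed away from a sample of a different class."""
    rng = np.random.default_rng(seed)
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    labels = np.arange(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            sign = 1.0 if labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, labels

def predict_lvq(protos, labels, X):
    """Classify each row of X by its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

In the paper's setting, the rows of `X` would be zoning features extracted from RSS images and the two classes would correspond to grades RSS1 and RSS3.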
Quantization of Poisson Manifolds from the Integrability of the Modular Function
NASA Astrophysics Data System (ADS)
Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.
2014-10-01
We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the properties needed for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.
Issues in the digital implementation of control compensators. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Moroney, P.
1979-01-01
Techniques developed for the finite-precision implementation of digital filters were used, adapted, and extended for digital feedback compensators, with particular emphasis on steady state, linear-quadratic-Gaussian compensators. Topics covered include: (1) the linear-quadratic-Gaussian problem; (2) compensator structures; (3) architectural issues: serialism, parallelism, and pipelining; (4) finite wordlength effects: quantization noise, quantizing the coefficients, and limit cycles; and (5) the optimization of structures.
Mixed-up trees: the structure of phylogenetic mixtures.
Matsen, Frederick A; Mossel, Elchanan; Steel, Mike
2008-05-01
In this paper, we apply new geometric and combinatorial methods to the study of phylogenetic mixtures. The focus of the geometric approach is to describe the geometry of phylogenetic mixture distributions for the two state random cluster model, which is a generalization of the two state symmetric (CFN) model. In particular, we show that the set of mixture distributions forms a convex polytope and we calculate its dimension; corollaries include a simple criterion for when a mixture of branch lengths on the star tree can mimic the site pattern frequency vector of a resolved quartet tree. Furthermore, by computing volumes of polytopes we can clarify how "common" non-identifiable mixtures are under the CFN model. We also present a new combinatorial result which extends any identifiability result for a specific pair of trees of size six to arbitrary pairs of trees. Next we present a positive result showing identifiability of rates-across-sites models. Finally, we answer a question raised in a previous paper concerning "mixed branch repulsion" on trees larger than quartet trees under the CFN model.
NASA Astrophysics Data System (ADS)
Omenzetter, Piotr; de Lautour, Oliver R.
2010-04-01
Developed for studying long, periodic records of various measured quantities, time series analysis methods are inherently suited to, and offer interesting possibilities for, Structural Health Monitoring (SHM) applications. However, their use in SHM can still be regarded as an emerging application and deserves more study. In this research, Autoregressive (AR) models were used to fit experimental acceleration time histories from two experimental structural systems, a 3-storey bookshelf-type laboratory structure and the ASCE Phase II SHM Benchmark Structure, in healthy and several damaged states. The coefficients of the AR models were chosen as damage-sensitive features. Preliminary visual inspection of the large, multidimensional sets of AR coefficients to check the presence of clusters corresponding to different damage severities was achieved using Sammon mapping, an efficient nonlinear data compression technique. Systematic classification of damage into states based on the analysis of the AR coefficients was achieved using two supervised classification techniques, Nearest Neighbor Classification (NNC) and Learning Vector Quantization (LVQ), and one unsupervised technique, Self-organizing Maps (SOM). This paper discusses the performance of AR coefficients as damage-sensitive features and compares the efficiency of the three classification techniques using experimental data.
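The core idea of using AR coefficients as damage-sensitive features, followed by nearest-neighbor classification, can be sketched as follows. This is a simplified illustration, not the study's procedure: the least-squares AR fit, the model order, and all names are assumptions.

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Fit an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p] + e by least
    squares and return the coefficient vector, used as a damage feature."""
    X = np.column_stack(
        [x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def nearest_neighbor_label(train_feats, train_labels, feat):
    """1-NN classification in AR-coefficient space."""
    d = np.linalg.norm(np.asarray(train_feats) - feat, axis=1)
    return train_labels[int(np.argmin(d))]
```

Each acceleration record maps to one point in coefficient space; damage shifts the dynamics, hence the AR coefficients, so records from the same damage state cluster together.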
Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS
NASA Astrophysics Data System (ADS)
Landsman, N. P.
Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.
Observation of Landau levels in potassium-intercalated graphite under a zero magnetic field
Guo, Donghui; Kondo, Takahiro; Machida, Takahiro; Iwatake, Keigo; Okada, Susumu; Nakamura, Junji
2012-01-01
The charge carriers in graphene are massless Dirac fermions and exhibit a relativistic Landau-level quantization in a magnetic field. Recently, it has been reported that, without any external magnetic field, quantized energy levels have also been observed from strained graphene nanobubbles on a platinum surface, which were attributed to the Landau levels of massless Dirac fermions in graphene formed by a strain-induced pseudomagnetic field. Here we show the generation of the Landau levels of massless Dirac fermions on a partially potassium-intercalated graphite surface without applying an external magnetic field. Landau levels of massless Dirac fermions indicate the graphene character in partially potassium-intercalated graphite. The generation of the Landau levels is ascribed to a vector potential induced by the perturbation of nearest-neighbour hopping, which may originate from a strain or a gradient of on-site potentials at the perimeters of potassium-free domains. PMID:22990864
Chen, Guangyao; Li, Yang; Maris, Pieter; ...
2017-04-14
Using the charmonium light-front wavefunctions obtained by diagonalizing an effective Hamiltonian with the one-gluon exchange interaction and a confining potential inspired by light-front holography in the basis light-front quantization formalism, we compute production of charmonium states in diffractive deep inelastic scattering and ultra-peripheral heavy ion collisions within the dipole picture. Our method allows us to predict yields of all vector charmonium states below the open flavor thresholds in high-energy deep inelastic scattering, proton-nucleus and ultra-peripheral heavy ion collisions, without introducing any new parameters in the light-front wavefunctions. The obtained charmonium cross section is in reasonable agreement with experimental data at HERA, RHIC and LHC. We observe that the cross-section ratio σΨ(2s)/σJ/Ψ shows significant independence of the model parameters.
Gravitational surface Hamiltonian and entropy quantization
NASA Astrophysics Data System (ADS)
Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav
2017-02-01
The surface Hamiltonian corresponding to the surface part of a gravitational action has xp structure where p is conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole. Here T and S are temperature and entropy of the horizon. Imposing the hermiticity condition we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of this earlier result as the calculation is based on the direct quantization of the Hamiltonian in the sense of usual quantum mechanics.
Development of a brain MRI-based hidden Markov model for dementia recognition
2013-01-01
Background Dementia is an age-related cognitive decline which is indicated by an early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help to understand the disease development and contribute to early disease prediction and prevention. But building a model that can best capture brain structural variability and remain valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Methods Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization method was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate parameters for HMMs. To validate its accuracy and robustness, experiments were carried out on individuals who were characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed separately, based on cohorts of non-demented young, middle-aged and elderly subjects, and of demented elderly subjects. Classification was carried out using a data set including both non-demented and demented individuals with a wide age range. Results The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved a better classification accuracy compared to other related works using different classifiers. The results demonstrate the ability of the proposed models to recognize early dementia. Conclusion The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia.
The brain MRI-based HMMs developed in this research are efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia. PMID:24564961
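The vector quantization step above, which converts continuous feature vectors into the discrete symbol sequences that discrete HMMs require, can be sketched with plain k-means. The paper does not specify its VQ algorithm, so k-means here is an illustrative stand-in, and all names and parameters are assumptions.

```python
import numpy as np

def kmeans_codebook(feats, k=4, iters=20, seed=0):
    """Build a VQ codebook by plain k-means. The resulting prototype vectors
    play the role of the VQ stage that feeds discrete observation symbols
    to the HMMs."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1),
                        axis=1)
        for j in range(k):
            if np.any(idx == j):  # avoid emptying a cluster
                centers[j] = feats[idx == j].mean(axis=0)
    return centers

def vq_indices(centers, feats):
    """Map each feature vector to the index of its nearest prototype; these
    discrete indices are the HMM observation symbols."""
    return np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
```

In this setting, `feats` would hold the regularity-dimension and semi-variogram features per brain region, and the index sequence per subject would be passed to HMM parameter estimation.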
Marek K. Jakubowksi; Qinghua Guo; Brandon Collins; Scott Stephens; Maggi Kelly
2013-01-01
We compared the ability of several classification and regression algorithms to predict forest stand structure metrics and standard surface fuel models. Our study area spans a dense, topographically complex Sierra Nevada mixed-conifer forest. We used clustering, regression trees, and support vector machine algorithms to analyze high density (average 9 pulses/m
A support vector machine based test for incongruence between sets of trees in tree space
2012-01-01
Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviate from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two set of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions. 
The software GeneOut is freely available under the GNU public license. PMID:22909268
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
NASA Technical Reports Server (NTRS)
Lin, Paul P.; Jules, Kenol
2002-01-01
An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.
NASA Astrophysics Data System (ADS)
Myrheim, J.
Contents
1 Introduction
1.1 The concept of particle statistics
1.2 Statistical mechanics and the many-body problem
1.3 Experimental physics in two dimensions
1.4 The algebraic approach: Heisenberg quantization
1.5 More general quantizations
2 The configuration space
2.1 The Euclidean relative space for two particles
2.2 Dimensions d=1,2,3
2.3 Homotopy
2.4 The braid group
3 Schroedinger quantization in one dimension
4 Heisenberg quantization in one dimension
4.1 The coordinate representation
5 Schroedinger quantization in dimension d ≥ 2
5.1 Scalar wave functions
5.2 Homotopy
5.3 Interchange phases
5.4 The statistics vector potential
5.5 The N-particle case
5.6 Chern-Simons theory
6 The Feynman path integral for anyons
6.1 Eigenstates for position and momentum
6.2 The path integral
6.3 Conjugation classes in S_N
6.4 The non-interacting case
6.5 Duality of Feynman and Schroedinger quantization
7 The harmonic oscillator
7.1 The two-dimensional harmonic oscillator
7.2 Two anyons in a harmonic oscillator potential
7.3 More than two anyons
7.4 The three-anyon problem
8 The anyon gas
8.1 The cluster and virial expansions
8.2 First and second order perturbative results
8.3 Regularization by periodic boundary conditions
8.4 Regularization by a harmonic oscillator potential
8.5 Bosons and fermions
8.6 Two anyons
8.7 Three anyons
8.8 The Monte Carlo method
8.9 The path integral representation of the coefficients G_P
8.10 Exact and approximate polynomials
8.11 The fourth virial coefficient of anyons
8.12 Two polynomial theorems
9 Charged particles in a constant magnetic field
9.1 One particle in a magnetic field
9.2 Two anyons in a magnetic field
9.3 The anyon gas in a magnetic field
10 Interchange phases and geometric phases
10.1 Introduction to geometric phases
10.2 One particle in a magnetic field
10.3 Two particles in a magnetic field
10.4 Interchange of two anyons in potential wells
10.5 Laughlin's theory of the fractional quantum Hall effect
Nodal distances for rooted phylogenetic trees.
Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel
2010-08-01
Dissimilarity measures for (possibly weighted) phylogenetic trees based on the comparison of their vectors of path lengths between pairs of taxa have been present in the systematics literature since the early seventies. For rooted phylogenetic trees, however, these vectors can only separate non-weighted binary trees, and therefore these dissimilarity measures are metrics only on this class of rooted phylogenetic trees. In this paper we overcome this problem, by splitting in a suitable way each path length between two taxa into two lengths. We prove that the resulting splitted path lengths matrices single out arbitrary rooted phylogenetic trees with nested taxa and arcs weighted in the set of positive real numbers. This allows the definition of metrics on this general class of rooted phylogenetic trees by comparing these matrices through metrics in the spaces M_n(R) of real-valued n × n matrices. We conclude this paper by establishing some basic facts about the metrics for non-weighted phylogenetic trees defined in this way using L^p metrics on M_n(R), with p ∈ R, p > 0.
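The splitted path lengths construction can be sketched concretely: for each ordered pair of taxa (i, j), record the distance from taxon i up to the lowest common ancestor of i and j. The tree representation (a parent map with edge weights) and all node names below are hypothetical illustrations, not the paper's notation.

```python
def ancestors(parent, weight, v):
    """Path from v to the root, with the cumulative distance to each
    ancestor (v itself at distance 0)."""
    path, d = {v: 0.0}, 0.0
    while v in parent:
        d += weight[v]
        v = parent[v]
        path[v] = d
    return path

def splitted_matrix(parent, weight, taxa):
    """M[i][j] = distance from taxon i up to the LCA of taxa i and j.
    Note M is not symmetric: M[i][j] + M[j][i] is the full path length."""
    paths = {t: ancestors(parent, weight, t) for t in taxa}
    M = [[0.0] * len(taxa) for _ in taxa]
    for a, ta in enumerate(taxa):
        for b, tb in enumerate(taxa):
            # the LCA is the shared ancestor nearest to ta
            M[a][b] = min(d for n, d in paths[ta].items()
                          if n in paths[tb])
    return M

def lp_distance(M1, M2, p=2):
    """Compare two splitted path lengths matrices via an L^p metric."""
    return sum(abs(x - y) ** p
               for r1, r2 in zip(M1, M2)
               for x, y in zip(r1, r2)) ** (1.0 / p)
```

Because the asymmetric matrix records where along each path the LCA sits, it can distinguish rooted trees that share the same symmetric path-length vectors.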
USDA-ARS?s Scientific Manuscript database
The redbay ambrosia beetle, Xyleborus glabratus, is the vector of a symbiotic fungus, Raffaelea lauricola that causes laurel wilt, a highly lethal disease to members of the Lauraceae. Pioneer Xyleborus glabratus beetles infect live trees with Raffaelea lauricola, and only when trees are declining be...
Progressive transmission of road network
NASA Astrophysics Data System (ADS)
Ai, Bo; Ai, Tinghua; Tang, Xinming; Li, Zhen
2009-10-01
The progressive transmission of vector map data requires an efficient multi-scale data model to organize the data into a hierarchical structure. This paper presents such a data structure for road networks, without geometric redundancy, for progressive transmission. For a given scale, the road network display must settle two questions: which road objects to represent, and what geometric detail to visualize for the selected roads. This paper combines the Töpfer law and the BLG-tree structure into a multi-scale representation matrix to answer both questions simultaneously. In the matrix, rows from top to bottom represent the roads in descending order of traffic classification and length, which supports the Töpfer law in retrieving the more important roads first. Within a row, the columns record one road as a linear BLG-tree, providing good line graphics at each level of detail.
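The row-selection step driven by the Töpfer law (the radical law of cartographic generalization, n_f = n_a · sqrt(M_a / M_f), where M_a and M_f are the source and target scale denominators) can be sketched as below. Function names and the example values are illustrative assumptions, not the paper's implementation.

```python
import math

def topfer_count(n_source, scale_source, scale_target):
    """Töpfer's radical law: how many objects survive generalization from a
    source scale denominator to a larger (smaller-scale) target denominator."""
    return round(n_source * math.sqrt(scale_source / scale_target))

def select_roads(roads, scale_source, scale_target):
    """Keep the top-n roads from a list already ordered by descending
    importance (traffic class, then length), mirroring the row order of
    the multi-scale representation matrix."""
    n = topfer_count(len(roads), scale_source, scale_target)
    return roads[:n]
```

For example, moving from 1:10,000 to 1:40,000 keeps sqrt(10000/40000) = 1/2 of the roads; the BLG-tree would then independently decide, per retained road, how much vertex detail to transmit.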
Conductance Quantization in Resistive Random Access Memory
NASA Astrophysics Data System (ADS)
Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming
2015-10-01
The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performances, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interface, and multiple RS mechanisms including the formation/rupture of nanoscale to atomic-sized conductive filament (CF) incorporated in RS layer. Conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism in mesoscopic dimension. In this review paper, the operating principles of RRAM are introduced first, followed by the summarization of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material system. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.
Cui, Hongguang; Wang, Aiming
2017-03-01
RNA silencing is a powerful technology for molecular characterization of gene functions in plants. A commonly used approach to the induction of RNA silencing is through genetic transformation. A potent alternative is to use a modified viral vector for virus-induced gene silencing (VIGS) to degrade RNA molecules sharing similar nucleotide sequence. Unfortunately, genomic studies in many allogamous woody perennials such as peach are severely hindered because they have a long juvenile period and are recalcitrant to genetic transformation. Here, we report the development of a viral vector derived from Prunus necrotic ringspot virus (PNRSV), a widespread fruit tree virus that is endemic in all Prunus fruit production countries and regions in the world. We show that the modified PNRSV vector, harbouring the sense-orientated target gene sequence of 100-200 bp in length in genomic RNA3, could efficiently trigger the silencing of a transgene or an endogenous gene in the model plant Nicotiana benthamiana. We further demonstrate that the PNRSV-based vector could be manipulated to silence endogenous genes in peach such as eukaryotic translation initiation factor 4E isoform (eIF(iso)4E), a host factor of many potyviruses including Plum pox virus (PPV). Moreover, the eIF(iso)4E-knocked down peach plants were resistant to PPV. This work opens a potential avenue for the control of virus diseases in perennial trees via viral vector-mediated silencing of host factors, and the PNRSV vector may serve as a powerful molecular tool for functional genomic studies of Prunus fruit trees. © 2016 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
On families of differential equations on two-torus with all phase-lock areas
NASA Astrophysics Data System (ADS)
Glutsyuk, Alexey; Rybnikov, Leonid
2017-01-01
We consider two-parametric families of non-autonomous ordinary differential equations on the two-torus with coordinates (x, t), of the type \dot{x} = v(x) + A + B f(t). We study the rotation number as a function of the parameters (A, B). The phase-lock areas are those level sets of the rotation number function ρ = ρ(A, B) that have non-empty interiors. Buchstaber, Karpov and Tertychnyi studied the case v(x) = sin x in their joint paper. They observed the quantization effect: for every smooth periodic function f(t), the family of equations may have phase-lock areas only for integer rotation numbers. Another proof of this quantization statement was later obtained in a joint paper by Ilyashenko, Filimonov and Ryzhov. This implies a similar quantization effect for every v(x) = a sin(mx) + b cos(mx) + c and rotation numbers that are multiples of 1/m. We show that for every other analytic vector field v(x) (i.e., one having at least two Fourier harmonics with non-zero, non-opposite degrees and nonzero coefficients) there exists an analytic periodic function f(t) such that the corresponding family of equations has phase-lock areas for all rational values of the rotation number.
A string theory which isn't about strings
NASA Astrophysics Data System (ADS)
Lee, Kanghoon; Rey, Soo-Jong; Rosabal, J. A.
2017-11-01
Quantization of the closed string proceeds with a suitable choice of worldsheet vacuum. A priori, the vacuum may be chosen independently for the left-moving and right-moving sectors. We construct ab initio quantized bosonic string theory with a left-right asymmetric worldsheet vacuum and explore its consequences and implications. We critically examine the validity of the new vacuum and carry out first-quantization using the standard operator formalism. Remarkably, the string spectrum consists only of a finite number of degrees of freedom: string gravity (massless spin-two, Kalb-Ramond and dilaton fields) and two massive spin-two Fierz-Pauli fields. The massive spin-two fields have negative norm and opposite mass-squared, and provide a Lee-Wick type extension of string gravity. We compute two physical observables: tree-level scattering amplitudes and the one-loop cosmological constant. The scattering amplitude of four dilatons is shown to be a rational function of kinematic invariants, and in D = 26 factorizes into contributions of the massless spin-two and a pair of massive spin-two fields. The string one-loop partition function is shown to agree perfectly with the one-loop Feynman diagrams of string gravity and two massive spin-two fields. In particular, it does not exhibit modular invariance. We critically compare our construction with recent studies and contrast the differences.
Chiral anomalies and effective vector meson Lagrangian beyond the tree level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dominguez, C.A.
1987-12-01
The decays π⁰ → γγ, ρ → πγ, ω → πγ, ω → 3π and γ → 3π are studied in the framework of the chiral invariant effective Vector Meson Lagrangian beyond the tree level. The standard Lagrangian is enlarged by including an infinite number of radial excitations, which are summed according to the dual model. As a result, tree level diagrams are modified by a universal form factor at each vertex containing off-mass-shell mesons, while still respecting the chiral anomaly low energy theorems. These vertex corrections bring the tree level predictions into better agreement with experiment. The presence of the ω → 3π contact term is confirmed, but its strength is considerably smaller than at tree level.
Hydrodynamical model of anisotropic, polarized turbulent superfluids. I: constraints for the fluxes
NASA Astrophysics Data System (ADS)
Mongiovì, Maria Stella; Restuccia, Liliana
2018-02-01
This work is the first of a series of papers devoted to the study of the influence of the anisotropy and polarization of the tangle of quantized vortex lines in superfluid turbulence. A thermodynamical model of inhomogeneous superfluid turbulence formulated previously is here extended to take these effects into consideration. The model chooses as its thermodynamic state vector the density, the velocity, the energy density, the heat flux, and a complete vorticity tensor field, including its symmetric traceless part and its antisymmetric part. The relations which constrain the constitutive quantities are deduced from the second principle of thermodynamics using the Liu procedure. The results show that the presence of anisotropy and polarization in the vortex tangle affects the dynamics of the heat flux in a substantial way, allows us to give a physical interpretation of the vorticity tensor introduced here, and better describes the internal structure of a turbulent superfluid.
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.
2013-07-01
In this paper, we study the interaction between a moving Λ-type three-level atom and a single-mode cavity field in the presence of intensity-dependent atom-field coupling. After obtaining the state vector of the entire system explicitly, we study nonclassical features of the system such as quantum entanglement, position-momentum entropic squeezing, quadrature squeezing and sub-Poissonian statistics. The numerical results illustrate that the squeezing period, the duration of entropy squeezing and the maximal squeezing can be controlled by choosing an appropriate nonlinearity function, together with the atomic motion effect introduced by a suitable selection of the field-mode structure parameter. Also, the atomic motion, as well as the nonlinearity function, leads to oscillatory behaviour of the degree of entanglement between the atom and the field.
Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity
NASA Astrophysics Data System (ADS)
Veraguth, Olivier J.; Wang, Charles H.-T.
2017-10-01
Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The fact that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry allowing the reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.
Tribology of the lubricant quantized sliding state.
Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio
2009-11-07
In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered and shown to be robust against thermal fluctuations and quenched disorder in the confining substrates, and to persist over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs when the solitons are driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large sliding speeds.
A stable RNA virus-based vector for citrus trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folimonov, Alexey S.; Folimonova, Svetlana Y.; Bar-Joseph, Moshe
Virus-based vectors are important tools in plant molecular biology and plant genomics. A number of vectors based on viruses that infect herbaceous plants are in use for expression or silencing of genes in plants as well as screening unknown sequences for function. Yet there is a need for useful virus-based vectors for woody plants, which demand much greater stability because of the longer time required for systemic infection and analysis. We examined several strategies to develop a Citrus tristeza virus (CTV)-based vector for transient expression of foreign genes in citrus trees using green fluorescent protein (GFP) as a reporter. These strategies included substitution of the p13 open reading frame (ORF) by the ORF of GFP, construction of a self-processing fusion of GFP in-frame with the major coat protein (CP), or expression of the GFP ORF as an extra gene from a subgenomic (sg) mRNA controlled either by a duplicated CTV CP sgRNA controller element (CE) or an introduced heterologous CE of Beet yellows virus. Engineered vector constructs were examined for replication, encapsidation, GFP expression during multiple passages in protoplasts, and for their ability to infect, move, express GFP, and be maintained in citrus plants. The most successful vectors based on the 'add-a-gene' strategy have been unusually stable, continuing to produce GFP fluorescence after more than 4 years in citrus trees.
Broad Absorption Line Quasar catalogues with Supervised Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.
2008-12-05
We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.
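The prototype update at the heart of such a supervised LVQ network can be sketched as follows (a generic LVQ1 rule on toy one-dimensional data; the prototype positions, learning rate, epoch count and synthetic "spectra" are illustrative, not the catalogue's actual setup):

```python
import random

def lvq1_train(data, labels, prototypes, proto_labels, lr=0.1, epochs=50):
    """Basic LVQ1: pull the nearest prototype toward same-class samples,
    push it away from different-class samples."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in zip(data, labels):
            # find the nearest prototype (squared Euclidean distance)
            i = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))
            sign = 1.0 if proto_labels[i] == y else -1.0
            protos[i] = [p + sign * lr * (a - p) for p, a in zip(protos[i], x)]
    return protos

def lvq_classify(x, protos, proto_labels):
    i = min(range(len(protos)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(protos[k], x)))
    return proto_labels[i]

# toy 1-D "spectra": class 0 clusters near 0.0, class 1 near 1.0
random.seed(0)
data = [[random.gauss(0.0, 0.1)] for _ in range(20)] + \
       [[random.gauss(1.0, 0.1)] for _ in range(20)]
labels = [0] * 20 + [1] * 20
protos = lvq1_train(data, labels, [[0.2], [0.8]], [0, 1])
print(lvq_classify([0.05], protos, [0, 1]))  # → 0
print(lvq_classify([0.95], protos, [0, 1]))  # → 1
```

After training, each prototype sits near the centre of its class, so classification is a nearest-prototype lookup.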
2001-10-25
form: (1) A is a scaling factor, t is time and r a coordinate vector describing the limb configuration. We … combination of limb state and EMG. In our early examination of EMG we detected underlying groups of muscles and phases of activity by inspection and … representations of EEG or other biological signals has been thoroughly explored. Such components might be used as a basis for neuroprosthetic control.
Quantized Faraday and Kerr rotation and axion electrodynamics of a 3D topological insulator
NASA Astrophysics Data System (ADS)
Wu, Liang; Salehi, M.; Koirala, N.; Moon, J.; Oh, S.; Armitage, N. P.
2016-12-01
Topological insulators have been proposed to be best characterized as bulk magnetoelectric materials that show response functions quantized in terms of fundamental physical constants. Here, we lower the chemical potential of three-dimensional (3D) Bi2Se3 films to ~30 meV above the Dirac point and probe their low-energy electrodynamic response in the presence of magnetic fields with high-precision time-domain terahertz polarimetry. For fields higher than 5 tesla, we observed quantized Faraday and Kerr rotations, whereas the dc transport is still semiclassical. A nontrivial Berry’s phase offset to these values gives evidence for axion electrodynamics and the topological magnetoelectric effect. The time structure used in these measurements allows a direct measure of the fine-structure constant based on a topological invariant of a solid-state system.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows a major improvement for video watermarking.
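The paper's quantization-guided binary-tree embedding is not reproduced here; as a hedged illustration of the underlying mechanism, below is classic quantization index modulation (QIM), a standard building block for blind quantization-based watermarking (the step size and coefficient values are illustrative):

```python
def qim_embed(coeff, bit, step=8.0):
    """Quantization index modulation: quantize a coefficient onto one of two
    interleaved lattices, chosen by the watermark bit."""
    offset = step / 2.0 if bit else 0.0
    return round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step=8.0):
    """Blind extraction: pick the lattice the coefficient is closest to."""
    d0 = abs(coeff - round(coeff / step) * step)
    offset = step / 2.0
    d1 = abs(coeff - (round((coeff - offset) / step) * step + offset))
    return 0 if d0 <= d1 else 1

bits = [1, 0, 1, 1, 0]
coeffs = [13.2, -7.9, 40.1, 3.3, 25.6]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
# mild "compression" noise smaller than step/4 leaves the bits recoverable
noisy = [c + 1.5 for c in marked]
print([qim_extract(c) for c in noisy])  # → [1, 0, 1, 1, 0]
```

A larger step size buys robustness to heavier quantization at the cost of embedding distortion, which is exactly the distortion-robustness trade-off the nested coding atoms expose.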
Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers
NASA Astrophysics Data System (ADS)
Shan, Shaukat Ali; Haque, Q.
2018-01-01
The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense, degenerate plasma with a quantized magnetic field. The propagation of linear drift ion acoustic waves, along with nonlinear structures such as double layers and solitary waves, is found to depend strongly on the drift speed, the magnetic field quantization parameter β, and the temperature degeneracy. The graphical illustrations show that the frequency of the linear waves and the amplitude of the solitary waves increase with increasing temperature degeneracy and Landau quantization, while the amplitude of the double layers decreases with increasing η and T. The present study is relevant to the plasma environments of fast-ignition inertial confinement fusion, white dwarf stars, and short-pulse petawatt laser technology.
Comparison of leaf-on and leaf-off ALS data for mapping riparian tree species
NASA Astrophysics Data System (ADS)
Laslier, Marianne; Ba, Antoine; Hubert-Moy, Laurence; Dufour, Simon
2017-10-01
Forest species composition is a fundamental indicator in forest study and management. However, describing forest species composition at large scales and for highly diverse populations remains an issue to which remote sensing, in particular Airborne Laser Scanning (ALS) data, can make a significant contribution. Riparian corridors are good examples of highly valuable ecosystems, with high species richness and large surface areas that can be time-consuming and expensive to monitor with in situ measurements. Remote sensing could be useful to study them, but few studies have focused on monitoring riparian tree species using ALS data. This study aimed to determine which metrics derived from ALS data are best suited to identify and map riparian tree species. We acquired very high density leaf-on and leaf-off ALS data along the Sélune River (France). In addition, we inventoried eight main riparian deciduous tree species along the study site. After manual segmentation of the inventoried trees, we extracted 68 morphological and structural metrics from both leaf-on and leaf-off ALS point clouds. Some of these metrics were then selected using a Sequential Forward Selection (SFS) algorithm. Support Vector Machine (SVM) classification results showed good accuracy (0.77) with 7 metrics. Both leaf-on and leaf-off metrics were kept as important metrics for distinguishing tree species. Results demonstrate the ability of 3D information derived from high density ALS data to identify riparian tree species using external and internal structural metrics. They also highlight the complementarity of leaf-on and leaf-off Lidar data for distinguishing riparian tree species.
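A greedy Sequential Forward Selection loop of the type used here can be sketched as follows (the scoring function, metric names and values are hypothetical placeholders; a real run would score each candidate subset with SVM cross-validation accuracy):

```python
def sequential_forward_selection(metrics, score, k):
    """Greedy SFS: starting from an empty set, repeatedly add the metric
    that most improves the scoring function; stop at k metrics or when
    no remaining metric improves the score."""
    selected = []
    remaining = list(metrics)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda m: score(selected + [m]))
        if score(selected + [best]) <= score(selected):
            break  # no metric improves the score any further
        selected.append(best)
        remaining.remove(best)
    return selected

# toy score: a hypothetical "accuracy gain" per informative metric;
# uninformative metrics slightly hurt the score
useful = {"height_p95": 0.30, "crown_width": 0.25, "leaf_off_density": 0.20}
def toy_score(subset):
    return sum(useful.get(m, -0.01) for m in subset)

picked = sequential_forward_selection(
    ["height_p95", "crown_width", "leaf_off_density", "noise_a", "noise_b"],
    toy_score, k=7)
print(picked)  # → ['height_p95', 'crown_width', 'leaf_off_density']
```

The early-stopping test is what lets the search settle on 7 metrics out of 68 rather than always filling the budget.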
Methods of Contemporary Gauge Theory
NASA Astrophysics Data System (ADS)
Makeenko, Yuri
2002-08-01
Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.
Vector/Matrix Quantization for Narrow-Bandwidth Digital Speech Compression.
1982-09-01
… Prediction of the Speech Wave, JASA, Vol. 50, pp. 637-655, April 1971. 2. F. Itakura and S. Saito, Analysis Synthesis Telephony Based Upon the Maximum …
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
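The conventional full-search vector quantization step that the neural approach is compared against can be sketched as follows (the codebook and image blocks are toy values; codebook training, e.g. by a self-organizing network, is not shown):

```python
def nearest_codevector(codebook, x):
    """Full search: O(K*N) distance evaluations for N codevectors of dimension K."""
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(c, x))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def vq_compress(blocks, codebook):
    """Encode each image block as the index of its closest codevector."""
    return [nearest_codevector(codebook, b) for b in blocks]

def vq_decompress(indices, codebook):
    """Decoding is a simple table lookup."""
    return [codebook[i] for i in indices]

# toy 2x2 "image blocks" flattened to 4-vectors; the codebook is illustrative
codebook = [[0, 0, 0, 0], [128, 128, 128, 128], [255, 255, 255, 255]]
blocks = [[10, 5, 0, 8], [250, 240, 255, 251], [120, 130, 126, 131]]
idx = vq_compress(blocks, codebook)
print(idx)  # → [0, 2, 1]
```

Compression comes from transmitting only the indices: here each 4-value block collapses to one small integer, at the cost of the reconstruction error visible in `vq_decompress`.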
Modulated error diffusion CGHs for neural nets
NASA Astrophysics Data System (ADS)
Vermeulen, Pieter J. E.; Casasent, David P.
1990-05-01
New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
Quantum angular momentum diffusion of rigid bodies
NASA Astrophysics Data System (ADS)
Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus
2017-12-01
We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.
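The general Lindblad-type form referred to in the abstract (the paper's specific angular-momentum diffusion generators are not reproduced here) is:

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H, \rho]
  + \sum_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\, \rho \right\} \right)
```

where the Lindblad operators $L_k$ encode the collisional coupling to the environment; for orientational decoherence they act on the rotor's angular degrees of freedom.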
From black holes to white holes: a quantum gravitational, symmetric bounce
NASA Astrophysics Data System (ADS)
Olmedo, Javier; Saini, Sahil; Singh, Parampreet
2017-11-01
Recently, a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result, the bounce turns out to be in general asymmetric, creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks, we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors, all of which lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of the physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.
Becchi-Rouet-Stora-Tyutin formalism and zero locus reduction
NASA Astrophysics Data System (ADS)
Grigoriev, M. A.; Semikhatov, A. M.; Tipunin, I. Yu.
2001-08-01
In the Becchi-Rouet-Stora-Tyutin (BRST) quantization of gauge theories, the zero locus ZQ of the BRST differential Q carries an (anti)bracket whose parity is opposite to that of the fundamental bracket. Observables of the BRST theory are in a 1:1 correspondence with Casimir functions of the bracket on ZQ. For any constrained dynamical system with the phase space N0 and the constraint surface Σ, we prove its equivalence to the constrained system on the BFV-extended phase space with the constraint surface given by ZQ. Reduction to the zero locus of the differential gives rise to relations between bracket operations and differentials arising in different complexes (the Gerstenhaber, Schouten, Berezin-Kirillov, and Sklyanin brackets); the equation ensuring the existence of a nilpotent vector field on the reduced manifold can be the classical Yang-Baxter equation. We also generalize our constructions to the bi-QP manifolds which from the BRST theory viewpoint correspond to the BRST-anti-BRST-symmetric quantization.
NASA Astrophysics Data System (ADS)
Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro
2010-02-01
In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been applied reliably to human face recognition, is utilized as the feature vector of the VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
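A simplified sketch of an adjacent-pixel intensity-difference histogram feature (the exact APIDQ quantization scheme in the paper may differ; the bin count and toy frames below are illustrative):

```python
def apidq_histogram(image, n_bins=8, max_diff=255):
    """Normalized histogram of quantized adjacent-pixel intensity differences
    (horizontal and vertical neighbours) as a frame feature vector."""
    hist = [0] * n_bins
    h, w = len(image), len(image[0])
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    d = abs(image[r][c] - image[rr][cc])
                    b = min(d * n_bins // (max_diff + 1), n_bins - 1)
                    hist[b] += 1
    total = sum(hist) or 1
    return [v / total for v in hist]

def l1_distance(h1, h2):
    """Frame similarity for the search stage: L1 distance between histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

flat = [[10, 10, 12], [11, 10, 10], [10, 12, 11]]   # smooth toy frame
edgy = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]    # high-contrast toy frame
h_flat, h_edgy = apidq_histogram(flat), apidq_histogram(edgy)
print(l1_distance(h_flat, h_flat) < l1_distance(h_flat, h_edgy))  # → True
```

Because the histogram is cheap to compute from DC coefficients, the distance can be evaluated densely along the video for active-search pruning.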
Lang, Andrew S; Taylor, Terumi A; Beatty, J Thomas
2002-11-01
The gene transfer agent (GTA) of the α-proteobacterium Rhodobacter capsulatus is a cell-controlled genetic exchange vector. Genes that encode the GTA structure are clustered in a 15-kb region of the R. capsulatus chromosome, and some of these genes show sequence similarity to known bacteriophage head and tail genes. However, the production of GTA is controlled at the level of transcription by a cellular two-component signal transduction system. This paper describes homologues of both the GTA structural gene cluster and the GTA regulatory genes in the α-proteobacteria Rhodopseudomonas palustris, Rhodobacter sphaeroides, Caulobacter crescentus, Agrobacterium tumefaciens and Brucella melitensis. These sequences were used in a phylogenetic tree approach to examine the evolutionary relationships of selected GTA proteins to these homologues and to (pro)phage proteins, which was compared to a 16S rRNA tree. The data indicate that a GTA-like element was present in a single progenitor of the extant species that contain both GTA structural cluster and regulatory gene homologues. The evolutionary relationships of GTA structural proteins to (pro)phage proteins indicated by the phylogenetic tree patterns suggest a predominantly vertical descent of GTA-like sequences in the α-proteobacteria and little past gene exchange with (pro)phages.
More on quantum groups from the quantization point of view
NASA Astrophysics Data System (ADS)
Jurčo, Branislav
1994-12-01
Star products on the classical double group of a simple Lie group and on the corresponding symplectic groupoids are given, so that the quantum double and the "quantized tangent bundle" are obtained in the deformation description. "Complex" quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further, we discuss the quantization of the Poisson structure on the symmetric algebra S(g), leading to the quantized enveloping algebra U_h(g) as an example of biquantization in the sense of Turaev. The description of U_h(g) in terms of the generators of the bicovariant differential calculus on F(G_q) is very convenient for this purpose. Finally, we interpret in the deformation framework some well-known properties of compact quantum groups as simple consequences of the corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov universal character formula is given for the unitary irreducible representations in the compact case.
Modeling shape and topology of low-resolution density maps of biological macromolecules.
De-Alarcón, Pedro A; Pascual-Montano, Alberto; Gupta, Amarnath; Carazo, Jose M
2002-01-01
In the present work we develop an efficient way of representing the geometry and topology of volumetric datasets of biological structures at medium to low resolution, aiming at storing and querying them in a database framework. We make use of a new vector quantization algorithm to select the points within the macromolecule that best approximate the probability density function of the original volume data. Connectivity among points is obtained with the use of alpha shapes theory. This novel data representation has a number of interesting characteristics: 1) it allows us to automatically segment and quantify a number of important structural features from low-resolution maps, such as cavities and channels, opening the possibility of querying large collections of maps on the basis of these quantitative structural features; 2) it provides a compact representation in terms of size; 3) it contains a subset of three-dimensional points that optimally quantify the densities of medium-resolution data; and 4) a general model of the geometry and topology of the macromolecule (as opposed to a spatially unrelated collection of voxels) is easily obtained by the use of alpha shapes theory. PMID:12124252
NASA Astrophysics Data System (ADS)
Kasamatsu, Kenichi; Sakashita, Kouhei
2018-05-01
We study numerically the structure of a vortex lattice in rotating two-component Bose-Einstein condensates with equal atomic masses and equal intra- and intercomponent coupling strengths. Numerical simulations of the Gross-Pitaevskii equation show that the quantized vortices in this situation form lattice configurations featuring vortex stripes, honeycomb lattices, and their complexes. This is a result of the degeneracy of the system under SU(2)-symmetric operations, which allows a continuous transformation between the above structures. In terms of the pseudospin representation, the complex lattice structures are identified as a hexagonal lattice of doubly winding half skyrmions.
PHYTOPLASMAS IN POME FRUIT TREES: UPDATE OF THEIR PRESENCE AND THEIR VECTORS IN BELGIUM.
G, Peusens; K, De Jonghe; I, De Roo; S, Steyer; T, Olivier; F, Fauche; F, Rys; D, Bylemans; T, Beliën
2015-01-01
Among the numerous diseases that can attack pome fruit trees, apple proliferation and pear decline, caused by the phytoplasmas 'Candidatus Phytoplasma mali' (AP) and 'Ca. P. pyri' (PD), respectively, may result in important losses in quality and quantity of the crop. Until a few years ago, no scientific and reliable data on their presence in Belgium were available, so a 2-year survey was organised to obtain more detailed information on the status of both pathogens. Root and leaf samples collected in commercial orchards were analysed using molecular detection tools and tested positive for both phytoplasmas. Additionally, the presence and infectivity of Psyllidae, the vectors of AP and PD, were assessed during this survey, but no infected Cacopsylla species were found. Lab trials revealed their vector capacity at the end of summer and in autumn, and a migration pattern of 80 m in line and 10.5 m across trees in an orchard.
Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah
2016-09-01
Brain computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activities recorded by EEG and transforms them into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and so make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared on the alpha-beta frequency bands with only mental imagination of cursor movements. First of all, power spectral density (PSD) features of the EEG signals are extracted, and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross-validation technique.
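The band-power features described can be sketched with a plain DFT periodogram (the band edges and the synthetic signal below are illustrative; a real pipeline would use Welch-style PSD estimates followed by PCA/ICA):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of a real signal within [f_lo, f_hi] Hz via a plain DFT periodogram."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

# synthetic 1-second "EEG" at 128 Hz: a 10 Hz (alpha) tone plus a weak 20 Hz (beta) tone
fs, n = 128, 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(n)]
alpha = band_power(sig, fs, 8, 13)   # alpha band power
beta = band_power(sig, fs, 14, 30)   # beta band power
print(alpha > beta)  # → True
```

Comparing such band powers between the two hemispheres gives the kind of feature vector the study feeds into PCA/ICA and the classifiers.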
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image is much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness to common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block, using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a predefined shift-frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression, or their mixed operations.
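The lexicographic-sorting stage of such block-matching forgery detectors can be sketched as follows (coarse scalar quantization stands in for the paper's DCT + SVD features; the block size, quantization step, and toy image are illustrative):

```python
def block_features(img, block=2, q=10):
    """Quantized features of all overlapping blocks (a coarse stand-in for the
    paper's DCT + SVD features), keyed for lexicographic sorting."""
    h, w = len(img), len(img[0])
    feats = []
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            vals = [img[r + i][c + j] for i in range(block) for j in range(block)]
            feats.append((tuple(v // q for v in vals), (r, c)))
    return feats

def find_duplicates(img, block=2, q=10, min_shift=1):
    """Sort features lexicographically; matching neighbours with a large enough
    spatial shift are copy-move candidates."""
    feats = sorted(block_features(img, block, q))
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2 and max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1])) >= min_shift:
            pairs.append((p1, p2))
    return pairs

# a 4x4 toy image where the top-left 2x2 patch was copied to the bottom-right
img = [[50, 60, 1, 2],
       [70, 80, 3, 9],
       [11, 3, 50, 60],
       [5, 12, 70, 80]]
print(find_duplicates(img))  # → [((0, 0), (2, 2))]
```

Sorting makes candidate matches adjacent, so the quadratic all-pairs comparison is avoided; the shift threshold suppresses trivial matches between neighbouring blocks in flat regions.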
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
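A heavily simplified sketch of the bit-allocation idea: bits grow with the log of the RF chain's SNR raised to the 1/3 power, renormalized so a total bit budget (a stand-in for the ADC power constraint) is preserved. The renormalization, rounding policy, and numbers are assumptions for illustration, not the paper's closed-form solution.

```python
import numpy as np

def allocate_bits(snr, total_bits):
    """Toy resolution-adaptive bit allocation: b_i tracks
    log2(snr_i^(1/3)), shifted so the bits sum to the budget."""
    raw = np.log2(np.asarray(snr, dtype=float) ** (1.0 / 3.0))
    bits = raw - raw.mean() + total_bits / len(snr)   # equalize the budget
    return np.clip(np.round(bits), 1, None).astype(int)

# Four RF chains with SNRs spanning three orders of magnitude
bits = allocate_bits([1.0, 10.0, 100.0, 1000.0], total_bits=16)
print(bits)   # → [2 3 5 6]
```

Stronger chains receive more ADC bits, while the total resolution (the proxy for ADC power) stays fixed.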
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility with a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
Identifying images of handwritten digits using deep learning in H2O
NASA Astrophysics Data System (ADS)
Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.
2017-11-01
Automatic digit recognition is of wide interest today, and deep learning techniques make object recognition in image data possible. Recognizing digits has become a fundamental component of many real-world applications. Since digits are written in various styles, recognizing and classifying them requires the help of machine learning methods. This work is based on supervised learning vector quantization (LVQ), a neural network classed under artificial neural networks. Images of digits are recognized, trained and tested. After the network is created, digits are trained using training dataset vectors, and testing is applied to digit images, which are separated from each other by segmenting the image and resizing the digit image accordingly for better accuracy.
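The core LVQ1 update can be sketched in a few lines: the nearest prototype moves toward a sample of the same class and away from a sample of a different class. The toy dataset, learning rate, and prototype initialization below are assumptions for illustration.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Minimal LVQ1 sketch: attract the best-matching prototype on a
    class match, repel it on a mismatch."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # best matching unit
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

# Tiny illustrative dataset: two well-separated clusters
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
P = lvq1_train(X, y, np.array([[1.0, 1.0], [4.0, 4.0]]), np.array([0, 1]))
```

After training, each prototype sits near the centroid of its class, so nearest-prototype lookup classifies new samples.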
Identification and Mapping of Tree Species in Urban Areas Using WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Mustafa, Y. T.; Habeeb, H. N.; Stein, A.; Sulaiman, F. Y.
2015-10-01
Monitoring and mapping of urban trees are essential to provide urban forestry authorities with timely and consistent information. Modern techniques increasingly facilitate these tasks, but require the development of semi-automatic tree detection and classification methods. In this article, we propose an approach to delineate and map the crowns of 15 tree species in the city of Duhok, Kurdistan Region of Iraq, using WorldView-2 (WV-2) imagery. A tree crown object is identified first and is subsequently delineated as an image object (IO) using vegetation indices and texture measurements. Next, three classification methods (Maximum Likelihood, Neural Network, and Support Vector Machine) were used to classify IOs using selected IO features. The best results are obtained with Support Vector Machine classification, which gives the best map of urban tree species in Duhok. The overall accuracy ranged from 60.93% to 88.92%, and the κ-coefficient ranged from 0.57 to 0.75. We conclude that the fifteen tree species were identified and mapped with satisfactory accuracy in the urban areas of this study.
Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.
NASA Astrophysics Data System (ADS)
Czerwinski, Michael Joseph
The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and interpretation of these quantized values for the sub-band conductances is made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone.
The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.
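As general background for the Diophantine equation mentioned above (the standard TKNN form, included here as context rather than the dissertation's own derivation): for a rational flux of $p/q$ flux quanta per unit cell, the integers attached to the $r$-th gap satisfy

```latex
% Standard TKNN Diophantine equation (background): for flux p/q,
% the r-th gap is labeled by integers (s_r, t_r) with
r = q\, s_r + p\, t_r, \qquad |t_r| \le \frac{q}{2},
% and the Hall conductance below the r-th gap is
\sigma_{xy}^{(r)} = \frac{e^2}{h}\, t_r .
```

The constraint $|t_r| \le q/2$ picks the unique integer solution, and the sub-band conductance is the difference of consecutive $t_r$ values.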
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular which make it difficult to apply existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non-canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization.
In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
Luminescence studies of HgCdTe- and InAsSb-based quantum-well structures
NASA Astrophysics Data System (ADS)
Izhnin, I. I.; Izhnin, A. I.; Fitsych, O. I.; Voitsekhovskii, A. V.; Gorn, D. I.; Semakova, A. A.; Bazhenov, N. L.; Mynbaev, K. D.; Zegrya, G. G.
2018-04-01
Results of photoluminescence studies of single-quantum-well HgCdTe-based structures and electroluminescence studies of multiple-quantum-well InAsSb-based structures are reported. The HgCdTe structures were grown with molecular beam epitaxy on GaAs substrates. The InAsSb-based structures were grown with metal-organic chemical vapor deposition on InAs substrates. The common feature of the luminescence spectra of all the structures was the presence of peaks with energies much larger than that of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The possibility of observing optical transitions between the electron quantization levels and the first and/or second heavy- and light-hole levels is discussed in relation to the specifics of the electronic structure of the materials under consideration.
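As a rough illustration of where quantization levels sit in a narrow well, the idealized infinite-square-well formula E_n = n²π²ħ²/(2m*L²) can be evaluated. The well width and effective mass below are assumptions for illustration, not parameters of the structures studied in the paper (and a real quantum well is of finite depth).

```python
import numpy as np

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_E  = 9.1093837015e-31     # electron rest mass, kg
EV   = 1.602176634e-19      # J per eV

def well_levels(width_nm, m_eff, n_max=3):
    """First n_max infinite-square-well levels, in meV."""
    L = width_nm * 1e-9
    n = np.arange(1, n_max + 1)
    E = (n * np.pi * HBAR / L) ** 2 / (2 * m_eff * M_E)   # joules
    return E / EV * 1e3                                   # meV

# 10 nm well, light effective mass typical of narrow-gap semiconductors
levels = well_levels(width_nm=10.0, m_eff=0.03)
```

The levels scale as n², so transitions involving second levels lie at markedly higher energies than the first-level transition, which is the qualitative point at issue in the abstract.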
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-08-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.
Electrical and thermal conductance quantization in nanostructures
NASA Astrophysics Data System (ADS)
Nawrocki, Waldemar
2008-10-01
In the paper, problems of electron transport in mesoscopic structures and nanostructures are considered. The electrical conductance of nanowires was measured in a simple experimental system. Investigations have been performed in air at room temperature by measuring the conductance between two vibrating metal wires with a standard oscilloscope. Conductance quantization in units of G0 = 2e²/h = (12.9 kΩ)⁻¹, up to five quanta of conductance, has been observed for nanowires formed in many metals. The explanation of this universal phenomenon is the formation of a nanometer-sized wire (nanowire) between macroscopic metallic contacts, which induces, according to the theory proposed by Landauer, the quantization of conductance. Thermal problems in nanowires are also discussed in the paper.
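The quoted resistance quantum follows directly from the CODATA constants; a quick check:

```python
# The conductance quantum G0 = 2e^2/h and its inverse resistance,
# matching the value quoted in the abstract (about 12.9 kOhm).
E = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s

G0 = 2 * E**2 / H          # siemens
R0 = 1 / G0                # ohms
print(round(R0 / 1e3, 1))  # → 12.9 (kOhm)
```

R0 is half the von Klitzing constant, since each spin-degenerate conduction channel contributes 2e²/h.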
USDA-ARS?s Scientific Manuscript database
Bluestain (ophiostomatoid) fungi are vectored to trees via bark beetle activity, but their ecological roles are not fully understood. Hypotheses range from fungi as harmless hitchhikers to integral mutualists aiding beetles in overwhelming tree defenses. Recently, correlational field studies and sma...
Jin, Mingwu; Deng, Weishu
2018-05-15
There is a spectrum of the progression from healthy control (HC) to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), and to AD. This study aims to predict the different disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. The neighborhood component analysis (NCA) is applied to select the most powerful features for prediction. An ensemble decision tree classifier is built to predict which group the subject belongs to. The best features and model parameters are determined by cross validation of the training data. Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features can achieve a prediction accuracy of 56.25% on 160 test subjects. For comparison, principal component analysis (PCA) and sequential feature selection (SFS) are used for feature selection, while a support vector machine (SVM) is used for classification. The boosting tree model with NCA features outperforms all other combinations of feature selection and classification methods. The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study, and that an ensemble tree classifier with boosting is more powerful than SVM at predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve the prediction performance. Copyright © 2018 Elsevier B.V. All rights reserved.
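The NCA-then-boosted-trees pipeline can be sketched with scikit-learn. The synthetic dataset, the use of NCA component weights as a feature-importance proxy, and the cutoff of three kept features are all illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for the MRI-derived features: 10 features,
# of which only the first two carry class information.
rng = np.random.default_rng(1)
n, d, d_keep = 200, 10, 3
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Rank features by the magnitude of the learned NCA transformation
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0).fit(X, y)
weights = np.abs(nca.components_).sum(axis=0)    # per-feature weight
keep = np.argsort(weights)[-d_keep:]             # strongest features

# Boosted decision-tree ensemble on the selected features
clf = GradientBoostingClassifier(random_state=0).fit(X[:, keep], y)
acc = clf.score(X[:, keep], y)
```

In practice the feature count and model parameters would be chosen by cross validation, as the abstract describes.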
Harwood, James F; Farooq, Muhammad; Turnwall, Brent T; Richardson, Alec G
2015-07-01
The principal vectors of chikungunya and dengue viruses typically oviposit in water-filled artificial and natural containers, including tree holes. Despite the risk these and similar tree hole-inhabiting mosquitoes present to global public health, surprisingly few studies have been conducted to determine an efficient method of applying larvicides specifically to tree holes. The Stihl SR 450, a backpack sprayer commonly utilized during military and civilian vector control operations, may be suitable for controlling larval tree-hole mosquitoes, as it is capable of delivering broadcast applications of granular and liquid dispersible formulations of Bacillus thuringiensis var. israelensis (Bti) to a large area relatively quickly. We compared the application effectiveness of two granular (AllPro Sustain MGB and VectoBac GR) and two liquid (Aquabac XT and VectoBac WDG) formulations of Bti in containers placed on bare ground, placed beneath vegetative cover, and hung 1.5 or 3 m above the ground to simulate tree holes. Aedes aegypti (L.) larval mortality and Bti droplet and granule density data (when appropriate) were recorded for each formulation. Overall, granular formulations of Bti resulted in higher mortality rates in the simulated tree-hole habitats, whereas applications of granular and liquid formulations resulted in similar levels of larval mortality in containers placed on the ground in the open and beneath vegetation. Published by Oxford University Press on behalf of Entomological Society of America 2015. This work is written by US Government employees and is in the public domain in the US.
The canonical quantization of chaotic maps on the torus
NASA Astrophysics Data System (ADS)
Rubin, Ron Shai
In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ħ → 0, the classical dynamics is recovered. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ⁺, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator.
We present this connection explicitly for several examples.
Senthamarai Selvan, P; Jebanesan, A; Reetha, D
2016-07-01
The distribution and abundance of various mosquito vectors are important in determining disease prevalence in endemic areas. The aim of the present study was to conduct regular entomological surveillance and to determine the relative abundance of tree-hole mosquito species in Tamilnadu, India. In addition, the impact of weather conditions on tree-hole mosquito populations was evaluated between June 2014 and May 2015. Six hill ranges, viz. the Anaimalai, Kodaikanal, Sitheri, Kolli, Yercaud and Megamalai hills, were selected, and immatures were collected from tree holes with the help of a suction tube. Collections were made at dusk and dawn at 15 randomly selected tree species. The collected samples were stored and morphologically identified to species level in the laboratory. Mosquito diversity was calculated using Simpson's and Shannon-Weiner diversity indices with spatial and temporal aspects. Over 2642 mosquitoes, comprising the primary vectors of dengue, chikungunya, malaria and filariasis, were identified. Other species collected from the fifteen sites in each hill during the study included Christophersiomyia annularis, Christophersiomyia thomsoni, Downsiomyia albolateralis, Downsiomyia nivea and Toxorhynchites splendens, among others. The study revealed high species diversity and relative density associated with the different study sites. Based on the Shannon diversity index, the highest value was recorded for Aedes pseudoalbopicta (0.0829), followed by Ae. aegypti (0.0805), and the lowest for Anopheles elegans (0.0059). The wide distribution of the primary vectors of dengue fever was evident, with most study sites harbouring proportions of this vector population. This shows the high risk level associated with livestock movement in the amplification and circulation of the virus during outbreaks.
The findings of this study, therefore, demonstrated the potential vulnerability of nomadic communities to infection by arboviral diseases transmitted by mosquito vectors. Copyright © 2016 Elsevier B.V. All rights reserved.
Fedosov Deformation Quantization as a BRST Theory
NASA Astrophysics Data System (ADS)
Grigoriev, M. A.; Lyakhovich, S. L.
The relationship is established between the Fedosov deformation quantization of a general symplectic manifold and the BFV-BRST quantization of constrained dynamical systems. The original symplectic manifold M is presented as a second class constrained surface in the fibre bundle T*ρM which is a certain modification of a usual cotangent bundle equipped with a natural symplectic structure. The second class system is converted into the first class one by continuation of the constraints into the extended manifold, being a direct sum of T*ρM and the tangent bundle TM. This extended manifold is equipped with a nontrivial Poisson bracket which naturally involves two basic ingredients of Fedosov geometry: the symplectic structure and the symplectic connection. The constructed first class constrained theory, being equivalent to the original symplectic manifold, is quantized through the BFV-BRST procedure. The existence theorem is proven for the quantum BRST charge and the quantum BRST invariant observables. The adjoint action of the quantum BRST charge is identified with the Abelian Fedosov connection while any observable, being proven to be a unique BRST invariant continuation for the values defined in the original symplectic manifold, is identified with the Fedosov flat section of the Weyl bundle. The Fedosov fibrewise star multiplication is thus recognized as a conventional product of the quantum BRST invariant observables.
Hierarchical sequencing of online social graphs
NASA Astrophysics Data System (ADS)
Andjelković, Miroslav; Tadić, Bosiljka; Maletić, Slobodan; Rajković, Milan
2015-10-01
In online communications, patterns of conduct of individual actors and the use of emotions in the process can lead to a complex social graph exhibiting multilayered structure and mesoscopic communities. Using a simplicial-complex representation of graphs, we investigate the in-depth topology of the online social network constructed from MySpace dialogs, which exhibits an original community structure. A simulation of emotion spreading in this network leads to the identification of two emotion-propagating layers. Three topological measures are introduced, referred to as structure vectors, which quantify the graph's architecture at different dimension levels. Notably, structures emerging through shared links, triangles and tetrahedral faces occur frequently, ranging from tree-like forms to maximal 5-cliques and their respective complexes. On the other hand, the structures which spread only negative or only positive emotion messages appear to have much simpler topology, consisting of links and triangles. The node's structure vector represents the number of simplices at each topology level in which the node resides, and the total number of such simplices determines what we define as the node's topological dimension. The presented results suggest that the node's topological dimension provides a suitable measure of social capital, quantifying the actor's ability to act as a broker in compact communities (the so-called Simmelian brokerage). We also generalize the results to a wider class of computer-generated networks. Investigating the components of the node's vector over network layers reveals that the same nodes develop different socio-emotional relations and that influential nodes build social capital by combining their connections in different layers.
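The node's topological dimension, as described above, can be sketched by counting the simplices (cliques of size two and up) containing each node. The complete-graph example is illustrative; the real computation would run on the MySpace dialog graph.

```python
import networkx as nx

def node_topological_dimension(G):
    """Total number of simplices (edges, triangles, tetrahedra, ...)
    in which each node resides -- a sketch of the abstract's measure."""
    dim = {v: 0 for v in G}
    for clique in nx.enumerate_all_cliques(G):
        if len(clique) >= 2:        # skip single vertices
            for v in clique:
                dim[v] += 1
    return dim

# A maximal 4-clique: each node lies in 3 edges, 3 triangles and
# 1 tetrahedron, so its topological dimension is 7.
G = nx.complete_graph(4)
dim = node_topological_dimension(G)
```

Clique enumeration is exponential in the worst case, so for large social graphs one would restrict the maximum clique size considered.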
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution whose mean, λ, is computed from a sample of the input image. Laplacian-distributed random numbers with mean λ are generated with a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean, λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
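The codebook-generation procedure can be sketched as follows. The Laplacian sampling and DCT filtering follow the abstract; the specific weight profile is a placeholder assumption, since the paper's HVS weight matrix is not given here.

```python
import numpy as np
from scipy.fft import dct, idct

def mvq_codebook(lam, n_vectors=256, dim=16, seed=0):
    """Sketch of model-based VQ codebook generation: draw Laplacian
    vectors with scale lam (estimated from the image), weight their
    DCT coefficients, and invert the transform."""
    rng = np.random.default_rng(seed)
    # Laplacian samples via inverse-transform sampling of uniforms
    u = rng.uniform(-0.5, 0.5, size=(n_vectors, dim))
    vecs = -lam * np.sign(u) * np.log1p(-2.0 * np.abs(u))
    # Condition the vectors perceptually in the DCT domain;
    # this low-pass profile stands in for the HVS weight matrix.
    w = 1.0 / (1.0 + np.arange(dim))
    coeffs = dct(vecs, norm='ortho', axis=1) * w
    return idct(coeffs, norm='ortho', axis=1)

# Codebook for an image whose sampled error statistics give lam = 4.0;
# the decoder regenerates the same codebook from lam and the seed.
book = mvq_codebook(lam=4.0)
```

Because the codebook depends only on λ (and the generator seed), the decoder can reproduce it exactly without transmitting codevectors, which is the point of MVQ.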
Approaches to control diseases vectored by ambrosia beetles in avocado and other American Lauraceae
USDA-ARS?s Scientific Manuscript database
Invasive ambrosia beetles and the plant pathogenic fungi they vector represent a significant challenge to North American agriculture, native and landscape trees. Ambrosia beetles encompass a range of insect species and they vector a diverse set of plant pathogenic fungi. Our lab has taken several bi...
Is it worth changing pattern recognition methods for structural health monitoring?
NASA Astrophysics Data System (ADS)
Bull, L. A.; Worden, K.; Cross, E. J.; Dervilis, N.
2017-05-01
The key element of this work is to demonstrate alternative strategies for using pattern recognition algorithms in structural health monitoring. This paper looks to determine whether the choice among a range of established classification techniques (from decision trees and support vector machines to Gaussian processes) makes any difference. Classification algorithms are tested on adjustable synthetic data to establish performance metrics, then all techniques are applied to real SHM data. To aid the selection of training data, an informative chain of artificial intelligence tools is used to explore an active learning interaction between meaningful clusters of data.
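The comparison described above can be sketched as a cross-validated benchmark over the three classifier families. The synthetic dataset parameters (sample count, `class_sep`) are illustrative stand-ins for the paper's adjustable SHM data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier

# Adjustable synthetic data standing in for SHM features; class overlap
# is controlled via class_sep.
X, y = make_classification(n_samples=300, n_features=6, n_informative=4,
                           class_sep=1.5, random_state=0)

models = {
    'decision tree': DecisionTreeClassifier(random_state=0),
    'SVM (rbf)': SVC(),
    'Gaussian process': GaussianProcessClassifier(random_state=0),
}
# Mean 5-fold cross-validation accuracy per technique
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

Sweeping `class_sep` (or the noise level) and re-running the benchmark shows whether the ranking of techniques is stable as the data become harder, which is the question the paper poses.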
J.N. Gibbs; D.W. French
1980-01-01
Provides an up-to-date review of factors affecting the transmission of oak wilt, Ceratocystis fagacearum. Discusses the history and severity of the disease, the saprophytic existence of the fungus in the dying tree, seasonal susceptibility of trees to infection, overland and underground spread, the role of animals and insects as vectors or tree wounders, and the...
M. Lake Maner; James Hanula; Scott Horn
2014-01-01
The redbay ambrosia beetle, Xyleborus glabratus Eichhoff, vectors laurel wilt, Raffaelea lauricola T.C. Harr., Fraedrich & Aghayeva, that quickly kills all large diam (> 2.5cm) redbay trees [Persea borbonia (L.) Sprengel] in an area but smaller diam trees (
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Ueno, Junji; Mori, Kensaku
2017-02-01
Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based mainly on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance tube-like structures in the CT volume; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures with different radii. In the second step, support vector machine learning is utilized to remove the false positive (FP) regions from the result obtained in the previous step. Finally, the graph-cut algorithm is used to refine the candidate voxels to form an integrated airway tree. A test dataset including 50 standard-dose chest CT volumes was used for evaluating the proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. A new method of airway segmentation based on local intensity structure and machine learning techniques was developed. The method was shown to be feasible for airway segmentation in a computer-aided diagnosis system for the lung and in a bronchoscope guidance system.
The research of "blind" spot in the LVQ network
NASA Astrophysics Data System (ADS)
Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli
2017-04-01
Competitive neural networks are now widely used in pattern recognition, classification and other areas, and show great advantages over traditional clustering methods. However, competitive neural networks are still inadequate in many respects and need further improvement. Based on the Learning Vector Quantization network proposed by Kohonen [1], this paper resolves the issue of large training error when there are "blind" spots in a network, through the introduction of threshold-value learning rules, and finally implements the method in Matlab.
Research on conceptual/innovative design for the life cycle
NASA Technical Reports Server (NTRS)
Cagan, Jonathan; Agogino, Alice M.
1990-01-01
The goal of this research is to develop and integrate qualitative and quantitative methods for life cycle design. The problem is defined by three observations: formal computer-based methods are limited to the final detailing stages of design; CAD databases do not capture design intent or design history; and life cycle issues are ignored during the early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; a multistart vector quantization optimization algorithm; intelligent manufacturing (IDES, an influence diagram architecture); and 1st PRINCE (FIRST PRINciple Computational Evaluator).
Sn nanothreads in GaAs: experiment and simulation
NASA Astrophysics Data System (ADS)
Semenikhin, I.; Vyurkov, V.; Bugaev, A.; Khabibullin, R.; Ponomarev, D.; Yachmenev, A.; Maltsev, P.; Ryzhii, M.; Otsuji, T.; Ryzhii, V.
2016-12-01
Gated GaAs structures resembling a field-effect transistor with an array of Sn nanothreads were fabricated via delta-doping of a vicinal GaAs surface with Sn atoms and subsequent regrowth, which results in the formation of chains of Sn atoms at the terrace edges. Two device models were developed. The quantum model accounts for quantization of the electron energy spectrum in the self-consistent two-dimensional electric potential, and the electron density distribution in the nanothread arrays is calculated for different gate voltages. The classical model ignores quantization; electrons are distributed in space according to the 3D density of states and Fermi-Dirac statistics. It turned out that both models demonstrate qualitatively similar behavior; nevertheless, the classical one is in better quantitative agreement with the experimental data. Plausibly, quantization can be ignored because the Sn atoms are randomly placed along the thread axis. Terahertz hot-electron bolometers (HEBs) could be based on the structure under consideration.
Introduction to quantized Lie groups and algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjin, T.
1992-10-10
In this paper, the authors give a self-contained introduction to the theory of quantum groups according to Drinfeld, highlighting the formal aspects as well as the applications to the Yang-Baxter equation and representation theory. Introductions to Hopf algebras, Poisson structures and deformation quantization are also provided. After defining Poisson Lie groups the authors study their relation to Lie bialgebras and the classical Yang-Baxter equation. Then the authors explain in detail the concept of quantization for them. As an example the quantization of sl(2) is explicitly carried out. Next, the authors show how quantum groups are related to the Yang-Baxter equation and how they can be used to solve it. Using the quantum double construction, the authors explicitly construct the universal R matrix for the quantum sl(2) algebra. In the last section, the authors deduce all finite-dimensional irreducible representations for q a root of unity. The authors also give their tensor product decomposition (fusion rules), which is relevant to conformal field theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Liang; Yang, Yi; Harley, Ronald Gordon
A system is for a plurality of different electric load types. The system includes a plurality of sensors structured to sense a voltage signal and a current signal for each of the different electric loads; and a processor. The processor acquires a voltage and current waveform from the sensors for a corresponding one of the different electric load types; calculates a power or current RMS profile of the waveform; quantizes the power or current RMS profile into a set of quantized state-values; evaluates a state-duration for each of the quantized state-values; evaluates a plurality of state-types based on the power or current RMS profile and the quantized state-values; generates a state-sequence that describes a corresponding finite state machine model of a generalized load start-up or transient profile for the corresponding electric load type; and identifies the corresponding electric load type.
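The quantization and state-sequence steps described above can be sketched in a few lines. This is a minimal illustration under assumed details (uniform quantization levels, a synthetic start-up profile); the patent's actual state-typing logic is richer:

```python
import numpy as np

def quantize_profile(rms, n_levels=4):
    """Uniformly quantize an RMS profile into discrete state-values."""
    lo, hi = rms.min(), rms.max()
    step = (hi - lo) / n_levels
    return np.minimum(((rms - lo) / step).astype(int), n_levels - 1)

def state_sequence(states):
    """Collapse a per-sample state array into (state, duration) pairs,
    i.e. the transitions of a finite state machine model of the load."""
    seq = []
    for s in states:
        if seq and seq[-1][0] == s:
            seq[-1] = (s, seq[-1][1] + 1)
        else:
            seq.append((s, 1))
    return seq

# Synthetic start-up profile: off, inrush spike, then steady state.
rms = np.array([0.0, 0.0, 8.0, 8.0, 8.0, 3.0, 3.0, 3.0, 3.0, 3.0])
states = quantize_profile(rms, n_levels=4)
print(state_sequence(states))  # e.g. [(0, 2), (3, 3), (1, 5)]
```

Different load types then yield different characteristic state sequences, which is what the identification step compares.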
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix, making them infeasible even for medium-sized data sets. The contribution of this article is twofold: on the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This results in methods with linear time and space complexity. We evaluate the techniques on three examples from the biomedical domain.
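The Nyström approximation replaces the full n x n (dis)similarity matrix with a product built from m landmark columns, so only an n x m slice ever needs to be stored. A minimal numpy sketch on a synthetic low-rank Gram matrix (the landmark count and data are illustrative; the paper applies this to dissimilarity matrices inside LVQ/GTM):

```python
import numpy as np

def nystroem(K_nm, K_mm):
    """Nystroem approximation: K ~= K_nm @ pinv(K_mm) @ K_nm.T,
    where K_nm holds m landmark columns of the full n x n matrix."""
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(0)
F = rng.normal(size=(200, 2))
K = F @ F.T                      # rank-2 Gram matrix: few landmarks suffice
landmarks = rng.choice(200, size=10, replace=False)
K_approx = nystroem(K[:, landmarks], K[np.ix_(landmarks, landmarks)])
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(err < 1e-8)   # exact (up to rounding) when rank <= #landmarks
```

In the relational setting the approximated matrix is never formed explicitly; distances to prototypes are evaluated through the factors, which is what yields linear time and space.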
New vertices and canonical quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Sergei
2010-07-15
We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the Engle-Pereira-Rovelli-Livine model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to loop quantum gravity. Second, we however argue that the quantization procedure used to derive the new models is inconsistent since it relies on the symplectic structure of the unconstrained BF theory.
The Casalbuoni-Brink-Schwarz superparticle with covariant, reducible constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-04-30
This paper discusses the fermionic constraints of the massless Casalbuoni-Brink-Schwarz superparticle in d = 10, which are separated covariantly into first- and second-class constraints that are infinitely reducible. Although the reducibility conditions of the second-class constraints include the first-class ones, a consistent quantization is possible. The ghost structure of the system for quantizing it with the BFV-BRST methods is given, and unitarity is shown.
Urbano, Plutarco; Poveda, Cristina; Molina, Jorge
2015-04-01
Rhodnius prolixus Stål, 1859 is one of the main vectors of Trypanosoma (Schyzotrypanum) cruzi Chagas, 1909. In its natural forest environment, this triatomine is mainly found in palm tree crowns, where it easily establishes and develops dense populations. The aim of this study was to evaluate the effect of the physiognomy and reproductive status of Attalea butyracea on the population relative density and age structure of R. prolixus and to determine the vector's population stratification according to the vertical and horizontal profile of an A. butyracea forest. Using live bait traps, 150 individuals of A. butyracea with different physiognomy and 40 individuals with similar physiognomy (crown size, number of leaves, palm tree height, diameter at breast height, reproductive status) were sampled for triatomines in Yopal, Casanare-Colombia. Temperature and relative humidity were measured in the crown of the palm tree. Entomological indices and natural infection rates were also determined. The relative population density of R. prolixus in natural A. butyracea groves is associated with the palm's height, number of leaves and crown volume. The young immature stages were present mostly at the crown's base, while the advanced immature stages and adults were present mostly at the crown of the palm tree. This distribution correlates with the temperature stability and relative humidity at the base and the fluctuation of both environmental variables in the palm's crown. A higher density of R. prolixus was found as the palm tree height increased and as the distance of the palm from the forest border decreased, especially towards anthropically intervened areas. A density index of 12.6 individuals per palm tree was observed, with an infestation index of 88.9% and a colonization index of 98.7%. The infection index with T. cruzi was 85.2%. The physiognomy of palm trees affects the relative population density and the distribution of developmental stages of R. prolixus.
Therefore, they constitute a risk factor for the potential migration of infected insects from wild environments towards residential environments and the subsequent epidemiological risk of transmission of T. cruzi to people.
NASA Astrophysics Data System (ADS)
Adelabu, Samuel; Mutanga, Onisimo; Adam, Elhadi; Cho, Moses Azong
2013-01-01
Classification of different tree species in semiarid areas can be challenging as a result of changes in leaf structure and orientation due to soil moisture constraints. Tree species mapping is, however, a key parameter for forest management in semiarid environments. In this study, we examined the suitability of 5-band RapidEye satellite data for the classification of five tree species in mopane woodland of Botswana using machine learning algorithms with limited training samples. We performed classification using random forest (RF) and support vector machines (SVM) based on the EnMAP-Box. The overall accuracies for classifying the five tree species were 88.75% and 85% for SVM and RF, respectively. We also demonstrated that the new red-edge band in the RapidEye sensor has potential for classifying tree species in semiarid environments when integrated with other standard bands. Similarly, we observed that where training samples are limited, SVM is preferred over RF. Finally, we demonstrated that the two accuracy measures of quantity and allocation disagreement are simpler and more helpful for the vast majority of remote sensing classification processes than the kappa coefficient. Overall, high species classification accuracy can be achieved using strategically located RapidEye bands integrated with advanced processing algorithms.
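Quantity and allocation disagreement (in the sense of Pontius and Millones) decompose total classification disagreement from a confusion matrix: quantity disagreement measures mismatched class proportions, allocation disagreement measures mismatched spatial assignment. A small numpy sketch, with an illustrative confusion matrix:

```python
import numpy as np

def quantity_allocation_disagreement(cm):
    """Quantity and allocation disagreement from a raw confusion
    matrix (rows: reference classes, columns: predicted classes)."""
    p = cm / cm.sum()                       # matrix of proportions
    row, col, diag = p.sum(1), p.sum(0), np.diag(p)
    quantity = 0.5 * np.abs(row - col).sum()
    allocation = np.minimum(row - diag, col - diag).sum()
    return quantity, allocation

cm = np.array([[40, 5, 5],
               [4, 42, 4],
               [1, 3, 46]])
q, a = quantity_allocation_disagreement(cm)
overall_accuracy = np.trace(cm) / cm.sum()
# The two components sum to the total disagreement, 1 - accuracy.
print(round(q + a + overall_accuracy, 10))  # → 1.0
```

This additive decomposition is the property that makes the pair easier to interpret than a single kappa value.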
NASA Technical Reports Server (NTRS)
Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri
2004-01-01
Support Vector Machines (SVMs) are a type of supervised learning algorithm; other examples include Artificial Neural Networks (ANNs), decision trees, and naive Bayesian classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor', typically a human 'expert'.
NASA Astrophysics Data System (ADS)
Landsman, N. P. Klaas
2016-09-01
We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which renders the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
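Quantization-index modulation embeds one bit per selected coefficient by rounding it to one of two interleaved quantization lattices; decoding picks the lattice with the nearer reconstruction point. A minimal numpy sketch of scalar QIM (the step size, host data, and function names are illustrative; the paper applies QIM to selected wavelet coefficients grouped per tree or block):

```python
import numpy as np

def qim_embed(coeffs, bits, delta=4.0):
    """Embed one bit per coefficient: round to the lattice with
    offset 0 (bit 0) or delta/2 (bit 1)."""
    dither = bits * (delta / 2)
    return np.round((coeffs - dither) / delta) * delta + dither

def qim_extract(coeffs, delta=4.0):
    """Decode by choosing the lattice whose point is closer."""
    d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
    d1 = np.abs(coeffs - (np.round((coeffs - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(0, 10, 64)        # stand-in for selected wavelet coefficients
bits = rng.integers(0, 2, 64)
marked = qim_embed(host, bits)
# Exact round trip without attack; perturbations below delta/4 are also tolerated.
print((qim_extract(marked) == bits).all())  # → True
```

Larger delta increases robustness at the cost of embedding distortion; the sparse schemes in the paper trade capacity for robustness by marking only a subset of coefficients.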
NASA Astrophysics Data System (ADS)
Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.
2015-06-01
Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively.
Spread of plant pathogens and insect vectors at the northern range margin of cypress in Italy
NASA Astrophysics Data System (ADS)
Zocca, Alessia; Zanini, Corrado; Aimi, Andrea; Frigimelica, Gabriella; La Porta, Nicola; Battisti, Andrea
2008-05-01
The Mediterranean cypress ( Cupressus sempervirens) is a multi-purpose tree widely used in the Mediterranean region. An anthropogenic range expansion of cypress has taken place at the northern margin of the range in Italy in recent decades, driven by ornamental planting in spite of climatic constraints imposed by low winter temperature. The expansion has created new habitats for pathogens and pests, which strongly limit tree survival in the historical (core) part of the range. Based on the enemy release hypothesis, we predicted that damage should be lower in the expansion area. By comparing tree and seed cone damage by pathogens and pests in core and expansion areas of Trentino, a district in the southern Alps, we showed that tree damage was significantly higher in the core area. Seed cones of C. sempervirens are intensively colonized by an aggressive and specific pathogen (the canker fungus Seiridium cardinale, Coelomycetes), associated with seed insect vectors Megastigmus wachtli (Hymenoptera Torymidae) and Orsillus maculatus (Heteroptera Lygaeidae). In contrast, we observed lower tree damage in the expansion area, where a non-aggressive fungus ( Pestalotiopsis funerea, Coelomycetes) was more frequently associated with the same insect vectors. Our results indicate that both insect species have a great potential to reach the range margin, representing a continuous threat of the arrival of fungal pathogens to trees planted at extreme sites. Global warming may accelerate this process since both insects and fungi profit from increased temperature. In the future, cypress planted at the range margin may then face similar pest and pathogen threats as in the historical range.
Smits, Samuel A; Ouverney, Cleber C
2010-08-18
Many software packages have been developed to address the need for generating phylogenetic trees intended for print. With an increased use of the web to disseminate scientific literature, there is a need for phylogenetic trees to be viewable across many types of devices and feature some of the interactive elements that are integral to the browsing experience. We propose a novel approach for publishing interactive phylogenetic trees. We present a javascript library, jsPhyloSVG, which facilitates constructing interactive phylogenetic trees from raw Newick or phyloXML formats directly within the browser in Scalable Vector Graphics (SVG) format. It is designed to work across all major browsers and renders an alternative format for those browsers that do not support SVG. The library provides tools for building rectangular and circular phylograms with integrated charting. Interactive features may be integrated and made to respond to events such as clicks on any element of the tree, including labels. jsPhyloSVG is an open-source solution for rendering dynamic phylogenetic trees. It is capable of generating complex and interactive phylogenetic trees across all major browsers without the need for plugins. It is novel in supporting the ability to interpret the tree inference formats directly, exposing the underlying markup to data-mining services. The library source code, extensive documentation and live examples are freely accessible at www.jsphylosvg.com.
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.
TBA-like integral equations from quantized mirror curves
NASA Astrophysics Data System (ADS)
Okuyama, Kazumi; Zakany, Szabolcs
2016-03-01
Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.
SVM-based tree-type neural networks as a critic in adaptive critic designs for control.
Deb, Alok Kanti; Jayadeva; Gopal, Madan; Chandra, Suresh
2007-07-01
In this paper, we use the approach of adaptive critic design (ACD) for control, specifically the action-dependent heuristic dynamic programming (ADHDP) method. A least squares support vector machine (SVM) regressor is used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic. After a failure occurs, the critic and action networks are retrained in tandem using the failure data. Failure data is binary classification data in which the number of failure states is very small compared to the number of no-failure states. The difficulty conventional multilayer feedforward NNs have in learning this type of classification data is overcome by the SVM-based tree-type NN, which, owing to its ability to add neurons to learn misclassified data, can learn any binary classification data without an a priori choice of the number of neurons or the structure of the network. The capability of the trained controller to handle unforeseen situations is demonstrated.
Face biometrics with renewable templates
NASA Astrophysics Data System (ADS)
van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei
2006-02-01
In recent literature, privacy protection technologies for biometric templates have been proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS, leading to a privacy-protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
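The binary quantization step in a reliable-component HDS can be sketched as follows: binarize each feature by comparing it to the population mean, and keep as helper data the indices of the components whose enrolment mean lies farthest from that mean (and is therefore least likely to flip under measurement noise). This is a generic sketch of the idea, not the paper's exact scheme; all names, dimensions, and noise levels are illustrative:

```python
import numpy as np

def enroll(samples, pop_mean, k):
    """Select the k most reliable components (enrolment mean farthest
    from the population mean) and binarize by sign. Returns helper
    data (component indices) and the reference bit string."""
    mu = samples.mean(axis=0)
    reliable = np.argsort(-np.abs(mu - pop_mean))[:k]
    bits = (mu[reliable] > pop_mean[reliable]).astype(int)
    return reliable, bits

def verify(sample, pop_mean, reliable):
    """Re-quantize a fresh measurement using the stored helper data."""
    return (sample[reliable] > pop_mean[reliable]).astype(int)

rng = np.random.default_rng(0)
pop_mean = np.zeros(32)
user_mean = rng.normal(0, 1, 32)            # this user's true feature means
enrol_samples = user_mean + rng.normal(0, 0.1, (5, 32))
helper, ref_bits = enroll(enrol_samples, pop_mean, k=16)
probe = user_mean + rng.normal(0, 0.1, 32)  # fresh measurement, same user
print((verify(probe, pop_mean, helper) == ref_bits).mean())
```

Renewability comes from varying which components (or which random projections of them) are selected; only the helper data, not the reference bits, needs to be stored in the clear in a full HDS.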
Detecting double compression of audio signal
NASA Astrophysics Data System (ADS)
Yang, Rui; Shi, Yun Q.; Huang, Jiwu
2010-01-01
MP3 is currently the most widely used audio format: music downloaded from the Internet and files saved by digital recorders are often MP3s. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files have greater commercial value, and audio recordings from digital recorders can easily be doctored with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which is essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed from the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this work is the first to detect double compression of audio signals.
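The feature vector here is the empirical distribution of leading digits (a Benford-style statistic) of the quantized coefficients; requantization disturbs this distribution in a detectable way. A minimal numpy sketch of the feature extraction, using synthetic Laplacian data as a stand-in for quantized MDCT coefficients (the classifier itself is omitted):

```python
import numpy as np

def first_digit_histogram(coeffs):
    """Normalized distribution of the leading digits 1-9 of the
    nonzero coefficients, usable as an SVM feature vector."""
    c = np.abs(coeffs[coeffs != 0]).astype(float)
    # shift each magnitude into [1, 10) and truncate to its first digit
    digits = (c / 10.0 ** np.floor(np.log10(c))).astype(int)
    hist = np.array([(digits == d).sum() for d in range(1, 10)], float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
coeffs = np.round(rng.laplace(0, 50, 5000))   # stand-in for quantized MDCT
f = first_digit_histogram(coeffs)
print(f.round(3))   # 9-bin feature vector, sums to 1
```

Single-compressed audio closely follows the expected smooth first-digit law, while double compression introduces characteristic deviations that the SVM learns to separate.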
LDFT-based watermarking resilient to local desynchronization attacks.
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong
2013-12-01
Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariant rather than locally invariant. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, the binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transform, local transform, and cropping. Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation watermark embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
Hamiltonian and Thermodynamic Modeling of Quantum Turbulence
NASA Astrophysics Data System (ADS)
Grmela, Miroslav
2010-10-01
The state variables in the novel model introduced in this paper are the fields playing this role in the classical Landau-Tisza model, together with additional fields of mass, entropy (or temperature), superfluid velocity, and the gradient of the superfluid velocity, all depending on the position vector and on another three-dimensional vector labeling the scale, describing the small-scale structure developed in 4He superfluid experiencing turbulent motion. The fluxes of mass, momentum, energy, and entropy in the position space, as well as the fluxes of energy and entropy in scales, appear in the time evolution equations as explicit functions of the state variables and of their conjugates. The fundamental thermodynamic relation relating the fields to their conjugates is left undetermined in this paper. The GENERIC structure of the equations serves two purposes: (i) it guarantees that solutions to the governing equations, independently of the choice of the fundamental thermodynamic relation, agree with the observed compatibility with thermodynamics, and (ii) it is used as a guide in the construction of the novel model.
Topological vortices in gauge models of graphene
NASA Astrophysics Data System (ADS)
Zhang, Xin-Hui; Li, Xueqin; Hao, Jin-Bo
2018-06-01
A graphene-like structure possessing topological vortices and knots, with the magnetic flux of the vortex configuration quantized, is proposed in this paper. The topological charges of the vortices are characterized by Hopf indices and Brouwer degrees. The Abelian background field (BF) action is a topological invariant for the knot family, which is just the total sum of all the self-linking numbers and all the linking numbers. Flux quantization opens the possibility of Aharonov-Bohm-type effects in graphene without an external electromagnetic field.
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Emeljanov, A. E.; Ivashin, A. L.
Theoretical bases for modeling a digital control system with information transfer via a multiple-access channel and a regular quantization cycle are presented. The theory of dynamic systems with random changes of structure, including elements of the theory of Markov random processes, is used for a mathematical description of a network control system. The characteristics of such control systems are obtained, and experimental research on these control systems is carried out.
Dirac’s magnetic monopole and the Kontsevich star product
NASA Astrophysics Data System (ADS)
Soloviev, M. A.
2018-03-01
We examine relationships between various quantization schemes for an electrically charged particle in the field of a magnetic monopole. Quantization maps are defined in invariant geometrical terms, appropriate to the case of nontrivial topology, and are constructed for two operator representations. In the first setting, the quantum operators act on the Hilbert space of sections of a nontrivial complex line bundle associated with the Hopf bundle, whereas the second approach uses instead a quaternionic Hilbert module of sections of a trivial quaternionic line bundle. We show that these two quantizations are naturally related by a bundle morphism and, as a consequence, induce the same phase-space star product. We obtain explicit expressions for the integral kernels of star products corresponding to various operator orderings and calculate their asymptotic expansions up to the third order in the Planck constant ℏ. We also show that the differential form of the magnetic Weyl product corresponding to the symmetric ordering agrees completely with the Kontsevich formula for deformation quantization of Poisson structures and can be represented by Kontsevich's graphs.
Perspectives of Light-Front Quantized Field Theory: Some New Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Prem P.
1999-08-13
A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is equally appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of the QCD in covariant gauges among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x{sup {+-}} = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.
Generalized noise terms for the quantized fluctuational electrodynamics
NASA Astrophysics Data System (ADS)
Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani
2017-03-01
The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that commutes with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, tuning the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter exceed the emissivity of a corresponding electric emitter.
Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H
2011-01-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goals are: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP, thickness, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for the three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
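A decision stump, the best-performing model above, is simply a one-level decision tree. The sketch below fits one exhaustively on synthetic genotype data (hypothetical values, not the study's SNPs; a separating signal is deliberately planted in one column) and reports sensitivity and specificity as in the abstract:

```python
import numpy as np

def fit_stump(X, y):
    """Exhaustive decision stump: the (feature, threshold, leaf-label)
    combination with the highest training accuracy."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for left_label in (0, 1):
                pred = np.where(X[:, j] <= t, left_label, 1 - left_label)
                acc = float((pred == y).mean())
                if best is None or acc > best[0]:
                    best = (acc, j, float(t), left_label)
    return best

def stump_predict(stump, X):
    _, j, t, left_label = stump
    return np.where(X[:, j] <= t, left_label, 1 - left_label)

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(18, 25)).astype(float)   # 18 subjects x 25 SNPs (0/1/2)
y = np.array([0] * 13 + [1] * 5)                      # 13 HFA, 5 Asperger
X[y == 1, 7] = 2.0                                    # plant a separable signal
X[y == 0, 7] = rng.integers(0, 2, size=13).astype(float)

stump = fit_stump(X, y)
pred = stump_predict(stump, X)
tp = int(((pred == 1) & (y == 1)).sum()); fn = int(((pred == 0) & (y == 1)).sum())
tn = int(((pred == 0) & (y == 0)).sum()); fp = int(((pred == 1) & (y == 0)).sum())
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

With only one split, the stump's chosen SNP is directly interpretable, which is why a single marker (rs878960 in the study) can emerge as the discriminator.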
Quantized Algebras of Functions on Homogeneous Spaces with Poisson Stabilizers
NASA Astrophysics Data System (ADS)
Neshveyev, Sergey; Tuset, Lars
2012-05-01
Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0 < q < 1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K, refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0 < q ≤ 1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra C[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K).
Detection of phytoplasmas of temperate fruit trees.
Laimer, Margit
2009-01-01
Phytoplasmas are associated with hundreds of plant diseases globally. Many fruit tree phytoplasmas are transmitted by insect vectors or grafting, are considered quarantine organisms and a major economic threat to orchards. Diagnosis can be difficult, but immunochemical and molecular methods have been developed.
BRST theory without Hamiltonian and Lagrangian
NASA Astrophysics Data System (ADS)
Lyakhovich, S. L.; Sharapov, A. A.
2005-03-01
We consider a generic gauge system, whose physical degrees of freedom are obtained by restriction on a constraint surface followed by factorization with respect to the action of gauge transformations; in so doing, no Hamiltonian structure or action principle is supposed to exist. For such a generic gauge system we construct a consistent BRST formulation, which includes the conventional BV Lagrangian and BFV Hamiltonian schemes as particular cases. If the original manifold carries a weak Poisson structure (a bivector field giving rise to a Poisson bracket on the space of physical observables) the generic gauge system is shown to admit deformation quantization by means of the Kontsevich formality theorem. A sigma-model interpretation of this quantization algorithm is briefly discussed.
Observation of Landau quantization and standing waves in HfSiS
NASA Astrophysics Data System (ADS)
Jiao, L.; Xu, Q. N.; Qi, Y. P.; Wu, S.-C.; Sun, Y.; Felser, C.; Wirth, S.
2018-05-01
Recently, HfSiS was found to be a new type of Dirac semimetal with a line of Dirac nodes in the band structure. Meanwhile, Rashba-split surface states are also pronounced in this compound. Here we report a systematic study of HfSiS by scanning tunneling microscopy/spectroscopy at low temperature and high magnetic field. The Rashba-split surface states are characterized by measuring Landau quantization and standing waves, which reveal a quasilinear dispersive band structure. First-principles calculations based on density-functional theory are conducted and compared with the experimental results. Based on these investigations, the properties of the Rashba-split surface states and their interplay with defects and collective modes are discussed.
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
Forest tree species classification based on airborne hyper-spectral imagery
NASA Astrophysics Data System (ADS)
Dian, Yuanyong; Li, Zengyuan; Pang, Yong
2013-10-01
Precision forest classification products are basic data for surveying forest resources, updating forest subplot information, and logging and forest design. However, due to the diversity of stand structures and the complexity of the forest growth environment, it is difficult to discriminate forest tree species using multi-spectral imagery. Airborne hyperspectral images provide high spatial and spectral resolution imagery of the forest canopy, which is well suited to tree-species-level classification. The aim of this paper was to test the effectiveness of combining spatial and spectral features in airborne hyperspectral image classification. The CASI hyperspectral image data were acquired from the Liangshui natural reserve area. Firstly, we used the minimum noise fraction (MNF) transform to reduce the dimensionality of the hyperspectral image and highlight variation. Secondly, we used the grey-level co-occurrence matrix (GLCM) to extract texture features of the forest tree canopy from the hyperspectral image. Thirdly, we fused the texture and spectral features of the forest canopy to classify tree species using support vector machines (SVM) with different kernel functions. The results showed that, with the SVM classifier, MNF and texture-based features combined with a linear kernel function achieved the best overall accuracy, 85.92%. This confirms that combining spatial and spectral information can improve the accuracy of tree species classification.
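The GLCM texture step can be sketched for a single offset and a single statistic (contrast). This is an illustrative simplification: the actual pipeline computes several GLCM statistics over MNF-transformed bands and feeds them, with spectral features, to an SVM:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to a joint probability table over grey-level pairs."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Contrast texture feature: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# toy 4-level canopy patch
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p_h = glcm(img, dx=1, dy=0, levels=4)
contrast = glcm_contrast(p_h)
```

Smooth canopies give co-occurrence mass on the diagonal (low contrast), while rough or mixed canopies spread mass off-diagonal, which is what makes texture discriminative between species with similar spectra.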
Cophenetic metrics for phylogenetic trees, after Sokal and Rohlf.
Cardona, Gabriel; Mir, Arnau; Rosselló, Francesc; Rotger, Lucía; Sánchez, David
2013-01-16
Phylogenetic tree comparison metrics are an important tool in the study of evolution, and hence the definition of such metrics is an interesting problem in phylogenetics. In a paper in Taxon fifty years ago, Sokal and Rohlf proposed to measure quantitatively the difference between a pair of phylogenetic trees by first encoding them by means of their half-matrices of cophenetic values, and then comparing these matrices. This idea has been used several times since then to define dissimilarity measures between phylogenetic trees but, to our knowledge, no proper metric on weighted phylogenetic trees with nested taxa based on this idea has been formally defined and studied yet. Actually, the cophenetic values of pairs of different taxa alone are not enough to single out phylogenetic trees with weighted arcs or nested taxa. For every (rooted) phylogenetic tree T, let its cophenetic vector φ(T) consist of all pairs of cophenetic values between pairs of taxa in T and all depths of taxa in T. It turns out that these cophenetic vectors single out weighted phylogenetic trees with nested taxa. We then define a family of cophenetic metrics d_φ,p by comparing these cophenetic vectors by means of L^p norms, and we study, either analytically or numerically, some of their basic properties: neighbors, diameter, distribution, and their rank correlation with each other and with other metrics. The cophenetic metrics can be safely used on weighted phylogenetic trees with nested taxa and no restriction on degrees, and they can be computed in O(n²) time, where n stands for the number of taxa. The metrics d_φ,1 and d_φ,2 have positive skewed distributions, and they show a low rank correlation with the Robinson-Foulds metric and the nodal metrics, and a very high correlation with each other and with the split nodal metrics. The diameter of d_φ,p, for p ⩾ 1, is in O(n^((p+2)/p)), and thus for low p they are more discriminative, having a wider range of values.
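The cophenetic vector and the resulting L^p metric can be sketched directly from the definitions in the abstract: taxon depths plus, for each pair of taxa, the depth of their lowest common ancestor. The tree encoding below (a parent map with arc weights) is an assumed representation for illustration:

```python
import itertools

def node_depths(parent, weight, root):
    """Depth of each node = sum of arc weights on the path from the root."""
    d = {root: 0.0}
    def depth(v):
        if v not in d:
            d[v] = depth(parent[v]) + weight[v]
        return d[v]
    for v in parent:
        depth(v)
    return d

def cophenetic_vector(parent, weight, root, taxa):
    """phi(T): taxon depths followed by, for every pair of taxa, the depth
    of their lowest common ancestor, in a fixed order."""
    d = node_depths(parent, weight, root)
    def ancestors(v):
        out = [v]
        while out[-1] != root:
            out.append(parent[out[-1]])
        return out
    vec = [d[t] for t in taxa]
    for a, b in itertools.combinations(taxa, 2):
        anc = set(ancestors(a))
        v = b
        while v not in anc:
            v = parent[v]
        vec.append(d[v])
    return vec

def d_phi(u, v, p=2):
    """L^p distance between two cophenetic vectors over the same taxa order."""
    return sum(abs(x - y) ** p for x, y in zip(u, v)) ** (1.0 / p)

# tree ((a,b),c) with unit arc weights: a,b hang from x, x and c from root r
parent = {"a": "x", "b": "x", "x": "r", "c": "r"}
weight = {"a": 1.0, "b": 1.0, "x": 1.0, "c": 1.0}
phi = cophenetic_vector(parent, weight, "r", ["a", "b", "c"])
```

Including the taxon depths alongside the pairwise cophenetic values is exactly the addition that makes φ(T) injective on weighted trees with nested taxa.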
Interactive Tree Of Life v2: online annotation and display of phylogenetic trees made easy.
Letunic, Ivica; Bork, Peer
2011-07-01
Interactive Tree Of Life (http://itol.embl.de) is a web-based tool for the display, manipulation and annotation of phylogenetic trees. It is freely available and open to everyone. In addition to classical tree viewer functions, iTOL offers many novel ways of annotating trees with various additional data. Current version introduces numerous new features and greatly expands the number of supported data set types. Trees can be interactively manipulated and edited. A free personal account system is available, providing management and sharing of trees in user defined workspaces and projects. Export to various bitmap and vector graphics formats is supported. Batch access interface is available for programmatic access or inclusion of interactive trees into other web services.
A hybrid approach to select features and classify diseases based on medical data
NASA Astrophysics Data System (ADS)
AbdelLatif, Hisham; Luo, Jiawei
2018-03-01
Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, based on three medical datasets: Arrhythmia, Breast cancer, and Hepatitis. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses k-means clustering with ANOVA statistics to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms, decision tree, Naïve Bayes, and Support Vector Machines, and applied the medical datasets directly to these algorithms. Our methodology achieved much better classification accuracy, 98% on the Arrhythmia dataset, 92% on the Breast cancer dataset, and 88% on the Hepatitis dataset, compared to applying the medical data directly to decision tree, Naïve Bayes, and Support Vector Machines. The ROC curve and precision of K-ANOVA-SVM also achieved better results than the other algorithms.
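The ANOVA feature-ranking stage of such a pipeline can be sketched directly (a minimal illustration; the full K-ANOVA-SVM also includes k-means preprocessing and an SVM classifier, omitted here, and the toy data below are hypothetical):

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F-statistic per feature: between-class variance over
    within-class variance, used to rank features before classification."""
    classes = np.unique(y)
    n, k = len(y), len(classes)
    grand = X.mean(axis=0)
    ssb = sum(len(X[y == c]) * (X[y == c].mean(axis=0) - grand) ** 2
              for c in classes)
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    return (ssb / (k - 1)) / (ssw / (n - k))

def select_k_features(X, y, k):
    """Indices of the k features with the largest F-statistic."""
    return np.argsort(anova_f(X, y))[::-1][:k]

# toy data: feature 0 separates the classes, feature 1 is noise
X = np.array([[0.0, 5.0], [0.1, 1.0], [0.2, 9.0],
              [5.0, 2.0], [5.1, 7.0], [5.2, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
ranked = select_k_features(X, y, 1)
```

Only the top-ranked features would then be passed to the SVM, which is where the reported accuracy gains over unfiltered classifiers come from.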
Vortex creation during magnetic trap manipulations of spinor Bose-Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Itin, A. P.; Space Research Institute, RAS, Moscow; Morishita, T.
2006-06-15
We investigate several mechanisms of vortex creation during splitting of a spinor Bose-Einstein condensate (BEC) in a magnetic double-well trap controlled by a pair of current carrying wires and bias magnetic fields. Our study is motivated by a recent MIT experiment on splitting BECs with a similar trap [Y. Shin et al., Phys. Rev. A 72, 021604 (2005)], where an unexpected fork-like structure appeared in the interference fringes indicating the presence of a singly quantized vortex in one of the interfering condensates. It is well known that in a spin-1 BEC in a quadrupole trap, a doubly quantized vortex is topologically produced by a 'slow' reversal of bias magnetic field B{sub z}. Since in the experiment a doubly quantized vortex had never been seen, Shin et al. ruled out the topological mechanism and concentrated on the nonadiabatic mechanical mechanism for explanation of the vortex creation. We find, however, that in the magnetic trap considered both mechanisms are possible: singly quantized vortices can be formed in a spin-1 BEC topologically (for example, during the magnetic field switching-off process). We therefore provide a possible alternative explanation for the interference patterns observed in the experiment. We also present a numerical example of creation of singly quantized vortices due to 'fast' splitting; i.e., by a dynamical (nonadiabatic) mechanism.
Abad-Franch, Fernando; Ferraz, Gonçalo; Campos, Ciro; Palomeque, Francisco S.; Grijalva, Mario J.; Aguilar, H. Marcelo; Miles, Michael A.
2010-01-01
Background Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. Methodology/Principal Findings The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40–60%) across regions, and well above the observed infestation rate (24%). Detection probability is higher (∼0.55 on average) in the richest-soil region than elsewhere (∼0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of infestation rate variation. Conclusions/Significance Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. 
Landscape-scale effects indicate that palm triatomine populations can endure deforestation in rural areas, but become rarer in heavily disturbed urban settings. Our methodological approach has wide application in infectious disease research; by improving eco-epidemiological parameter estimation, it can also significantly strengthen vector surveillance-control strategies. PMID:20209149
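The repeated-sampling estimator described above is a site-occupancy-style maximum-likelihood model. The sketch below fits infestation probability psi and per-visit detection probability p by grid search; the survey counts are synthetic stand-ins, not the study's data:

```python
from collections import Counter
import numpy as np

def loglik(psi, p, counts, visits):
    """Log-likelihood: a palm is infested with probability psi, and each of
    `visits` surveys of an infested palm detects triatomines independently
    with probability p. Zero detections may mean uninfested or missed."""
    ll = 0.0
    for d, n in counts.items():
        if d > 0:
            ll += n * (np.log(psi) + d * np.log(p)
                       + (visits - d) * np.log(1 - p))
        else:
            ll += n * np.log(psi * (1 - p) ** visits + (1 - psi))
    return ll

def fit_mle(counts, visits):
    """Brute-force MLE over a (psi, p) grid; fine for a 2-parameter model."""
    grid = np.linspace(0.01, 0.99, 99)
    return max(((loglik(a, b, counts, visits), a, b)
                for a in grid for b in grid))[1:]

# synthetic survey: 100 palms, 3 visits each; detection counts roughly
# consistent with psi = 0.5, p = 0.3
counts = Counter({0: 67, 1: 22, 2: 9, 3: 2})
psi_hat, p_hat = fit_mle(counts, visits=3)
naive = (22 + 9 + 2) / 100  # fraction of palms with >= 1 detection
```

As in the study (observed 24% vs. estimated 40-60%), the MLE corrects the naive infestation rate upward by accounting for palms that were infested but never yielded a detection.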
Second quantization in bit-string physics
NASA Technical Reports Server (NTRS)
Noyes, H. Pierre
1993-01-01
Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c are derived. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3,10,137,2(sup 127) + 136), and some of the predictive consequences are sketched.
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations simultaneously. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
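The logarithmic quantizer used in such quantized-control schemes maps each input to the nearest level of a geometric ladder u0·ρ^i, giving a bounded relative (rather than absolute) error. A minimal sketch, with assumed parameters ρ = 0.5 and u0 = 1:

```python
import math

def log_quantize(x, rho=0.5, u0=1.0):
    """Logarithmic quantizer with levels +/- u0 * rho**i (i any integer).
    Mapping x to the arithmetically nearest level bounds the relative
    error: |q(x) - x| <= delta * |x| with delta = (1 - rho) / (1 + rho)."""
    if x == 0:
        return 0.0
    s = 1.0 if x > 0 else -1.0
    t = math.log(abs(x) / u0) / math.log(rho)
    # candidate levels around the real-valued index t
    cands = [u0 * rho ** i for i in (math.floor(t) - 1,
                                     math.floor(t),
                                     math.floor(t) + 1)]
    return s * min(cands, key=lambda u: abs(abs(x) - u))
```

Because only the level index needs to be transmitted, this is what lets the controllers in the paper save bandwidth while keeping the quantization error proportional to the signal.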
Quantized conductance operation near a single-atom point contact in a polymer-based atomic switch
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Muruganathan, Manoharan; Tsuruoka, Tohru; Mizuta, Hiroshi; Aono, Masakazu
2017-06-01
Highly controlled conductance quantization is achieved near a single-atom point contact in a redox-based atomic switch device, in which a poly(ethylene oxide) (PEO) film is sandwiched between Ag and Pt electrodes. Current-voltage measurements revealed reproducible quantized conductance of ~1 G0 for more than 10² continuous voltage sweep cycles under a specific condition, indicating the formation of a well-defined single-atom point contact of Ag in the PEO matrix. The device exhibited a conductance state distribution centered at 1 G0, with distinct half-integer multiples of G0 and small fractional variations. First-principles density functional theory simulations showed that the experimental observations could be explained by the existence of a tunneling gap and the structural rearrangement of an atomic point contact.
On Relevance of Codon Usage to Expression of Synthetic and Natural Genes in Escherichia coli
Supek, Fran; Šmuc, Tomislav
2010-01-01
A recent investigation concluded that codon bias did not affect expression of green fluorescent protein (GFP) variants in Escherichia coli, while stability of an mRNA secondary structure near the 5′ end played a dominant role. We demonstrate that combining the two variables using regression trees or support vector regression yields a biologically plausible model with better support in the GFP data set and in other experimental data: codon usage is relevant for protein levels if the 5′ mRNA structures are not strong. Natural E. coli genes had weaker 5′ mRNA structures than the examined set of GFP variants and did not exhibit a correlation between the folding free energy of 5′ mRNA structures and protein expression. PMID:20421604
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
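The mutual-information criterion can be illustrated with a simplified model: code bit X = ±1, LLR messages following the BiAWGN consistency model L | X = ±1 ~ N(±m, 2m), and a symmetric threshold family searched by brute force. This is a hedged sketch; the paper optimizes a general non-uniform quantizer, which we reduce here to one spacing parameter:

```python
import math

def _gauss_cdf(x, mu, var):
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2.0 * var)))

def mutual_info(thresholds, m):
    """I(X; Q) between the code bit X and the quantized LLR Q, for the
    partition of the real line induced by the given thresholds."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    mi = 0.0
    for lo, hi in zip(edges, edges[1:]):
        p_pos = _gauss_cdf(hi, m, 2 * m) - _gauss_cdf(lo, m, 2 * m)
        p_neg = _gauss_cdf(hi, -m, 2 * m) - _gauss_cdf(lo, -m, 2 * m)
        p_avg = 0.5 * (p_pos + p_neg)
        for pc in (p_pos, p_neg):
            if pc > 0.0:
                mi += 0.5 * pc * math.log2(pc / p_avg)
    return mi

def best_3bit_spacing(m):
    """Brute-force the spacing t of symmetric thresholds {0, +/-t, +/-2t,
    +/-3t} (8 cells, i.e. 3-bit messages) that maximizes I(X; Q)."""
    return max((mutual_info([i * t for i in range(-3, 4)], m), t)
               for t in (0.1 * k for k in range(1, 60)))

mi_8cell, t_opt = best_3bit_spacing(m=4.0)
mi_hard = mutual_info([0.0], m=4.0)  # 1-bit hard decision for comparison
```

The gap between `mi_hard` and `mi_8cell` is the information preserved by soft messages, which is why an optimized 3-bit decoder can land within 0.2 dB of floating point.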
Identifying saltcedar with hyperspectral data and support vector machines
USDA-ARS?s Scientific Manuscript database
Saltcedar (Tamarix spp.) are a group of dense phreatophytic shrubs and trees that are invasive to riparian areas throughout the United States. This study determined the feasibility of using hyperspectral data and a support vector machine (SVM) classifier to discriminate saltcedar from other cover t...
NASA Astrophysics Data System (ADS)
Ohkubo, Makio
2016-06-01
In observed neutron resonances, long believed to be a form of quantum chaos, regular family structures are found in the s-wave resonances of many even-even nuclei in the tens-of-keV to MeV region [M. Ohkubo, Phys. Rev. C 87, 014608 (2013)]. Resonance reactions take place when the incident de Broglie wave synchronizes with the Poincaré cycle of the compound nucleus, which is composed of several normal modes with periods that are time-quantized by the inverse Fermi energy. Based on the breathing model of the compound nucleus, neutron resonance energies in family structures can be written as simple arithmetic expressions using Sn and small integers. Family structures in the observed resonances of 40Ca+n and 37Cl+n are described as simple cases. A model for time quantization is discussed.
W. J. Otrosina; J. T. Kliejunas; S. S. Sung; S. Smith; D. R. Cluck
2008-01-01
Black stain root disease of ponderosa pine, caused by Leptographium wageneri var. ponderosum (Harrington & Cobb) Harrington & Cobb, is increasing on many eastside pine stands in northeastern California. The disease is spread from tree to tree via root contacts and grafts but new infections are likely vectored by root...
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
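Sequential scalar quantization can be sketched in two dimensions: scalar-quantize the first coordinate into bins, then scalar-quantize the second coordinate within each bin. This is an illustrative simplification (quantile bins in 2-D, not the paper's optimized allocation in a 3-D visually uniform color space):

```python
import numpy as np

def ssq_palette(points, n1=4, n2=4):
    """Sequential scalar quantization: quantize coordinate 0 into n1
    quantile bins, then coordinate 1 into n2 quantile bins within each
    coordinate-0 bin. Returns the palette (cell centroids) and the
    palette index assigned to each input point."""
    edges0 = np.quantile(points[:, 0], np.linspace(0, 1, n1 + 1))
    idx0 = np.clip(np.searchsorted(edges0, points[:, 0], side="right") - 1,
                   0, n1 - 1)
    palette, index = [], np.zeros(len(points), dtype=int)
    for b in range(n1):
        sel = idx0 == b
        if not sel.any():
            continue
        sub = points[sel]
        edges1 = np.quantile(sub[:, 1], np.linspace(0, 1, n2 + 1))
        idx1 = np.clip(np.searchsorted(edges1, sub[:, 1], side="right") - 1,
                       0, n2 - 1)
        for c in range(n2):
            cell = idx1 == c
            if cell.any():
                index[np.flatnonzero(sel)[cell]] = len(palette)
                palette.append(sub[cell].mean(axis=0))
    return np.array(palette), index

rng = np.random.default_rng(1)
pts = rng.random((500, 2))          # stand-in for 2-D color coordinates
palette, index = ssq_palette(pts)
mse = float(((pts - palette[index]) ** 2).mean())
```

Because each stage is a scalar quantizer, encoding reduces to a chain of one-dimensional lookups, which is exactly the lookup-table efficiency argument made in the abstract.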
Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.
1982-03-01
This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable-frame-rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.
Magnetic quantization in monolayer bismuthene
NASA Astrophysics Data System (ADS)
Chen, Szu-Chao; Chiu, Chih-Wei; Lin, Hui-Chi; Lin, Ming-Fa
The magnetic quantization in monolayer bismuthene is investigated with the generalized tight-binding model. The quite large Hamiltonian matrix is built from the tight-binding functions of the various sublattices, atomic orbitals and spin states. Due to the strong spin-orbit coupling and sp3 bonding, monolayer bismuthene has diverse low-lying energy bands, such as parabolic, linear and oscillating bands. The main features of the band structures are further reflected in the rich magnetic quantization. Under a uniform perpendicular magnetic field (Bz), three groups of Landau levels (LLs) with distinct features are revealed near the Fermi level. Their Bz-dependent energy spectra display linear, square-root and non-monotonic dependences, respectively. These LLs are dominated by combinations of the 6pz orbital and (6px,6py) orbitals as a result of the strong sp3 bonding. Specifically, LL anti-crossings occur only between LLs originating from the oscillating energy band.
Correlated Light-Matter Interactions in Cavity QED
NASA Astrophysics Data System (ADS)
Flick, Johannes; Pellegrini, Camilla; Ruggenthaler, Michael; Appel, Heiko; Tokatly, Ilya; Rubio, Angel
2015-03-01
In the last decade, time-dependent density functional theory (TDDFT) has been successfully applied to a large variety of problems, such as calculations of absorption spectra, excitation energies, or dynamics in strong laser fields. Recently, we have generalized TDDFT to also describe electron-photon systems (QED-TDDFT). Here, matter and light are treated on an equal quantized footing. In this work, we present the first numerical calculations in the framework of QED-TDDFT. We show exact solutions for fully quantized prototype systems consisting of atoms or molecules placed in optical high-Q cavities and coupled to quantized electromagnetic modes. We focus on the electron-photon exchange-correlation (xc) contribution by calculating exact Kohn-Sham potentials using fixed-point inversions and present the performance of the first approximated xc-potential based on an optimized effective potential (OEP) approach. Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, and Fritz-Haber-Institut der MPG, Berlin
Flux Quantization in Aperiodic and Periodic Networks
NASA Astrophysics Data System (ADS)
Behrooz, Angelika
The normal-superconducting phase boundary, Tc(H), of a periodic wire network shows periodic oscillations with period H0 = φ0/A due to flux quantization around the individual plaquettes (of area A) of the network. The magnetic flux quantum is φ0 = hc/2e. The phase boundary also shows fine structure at fields H = (p/q)H0 (p, q integers), where the flux vortices can form commensurate superlattices on the periodic substrate. We have studied the phase boundary of quasicrystalline, quasiperiodic and random networks. We have found that if a network is composed of two different tiles whose areas are relatively irrational, then the Tc(H) curve shows large-scale structure at fields that approximate flux quantization around the tiles, i.e. when the ratio of fluxoids contained in the large tiles to those in the small tiles is a rational approximant to the irrational area ratio. The phase boundaries of quasicrystalline and quasiperiodic networks show fine structure indicating the existence of commensurate vortex superlattices on these networks. No such fine structure is found on the random array. For a quasicrystal whose quasiperiodic long-range order is characterized by the irrational number τ, the commensurate vortex lattices are all found at H = H0|n + mτ| (n, m integers). We have found that the commensurate superlattices on quasicrystalline as well as on crystalline networks are related to the inflation symmetry. We propose a general definition of commensurability.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367
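The hierarchical ROI/non-ROI treatment rests on ordinary quantization-parameter arithmetic: lowering the QP of ROI blocks spends more bits there (finer quantization), while raising it for non-ROI blocks saves them. A minimal sketch of such per-block QP selection follows; the delta of 6 is an illustrative assumption rather than a value from the paper, while the clipping range 0-51 is the standard H.265/HEVC QP range.

```python
def select_qp(base_qp, roi_map, delta=6):
    """Assign a QP per block: ROI blocks get base_qp - delta (finer
    quantization), non-ROI blocks base_qp + delta (coarser), clipped
    to the H.265/HEVC QP range [0, 51]."""
    clip = lambda q: max(0, min(51, q))
    return [[clip(base_qp - delta) if roi else clip(base_qp + delta)
             for roi in row] for row in roi_map]

# roi_map marks which blocks the ROI extraction flagged as diagnostic.
qp_grid = select_qp(32, [[1, 0], [0, 1]])
```

In the paper's scheme the ROI map comes from the texture-based extraction stage and is aligned with the HEVC quad-tree coding units; the dictionary-free grid here is only a stand-in for that structure.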
Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J
2011-01-01
Spatially invariant vector quantization (SIVQ) is a texture and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two cases of use: First, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. 
Finally, as an additional effort directed toward attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATLAB (MATrix LABoratory) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
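The automated selection step reduces to ranking candidate vectors by AUC against the user-marked ground truth. A minimal sketch under assumed names: `match_fn` stands in for SIVQ's ring-vector matching score, and the AUC is the standard rank-based (Mann-Whitney) estimate, not code from the paper.

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative."""
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def best_vector(candidates, match_fn, pos_regions, neg_regions):
    """Pick the candidate vector whose match scores best separate the
    positive ground-truth regions from the negative ones (highest AUC)."""
    scored = [(auc([match_fn(v, r) for r in pos_regions],
                   [match_fn(v, r) for r in neg_regions]), v)
              for v in candidates]
    return max(scored)  # (best_auc, best_vector)

# Toy demo: candidate 5 matches the positive regions far better.
best = best_vector([0, 5], lambda v, r: -abs(v - r), [5, 5.1], [0, 0.2])
```

Because each candidate's evaluation is independent, this ranking is exactly the kind of embarrassingly parallel workload the authors target for high-throughput platforms.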
Gottdenker, Nicole L.; Chaves, Luis Fernando; Calzada, José E.; Saldaña, Azael; Carroll, C. Ronald
2012-01-01
Background Anthropogenic land use may influence transmission of multi-host vector-borne pathogens by changing the diversity, relative abundance, and community composition of reservoir hosts. These reservoir hosts may have varying competence for vector-borne pathogens depending on species-specific characteristics, such as life history strategy. The objective of this study is to evaluate how anthropogenic land use change influences blood meal species composition and the effects of changing blood meal species composition on the parasite infection rate of the Chagas disease vector Rhodnius pallescens in Panama. Methodology/Principal Findings R. pallescens vectors (N = 643) were collected in different habitat types across a gradient of anthropogenic disturbance. Blood meal species were identified in 243 (40.3%) of the vectors by amplification and sequencing of a vertebrate-specific fragment of the 12S rRNA gene from extracted DNA, and T. cruzi infection of the vectors was determined by PCR. Vector infection rate was significantly greater in deforested habitats than in contiguous forests. Forty-two different blood meal species were identified in R. pallescens, and the species composition of blood meals varied across habitat types. Mammals (88.3%) dominated R. pallescens blood meals. Xenarthrans (sloths and tamanduas) were the most frequently identified species in blood meals across all habitat types. A regression tree analysis indicated that blood meal species diversity, host life history strategy (measured as rmax, the maximum intrinsic rate of population increase), and habitat type (forest fragments and peridomiciliary sites) were important determinants of vector infection with T. cruzi. The mean intrinsic rate of increase and the skewness and variability of rmax were positively associated with higher vector infection rates at a site. Conclusions/Significance In this study, anthropogenic landscape disturbance increased vector infection with T. cruzi, potentially by changing host community structure to favor hosts that are short-lived and have high reproductive rates. Study results apply to potential environmental management strategies for Chagas disease. PMID:23166846
Temporal and spatial scaling of the genetic structure of a vector-borne plant pathogen.
Coletta-Filho, Helvécio D; Francisco, Carolina S; Almeida, Rodrigo P P
2014-02-01
The ecology of plant pathogens of perennial crops is affected by the long-lived nature of their immobile hosts. In addition, changes to the genetic structure of pathogen populations may affect disease epidemiology and management practices; examples include local adaptation of more fit genotypes or introduction of novel genotypes from geographically distant areas via human movement of infected plant material or insect vectors. We studied the genetic structure of Xylella fastidiosa populations causing disease in sweet orange plants in Brazil at multiple scales using fast-evolving molecular markers (simple-sequence DNA repeats). Results show that populations of X. fastidiosa were regionally isolated, and that isolation was maintained for populations analyzed a decade apart from each other. However, despite such geographic isolation, local populations present in year 2000 were largely replaced by novel genotypes in 2009 but not as a result of migration. At a smaller spatial scale (individual trees), results suggest that isolates within plants originated from a shared common ancestor. In summary, new insights on the ecology of this economically important plant pathogen were obtained by sampling populations at different spatial scales and two different time points.
Chen, Suduan; Goo, Yeong-Jia James; Shen, Zone-De
2014-01-01
As fraudulent financial statements become an increasingly serious problem, establishing a valid model for forecasting them has become an important question for academic research and financial practice. After screening the important variables with stepwise regression, the study fits logistic regression, support vector machine, and decision tree classification models and compares them. Both financial and nonfinancial variables are used in building the forecasting model. The research objects are companies in which fraudulent or nonfraudulent financial statements occurred between 1998 and 2012. The findings are that financial and nonfinancial information together effectively distinguish fraudulent financial statements, and that the C5.0 decision tree achieves the best classification accuracy, 85.71%. PMID:25302338
Can chaos be observed in quantum gravity?
NASA Astrophysics Data System (ADS)
Dittrich, Bianca; Höhn, Philipp A.; Koslowski, Tim A.; Nelson, Mike I.
2017-06-01
Full general relativity is almost certainly 'chaotic'. We argue that this entails a notion of non-integrability: a generic general relativistic model, at least when coupled to cosmologically interesting matter, likely possesses neither differentiable Dirac observables nor a reduced phase space. It follows that the standard notion of observable has to be extended to include non-differentiable or even discontinuous generalized observables. These cannot carry Poisson-algebraic structures and do not admit a standard quantization; one thus faces a quantum representation problem of gravitational observables. This has deep consequences for a quantum theory of gravity, which we investigate in a simple model for a system with Hamiltonian constraint that fails to be completely integrable. We show that basing the quantization on standard topology precludes a semiclassical limit and can even prohibit any solutions to the quantum constraints. Our proposed solution to this problem is to refine topology such that a complete set of Dirac observables becomes continuous. In the toy model, it turns out that a refinement to a polymer-type topology, as e.g. used in loop gravity, is sufficient. Basing quantization of the toy model on this finer topology, we find a complete set of quantum Dirac observables and a suitable semiclassical limit. This strategy is applicable to realistic candidate theories of quantum gravity and thereby suggests a solution to a long-standing problem which implies ramifications for the very concept of quantization. Our work reveals a qualitatively novel facet of chaos in physics and opens up a new avenue of research on chaos in gravity which hints at deep insights into the structure of quantum gravity.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
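The offline tree construction can be sketched as recursive 2-means clustering of the state variables, with each tree node recording the index set (degrees of freedom) that defines one of the disjoint supports used for splitting. This is a toy reconstruction, not the authors' code; `min_size`, the Lloyd iteration count, and the node layout are illustrative assumptions.

```python
import numpy as np

def two_means(rows, iters=20, seed=0):
    """Lloyd's algorithm with k=2 on snapshot rows (one row per state
    variable); returns a boolean membership array for cluster 1."""
    rows = np.asarray(rows, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rows[rng.choice(len(rows), size=2, replace=False)].copy()
    label = np.zeros(len(rows), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(rows[:, None, :] - centers[None, :, :], axis=2)
        label = d.argmin(axis=1)
        for k in (0, 1):
            if (label == k).any():
                centers[k] = rows[label == k].mean(axis=0)
    return label == 1

def split_tree(indices, rows, min_size=2):
    """Recursively bisect the state variables; each node stores the
    disjoint index set over which a basis vector may be split."""
    node = {"dofs": indices}
    if len(indices) >= 2 * min_size:
        right = two_means(rows)
        if right.any() and (~right).any():
            node["children"] = [
                split_tree(indices[~right], rows[~right], min_size),
                split_tree(indices[right], rows[right], min_size),
            ]
    return node

rows = np.array([[0.0], [0.1], [10.0], [10.1]])   # toy snapshot rows
tree = split_tree(np.arange(4), rows)
```

Splitting a basis vector along a node's children zeroes it outside each child's `dofs` set, which is what produces the disjoint-support refinement vectors online.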
Mathematics of Quantization and Quantum Fields
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Gérard, Christian
2013-03-01
Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.
Perceptual distortion analysis of color image VQ-based coding
NASA Astrophysics Data System (ADS)
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that an RGB color image can be easily encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer to precisely control color. We then obtained psychophysical judgments to measure how well these methods minimize perceptual distortion in a variety of color spaces.
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
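The core operation of the second paper, block matching, can be sketched directly: for a block in the current frame, scan a small search window in the previous frame for the displacement minimizing the sum of absolute differences (SAD). The block size, search range, and SAD criterion below are generic choices illustrating the technique, not the paper's parallel design.

```python
import numpy as np

def match_block(prev, cur, top, left, bsize=8, search=4):
    """Exhaustive block matching: return the displacement (dy, dx) of the
    bsize x bsize block at (top, left) in `cur` that minimizes SAD against
    `prev`, together with that minimum SAD."""
    block = cur[top:top + bsize, left:left + bsize].astype(int)
    best = (None, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(prev[y:y + bsize, x:x + bsize].astype(int) - block).sum()
            if sad < best[1]:
                best = ((dy, dx), sad)
    return best

# Toy frames: a textured patch moves down 2 rows and right 1 column.
prev = np.zeros((16, 16)); patch = np.arange(64).reshape(8, 8)
prev[4:12, 4:12] = patch
cur = np.zeros((16, 16)); cur[6:14, 5:13] = patch
mv, sad = match_block(prev, cur, top=6, left=5)
```

Each candidate displacement is evaluated independently, which is why the paper can map the search onto a simple parallel architecture for real-time video.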
NASA Astrophysics Data System (ADS)
Rost, E.; Shephard, J. R.
1992-08-01
This report discusses the following topics: Exact 1-loop vacuum polarization effects in 1 + 1 dimensional QHD; exact 1-fermion loop contributions in 1 + 1 dimensional solitons; exact scalar 1-loop contributions in 1 + 3 dimensions; exact vacuum calculations in a hyper-spherical basis; relativistic nuclear matter with self-consistent correlation energy; consistent RHA-RPA for finite nuclei; transverse response functions in the Δ-resonance region; hadronic matter in a nontopological soliton model; scalar and vector contributions to the p̄p → Λ̄Λ reaction; 0+ and 2+ strengths in pion double-charge exchange to double giant-dipole resonances; and nucleons in a hybrid sigma model including a quantized pion field.
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
Indel-tolerant read mapping with trinucleotide frequencies using cache-oblivious kd-trees.
Mahmud, Md Pavel; Wiedenhoeft, John; Schliep, Alexander
2012-09-15
Mapping billions of reads from next generation sequencing experiments to reference genomes is a crucial task, which can require hundreds of hours of running time on a single CPU even for the fastest known implementations. Traditional approaches have difficulties dealing with matches of large edit distance, particularly in the presence of frequent or large insertions and deletions (indels). This is a serious obstacle both in determining the spectrum and abundance of genetic variations and in personal genomics. For the first time, we adopt the approximate string matching paradigm of geometric embedding for read mapping, thus rephrasing it as nearest-neighbor queries in a q-gram frequency vector space. Using the L1 distance between frequency vectors has the benefit of providing lower bounds for an edit distance with affine gap costs. Using a cache-oblivious kd-tree, we realize running times that match the state of the art. Additionally, running time and memory requirements are about constant for read lengths between 100 and 1000 bp. We provide a first proof of concept that geometric embedding is a promising paradigm for read mapping and that L1 distance might serve to detect structural variations. TreQ, our initial implementation of the concept, performs more accurately than many popular read mappers over a wide range of structural variants. TreQ will be released under the GNU Public License (GPL), and precomputed genome indices will be provided for download at http://treq.sf.net. pavelm@cs.rutgers.edu Supplementary data are available at Bioinformatics online. PMID:22962448
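The embedding idea can be sketched in a few lines: each read (and each genome window) becomes a 64-dimensional trinucleotide count vector, and mapping becomes an L1 nearest-neighbor query in that space. The brute-force scan below replaces the paper's cache-oblivious kd-tree purely for clarity; the fixed-window handling and function names are illustrative assumptions.

```python
import numpy as np
from itertools import product

TRIS = [''.join(p) for p in product('ACGT', repeat=3)]
IDX = {t: i for i, t in enumerate(TRIS)}

def tri_freq(seq):
    """64-dimensional trinucleotide (3-gram) count vector of a DNA string."""
    v = np.zeros(len(TRIS))
    for i in range(len(seq) - 2):
        v[IDX[seq[i:i + 3]]] += 1
    return v

def map_read(read, genome, window):
    """Slide a window over the genome and return the offset whose
    trinucleotide profile is nearest to the read's in L1 distance.
    (The paper accelerates exactly this search with a cache-oblivious
    kd-tree; here it is a brute-force scan.)"""
    q = tri_freq(read)
    dists = [np.abs(tri_freq(genome[o:o + window]) - q).sum()
             for o in range(len(genome) - window + 1)]
    return int(np.argmin(dists))
```

Because a small indel shifts 3-gram counts only slightly, the L1 distance in this space degrades gracefully where exact seed-based mappers fail, which is the intuition behind the affine-gap lower bound.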
A discrete mechanics approach to dislocation dynamics in BCC crystals
NASA Astrophysics Data System (ADS)
Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.
2007-03-01
A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers' vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem based on considerations of metastability is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation based crystal plasticity with fully atomistic resolution.
Evaluation of olive as a host of Xylella fastidiosa and associated sharpshooter vectors
USDA-ARS?s Scientific Manuscript database
Olive (Olea europaea L.) trees exhibiting leaf scorch and/or branch dieback symptoms in California were surveyed for the xylem-limited, fastidious bacterium Xylella fastidiosa. Only ~17% of diseased trees tested positive for X. fastidiosa by PCR, and disease symptoms could not be attributed to X. fa...
Abel Lopez-Buenfil, Jose; Abrahan Ramirez-Pool, Jose; Ruiz-Medrano, Roberto; Del Carmen Montes-Horcasitas, Maria; Chavarin-Palacio, Claudio; Moya-Hinojosa, Jesus; Javier Trujillo-Arriaga, Francisco; Carmona, Rosalia Lira; Xoconostle-Cazares, Beatriz
2017-01-01
The bacterial disease citrus huanglongbing (HLB), associated with "Candidatus Liberibacter asiaticus" (C.Las), has severely impacted the citrus industry, causing a significant reduction in production and fruit quality. In the present study, C.Las population dynamics were monitored in symptomatic, HLB-positive Mexican lime trees (Citrus aurantifolia Swingle) in a tropical, citrus-producing area of Mexico, with the objective of characterizing the dynamics of the HLB-associated bacterium and its insect vector. Leaf samples were collected every 2 months over a period of 26 months for quantification of bacterial titers, and young and mature leaves were collected in each season to determine preferential sites of bacterial accumulation. The proportion of living and dead bacterial cells was determined using quantitative real-time PCR in the presence of ethidium monoazide (EMA-qPCR). A lower bacterial titer was observed at high temperatures in the infected trees relative to titers in mild weather, despite a higher accumulation of the insect vector Diaphorina citri in these conditions. This study also revealed seasonal fluctuations in bacterial titers in mature leaves compared to young leaves. No statistically significant correlation between any meteorological variable, C.Las concentration and D. citri population was found. Although HLB management strategies have focused on vector control, host tree phenology may also be important. The evaluation of citrus phenology, C.Las concentration, ACP population and environmental conditions provides insights into the cyclical, seasonal variations of both the HLB pathogen and its vector. These findings should help in the design of integrative HLB control strategies that take into account the accumulation of the pathogen and the presence of its vector.
Document page structure learning for fixed-layout e-books using conditional random fields
NASA Astrophysics Data System (ADS)
Tao, Xin; Tang, Zhi; Xu, Canhui
2013-12-01
In this paper, a model is proposed to learn the logical structure of fixed-layout document pages by combining a support vector machine (SVM) and conditional random fields (CRF). Features related to each logical label and their dependencies are extracted from various original Portable Document Format (PDF) attributes. Both local evidence and contextual dependencies are integrated in the proposed model so as to achieve better logical labeling performance. With the merits of the SVM as a local discriminative classifier and the CRF modeling contextual correlations of adjacent fragments, the model is capable of resolving ambiguities in semantic labels. The experimental results show that CRF-based models with both tree and chain graph structures outperform the SVM model, with an increase in macro-averaged F1 of about 10%.
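Once the SVM supplies per-fragment label scores and the CRF supplies pairwise transition scores, the jointly best label sequence for a chain-structured model is found by Viterbi decoding. The sketch below is a generic linear-chain Viterbi, not the paper's exact model (which also covers tree structures); the score matrices are assumed inputs.

```python
import numpy as np

def viterbi(emission, transition):
    """Best label sequence for a linear-chain model: emission[t, k] is the
    local (e.g. SVM-derived) score of label k at fragment t, and
    transition[j, k] scores label j followed by label k."""
    T, K = emission.shape
    score = emission[0].copy()          # best score ending in each label
    back = np.zeros((T, K), dtype=int)  # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy demo: label 0 is locally favored at t=0, label 1 afterwards, and a
# mild transition penalty discourages switching back and forth.
path = viterbi(np.array([[2., 0.], [0., 1.], [0., 1.]]),
               np.array([[0., -0.5], [-0.5, 0.]]))
```

This is where the contextual dependencies pay off: a fragment whose local SVM scores are ambiguous is pulled toward the label consistent with its neighbors.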
BRST quantization of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armendariz-Picon, Cristian; Şengör, Gizem
2016-11-08
BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.
Support vector machines-based fault diagnosis for turbo-pump rotor
NASA Astrophysics Data System (ADS)
Yuan, Sheng-Fa; Chu, Fu-Lei
2006-05-01
Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. The support vector machine (SVM) is a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, requires little repeated training, and speeds up both training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
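A minimal sketch of the 'one to others' cascade follows. It substitutes a tiny linear SVM trained with the Pegasos sub-gradient method for a full SVM solver, and the names (`train_linear_svm`, `OneToOthersTree`) are illustrative, not from the paper:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.05, epochs=500, seed=0):
    """Tiny linear SVM trained with the Pegasos sub-gradient method.
    Labels y must be in {-1, +1}; returns the hyperplane (w, b)."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            violated = y[i] * (X[i] @ w + b) < 1
            w *= (1 - eta * lam)             # regularization shrink
            if violated:                     # hinge-loss sub-gradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

class OneToOthersTree:
    """'One to others' cascade: node k separates class k (ordered by
    fault priority) from all remaining classes; first positive wins."""
    def fit(self, X, y, priority):
        self.priority, self.nodes = list(priority), []
        remaining = np.ones(len(y), dtype=bool)
        for k in self.priority[:-1]:
            target = np.where(y[remaining] == k, 1, -1)
            self.nodes.append(train_linear_svm(X[remaining], target))
            remaining &= (y != k)            # drop class k for later nodes
        return self
    def predict_one(self, x):
        for k, (w, b) in zip(self.priority, self.nodes):
            if x @ w + b > 0:
                return k
        return self.priority[-1]
```

Only K-1 binary classifiers are trained for K classes, and each later node sees a shrinking training set, which is the source of the reduced training effort the abstract mentions.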
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address the low learning efficiency of SVM when processing large-scale, multi-class samples. A bottom-up method is adopted to build the binary tree hierarchy, and a sub-classifier is learned from the samples corresponding to each node of the resulting hierarchy. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one class of samples. For clusters containing two classes, the numbers of clusters for their positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
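The first-pass reduction step can be sketched as follows. This is a simplified illustration, assuming plain k-means with a deterministic initialization; mixed clusters are kept whole here, whereas the paper re-clusters them a second time before extracting central points:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with a deterministic spread initialization."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return centers, assign

def reduce_training_set(X, y, k):
    """First-pass reduction: replace every cluster containing a single
    class by its centroid; keep the raw samples of mixed clusters (the
    paper re-clusters these before extracting central points)."""
    centers, assign = kmeans(X, k)
    Xr, yr = [], []
    for j in range(k):
        mask = assign == j
        if not mask.any():
            continue
        classes = np.unique(y[mask])
        if len(classes) == 1:            # pure cluster -> one central point
            Xr.append(centers[j]); yr.append(classes[0])
        else:                            # mixed cluster -> keep its samples
            Xr.extend(X[mask]); yr.extend(y[mask])
    return np.array(Xr), np.array(yr)
```

The sub-classifier at each tree node then trains on the reduced set, which is where the learning-time saving comes from.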
Implementation of Kane's Method for a Spacecraft Composed of Multiple Rigid Bodies
NASA Technical Reports Server (NTRS)
Stoneking, Eric T.
2013-01-01
Equations of motion are derived for a general spacecraft composed of rigid bodies connected via rotary (spherical or gimballed) joints in a tree topology. Several supporting concepts are developed in depth. Basis dyads aid in the transition from basis-free vector equations to component-wise equations. Joint partials allow abstraction of 1-DOF, 2-DOF, 3-DOF gimballed and spherical rotational joints to a common notation. The basic building block consisting of an "inner" body and an "outer" body connected by a joint enables efficient organization of arbitrary tree structures. Kane's equation is recast in a form which facilitates systematic assembly of large systems of equations, and exposes a relationship of Kane's equation to Newton and Euler's equations which is obscured by the usual presentation. The resulting system of dynamic equations is of minimum dimension, and is suitable for numerical solution by computer. Implementation is discussed, and illustrative simulation results are presented.
Deformation of second and third quantization
NASA Astrophysics Data System (ADS)
Faizal, Mir
2015-03-01
In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
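As an illustration of what such a deformation looks like, the most commonly used GUP-consistent modification of the canonical commutator (the paper's precise deformation may differ) is

```latex
[x, p] = i\hbar\left(1 + \beta p^{2}\right),
\qquad
\Delta x\,\Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta(\Delta p)^{2}\right),
```

which implies a minimal measurable length of order $\Delta x_{\min} = \hbar\sqrt{\beta}$; the second- and third-quantized commutators are deformed in the same spirit.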
W-waves Explain Gravitropism, Phototropism, Sap Flow, Plant Structure, and other Plant Processes
NASA Astrophysics Data System (ADS)
Wagner, Raymond E.; Wagner, Orvin E.
1996-11-01
Eight years of research here confirm that plants act as waveguides for W-waves: (1) The wavelengths of these longitudinal plant waves depend on the angle at which they travel with respect to the gravitational field. A structure grows tuned to a particular angle under the influence of genetics; if a structure is displaced from this angle, plant action produces a correction. (2) Light waves produce certain W-wave modes in the W-wave medium, and a plant's response to light results. (3) Wave action produces forces in the plant (which cancel gravity in the vertical case), combined with other effects, and sap flow results. (4) Plant structures are determined by genetics and environment from a set of quantized wavelengths available to all plants. The quantized values available to plants and all life provide templates for life to develop; compare with quantum mechanics as a template for the structure of matter. Life processes suggest that templates also influence the development and stability of all structures in the universe (see www.chatlink.com/oedphd/ for references).
Stacked Denoising Autoencoders Applied to Star/Galaxy Classification
NASA Astrophysics Data System (ADS)
Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi
2017-04-01
In recent years the deep learning algorithm, with its strong adaptability, high accuracy, and structural complexity, has become increasingly popular, but it has not yet been widely used in astronomy. To address the problem that star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduced a deep learning algorithm, namely the stacked denoising autoencoder (SDA) neural network with the dropout fine-tuning technique, which can greatly improve robustness and noise resistance. We randomly selected bright and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements and preprocessed them. We then randomly drew training and testing sets, without replacement, from the bright and faint source sets. Finally, using these training sets, we trained SDA models for the bright and faint sources of SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test result of the SDA model on the DR7 testing set with the results of six kinds of decision trees. The experiments show that the SDA achieves better classification accuracy than the other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when the completeness function is used as the evaluation index, the correctness rate of the SDA improves by about 15% over the decision tree algorithms for the faint source set of SDSS-DR7.
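The core building block, a single denoising-autoencoder layer, can be sketched in plain numpy. This is an illustrative toy (tied weights, sigmoid units, masking corruption, cross-entropy loss), not the paper's network, and `train_dae` is a hypothetical name; a full SDA stacks several such layers and then fine-tunes with dropout:

```python
import numpy as np

def train_dae(X, hidden, noise=0.2, lr=0.1, epochs=300, seed=0):
    """One denoising-autoencoder layer (tied weights, sigmoid units):
    corrupt the input by random masking, learn to rebuild the clean input."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 0.1, (d, hidden))
    bh, bo = np.zeros(hidden), np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        Xn = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sig(Xn @ W + bh)                     # encode corrupted input
        Y = sig(H @ W.T + bo)                    # decode with tied weights
        dO = (Y - X) / len(X)                    # sigmoid + cross-entropy delta
        dH = (dO @ W) * H * (1 - H)              # backprop into the encoder
        W -= lr * (dO.T @ H + Xn.T @ dH)         # both uses of the tied W
        bo -= lr * dO.sum(0)
        bh -= lr * dH.sum(0)
    encode = lambda Z: sig(Z @ W + bh)
    reconstruct = lambda Z: sig(encode(Z) @ W.T + bo)
    return encode, reconstruct
```

Training on corrupted inputs while scoring against clean targets is what gives the stacked network its noise resistance on faint, low signal-to-noise sources.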
Albert E. Mayfield; James L. Hanula
2012-01-01
The redbay ambrosia beetle, Xyleborus glabratus Eichhoff, is a non-native invasive pest and vector of the fungus that causes laurel wilt disease in certain trees of the family Lauraceae. This study assessed the relative attractiveness and suitability of cut bolts of several tree species to X. glabratus. In 2009, female X. glabratus were equally attracted to traps...
A conservation law, entropy principle and quantization of fractal dimensions in hadron interactions
NASA Astrophysics Data System (ADS)
Zborovský, I.
2018-04-01
Fractal self-similarity of hadron interactions demonstrated by the z-scaling of inclusive spectra is studied. The scaling regularity reflects fractal structure of the colliding hadrons (or nuclei) and takes into account general features of fragmentation processes expressed by fractal dimensions. The self-similarity variable z is a function of the momentum fractions x1 and x2 of the colliding objects carried by the interacting hadron constituents and depends on the momentum fractions ya and yb of the scattered and recoil constituents carried by the inclusive particle and its recoil counterpart, respectively. Based on entropy principle, new properties of the z-scaling concept are found. They are conservation of fractal cumulativity in hadron interactions and quantization of fractal dimensions characterizing hadron structure and fragmentation processes at a constituent level.
Vortex filament method as a tool for computational visualization of quantum turbulence
Hänninen, Risto; Baggaley, Andrew W.
2014-01-01
The vortex filament model has become a standard and powerful tool to visualize the motion of quantized vortices in helium superfluids. In this article, we present an overview of the method and highlight its impact in aiding our understanding of quantum turbulence, particularly superfluid helium. We present an analysis of the structure and arrangement of quantized vortices. Our results are in agreement with previous studies showing that under certain conditions, vortices form coherent bundles, which allows for classical vortex stretching, giving quantum turbulence a classical nature. We also offer an explanation for the differences between the observed properties of counterflow and pure superflow turbulence in a pipe. Finally, we suggest a mechanism for the generation of coherent structures in the presence of normal fluid shear. PMID:24704873
Visualization of JPEG Metadata
NASA Astrophysics Data System (ADS)
Malik Mohamad, Kamaruddin; Deris, Mustafa Mat
There is much more information embedded in a JPEG image than just its graphics. Visualization of its metadata would benefit digital forensic investigators, allowing them to view embedded data, including corrupted images where no graphics can be displayed, to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. None, however, visualize metadata by consolidating the marker summary, header structure, Huffman table and quantization table in a single program. In this paper, metadata visualization is carried out by developing a program able to summarize all existing markers, the header structure, the Huffman table and the quantization table of a JPEG file. The results show that visualization of metadata makes it easier to view the hidden information within a JPEG.
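The marker summary part of such a program can be sketched with a minimal scanner over the JPEG byte stream. This is a sketch in the spirit of the tool described, not the authors' program; it lists each marker (including DQT, which carries the quantization tables) with its offset and payload length:

```python
import struct

MARKER_NAMES = {0xD8: "SOI", 0xE0: "APP0", 0xDB: "DQT", 0xC0: "SOF0",
                0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}

def scan_markers(data):
    """Walk a JPEG byte stream and summarize its markers.
    Returns (name, offset, payload_length) tuples; stops at SOS, since
    entropy-coded data follows and needs different handling."""
    out, i = [], 0
    while i < len(data) - 1:
        if data[i] != 0xFF:
            raise ValueError("expected marker at offset %d" % i)
        code = data[i + 1]
        name = MARKER_NAMES.get(code, "0xFF%02X" % code)
        if code in (0xD8, 0xD9):          # standalone markers, no payload
            out.append((name, i, 0))
            i += 2
            if code == 0xD9:
                break
        else:                             # segment: 2-byte big-endian length
            (seglen,) = struct.unpack(">H", data[i + 2:i + 4])
            out.append((name, i, seglen - 2))
            if code == 0xDA:              # stop before entropy-coded data
                break
            i += 2 + seglen
    return out
```

Because the scanner never decodes image data, it still produces a useful summary for corrupted files whose graphics cannot be displayed, which is exactly the forensic use case the abstract describes.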
Quantization of wave equations and hermitian structures in partial differential varieties
Paneitz, S. M.; Segal, I. E.
1980-01-01
Sufficiently close to 0, the solution variety of a nonlinear relativistic wave equation—e.g., of the form □ϕ + m2ϕ + gϕp = 0—admits a canonical Lorentz-invariant hermitian structure, uniquely determined by the consideration that the action of the differential scattering transformation in each tangent space be unitary. Similar results apply to linear time-dependent equations or to equations in a curved asymptotically flat space-time. A close relation of the Riemannian structure to the determination of vacuum expectation values is developed and illustrated by an explicit determination of a perturbative 2-point function for the case of interaction arising from curvature. The theory underlying these developments is in part a generalization of that of M. G. Krein and collaborators concerning stability of differential equations in Hilbert space and in part a precise relation between the unitarization of given symplectic linear actions and their full probabilistic quantization. The unique causal structure in the infinite symplectic group is instrumental in these developments. PMID:16592923
NASA Astrophysics Data System (ADS)
Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.
2017-12-01
The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q≤-1, and 1/2α ln | q|
Detection of laryngeal function using speech and electroglottographic data.
Childers, D G; Bae, K S
1992-01-01
The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ) and 2) analysis of the EGG signal using time interval and amplitude difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity waveform by inverse filtering techniques. The EGG procedure evaluated data that is "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for an excellent assessment of laryngeal function. A threshold-based procedure gave 75.9 and 69.0% probability of pathological detection using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.
Condition monitoring of 3G cellular networks through competitive neural models.
Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo
2005-09-01
We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
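The global-NP mechanism described above can be sketched with the simplest of the four algorithms, winner-take-all learning, plus a percentile threshold on quantization errors. This is a minimal illustration under assumed names (`train_codebook`, `normality_profile`, `is_abnormal`), not the paper's system, and it omits the component-wise local NPs:

```python
import numpy as np

def train_codebook(X, m, lr=0.05, epochs=50, seed=0):
    """Winner-take-all (WTA) competitive learning: move only the closest
    prototype toward each presented state vector."""
    rng = np.random.default_rng(seed)
    protos = X[rng.choice(len(X), m, replace=False)].astype(float)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = ((protos - X[i]) ** 2).sum(1).argmin()
            protos[w] += lr * (X[i] - protos[w])
    return protos

def normality_profile(X, protos, q=95):
    """Global NP: distribution of quantization errors on normal data;
    the anomaly threshold is its q-th percentile."""
    err = np.sqrt(((X[:, None] - protos[None]) ** 2).sum(-1).min(1))
    return np.percentile(err, q)

def is_abnormal(x, protos, threshold):
    """Flag a state vector whose quantization error exceeds the NP threshold."""
    err = np.sqrt(((protos - x) ** 2).sum(1).min())
    return err > threshold
```

By construction, roughly 5% of the normal training states fall above a 95th-percentile threshold, while genuinely abnormal network states sit far outside the prototype set.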
Meson effective mass in the isospin medium in hard-wall AdS/QCD model
NASA Astrophysics Data System (ADS)
Mamedov, Shahin
2016-02-01
We study a mass splitting of the light vector, axial-vector, and pseudoscalar mesons in the isospin medium in the framework of the hard-wall model. We write an effective mass definition for the interacting gauge fields and scalar field introduced in gauge field theory in the bulk of AdS space-time. Relying on holographic duality we obtain a formula for the effective mass of a boundary meson in terms of a derivative operator over the extra bulk coordinate. The effective mass found in this way coincides with the one obtained from finding the poles of the two-point correlation function. In order to avoid introducing distinguished infrared boundaries in the quantization formula for the different mesons from the same isotriplet, we introduce extra action terms at this boundary, which reduce the distinguished values of this boundary to the same value. Profile function solutions and effective mass expressions were found for the in-medium ρ, a_1, and π mesons.
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In the hardware video encoder, the quantization is responsible for quality losses. On the other hand, it allows the reduction of bit rates to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in terms of compression efficiency are achievable for Intra coding.
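The Lagrangian selection can be sketched as follows. This toy model uses the familiar H.264 step-size rule Qstep ≈ 2^((QP-4)/6) but a crude logarithmic rate proxy in place of real entropy coding, and `best_qp` is an illustrative name, not the encoder's module:

```python
import numpy as np

def best_qp(residual, qps, lam):
    """Pick the quantization parameter minimizing J = D + lambda * R.
    Toy model: uniform quantizer with step 2**((qp - 4) / 6); distortion
    is squared error, rate is a log2 proxy of nonzero level magnitudes."""
    best = None
    for qp in qps:
        step = 2 ** ((qp - 4) / 6)
        levels = np.round(residual / step)   # forward quantization
        rec = levels * step                  # reconstruction
        dist = ((residual - rec) ** 2).sum()
        rate = np.sum(np.log2(1 + np.abs(levels)))
        J = dist + lam * rate
        if best is None or J < best[1]:
            best = (qp, J)
    return best[0]
```

A small multiplier favors low distortion (small QP); a large multiplier favors low rate (large QP). The hardware requirement in the abstract, processing the same residuals many times, corresponds to the loop over candidate QPs.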
Swings and roundabouts: optical Poincaré spheres for polarization and Gaussian beams
NASA Astrophysics Data System (ADS)
Dennis, M. R.; Alonso, M. A.
2017-02-01
The connection between Poincaré spheres for polarization and Gaussian beams is explored, focusing on the interpretation of elliptic polarization in terms of the isotropic two-dimensional harmonic oscillator in Hamiltonian mechanics, its canonical quantization and semiclassical interpretation. This leads to the interpretation of structured Gaussian modes, the Hermite-Gaussian, Laguerre-Gaussian and generalized Hermite-Laguerre-Gaussian modes as eigenfunctions of operators corresponding to the classical constants of motion of the two-dimensional oscillator, which acquire an extra significance as families of classical ellipses upon semiclassical quantization. This article is part of the themed issue 'Optical orbital angular momentum'.
Resonant tunneling of spin-wave packets via quantized states in potential wells.
Hansen, Ulf-Hendrik; Gatzen, Marius; Demidov, Vladislav E; Demokritov, Sergej O
2007-09-21
We have studied the tunneling of spin-wave pulses through a system of two closely situated potential barriers. The barriers represent two areas of inhomogeneity of the static magnetic field, where the existence of spin waves is forbidden. We show that for certain values of the spin-wave frequency corresponding to the quantized spin-wave states existing in the well formed between the barriers, the tunneling has a resonant character. As a result, transmission of spin-wave packets through the double-barrier structure is much more efficient than sequential tunneling through two single barriers.
NASA Astrophysics Data System (ADS)
Ogle, K.; Fell, M.; Barber, J. J.
2016-12-01
Empirical, field studies of plant functional traits have revealed important trade-offs among pairs or triplets of traits, such as the leaf (LES) and wood (WES) economics spectra. Trade-offs include correlations between leaf longevity (LL) vs specific leaf area (SLA), LL vs mass-specific leaf respiration rate (RmL), SLA vs RmL, and resistance to breakage vs wood density. Ordination analyses (e.g., PCA) show groupings of traits that tend to align with different life-history strategies or taxonomic groups. It is unclear, however, what underlies such trade-offs and emergent spectra. Do they arise from inherent physiological constraints on growth, or are they more reflective of environmental filtering? The relative importance of these mechanisms has implications for predicting biogeochemical cycling, which is influenced by trait distributions of the plant community. We address this question using an individual-based model of tree growth (ACGCA) to quantify the theoretical trait space of trees that emerges from physiological constraints. ACGCA's inputs include 32 physiological, anatomical, and allometric traits, many of which are related to the LES and WES. We fit ACGCA to 1.6 million USFS FIA observations of tree diameters and heights to obtain vectors of trait values that produce realistic growth, and we explored the structure of this trait space. No notable correlations emerged among the 496 trait pairs, but stepwise regressions revealed complicated multi-variate structure: e.g., relationships between pairs of traits (e.g., RmL and SLA) are governed by other traits (e.g., LL, radiation-use efficiency [RUE]). We also simulated growth under various canopy gap scenarios that impose varying degrees of environmental filtering to explore the multi-dimensional trait space (hypervolume) of trees that died vs survived. The centroid and volume of the hypervolumes differed among dead and live trees, especially under gap conditions leading to low mortality. 
Traits most predictive of tree-level mortality were maximum tree height, RUE, xylem conducting area, and branch turn-over rate. We are using these hypervolumes as priors to an emulator that approximates the ACGCA, which we are fitting to the FIA data to quantify species-specific trait spectra and to explore factors giving rise to species differences.
Spin asymmetries for vector boson production in polarized p + p collisions
Huang, Jin; Kang, Zhong-Bo; Vitev, Ivan; ...
2016-01-28
We study the cross section for vector boson (W±/Z0/γ*) production in polarized nucleon-nucleon collisions at low transverse momentum of the observed vector boson. For the case where one measures the transverse momentum and azimuthal angle of the vector boson, we present the cross sections and the associated spin asymmetries in terms of transverse momentum dependent parton distribution functions (TMDs) at tree level within the TMD factorization formalism. To assess the feasibility of experimental measurements, we estimate the spin asymmetries for W±/Z0 boson production in polarized proton-proton collisions at the Relativistic Heavy Ion Collider by using current knowledge of the relevant TMDs. We find that some of these asymmetries can be sizable if the suppression effect from TMD evolution is not too strong. The W program at RHIC can thus test and constrain spin theory by providing unique information on the universality properties of TMDs, TMD evolution, and the nucleon structure. For example, the single transverse spin asymmetries could be used to probe the well-known Sivers function f_{1T}^{⊥q}, as well as the transversal helicity distribution g_{1T}^{q}, via the parity-violating nature of W production.
Full Spectrum Conversion Using Traveling Pulse Wave Quantization
2017-03-01
Kappes, Michael S.; Waltari, Mikko E. (IQ-Analog Corporation, San Diego, California)
...temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete...pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a
Genetic diversity and mating system of Copaifera langsdorffii (Leguminosae/Caesalpinioideae).
Gonela, A; Sebbenn, A M; Soriani, H H; Mestriner, M A; Martinez, C A; Alzate-Marin, A L
2013-02-27
Copaifera langsdorffii, locally known as copaíba, is a valuable tropical tree with medicinal properties of its oil. We studied the genetic variation, genetic structure, and the mating system of trees in stands of C. langsdorffii (Leguminosae/Caesalpinioideae) located in an extensive area between the Pardo and Mogi-Guaçu basins in São Paulo State, Brazil, and their offspring, conserved in an ex situ germplasm bank at the University of São Paulo in Ribeirão Preto, SP, Brazil, using six microsatellite loci. Leaves were collected from 80 seed trees and from 259 offspring and their DNA extracted. A total of 140 and 175 alleles were found in the seed trees and their offspring, respectively. Low genetic differentiation was observed between stands, indicating intense gene flow due to efficient pollen dispersion vectors. An estimation of the outcrossing rate showed that these stands are outcrossed (tm = 0.98, P > 0.05). The mean variance of the effective population size of each family in two of the stands was 3.69 and 3.43, while the total effective population size retained in the germplasm bank was between 81 and 96. The paternity correlation was low, ranging from 0.052 to 0.148, demonstrating that the families implanted in this germplasm bank are composed predominantly of half-sibs.
Testing Quantum Chromodynamics with Antiprotons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodsky, S.
2004-10-21
The antiproton storage ring HESR to be constructed at GSI will open up a new range of perturbative and nonperturbative tests of QCD in exclusive and inclusive reactions. I discuss 21 tests of QCD using antiproton beams which can illuminate novel features of QCD. The proposed experiments include the formation of exotic hadrons, measurements of timelike generalized parton distributions, the production of charm at threshold, transversity measurements in Drell-Yan reactions, and searches for single-spin asymmetries. The interactions of antiprotons in nuclear targets will allow tests of exotic nuclear phenomena such as color transparency, hidden color, reduced nuclear amplitudes, and the non-universality of nuclear antishadowing. The central tool used in these lectures are light-front Fock state wavefunctions which encode the bound-state properties of hadrons in terms of their quark and gluon degrees of freedom at the amplitude level. The freedom to choose the light-like quantization four-vector provides an explicitly covariant formulation of light-front quantization and can be used to determine the analytic structure of light-front wave functions. QCD becomes scale free and conformally symmetric in the analytic limit of zero quark mass and zero β function. This "conformal correspondence principle" determines the form of the expansion polynomials for distribution amplitudes and the behavior of non-perturbative wavefunctions which control hard exclusive processes at leading twist. The conformal template also can be used to derive commensurate scale relations which connect observables in QCD without scale or scheme ambiguity.
The AdS/CFT correspondence of large N_C supergravity theory in higher-dimensional anti-de Sitter space with supersymmetric QCD in 4-dimensional space-time has important implications for hadron phenomenology in the conformal limit, including the nonperturbative derivation of counting rules for exclusive processes and the behavior of structure functions at large x_bj. String/gauge duality also predicts the QCD power-law fall-off of light-front Fock-state hadronic wavefunctions with arbitrary orbital angular momentum at high momentum transfer. I also review recent work which shows that the diffractive component of deep inelastic scattering, single spin asymmetries, as well as nuclear shadowing and antishadowing, cannot be computed from the LFWFs of hadrons in isolation.
Vectors, viscin, and Viscaceae: mistletoes as parasites, mutualists, and resources.
Juliann E. Aukema
2003-01-01
Mistletoes are aerial, hemiparasitic plants found on trees throughout the world. They have unique ecological arrangements with the host plants they parasitize and the birds that disperse their seeds. Similar in many respects to vector-borne macroparasites, mistletoes are often detrimental to their hosts, and can even kill them. Coevolution has led to resistance...
USDA-ARS?s Scientific Manuscript database
The redbay ambrosia beetle (RAB), Xyleborus glabratus (Coleoptera: Curculionidae: Scolytinae) vectors the fungal pathogen, Raffaelea lauricola, which causes laurel wilt (LW), a lethal disease of trees in the family Lauraceae, including the most commercially important crop in this family, avocado, Pe...
USDA-ARS?s Scientific Manuscript database
Laurel wilt is a deadly vascular disease of trees in the Lauraceae that kills healthy redbay (Persea borbonia), sassafras (Sassafras albidum), and other related hosts. The fungal pathogen (Raffaelea lauricola) and its vector, the redbay ambrosia beetle (Xyleborus glabratus), are native to Asia and ha...
Paul F. Rugman-Jones; Steven J. Seybold; Andrew D. Graves; Richard Stouthamer
2015-01-01
Thousand cankers disease (TCD) of walnut trees (Juglans spp.) results from aggressive feeding in the phloem by the walnut twig beetle (WTB), Pityophthorus juglandis, accompanied by inoculation of its galleries with a pathogenic fungus, Geosmithia morbida. In 1960, WTB was only known from four U.S. counties...
James L. Hanula; Albert E. Mayfield; Stephen W. Fraedrich; Robert J. Rabaglia
2008-01-01
The redbay ambrosia beetle, Xyleborus glabratus Eichhoff (Coleoptera: Curculionidae: Scolytinae), and its fungal symbiont, Raffaelea sp., are new introductions to the southeastern United States responsible for the wilt of mature redbay, Persea borbonia (L.) Spreng., trees. In 2006 and 2007, we investigated the...
Gene and enhancer trap tagging of vascular-expressed genes in poplar trees
Andrew Groover; Joseph R. Fontana; Gayle Dupper; Caiping Ma; Robert Martienssen; Steven Strauss; Richard Meilan
2004-01-01
We report a gene discovery system for poplar trees based on gene and enhancer traps. Gene and enhancer trap vectors carrying the β-glucuronidase (GUS) reporter gene were inserted into the poplar genome via Agrobacterium tumefaciens transformation, where they reveal the expression pattern of genes at or near the insertion sites. Because GUS...
No rest for the laurels: symbiotic invaders cause unprecedented damage to southern USA forests
M. A. Hughes; J. J. Riggins; F. H. Koch; A. I. Cognato; C. Anderson; J. P. Formby; T. J. Dreaden; R. C. Ploetz; J. A. Smith
2017-01-01
Laurel wilt is an extraordinarily destructive exotic tree disease in the southeastern United States that involves new-encounter hosts in the Lauraceae, an introduced vector (Xyleborus glabratus) and pathogen symbiont (Raffaelea lauricola). USDA Forest Service Forest Inventory and Analysis data were used to estimate that over 300 million trees of redbay (Persea borbonia...
Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U
2015-01-01
Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
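The reference-vector step can be sketched as a nearest-centroid scheme in color space. This is a simplified stand-in for the paper's pipeline (which derives the vectors from SVM classification maps rather than hand labels), and the structure names used are illustrative:

```python
import numpy as np

def reference_vectors(pixels, labels):
    """Collapse a labeled pixel set into one reference color vector per
    structure: the centroid of that structure's pixels in color space."""
    classes = np.unique(labels)
    return classes, np.array([pixels[labels == c].mean(0) for c in classes])

def classify_pixels(pixels, classes, refs):
    """Assign each pixel to the structure with the nearest reference vector."""
    d = ((pixels[:, None] - refs[None]) ** 2).sum(-1)
    return classes[d.argmin(1)]
```

Comparing the per-structure centroids across slides gives the stain-variability measurement the abstract describes, and aligning the reference vectors of two slides amounts to a color-space transform between them.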
NASA Astrophysics Data System (ADS)
Huang, Yingyi; Setiawan, F.; Sau, Jay D.
2018-03-01
A weak superconducting proximity effect in the vicinity of the topological transition of a quantum anomalous Hall system has been proposed as a venue to realize a topological superconductor (TSC) with chiral Majorana edge modes (CMEMs). A recent experiment [Science 357, 294 (2017), 10.1126/science.aag2792] claimed to have observed such CMEMs in the form of a half-integer quantized conductance plateau in the two-terminal transport measurement of a quantum anomalous Hall-superconductor junction. Although the presence of a superconducting proximity effect generically splits the quantum Hall transition into two phase transitions with a gapped TSC in between, in this Rapid Communication we propose that a nearly flat conductance plateau, similar to that expected from CMEMs, can also arise from the percolation of quantum Hall edges well before the onset of the TSC or at temperatures much above the TSC gap. Our Rapid Communication, therefore, suggests that, in order to confirm the TSC, it is necessary to supplement the observation of the half-quantized conductance plateau with a hard superconducting gap (which is unlikely for a disordered system) from the conductance measurements or the heat transport measurement of the transport gap. Alternatively, the half-quantized thermal conductance would also serve as a smoking-gun signature of the TSC.
Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction
NASA Astrophysics Data System (ADS)
Rosaler, Joshua
2018-03-01
This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum-classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit ℏ → 0, and, on the other, a certain generalization of Ehrenfest's Theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act—specifically, via restriction to narrow wave packet states. Here, we describe a certain geometrical reformulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization, but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.
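For reference, the standard Ehrenfest relations that the generalization discussed above builds on can be stated, for a Hamiltonian H = p²/2m + V(x), as:

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\,\langle V'(\hat{x}) \rangle
\approx -\,V'(\langle \hat{x} \rangle),
```

where the final approximation, replacing the expectation of the force by the force at the expected position, is what holds for narrow wave packet states and yields approximately classical evolution of the expectation values.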
Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan
2012-02-01
Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have widely been used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer-assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without Dysplasia (OSFWD) and OSF with Dysplasia (OSFD), which would help the oral onco-pathologists to screen the subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in an M×N×3 matrix of integers. Depending on either normal or OSF condition, the image has various granular structures which are self-similar patterns at different scales, termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN) to select the best classifier. Our results show that the combination of texture and HOS features coupled with the Fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8%, respectively. Finally, we have proposed a novel integrated index called Oral Malignancy Index (OMI) using the HOS, LBP, LTE features, to diagnose benign or malignant tissues using just one number.
We hope that this OMI can help clinicians make a faster and more objective detection of benign or malignant oral lesions. Copyright © 2011 Elsevier Ltd. All rights reserved.
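Of the texture descriptors named above, the Local Binary Pattern is the simplest to illustrate. The sketch below is the generic basic LBP, not the authors' implementation, and the tiny test images are invented:

```python
# Basic 8-neighbour Local Binary Pattern: each neighbour >= centre sets one bit.

def lbp_code(img, y, x):
    """8-bit LBP code for interior pixel (y, x) of a 2D grayscale list."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels: a simple
    texture feature vector for a classifier."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

bright_spot = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # isolated peak -> code 0
flat        = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]  # uniform patch  -> code 255
```

In practice the histogram (possibly with rotation-invariant "uniform" codes) would be concatenated with the HOS and LTE features before classification.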
Compound analysis via graph kernels incorporating chirality.
Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya
2010-12-01
High accuracy is paramount when predicting biochemical characteristics using Quantitative Structure-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds, which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
Accurate airway segmentation based on intensity structure analysis and graph-cut
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitsaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2016-03-01
This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based mainly on region growing and machine learning techniques. However, these methods fail to detect peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of complex bronchial airway regions. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in CT volumes; then a multiscale cavity-enhancement filter is employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false-positive (FP) regions generated in the first step. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen cases of 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma areas.
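The first step (Hessian-based line enhancement) can be illustrated with a single-scale, central-difference 2D sketch. The paper's filters are multiscale and 3D, and the `lineness` measure here is a much-simplified Frangi-style stand-in, so treat this only as intuition:

```python
import math

def hessian_eigvals(img, y, x):
    """Eigenvalues (lam1 <= lam2) of the 2x2 finite-difference Hessian
    at interior pixel (y, x) of a 2D grayscale list."""
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    s = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 - s, tr / 2.0 + s

def lineness(lam1, lam2):
    """Bright-line response: one strongly negative eigenvalue (across the
    line) and one near zero (along it)."""
    return abs(lam1) - abs(lam2) if lam1 < 0 else 0.0

# bright horizontal line through row 2 of a dark 5x5 image
img = [[0] * 5 for _ in range(5)]
img[2] = [10] * 5
on_line = lineness(*hessian_eigvals(img, 2, 2))
off_line = lineness(*hessian_eigvals(img, 1, 2))
```

The response is large on the line and zero beside it, which is the property the enhancement step exploits before SVM-based false-positive removal and graph-cut.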
NASA Astrophysics Data System (ADS)
Munera, Hector A.
Following the discovery of quantum phenomena at laboratory scale (Couder & Fort 2006), de Broglie pilot wave theory (De Broglie 1962) has been revived under a hydrodynamic guise (Bush 2015). Theoretically, it boils down to solving the transport equations for the energy and linear momentum densities of a postulated fundamental fluid in terms of classical wave equations, which inherently are Lorentz-invariant and scale-invariant. For astronomical and gravitational problems, instead of the conventional harmonic solutions, the novel solutions of the homogeneous wave equation in spherical coordinates are more suitable (Munera et al. 1995, Munera & Guzman 1997, and Munera 2000). Two groups of solutions are particularly relevant: (a) the inherently-quantized helicoidal solutions that may be applicable to describe spiral galaxies, and (b) the non-harmonic solutions with time (t) and distance (r) entangled in the single variable q = Ct/r (C is the two-way local electromagnetic speed). When these functions are plotted against 1/q, they manifestly depict quantum effects in the near field and Newtonian-like gravity in the far field. The near field predicts quantized effects similar to ring structures and to Titius-Bode structures, both in our own solar system and in exoplanets, the correlation between predicted and observed structures being typically larger than 99 per cent. In the far field, some non-harmonic functions have a rate of decrement with distance slower than inverse-square, thus explaining the flat rotation rate of galaxies. Additional implications for Trojan orbits and quantized effects in photon deflection were also noted.
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT:CG=1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds
NASA Astrophysics Data System (ADS)
Schlichenmaier, Martin
2018-04-01
For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product), are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star products of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization star products are treated. It turns out that all three are equivalent, though not identical.
Subband/transform functions for image processing
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
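The block-transform-to-subband idea can be sketched with the smallest case, a 2-point Walsh-Hadamard stage, here in Python rather than the MATLAB functions the report presents. Coefficients reorder into a low band (pairwise averages, a low-resolution signal) and a high band (differences, edge detail), and the stage cascades and inverts exactly:

```python
def wht_split(x):
    """One cascade stage: low = pairwise averages, high = differences."""
    low  = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    high = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return low, high

def wht_merge(low, high):
    """Inverse stage: perfect reconstruction of the original signal."""
    out = []
    for l, h in zip(low, high):
        out.extend([l + h, l - h])
    return out

x = [4.0, 6.0, 10.0, 2.0, 8.0, 8.0, 1.0, 3.0]
low, high = wht_split(x)       # low-resolution band + edge-detail band
low2, high2 = wht_split(low)   # cascading the low band gives octave bands
```

Applying the split again only to `low` (as above) mimics the octave structure described in the abstract; applying it to both bands would give the uniform structure.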
Carbon nanotube-clamped metal atomic chain
Tang, Dai-Ming; Yin, Li-Chang; Li, Feng; Liu, Chang; Yu, Wan-Jing; Hou, Peng-Xiang; Wu, Bo; Lee, Young-Hee; Ma, Xiu-Liang; Cheng, Hui-Ming
2010-01-01
A metal atomic chain (MAC) is an ultimate one-dimensional structure with unique physical properties, such as quantized conductance, colossal magnetic anisotropy, and quantized magnetoresistance. Therefore, MACs show great potential as possible components of nanoscale electronic and spintronic devices. However, MACs are usually suspended between two macroscale metallic electrodes; hence obvious technical barriers exist in the interconnection and integration of MACs. Here we report a carbon nanotube (CNT)-clamped MAC, where CNTs play the roles of both nanoconnector and electrodes. This nanostructure is prepared by in situ machining of a metal-filled CNT, including peeling off carbon shells by spatially and elementally selective electron beam irradiation and further elongating the exposed metal nanorod. The microstructure and formation process of this CNT-clamped MAC are explored by both transmission electron microscopy observations and theoretical simulations. First-principles calculations indicate that strong covalent bonds are formed between the CNT and MAC. The electrical transport property of the CNT-clamped MAC was experimentally measured, and quantized conductance was observed. PMID:20427743
Quantized Average Consensus on Gossip Digraphs with Reduced Computation
NASA Astrophysics Data System (ADS)
Cai, Kai; Ishii, Hideaki
The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables and can transmit surplus using only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than the conditions known in the literature for either real-valued or quantized states; in particular, it does not require the network to have the special structure called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Jacak, Janusz E.
2018-01-01
We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy essentially depends on the magnetic field strength, resulting in multiloop trajectories under specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)h/c with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent consistence with experiments. The path integral has been next developed into a sum over configurations displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some formerly unclear details of the longitudinal resistivity observed in experiments.
Atomic-scale epitaxial aluminum film on GaAs substrate
NASA Astrophysics Data System (ADS)
Fan, Yen-Ting; Lo, Ming-Cheng; Wu, Chu-Chun; Chen, Peng-Yu; Wu, Jenq-Shinn; Liang, Chi-Te; Lin, Sheng-Di
2017-07-01
Atomic-scale metal films exhibit intriguing size-dependent film stability, electrical conductivity, superconductivity, and chemical reactivity. With advancing methods for preparing ultra-thin and atomically smooth metal films, clear evidence of the quantum size effect has been experimentally collected in the past two decades. However, with the problems of small-area fabrication, film oxidation in air, and highly-sensitive interfaces between the metal, substrate, and capping layer, the use of quantized metallic films for further ex-situ investigations and applications has been seriously limited. To this end, we develop a large-area fabrication method for continuous atomic-scale aluminum films. The self-limited oxidation of aluminum protects and quantizes the metallic film and enables ex-situ characterization and device processing in air. Structure analysis and electrical measurements on the prepared films imply the quantum size effect in the atomic-scale aluminum film. Our work opens the way for further physics studies and device applications using the quantized electronic states in metals.
Submonolayer Quantum Dot Infrared Photodetector
NASA Technical Reports Server (NTRS)
Ting, David Z.; Bandara, Sumith V.; Gunapala, Sarath D.; Chang, Yia-Chang
2010-01-01
A method has been developed for inserting submonolayer (SML) quantum dots (QDs) or SML QD stacks, instead of conventional Stranski-Krastanov (S-K) QDs, into the active region of intersubband photodetectors. A typical configuration would be InAs SML QDs embedded in thin layers of GaAs, surrounded by AlGaAs barriers. Here, the GaAs and the AlGaAs have nearly the same lattice constant, while InAs has a larger lattice constant. In a QD infrared photodetector, the important quantization directions are in the plane perpendicular to the normal incidence radiation. In-plane quantization is what enables the absorption of normal incidence radiation. The height of the S-K QD controls the positions of the quantized energy levels, but is not critically important to the desired normal incidence absorption properties. The SML QD or SML QD stack configurations give more control of the grown structure, retain the normal incidence absorption properties, and decrease the strain build-up, allowing thicker active layers for higher quantum efficiency.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for transform blocks of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
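A 4-point Walsh-Hadamard butterfly shows how such a transform needs only additions (plus a shift in the inverse), and counting non-zero quantized coefficients gives the flavor of the texture-rate proxy. This is an illustration under those assumptions, not the paper's 4×4/8×8 matrix designs:

```python
def wht4(x):
    """4-point Walsh-Hadamard butterfly: additions/subtractions only."""
    a, b = x[0] + x[2], x[1] + x[3]
    c, d = x[0] - x[2], x[1] - x[3]
    return [a + b, a - b, c + d, c - d]

def iwht4(y):
    """Inverse: the WHT is self-inverse up to a factor of 4, removed
    here with an arithmetic shift instead of a division."""
    return [v >> 2 for v in wht4(y)]

def estimated_texture_cost(coeffs, qstep):
    """Proxy texture rate: count of non-zero quantized coefficients."""
    return sum(1 for c in coeffs if abs(c) // qstep > 0)
```

Because the forward pass uses no multiplications and the scaling is a shift, such butterflies map naturally onto low-power hardware, which is the motivation stated in the abstract.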
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
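The threshold-scaling and nonlinear pooling steps can be sketched as follows. The Minkowski exponent `beta = 4` is an assumption for illustration, not a value taken from the abstract:

```python
def perceptual_error(quant_errors, thresholds, beta=4.0):
    """Scale each DCT quantization error by its (adapted) visual threshold,
    then pool nonlinearly with a Minkowski sum. An error exactly at
    threshold contributes a scaled value of 1 ("just visible")."""
    scaled = [abs(e) / t for e, t in zip(quant_errors, thresholds)]
    return sum(s ** beta for s in scaled) ** (1.0 / beta)
```

An optimizer would then adjust the quantization matrix entries (which determine `quant_errors`) to minimize bit rate at a fixed pooled error, or vice versa.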
Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.
Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R
2015-01-05
Recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure activity relationship and quantitative structure activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface or upload his or her own SD files which contain structures and activity information to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. © 2014 Wiley Periodicals, Inc.
Technique for Solving Electrically Small to Large Structures for Broadband Applications
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram; Chowdhury, Indranil
2011-01-01
Fast iterative algorithms are often used for solving Method of Moments (MoM) systems, having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N²) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance substantially degrades for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an oct-tree structure that can include regions of both the FMM type and the QR type. The system is enclosed in a cube at the 0th level, and this cube is split into eight child cubes, forming the 1st level; the splitting process is repeated recursively for cubes at successive levels until a desired number of levels is created. For each cube that is thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix vector product for any electrically large elements as well as a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large and small elements are combined, and a net delta for a combination of the matrix vector products is determined.
The iteration continues until a net delta is obtained that is within the predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter. The solution for the desired parameter is then presented to a user in a tangible form; for example, on a display.
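The recursive cube-splitting setup can be sketched as follows (cube centers only; the neighbor and interaction lists, and the FMM/QR classification of each region, are omitted from this minimal skeleton):

```python
def split_cube(center, half, levels):
    """Oct-tree skeleton: starting from one cube (given by its center and
    half-width), split each cube into 8 children per level. Returns a dict
    mapping level -> list of cube centers at that level."""
    tree = {0: [center]}
    for lvl in range(1, levels + 1):
        h = half / (2 ** lvl)  # child half-width = offset from parent center
        tree[lvl] = [
            (cx + sx * h, cy + sy * h, cz + sz * h)
            for (cx, cy, cz) in tree[lvl - 1]
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
        ]
    return tree

tree = split_cube((0.0, 0.0, 0.0), 1.0, 2)  # 0th-level cube of half-width 1
```

In the actual method each leaf cube would additionally record whether its contents are electrically large (handled by FMM) or electrically small (handled by QR compression).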
James L. Hanula; Brian Sullivan
2008-01-01
Redbay ambrosia beetle, Xyleborus glabratus Eichhoff, is a native of Southeast Asia recently established in coastal forests of Georgia, South Carolina, and Florida. It vectors a wilt fungus, Raffaelea sp., lethal to redbay trees, Persea borbonia (L.) Spreng., and certain other Lauraceae. No practical monitoring system exists for this beetle so we...
A Key to Phoretic Mites Commonly Found on Long-Horned Beetles Emerging from Southern Pines
D.N. Kinn; M.J. Linit
1989-01-01
Long-horned beetles that attack conifers are usually considered secondary pests because they generally develop in dead and dying trees and are not the cause of tree mortality (Drooz 1985). Recently that status has changed with the realization that a number of species, especially those belonging to the genus Monochamus, are vectors of the pinewood...
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Parmar, Kulwinder Singh
2016-03-01
This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi on the Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The MARS model decreased the average root mean square error (RMSE) relative to the LSSVM and M5Tree models by 1.47% and 19.1%, respectively. Adding the TC input to the models did not increase their accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution levels by using the AMM, TKN and WT parameters as inputs.
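The model comparison above rests on RMSE and on the percent decrease of one model's RMSE relative to another's; both are easy to state generically (any sample numbers would be illustrative, not the study's data):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def pct_improvement(rmse_base, rmse_new):
    """Percent decrease in RMSE of a baseline model achieved by a new
    model, the comparison metric used in the abstract."""
    return 100.0 * (rmse_base - rmse_new) / rmse_base
```

With these helpers, a statement like "MARS decreased the M5Tree RMSE by 19.1%" corresponds to `pct_improvement(rmse_m5tree, rmse_mars)` evaluating to 19.1.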
Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson
2010-08-01
Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this review was to assess machine learning alternatives to logistic regression, which may accomplish the same goals but with fewer assumptions or greater accuracy. We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (classification and regression trees [CART]), and meta-classifiers (in particular, boosting). Although the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and, to a lesser extent, decision trees (particularly CART) appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Thin-layer chromatographic identification of Chinese propolis using chemometric fingerprinting.
Tang, Tie-xin; Guo, Wei-yan; Xu, Ye; Zhang, Si-ming; Xu, Xin-jun; Wang, Dong-mei; Zhao, Zhi-min; Zhu, Long-ping; Yang, De-po
2014-01-01
Poplar tree gum has a chemical composition and appearance similar to Chinese propolis (bee glue) and has been widely used as a counterfeit propolis, because Chinese propolis is typically poplar-type propolis whose chemical composition is determined mainly by the resin of poplar trees. The discrimination of Chinese propolis from poplar tree gum is a challenging task. The aim was to develop a rapid thin-layer chromatographic (TLC) identification method using chemometric fingerprinting to discriminate Chinese propolis from poplar tree gum. A new TLC method using a combination of ammonia and hydrogen peroxide vapours as the visualisation reagent was developed to characterise the chemical profile of Chinese propolis. Three analysts independently performed TLC on eight Chinese propolis samples and three poplar tree gum samples of varying origins. Five chemometric methods, similarity analysis, hierarchical clustering, k-means clustering, neural network and support vector machine, were compared for classifying the samples based on the densitograms obtained from the TLC chromatograms via image analysis. Hierarchical clustering, neural network and support vector machine analyses achieved a correct classification rate of 100%. A strategy for TLC identification of Chinese propolis using chemometric fingerprinting was proposed, and it provided accurate sample classification. The study has shown that TLC identification using chemometric fingerprinting is a rapid, low-cost method for discriminating Chinese propolis from poplar tree gum and may be used for the quality control of Chinese propolis. Copyright © 2014 John Wiley & Sons, Ltd.
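The similarity-analysis step can be sketched numerically: treat each densitogram as an intensity profile and score it by Pearson correlation against a reference fingerprint. The synthetic profiles, peak positions, and the 0.9 threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical densitograms: intensity profiles sampled along each TLC lane.
# Propolis-like samples share one peak pattern, gum-like samples another.
x = np.linspace(0, 1, 200)

def profile(centers, noise):
    p = sum(np.exp(-((x - c) ** 2) / 0.002) for c in centers)
    return p + noise * rng.normal(size=x.size)

propolis = [profile([0.2, 0.5, 0.8], 0.05) for _ in range(8)]
gum      = [profile([0.3, 0.6], 0.05) for _ in range(3)]
reference = np.mean(propolis, axis=0)    # mean propolis fingerprint

def similarity(a, b):
    return np.corrcoef(a, b)[0, 1]       # Pearson correlation of two profiles

# Call a sample "propolis" when its correlation with the reference
# fingerprint exceeds a chosen threshold.
labels = [similarity(s, reference) > 0.9 for s in propolis + gum]
print(labels)
```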
Dielectric properties of classical and quantized ionic fluids.
Høye, Johan S
2010-06-01
We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanics, by which the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions, where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here we focus upon a time-dependent electromagnetic pair interaction, in which the electromagnetic vector potential, which depends upon currents, is present. Thus both density and current correlations are needed to evaluate the influence of this interaction. We then use the fact that densities and currents can be expressed through polarizations, by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. A consequence of this nonlocality is that we find no contribution from a possible transverse electric zero-frequency mode to the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations of the energies of the quantized electrons of molecules, in which retardation effects are now also taken into account.
NASA Astrophysics Data System (ADS)
Visinescu, M.
2012-10-01
Hidden symmetries in a covariant Hamiltonian framework are investigated. The special role of the Stäckel-Killing and Killing-Yano tensors is pointed out. The covariant phase space is extended to include external gauge fields and scalar potentials. We investigate the possibility for a higher-order symmetry to survive when electromagnetic interactions are taken into account. A concrete realization of this possibility is given by the Killing-Maxwell system. The classical conserved quantities do not generally transfer to the quantized systems, producing quantum gravitational anomalies. As a rule, the conformal extension of the Killing vectors and tensors does not produce symmetry operators for the Klein-Gordon operator.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. The first presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that, with no training or prior knowledge of the data, for a given fidelity the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating the interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
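A minimal serial version of full-search block matching, the operation the parallel architecture accelerates, might look like the sketch below. The block size, search range, and test frames are illustrative choices, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)

def block_match(prev, curr, block=8, search=4):
    # For each block of the current frame, find the displacement (dy, dx)
    # into the previous frame that minimizes the sum of absolute
    # differences (SAD) within a search window.
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 and y0 + block <= h and 0 <= x0 and x0 + block <= w:
                        sad = np.abs(ref - prev[y0:y0 + block, x0:x0 + block]).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

prev = rng.integers(0, 255, size=(16, 16)).astype(float)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))   # frame shifted by (2, 3)
mv = block_match(prev, curr)
print(mv[(8, 8)])   # motion vector for the interior block
```

Each block's inner search is independent, which is what makes the algorithm amenable to the simple parallel architecture described.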
Coherent distributions for the rigid rotator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigorescu, Marius
2016-06-15
Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that while a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
SAR data compression: Application, requirements, and designs
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) because of the phase errors they induce in the output image. However, after image formation, a number of techniques are effective for data compression.
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.
2012-04-01
This article studies the Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that, under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop a quantitative theory of this quantization and interpret it in terms of topological invariants. By applying the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating current quantization effects.
Seminal quality prediction using data mining methods.
Sahoo, Anoop J; Kumar, Yugal
2014-01-01
Nowadays, new classes of diseases known as lifestyle diseases have come into existence. The main causes of these diseases are changes in lifestyle, such as alcohol consumption, smoking and food habits. Studies of lifestyle diseases have found that fertility rates (sperm quantity) in men have decreased considerably over the last two decades; lifestyle as well as environmental factors are mainly responsible for the change in semen quality. The objective of this paper is to identify the lifestyle and environmental features that affect seminal quality, and hence the fertility rate in men, using data mining methods. Five artificial intelligence techniques, multilayer perceptron (MLP), decision tree (DT), Naive Bayes (Kernel), support vector machine with particle swarm optimization (SVM+PSO) and support vector machine (SVM), were applied to a fertility dataset to evaluate seminal quality and to predict whether a person has a normal or an altered fertility rate. Eight feature selection techniques, support vector machine (SVM), neural network (NN), evolutionary logistic regression (LR), SVM+PSO, principal component analysis (PCA), the chi-square test, correlation and the T-test, were used to identify the features most relevant to seminal quality. These techniques were applied to a fertility dataset containing 100 instances with nine attributes and two classes. The experimental results show that SVM+PSO provides higher accuracy and area under the curve (AUC) (94% and 0.932) than multilayer perceptron (92% and 0.728), support vector machine (91% and 0.758), Naive Bayes (Kernel) (89% and 0.850) and decision tree (89% and 0.735) for some of the seminal parameters. This paper also focuses on the feature selection process, i.e. 
how to select the features that are most important for predicting fertility rate. Eight feature selection methods were applied to the fertility dataset to find a set of good features. The results show that the childish-diseases (0.079) and high-fever (0.057) features have little impact on fertility rate, while the age (0.8685), season (0.843), surgical intervention (0.7683), alcohol consumption (0.5992), smoking habit (0.575), number of hours spent sitting (0.4366) and accident (0.5973) features have more impact. It is also observed that feature selection increases the accuracy of the above techniques (multilayer perceptron 92%, support vector machine 91%, SVM+PSO 94%, Naive Bayes (Kernel) 89% and decision tree 89%) compared with no feature selection (multilayer perceptron 86%, support vector machine 86%, SVM+PSO 85%, Naive Bayes (Kernel) 83% and decision tree 84%), which shows the applicability of feature selection methods in prediction. This paper highlights the application of artificial intelligence techniques in the medical domain. It can be concluded that data mining methods can be used to predict whether a person has a disease based on environmental and lifestyle parameters rather than undergoing various medical tests. Among the five data mining techniques used to predict fertility rate, SVM+PSO provides more accurate results than support vector machine and decision tree.
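A simple correlation-based feature scoring, one stand-in for the eight selection methods compared above, can be sketched on synthetic data. The feature weights and dataset below are invented for illustration; they are not the actual fertility data or the paper's scores.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in for the fertility dataset: 100 instances, 9 features,
# with a binary outcome driven mostly by a few features (as the study
# reports for age, surgical intervention, etc.).
n, d = 100, 9
X = rng.normal(size=(n, d))
weights = np.array([0.9, 0.8, 0.7, 0.6, 0.6, 0.4, 0.1, 0.05, 0.0])
logits = X @ weights
y = (logits + 0.3 * rng.normal(size=n) > 0).astype(int)

# Score each feature by the absolute correlation of its column with the
# label; this is a simple proxy for chi-square, T-test, PCA loadings, etc.
scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)])
ranking = np.argsort(scores)[::-1]
print("feature ranking (most to least relevant):", ranking)
```

Dropping the lowest-scoring columns before training a classifier is the step the abstract credits with the accuracy improvement.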
Besansky, N J; Powell, J R; Caccone, A; Hamm, D M; Scott, J A; Collins, F H
1994-01-01
The six Afrotropical species of mosquitoes comprising the Anopheles gambiae complex include the most efficient vectors of malaria in the world as well as a nonvector species. The accepted interpretation of evolutionary relationships among these species is based on chromosomal inversions and suggests that the two principal vectors, A. gambiae and Anopheles arabiensis, are on distant branches of the phylogenetic tree. However, DNA sequence data indicate that these two species are sister taxa and suggest gene flow between them. These results have important implications for malaria control strategies involving the replacement of vector with nonvector populations. PMID:8041714
Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei
2017-11-07
Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten, by quantization, the length of the data transmitted from local sensors to the fusion center. Although quantization reduces bandwidth cost, it also degrades tracking performance as a result of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only slightly: its tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
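The baseline the paper improves on, uniform quantization of a bounded measurement, is easy to sketch. The well-known step²/12 quantization-error variance makes concrete why low bit counts cost tracking accuracy; the measurement range and bit counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def quantize(x, lo, hi, bits):
    # Uniform quantizer: map each value to its cell index, then
    # reconstruct at the cell center.
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(((x - lo) / step).astype(int), 0, levels - 1)
    return lo + (idx + 0.5) * step

# Simulated bounded sensor measurements.
x = rng.uniform(0.0, 100.0, size=200000)
for bits in (1, 2, 3):
    xq = quantize(x, 0.0, 100.0, bits)
    step = 100.0 / 2 ** bits
    mse = np.mean((x - xq) ** 2)
    print(f"{bits}-bit: empirical MSE {mse:.2f} vs step^2/12 = {step ** 2 / 12:.2f}")
```

The extra measurement covariance the optimal scheme minimizes is exactly this quantization-induced error, which grows rapidly as the bit budget shrinks.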
Chern structure in the Bose-insulating phase of Sr2RuO4 nanofilms
NASA Astrophysics Data System (ADS)
Nobukane, Hiroyoshi; Matsuyama, Toyoki; Tanda, Satoshi
2017-01-01
The quantum anomaly that breaks the symmetry, for example the parity and the chirality, in the quantization leads to a physical quantity with a topological Chern invariant. We report the observation of a Chern structure in the Bose-insulating phase of Sr2RuO4 nanofilms by employing electric transport. We observed the superconductor-to-insulator transition by reducing the thickness of Sr2RuO4 single crystals. The appearance of a gap structure in the insulating phase implies local superconductivity. Fractional quantized conductance was observed without an external magnetic field. We found an anomalous induced voltage with temperature and thickness dependence, and the induced voltage exhibited switching behavior when we applied a magnetic field. We suggest that there was fractional magnetic-field-induced electric polarization in the interlayer. These anomalous results are related to topological invariance. The fractional axion angle Θ = π/6 was determined by observing the topological magneto-electric effect in the Bose-insulating phase of Sr2RuO4 nanofilms.
Structure variation of the index of refraction of GaAs-AlAs superlattices and multiple quantum wells
NASA Technical Reports Server (NTRS)
Kahen, K. B.; Leburton, J. P.
1985-01-01
A detailed calculation of the index of refraction of various GaAs-AlAs superlattices is presented for the first time. The calculation is performed using a hybrid approach which combines the k·p method with the pseudopotential technique. Appropriate quantization conditions account for the influence of the superstructures on the electronic properties of the systems. The results of the model are in very good agreement with the experimental data. In comparison with the index of refraction of the corresponding AlGaAs alloy, characterized by the same average mole fraction of Al, the results indicate that the superlattice index of refraction values attain maxima at the various quantized transition energies. For certain structures the difference can be as large as 2 percent. These results suggest that the waveguiding and dispersion properties of optoelectronic devices can be tailored for specific optical applications by an appropriate choice of the superlattice structure parameters.
Detection of potential mosquito breeding sites based on community sourced geotagged images
NASA Astrophysics Data System (ADS)
Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna
2014-06-01
Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to enable better informed, data-driven decisions. Our work shows how geotagged images collected from the general population can be used to combat malaria and dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions such as blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag-of-visual-words model. We train an SVM classifier on a histogram-based feature vector, obtained after vector quantization of SIFT features, to discriminate images containing a small stagnant water body (such as a puddle), open containers, tires, bushes, etc. from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using feature-level fusion or the max approach presented in the paper. The heat map thus generated can be used by the health authorities concerned to take appropriate action and to promote civic awareness.
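The bag-of-visual-words step can be sketched as follows: each descriptor is vector-quantized to its nearest codeword, and the image becomes a normalized codeword histogram that feeds the SVM. Random descriptors and a random codebook below stand in for real SIFT features and a k-means-trained codebook.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative codebook: 32 codewords in the 128-D SIFT descriptor space.
# In practice the codebook comes from k-means over training descriptors.
k, dim = 32, 128
codebook = rng.normal(size=(k, dim))

def bow_histogram(descriptors, codebook):
    # Assign each descriptor to its nearest codeword (vector quantization),
    # then count codeword occurrences and normalize.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

image_descriptors = rng.normal(size=(300, dim))   # one image's local features
h = bow_histogram(image_descriptors, codebook)
print(h.shape, round(float(h.sum()), 6))
```

The resulting fixed-length histogram `h` is what an SVM can consume, regardless of how many descriptors the original image produced.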
NASA Astrophysics Data System (ADS)
Wiederkehr, A. W.; Schmutz, H.; Motsch, M.; Merkt, F.
2012-08-01
Cold samples of oxygen molecules in supersonic beams have been decelerated from initial velocities of 390 and 450 m s-1 to final velocities in the range between 150 and 280 m s-1 using a 90-stage Zeeman decelerator. (2 + 1) resonance-enhanced-multiphoton-ionization (REMPI) spectra of the 3sσg ³Πg (C) ← X ³Σg⁻ two-photon transition of O2 have been recorded to characterize the state selectivity of the deceleration process. The decelerated molecular sample was found to consist exclusively of molecules in the J″ = 2 spin-rotational component of the X ³Σg⁻ ground state of O2. Measurements of the REMPI spectra using linearly polarized laser radiation with the polarization vector parallel to the decelerator axis, and thus to the magnetic-field vector of the deceleration solenoids, further showed that only the MJ″ = 2 magnetic sublevel of the N″ = 1, J″ = 2 spin-rotational level is populated in the decelerated sample, which therefore is characterized by a fully oriented total-angular-momentum vector. By maintaining a weak quantization magnetic field beyond the decelerator, the polarization of the sample could be maintained over the 5 cm distance separating the last deceleration solenoid and the detection region.
Two generalizations of Kohonen clustering
NASA Technical Reports Server (NTRS)
Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.
1993-01-01
The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution; these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
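The winner-only update discussed above is the classic LVQ1 rule: the winning prototype moves toward the input when the labels match and away otherwise. A minimal numpy sketch on synthetic two-cluster data (learning rate and data are illustrative) makes the rule concrete:

```python
import numpy as np

rng = np.random.default_rng(7)

def lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    # LVQ1: only the nearest (winning) prototype is updated per input,
    # which is the initialization sensitivity GLVQ/FLVQ address.
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            win = np.argmin(((P - xi) ** 2).sum(axis=1))
            sign = 1.0 if proto_labels[win] == yi else -1.0
            P[win] += sign * lr * (xi - P[win])
    return P

# Two well-separated Gaussian clusters with one prototype each.
X = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
               rng.normal(3, 0.3, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
P0 = np.array([[1.0, 1.0], [2.0, 2.0]])
P = lvq1(X, y, P0, np.array([0, 1]))
print(P)   # prototypes migrate toward the two cluster centers
```

GLVQ/FLVQ differ precisely in allowing every prototype to receive some update per input, removing the winner-take-all dependence shown here.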
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
Unique Fock quantization of scalar cosmological perturbations
NASA Astrophysics Data System (ADS)
Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier; Velhinho, José M.
2012-05-01
We investigate the ambiguities in the Fock quantization of the scalar perturbations of a Friedmann-Lemaître-Robertson-Walker model with a massive scalar field as matter content. We consider the case of compact spatial sections (thus avoiding infrared divergences), with the topology of a three-sphere. After expanding the perturbations in series of eigenfunctions of the Laplace-Beltrami operator, the Hamiltonian of the system is written up to quadratic order in them. We fix the gauge of the local degrees of freedom in two different ways, reaching in both cases the same qualitative results. A canonical transformation, which includes the scaling of the matter-field perturbations by the scale factor of the geometry, is performed in order to arrive at a convenient formulation of the system. We then study the quantization of these perturbations in the classical background determined by the homogeneous variables. Based on previous work, we introduce a Fock representation for the perturbations in which: (a) the complex structure is invariant under the isometries of the spatial sections and (b) the field dynamics is implemented as a unitary operator. These two properties select not only a unique unitary equivalence class of representations, but also a preferred field description, picking up a canonical pair of field variables among all those that can be obtained by means of a time-dependent scaling of the matter field (completed into a linear canonical transformation). Finally, we present an equivalent quantization constructed in terms of gauge-invariant quantities. We prove that this quantization can be attained by a mode-by-mode time-dependent linear canonical transformation which admits a unitary implementation, so that it is also uniquely determined.
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
NASA Astrophysics Data System (ADS)
Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.
2018-01-01
We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
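The particle-in-a-box model invoked above gives E_n = n²π²ħ²/(2mL²) for a well of width L, so a shallower dopant layer (narrower well) means larger level spacing. The sketch below evaluates it with an assumed hole effective mass; the mass and widths are illustrative inputs, not the paper's fitted parameters.

```python
import numpy as np

hbar = 1.054571817e-34          # J s
m_e = 9.1093837015e-31          # kg
m_h = 0.49 * m_e                # assumed heavy-hole effective mass in Si

def subband_energies_meV(L_nm, n_levels=3, m=m_h):
    # Infinite-well levels E_n = n^2 pi^2 hbar^2 / (2 m L^2), in meV.
    L = L_nm * 1e-9
    n = np.arange(1, n_levels + 1)
    E = (n ** 2) * np.pi ** 2 * hbar ** 2 / (2 * m * L ** 2)
    return E / 1.602176634e-19 * 1e3   # joules -> meV

# Narrower confinement region -> larger quantized-level separation,
# consistent with the reported depth dependence of the VB states.
for L in (2.0, 4.0):
    print(f"L = {L} nm:", np.round(subband_energies_meV(L), 1), "meV")
```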
Monzo, Cesar; Stansly, Philip A.
2017-01-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is the key pest of citrus wherever it occurs due to its role as vector of huanglongbing (HLB) also known as citrus greening disease. Insecticidal vector control is considered to be the primary strategy for HLB management and is typically intense owing to the severity of this disease. While this approach slows spread and also decreases severity of HLB once the disease is established, economic viability of increasingly frequent sprays is uncertain. Lacking until now were studies evaluating the optimum frequency of insecticide applications to mature trees during the growing season under conditions of high HLB incidence. We related different degrees of insecticide control with ACP abundance and ultimately, with HLB-associated yield losses in two four-year replicated experiments conducted in commercial groves of mature orange trees under high HLB incidence. Decisions on insecticide applications directed at ACP were made by project managers and confined to designated plots according to experimental design. All operational costs as well as production benefits were taken into account for economic analysis. The relationship between management costs, ACP abundance and HLB-associated economic losses based on current prices for process oranges was used to determine the optimum frequency and timing for insecticide applications during the growing season. Trees under the most intensive insecticidal control harbored fewest ACP resulting in greatest yields. The relationship between vector densities and yield loss was significant but differed between the two test orchards, possibly due to varying initial HLB infection levels, ACP populations or cultivar response. Based on these relationships, treatment thresholds during the growing season were obtained as a function of application costs, juice market prices and ACP densities. 
A conservative threshold for mature trees with high incidence of HLB would help maintain economic viability by reducing excessive insecticide sprays, thereby leaving more room for non-aggressive management tools such as biological control. PMID:28426676
Gregorio I. Gavier-Pizarro; Tobias Kuemmerle; Laura E. Hoyos; Susan I. Stewart; Cynthia D. Huebner; Nicholas S. Keuler; Volker C. Radeloff
2012-01-01
In central Argentina, the Chinese glossy privet tree (Ligustrum lucidum) is an aggressive invasive species that replaces native forests and forms dense stands, and is thus a major conservation concern. Mapping the spread of biological invasions is a necessary first step toward understanding the factors determining invasion patterns. Urban areas may...
NASA Technical Reports Server (NTRS)
Jules, Kenol; Lin, Paul P.
2001-01-01
This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.
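Learning Vector Quantization, one of the soft computing techniques named above, adjusts labeled prototype vectors toward a training sample when the nearest prototype's label matches, and away when it does not. A minimal LVQ1 step can be sketched as follows (an illustrative toy, not the NASA system's implementation; the prototype values and event labels are invented):

```python
import math

def lvq1_update(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 step: pull the nearest prototype toward sample x if
    its label matches y, push it away otherwise."""
    # find the nearest prototype by Euclidean distance
    d = [math.dist(p, x) for p in prototypes]
    i = d.index(min(d))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = [p + sign * lr * (xi - p)
                     for p, xi in zip(prototypes[i], x)]
    return i  # index of the winning prototype

# toy 2-D feature vectors for two hypothetical disturbance classes
protos = [[0.0, 0.0], [1.0, 1.0]]
labels = ["pump", "fan"]
winner = lvq1_update(protos, labels, [0.9, 1.1], "fan")
```

Repeating this update over many labeled samples, with a decaying learning rate, moves the prototypes toward class-typical feature vectors; classification then reduces to a nearest-prototype lookup.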
Debboun, Mustapha; Green, Theodore J; Rueda, Leopoldo M; Hall, Robert D
2005-09-01
Aedes (Protomacleaya) triseriatus currently shares its habitat in the USA with the introduced species Aedes (Finlaya) japonicus and Aedes (Stegomyia) albopictus. In the late 1980s, before the introduction of these 2 species, Ae. triseriatus was the dominant tree hole- and artificial container-breeding mosquito in central Missouri. Aedes triseriatus represented 89% of the mosquito immatures collected from water-filled tree holes and artificial containers at 3 forested field sites in central Missouri, from May to October, 1986 to 1988. Laboratory-reared female Ae. triseriatus were able to support larval development of Dirofilaria immitis (canine heartworm) to the infective 3rd larval stage. A blood meal from a microfilaremic Collie-mix dog was sufficient to infect adult female mosquitoes, indicating that Ae. triseriatus is a possible vector of canine heartworm in central Missouri. Confirmation of the vector status of this species depends on the yet-to-be observed transmission of D. immitis by Ae. triseriatus in the field, possibly by experimental infection of dogs by wild-caught mosquitoes. Defining the role of this species in epizootic outbreaks could contribute toward accurate risk assessment as the abundance of Ae. triseriatus increases and decreases in response to the success of Ae. albopictus, Ae. japonicus, or other introduced container-breeding mosquitoes.
Electronically decoupled stacking fault tetrahedra embedded in Au(111) films
Schouteden, Koen; Amin-Ahmadi, Behnam; Li, Zhe; Muzychenko, Dmitry; Schryvers, Dominique; Van Haesendonck, Chris
2016-01-01
Stacking faults are known as defective structures in crystalline materials that typically lower the structural quality of the material. Here, we show that a particular type of defect, that is, stacking fault tetrahedra (SFTs), exhibits pronounced quantized electronic behaviour, revealing a potential synthetic route to decoupled nanoparticles in metal films. We report on the electronic properties of SFTs that exist in Au(111) films, as evidenced by scanning tunnelling microscopy and confirmed by transmission electron microscopy. We find that the SFTs reveal a remarkable decoupling from their metal surroundings, leading to pronounced energy level quantization effects within the SFTs. The electronic behaviour of the SFTs can be described well by the particle-in-a-box model. Our findings demonstrate that controlled preparation of SFTs may offer an alternative way to achieve well-decoupled nanoparticles of high crystalline quality in metal thin films without the need of thin insulating layers. PMID:28008910
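For reference, the particle-in-a-box model invoked here gives, for an electron of effective mass $m$ confined to an infinite well of width $L$, the textbook level spectrum

```latex
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2} = \frac{n^2 h^2}{8 m L^2},
\qquad n = 1, 2, 3, \ldots
```

so the level spacing $E_{n+1} - E_n = (2n+1)h^2/8mL^2$ grows as the confining dimension $L$ shrinks, which is why nanometre-scale SFTs can show pronounced, well-separated quantized states.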
Scalable hybrid computation with spikes.
Sarpeshkar, Rahul; O'Halloran, Micah
2002-09-01
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. 
We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definitions of 166 terms used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Shi, Weimin; Zhang, Xiaoya; Shen, Qi
2010-01-01
Quantitative structure-activity relationship (QSAR) studies of the chemokine receptor 5 (CCR5) binding affinity of substituted 1-(3,3-diphenylpropyl)-piperidinyl amides and ureas, and of the toxicity of aromatic compounds, have been performed. Gene expression programming (GEP) was used to select variables and simultaneously produce nonlinear QSAR models from the selected variables. In our GEP implementation, a simple and convenient method is proposed to infer the K-expression from the number of arguments of each function in a gene, without building the expression tree. The results were compared with those obtained by an artificial neural network (ANN) and a support vector machine (SVM). It is demonstrated that GEP is a useful tool for QSAR modeling. Copyright 2009 Elsevier Masson SAS. All rights reserved.
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems comprise components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single cause or to multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs is very helpful. This work provides an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features of failure events from sensor data. We then evaluated multiple learning paradigms for general classification, diagnosis, and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Networks, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyARTMAP), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP), and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap methods, are employed to evaluate the robustness of the network models. The trained networks are evaluated on test data in terms of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for predicting the residual life of turbine blades with thermal barrier coatings is described and results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
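The N-fold cross-validation used here to gauge robustness partitions the data into N folds, trains on N−1 of them, tests on the held-out fold, and averages the error across folds. A generic sketch (the `evaluate` callback stands in for training and testing any of the listed classifiers, which are not reproduced here):

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k contiguous folds,
    distributing any remainder over the first folds."""
    folds = []
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(evaluate, n_samples, k=5):
    """Average the per-fold error returned by evaluate(train_idx, test_idx)."""
    folds = k_fold_indices(n_samples, k)
    errors = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        errors.append(evaluate(train_idx, test_idx))
    return sum(errors) / k

# placeholder evaluator: "error" = fraction of even test indices
err = cross_validate(lambda tr, te: sum(j % 2 == 0 for j in te) / len(te),
                     n_samples=10, k=5)
```

In practice the indices would be shuffled first, and `evaluate` would fit one of the network models on `train_idx` and return its misclassification rate on `test_idx`.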
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruan, D; Shao, W; Low, D
Purpose: To evaluate and test the hypothesis that plan quality may be systematically affected by treatment delivery technique and target-to-critical-structure geometric relationship in radiotherapy for brain tumor. Methods: Thirty-four consecutive brain tumor patients treated between 2011 and 2014 were analyzed. Among this cohort, 10 were planned with 3DCRT, 11 with RapidArc, and 13 with helical IMRT on TomoTherapy. The selected dosimetric endpoints (i.e., PTV V100 and maximum brainstem/chiasm/optic nerve doses) were considered as a vector in a high-dimensional space. A Pareto analysis was performed to identify the subset of Pareto-efficient plans. The geometric relationships, specifically the overlapping volume and center-of-mass distance between each critical structure and the PTV, were extracted as potential geometric features. The classification-tree analyses were repeated using these geometric features with and without the treatment modality as an additional categorical predictor. In both scenarios, the dominant features prognosticating Pareto membership were identified and the tree structures providing optimal inference were recorded. The classification performance was further analyzed to determine the role of treatment modality in affecting plan quality. Results: Seven Pareto-efficient plans were identified based on dosimetric endpoints (3 from 3DCRT, 3 from RapidArc, 1 from TomoTherapy), which implies that the evaluated treatment modality may have only a minor influence on plan quality. Classification trees with and without the treatment modality as a predictor both achieved an accuracy of 88.2%: 100% sensitivity and 87.1% specificity for the former, and 66.7% sensitivity and 96.0% specificity for the latter. The coincidence of accuracy from both analyses further indicates no-to-weak dependence of plan quality on treatment modality. Both analyses identified the brainstem-to-PTV distance as the primary predictive feature for Pareto-efficiency.
Conclusion: Pareto evaluation and classification-tree analyses indicate that plan quality for brain tumor depends strongly on geometry, specifically the PTV-to-brainstem distance, but minimally on treatment modality.
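Identifying the Pareto-efficient subset of plans from their dosimetric endpoint vectors amounts to discarding every plan that another plan dominates in all endpoints. A minimal sketch, assuming all endpoints are oriented so that lower is better (the toy endpoint numbers below are invented, not the study's data):

```python
def pareto_efficient(points):
    """Return indices of points not dominated by any other point
    (lower is better in every coordinate)."""
    def dominates(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]

# toy endpoint vectors: (1 - PTV_V100, max brainstem dose in Gy)
plans = [(0.02, 54.0), (0.05, 50.0), (0.06, 56.0)]
front = pareto_efficient(plans)
```

Here the third plan is dominated by the first (worse coverage and higher brainstem dose), so only the first two are Pareto-efficient; a coverage-type endpoint like PTV V100, where higher is better, is negated before the comparison.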
Overall, Lisa M; Rebek, Eric J
2015-12-01
Xylella fastidiosa is the causative agent of diseases of perennial plants including peach, plum, elm, oak, pecan, and grape. This bacterial pathogen is transmitted by xylem-feeding insects. In recent years, Pierce's disease of grape has been detected in 10 counties in central and northeastern Oklahoma, prompting further investigation of the disease epidemiology in this state. We surveyed vineyards and tree nurseries in Oklahoma for potential insect vectors to determine species composition, infectivity, and natural inoculativity of commonly captured insect vectors. Yellow sticky cards were used to sample insect fauna at each location. Insects were removed from sticky cards and screened for X. fastidiosa using immunocapture-PCR to determine their infectivity. A second objective was to test the natural inoculativity of insect vectors that are found in vineyards. Graphocephala versuta (Say), Graphocephala coccinea (Forster), Paraulacizes irrorata (F.), Oncometopia orbona (F.), Cuerna costalis (F.), and Entylia carinata Germar were collected from vineyards and taken back to the lab to determine their natural inoculativity. Immunocapture-PCR was used to test plant and insect samples for presence of X. fastidiosa. The three most frequently captured species from vineyards and tree nurseries were G. versuta, Clastoptera xanthocephala Germar, and O. orbona. Of those insects screened for X. fastidiosa, 2.4% tested positive for the bacterium. Field-collected G. versuta were inoculative to both ragweed and alfalfa. Following a 7-d inoculation access period, a higher percentage of alfalfa became infected than ragweed. Results from this study provide insight into the epidemiology of X. fastidiosa in Oklahoma. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Galdino, Tarcísio Visintin da Silva; Ferreira, Dalton de Oliveira; Santana Júnior, Paulo Antônio; Arcanjo, Lucas de Paulo; Queiroz, Elenir Aparecida; Sarmento, Renato Almeida; Picanço, Marcelo Coutinho
2017-06-01
The knowledge of the spatiotemporal dynamics of pathogens and their vectors is an important step in determining the pathogen dispersion pattern and the role of vectors in disease dynamics. However, in the case of mango wilt little is known about its spatiotemporal dynamics and the relationship of its vector [the beetle Hypocryphalus mangiferae (Stebbing 1914)] to these dynamics. The aim of this work was to determine the spatial-seasonal dynamic of H. mangiferae attacks and mango wilt in mango orchards and to verify the importance of H. mangiferae in the spatiotemporal dynamics of the disease. Two mango orchards were monitored during a period of 3 yr. The plants in these orchards were georeferenced and inspected monthly to quantify the number of plants attacked by beetles and the fungus. In these orchards, the percentage of mango trees attacked by beetles was always higher than the percentage infected by the fungus. The colonization of mango trees by beetles and the fungus occurred by colonization of trees both distant and proximal to previously attacked trees. The new plants attacked by the fungus emerged in places where the beetles had previously begun their attack. This phenomenon led to a large overlap in sites of beetle and fungal occurrence, indicating that establishment by the beetle was followed by establishment by the fungus. This information can be used by farmers to predict disease infection, and to control bark beetle infestation in mango orchards. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hadronic three-body decays of B mesons
NASA Astrophysics Data System (ADS)
Cheng, Hai-Yang
2016-04-01
Hadronic three-body decays of B mesons receive both resonant and nonresonant contributions. Dominant nonresonant contributions to tree-dominated three-body decays arise from the b → u tree transition which can be evaluated using heavy meson chiral perturbation theory valid in the soft meson limit. For penguin-dominated decays, nonresonant signals come mainly from the penguin amplitude governed by the matrix elements of scalar densities
Probing leptophilic dark sectors with hadronic processes
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2017-08-01
We study vector portal dark matter models where the mediator couples only to leptons. In spite of the lack of tree-level couplings to colored states, radiative effects generate interactions with quark fields that could give rise to a signal in current and future experiments. We identify such experimental signatures: scattering of nuclei in dark matter direct detection; resonant production of lepton-antilepton pairs at the Large Hadron Collider; and hadronic final states in dark matter indirect searches. Furthermore, radiative effects also generate an irreducible mass mixing between the vector mediator and the Z boson, severely bounded by ElectroWeak Precision Tests. We use current experimental results to put bounds on this class of models, accounting for both radiatively induced and tree-level processes. Remarkably, the former often overwhelm the latter.
Software tool for data mining and its applications
NASA Astrophysics Data System (ADS)
Yang, Jie; Ye, Chenzhou; Chen, Nianyi
2002-03-01
A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher discriminant, clustering, hyperenvelope, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function modules: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, hyperenvelope, support vector machines, and visualization. The principles and knowledge representation of some of these modules are described. The tool is implemented in Visual C++ under Windows 2000. Nonmonotonicity in data mining is handled by concept hierarchies and layered mining. The tool has been applied satisfactorily to the prediction of regularities in the formation of ternary intermetallic compounds in alloy systems, and to the diagnosis of brain glioma.
Hilbert space structure in quantum gravity: an algebraic perspective
Giddings, Steven B.
2015-12-16
If quantum gravity respects the principles of quantum mechanics, suitably generalized, it may be that a more viable approach to the theory is through identifying the relevant quantum structures rather than by quantizing classical spacetime. Here, this viewpoint is supported by difficulties of such quantization, and by the apparent lack of a fundamental role for locality. In finite or discrete quantum systems, important structure is provided by tensor factorizations of the Hilbert space. However, even in local quantum field theory properties of the generic type III von Neumann algebras and of long range gauge fields indicate that factorization of the Hilbert space is problematic. Instead it is better to focus on the structure of the algebra of observables, and in particular on its subalgebras corresponding to regions. This paper suggests that study of analogous algebraic structure in gravity gives an important perspective on the nature of the quantum theory. Significant departures from the subalgebra structure of local quantum field theory are found, working in the correspondence limit of long-distances/low-energies. Particularly, there are obstacles to identifying commuting algebras of localized operators. In addition to suggesting important properties of the algebraic structure, this and related observations pose challenges to proposals of a fundamental role for entanglement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng
DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.
Gravity quantized: Loop quantum gravity with a scalar field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domagala, Marcin; Kaminski, Wojciech; Giesel, Kristina
2010-11-15
"…but we do not have quantum gravity." This phrase is often used when analysis of a physical problem enters the regime in which quantum gravity effects should be taken into account. In fact, there are several models of the gravitational field coupled to (scalar) fields for which the quantization procedure can be completed using loop quantum gravity techniques. The model we present in this paper consists of the gravitational field coupled to a scalar field. The result has a structure similar to the loop quantum cosmology models, except that it involves all the local degrees of freedom of the gravitational field, because no symmetry reduction has been performed at the classical level.
Quasinormal modes and quantization of area/entropy for noncommutative BTZ black hole
NASA Astrophysics Data System (ADS)
Huang, Lu; Chen, Juhua; Wang, Yongjiu
2018-04-01
We investigate the quasinormal modes and area/entropy spectrum of the noncommutative BTZ black hole. Exact expressions for the QNM frequencies are presented by expanding the noncommutative parameter in the horizon radius. We find that the noncommutativity does not affect the conformal weights (hL, hR), but it does influence the thermal equilibrium. Explicit expressions for the area/entropy spectrum are calculated via Bohr-Sommerfeld quantization, and our results show that the noncommutativity leads to a nonuniform area/entropy spectrum. We also find that the coupling constant ξ, which couples the scalar and gravitational fields, shifts the QNM frequencies but does not influence the structure of the area/entropy spectrum.
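For reference, the Bohr-Sommerfeld rule applied in such analyses is, in its standard form,

```latex
\oint p \,\mathrm{d}q = 2\pi\hbar\left(n + \tfrac{1}{2}\right),
\qquad n = 0, 1, 2, \ldots
```

applied to an adiabatic invariant built from the black hole's QNM frequencies; in the commutative case this procedure typically yields an equally spaced area spectrum, and the abstract reports that the noncommutative corrections make that spacing nonuniform. (This is a sketch of the standard procedure; the paper's exact choice of invariant may differ.)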
Size quantization patterns in self-assembled InAs/GaAs quantum dots
NASA Astrophysics Data System (ADS)
Colocci, M.; Bogani, F.; Carraresi, L.; Mattolini, R.; Bosacchi, A.; Franchi, S.; Frigeri, P.; Taddei, S.; Rosa-Clot, M.
1997-07-01
Molecular beam epitaxy has been used for growing self-assembled InAs quantum dots. A continuous variation of the InAs average coverage across the sample has been obtained by properly aligning the (001) GaAs substrate with respect to the molecular beam. Excitation of a large number of dots (laser spot diameter ≈ 100 μm) results in structured photoluminescence spectra; a clear quantization of the dot sizes is deduced from the distinct luminescence bands separated in energy by an average spacing of 20-30 meV. We ascribe the individual bands of the photoluminescence spectrum after low excitation to families of dots with roughly the same diameter and heights differing by one monolayer.
High-temperature quantum oscillations caused by recurring Bloch states in graphene superlattices
NASA Astrophysics Data System (ADS)
Krishna Kumar, R.; Chen, X.; Auton, G. H.; Mishchenko, A.; Bandurin, D. A.; Morozov, S. V.; Cao, Y.; Khestanova, E.; Ben Shalom, M.; Kretinin, A. V.; Novoselov, K. S.; Eaves, L.; Grigorieva, I. V.; Ponomarenko, L. A.; Fal'ko, V. I.; Geim, A. K.
2017-07-01
Cyclotron motion of charge carriers in metals and semiconductors leads to Landau quantization and magneto-oscillatory behavior in their properties. Cryogenic temperatures are usually required to observe these oscillations. We show that graphene superlattices support a different type of quantum oscillation that does not rely on Landau quantization. The oscillations are extremely robust and persist well above room temperature in magnetic fields of only a few tesla. We attribute this phenomenon to repetitive changes in the electronic structure of superlattices such that charge carriers experience effectively no magnetic field at simple fractions of the flux quantum per superlattice unit cell. Our work hints at unexplored physics in Hofstadter butterfly systems at high temperatures.
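For contrast, conventional Landau quantization, which these oscillations do not rely on, gives the textbook level spectra

```latex
E_n = \hbar\omega_c\left(n + \tfrac{1}{2}\right),
\quad \omega_c = \frac{eB}{m^\ast} \quad \text{(parabolic band)},
\qquad
E_n = \pm v_F\sqrt{2 e \hbar B\, n} \quad \text{(graphene)},
```

with level spacings set by the magnetic field $B$; observing them normally requires $k_B T$ small compared to that spacing, hence the cryogenic temperatures, whereas the superlattice oscillations reported here survive well above room temperature.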
Dimensional quantization effects in the thermodynamics of conductive filaments
NASA Astrophysics Data System (ADS)
Niraula, D.; Grice, C. R.; Karpov, V. G.
2018-06-01
We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.
Nearly associative deformation quantization
NASA Astrophysics Data System (ADS)
Vassilevich, Dmitri; Oliveira, Fernando Martins Costa
2018-04-01
We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and is strongly robust to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed with respect to a constructed dictionary; these sparse vectors are then quantized against the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method attained higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
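The quantize-then-histogram step described above can be sketched in a few lines. This is a simplified illustration that uses hard nearest-atom assignment in place of full sparse coding; the function names and the tiny dictionary are ours, not the paper's:

```python
def nearest_word(descriptor, dictionary):
    """Index of the dictionary atom closest (squared Euclidean) to the descriptor."""
    best, best_d = 0, float("inf")
    for i, atom in enumerate(dictionary):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, atom))
        if d < best_d:
            best, best_d = i, d
    return best

def bow_histogram(descriptors, dictionary):
    """Quantize each local descriptor to a visual word and build the
    normalized Bag-of-Words histogram that represents one face image."""
    hist = [0.0] * len(dictionary)
    for d in descriptors:
        hist[nearest_word(d, dictionary)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]
```

The resulting fixed-length histograms are what a conventional SVM would then classify.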
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.
2012-02-01
In this paper, we study the interaction between a three-level atom and a quantized single-mode field with ‘intensity-dependent coupling’ in a ‘Kerr medium’. The three-level atom is considered to be in a Λ-type configuration. Under particular initial conditions, which may be prepared for the atom and the field, the dynamical state vector of the entire system will be explicitly obtained, for the arbitrary nonlinearity function f(n) associated with any physical system. Then, after evaluating the variation of the field entropy against time, we will investigate the quantum statistics as well as some of the nonclassical properties of the introduced state. During our calculations we investigate the effects of intensity-dependent coupling, Kerr medium and detuning parameters on the depth and domain of the nonclassicality features of the atom-field state vector. Finally, we compare our obtained results with those of V-type three-level atoms.
The Newick utilities: high-throughput phylogenetic tree processing in the UNIX shell.
Junier, Thomas; Zdobnov, Evgeny M
2010-07-01
We present a suite of Unix shell programs for processing any number of phylogenetic trees of any size. They perform frequently-used tree operations without requiring user interaction. They also allow tree drawing as scalable vector graphics (SVG), suitable for high-quality presentations and further editing, and as ASCII graphics for command-line inspection. As an example we include an implementation of bootscanning, a procedure for finding recombination breakpoints in viral genomes. C source code, Python bindings and executables for various platforms are available from http://cegg.unige.ch/newick_utils. The distribution includes a manual and example data. The package is distributed under the BSD License. thomas.junier@unige.ch
Measuring and Modeling Shared Visual Attention
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Gontar, Patrick
2016-01-01
Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
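The AOI quantization, time-averaging, and chi-square comparison described above can be sketched as follows; a minimal pure-Python illustration, not the authors' implementation (function names are ours):

```python
def one_hot_sequence(fixations, n_aoi):
    """Quantize gaze: one N-dimensional one-hot vector per sample,
    with a 1 in the currently fixated AOI and 0 elsewhere."""
    seq = []
    for a in fixations:
        v = [0.0] * n_aoi
        v[a] = 1.0
        seq.append(v)
    return seq

def gaze_proportions(seq):
    """Time-average the one-hot vectors: component i is the proportion
    of samples spent on AOI i within the interval."""
    n = len(seq)
    return [sum(v[i] for v in seq) / n for i in range(len(seq[0]))]

def chi2_stat(counts_a, counts_b):
    """Chi-square statistic for the hypothesis that two observers' AOI
    fixation counts are drawn from identical distributions (2 x N table)."""
    total_a, total_b = sum(counts_a), sum(counts_b)
    grand = total_a + total_b
    stat = 0.0
    for ca, cb in zip(counts_a, counts_b):
        col = ca + cb
        if col == 0:
            continue
        for obs, row_total in ((ca, total_a), (cb, total_b)):
            expected = row_total * col / grand
            stat += (obs - expected) ** 2 / expected
    return stat
```

Identical gaze distributions give a statistic of zero; large values suggest the two crew members were attending to different AOIs.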
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representations are widely studied for efficient retrieval of large-scale data. In image hashing, learning a hash function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing: most such methods project the original data onto several real-valued dimensions with PCA projection functions and then quantize each projected dimension into one bit by thresholding. However, the variances of the projected dimensions differ, and real-valued projection introduces substantial quantization error. To avoid this, we propose a cosine-similarity (angle) projection for each dimension; the angle projection preserves the original structure and yields more compact codes. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
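The baseline project-and-threshold step described above can be sketched as follows; the paper's cosine-angle refinement is not reproduced here, and the projection matrix is assumed given (e.g. PCA directions, possibly ITQ-rotated):

```python
def binarize(features, projection, mean):
    """Project a mean-centred feature vector and quantize each projected
    dimension to one bit by thresholding at zero (a sign function)."""
    centred = [f - m for f, m in zip(features, mean)]
    code = []
    for w in projection:  # one row of the projection matrix per output bit
        val = sum(c * wi for c, wi in zip(centred, w))
        code.append(1 if val >= 0 else 0)
    return code
```

Hamming distance between such codes then stands in for distance in the original feature space.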
NASA Astrophysics Data System (ADS)
Tavousi, Alireza; Mansouri-Birjandi, Mohammad Ali; Saffari, Mehdi
2016-09-01
Implementing photonic sampling and quantizing analog-to-digital converters (ADCs) enables the extraction of a binary word from optical signals without additional assisting electronics, greatly increasing the sampling and quantization speed while decreasing the consumed power. To this end, based on the successive approximation method, a 4-bit all-optical ADC that operates using the intensity-dependent Kerr-like nonlinearity in a two-dimensional photonic crystal (2DPhC) platform is proposed. Silicon (Si) nanocrystal is chosen for its suitable nonlinear material characteristics. An optical limiter is used for clamping and quantizing each successive level that represents an ADC bit. In the proposal, an energy-efficient optical ADC circuit is implemented by controlling system parameters such as the ring-to-waveguide coupling coefficients, the ring's nonlinear refractive index, and the ring's length. The performance of the ADC structure is verified by simulation using the finite-difference time-domain (FDTD) method.
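The electronic analogue of the successive-approximation scheme underlying this optical ADC can be sketched as follows; a plain 4-bit SAR quantizer, not a simulation of the photonic device:

```python
def sar_quantize(sample, full_scale=1.0, n_bits=4):
    """Successive-approximation quantization: each stage tests the
    residual against half of the remaining reference range and keeps
    or clears the corresponding bit, analogous to the optical-limiter
    clamping/thresholding stage that produces each bit in the paper."""
    code = 0
    threshold = full_scale / 2.0
    residual = sample
    for _ in range(n_bits):
        code <<= 1
        if residual >= threshold:
            code |= 1
            residual -= threshold
        threshold /= 2.0
    return code
```

Each of the `n_bits` stages halves the search interval, so a 4-bit word needs only four comparisons.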
Determination of the angle γ from nonleptonic Bc-->DsD0 decays
NASA Astrophysics Data System (ADS)
Giri, A. K.; Mohanta, R.; Khanna, M. P.
2002-02-01
We note that the two-body nonleptonic pure tree decays B_c^± --> D_s^± D^0 (D̄^0) and the corresponding vector-vector modes B_c^± --> D_s^*± D^*0 (D̄^*0) are well suited to extract the weak phase γ of the unitarity triangle. The CP-violating phase γ can be determined cleanly, as these decay modes are free from penguin pollution.
Steven J. Seybold; Andrew D. Graves; Tom W. Coleman
2011-01-01
The walnut twig beetle, Pityophthorus juglandis Blackman (Coleoptera: Scolytidae) (sensu Wood 2007), is a native North American bark beetle that has been recently implicated as the vector of thousand cankers disease of walnut trees in the western U.S. (Tisserat et al. 2009, Utley et al. 2009, Seybold et al. 2010).
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and face detection is then performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features: a binary Vector Quantization (VQ) histogram built from DCT coefficients in the low-frequency domain, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate the proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper proposes a feature-weighting method for classification tasks with the competitive-learning artificial neural network LVQ. The feature-weighting method searches for attribute weights using particle swarm optimization (PSO) so as to influence the resulting output. The method is applied to an LVQ classifier and tested on three datasets obtained from the UCI Machine Learning Repository. Accuracy is then analysed for two approaches: the first uses LVQ1 and is referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase the LVQ-classifier accuracy.
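The weighted-distance LVQ classification that PSOFW-LVQ relies on can be sketched as follows; the feature weights are assumed to come from the PSO search, which is omitted here, and the function names are illustrative:

```python
def weighted_distance(x, prototype, weights):
    """Squared Euclidean distance with per-feature weights (the weights
    would be supplied by the PSO search in the proposed PSOFW-LVQ)."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, prototype))

def lvq_classify(x, prototypes, labels, weights):
    """Assign x the label of the nearest prototype under the weighted metric."""
    best = min(range(len(prototypes)),
               key=lambda i: weighted_distance(x, prototypes[i], weights))
    return labels[best]

def lvq1_update(prototype, x, same_class, lr=0.1):
    """LVQ1 rule: pull the winning prototype toward x if the labels
    match, push it away otherwise."""
    sign = 1.0 if same_class else -1.0
    return [p + sign * lr * (xi - p) for p, xi in zip(prototype, x)]
```

Changing the weight vector changes which prototype wins, which is exactly the lever PSO optimizes for accuracy.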
New adaptive color quantization method based on self-organizing maps.
Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai
2005-01-01
Color quantization (CQ) is an image processing task popularly used to convert true color images to palettized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with a neuron-dependent frequency sensitive learning model, a global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde-Buzo-Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters, and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging an existing encoder in an overall lossy compression scheme.
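The frequency-sensitive competitive learning at the heart of FS-SOM (without the SOM neighborhood adaptation) can be sketched as follows; a minimal illustration of how scaling distortion by win counts revives otherwise dead neurons, not the paper's full algorithm:

```python
def fscl_palette(pixels, n_colors, lr=0.05, epochs=5):
    """Frequency-sensitive competitive learning for palette design:
    the winner is chosen by win-count-scaled distortion, so neurons
    that rarely win become cheaper and eventually capture an input
    region (the dead-neuron / underutilization fix)."""
    palette = [list(pixels[i % len(pixels)]) for i in range(n_colors)]
    wins = [1] * n_colors
    for _ in range(epochs):
        for px in pixels:
            # frequency-sensitive distortion: win count times squared distance
            j = min(range(n_colors),
                    key=lambda k: wins[k] * sum((c - p) ** 2
                                                for c, p in zip(px, palette[k])))
            wins[j] += 1
            # move only the winning palette entry toward the input color
            palette[j] = [p + lr * (c - p) for p, c in zip(palette[j], px)]
    return palette
```

Even when all palette entries start at the same color, the win-count scaling forces them apart onto distinct color clusters.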
NASA Astrophysics Data System (ADS)
Freeman, Mary Pyott
An Analysis of Tree Mortality Using High Resolution Remotely-Sensed Data for Mixed-Conifer Forests in San Diego County, by Mary Pyott Freeman. The montane mixed-conifer forests of San Diego County are currently experiencing extensive tree mortality, defined as dieback where whole stands are affected. This mortality is likely the result of the complex interaction of many variables, such as altered fire regimes, climatic conditions such as drought, forest pathogens, and past management strategies. Conifer tree mortality and its spatial pattern and change over time were examined in three components. In component 1, two remote sensing approaches were compared for their effectiveness in delineating dead trees, a spatial contextual approach and an OBIA (object-based image analysis) approach, utilizing various dates and spatial resolutions of airborne image data. For each approach, transforms and masking techniques were explored, which were found to improve classifications, and an object-based assessment approach was tested. In component 2, dead tree maps produced by the most effective techniques derived from component 1 were used for point pattern and vector analyses to further understand spatio-temporal changes in tree mortality for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Results indicate that conifer mortality was significantly clustered, increased substantially between 2002 and 2005, and was non-random with respect to tree species and diameter class sizes. In component 3, multiple environmental variables were used in Generalized Linear Model (GLM, logistic regression) and decision tree classifier model development, revealing the importance of climate and topographic factors, such as precipitation and elevation, in predicting areas at high risk of tree mortality.
The results from this study highlight the importance of multi-scale spatial as well as temporal analyses, in order to understand mixed-conifer forest structure, dynamics, and processes of decline, which can lead to more sustainable management of forests with continued natural and anthropogenic disturbance.
Quantum Computing and Second Quantization
Makaruk, Hanna Ewa
2017-02-10
Quantum computers are by their nature many particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of the second quantization, and discuss shortly some of the most important formulations of second quantization.
Particle on a torus knot: Constrained dynamics and semi-classical quantization in a magnetic field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Praloy, E-mail: praloydasdurgapur@gmail.com; Pramanik, Souvik, E-mail: souvick.in@gmail.com; Ghosh, Subir, E-mail: subirghosh20@gmail.com
2016-11-15
Kinematics and dynamics of a particle moving on a torus knot poses an interesting problem as a constrained system. In the first part of the paper we have derived the modified symplectic structure or Dirac brackets of the above model in Dirac's Hamiltonian framework, both in toroidal and Cartesian coordinate systems. This algebra has been used to study the dynamics, in particular small fluctuations in motion around a specific torus. The spatial symmetries of the system have also been studied. In the second part of the paper we have considered the quantum theory of a charge moving in a torus knot in the presence of a uniform magnetic field along the axis of the torus in a semiclassical quantization framework. We exploit the Einstein-Brillouin-Keller (EBK) scheme of quantization that is appropriate for multidimensional systems. Embedding of the knot on a specific torus is inherently two dimensional that gives rise to two quantization conditions. This shows that although the system, after imposing the knot condition, reduces to a one dimensional system, even then it has manifested non-planar features which show up again in the study of fractional angular momentum. Finally we compare the results obtained from EBK (multi-dimensional) and Bohr-Sommerfeld (single dimensional) schemes. The energy levels and fractional spin depend on the torus knot parameters that specify its non-planar features. Interestingly, we show that there can be non-planar corrections to the planar anyon-like fractional spin.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
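The coefficient-by-coefficient quantization the abstract describes can be sketched as follows; a generic DCT-block quantize/dequantize pair, with the invention's actual contribution (the perceptually adapted matrix design) left out:

```python
def quantize_block(dct_coeffs, qmatrix):
    """Quantize each DCT coefficient by the matching quantization-matrix
    entry; larger entries mean a coarser step for that frequency, so more
    of it is discarded."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(dct_coeffs, qmatrix)]

def dequantize_block(levels, qmatrix):
    """Invert quantization: reconstruct the coefficients up to the
    rounding error introduced by the quantization step."""
    return [[lv * q for lv, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, qmatrix)]
```

The quantization matrix thus jointly sets the bit rate (how many levels survive) and the perceived quality (which frequencies are coarsened).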
Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes
NASA Astrophysics Data System (ADS)
Saini, Sahil; Singh, Parampreet
2018-03-01
We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations—the connection operator based ‘A’ quantization and the extrinsic curvature based ‘K’ quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the ‘A’ quantization. Whereas in the ‘K’ quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both the quantizations can result in potentially curvature divergent events if matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and geodesic evolution never breaks down for such events.
Flux quantization in aperiodic and periodic networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrooz, A.
1987-01-01
The phase boundary of quasicrystalline, quasi-periodic, and random networks was studied. It was found that if a network is composed of two different tiles, whose areas are relatively irrational, then the T_c(H) curve shows large-scale structure at fields that approximate flux quantization around the tiles, i.e., when the ratio of fluxoids contained in the large tiles to those in the small tiles is a rational approximant to the irrational area ratio. The phase boundaries of quasi-crystalline and quasi-periodic networks show fine structure indicating the existence of commensurate vortex superlattices on these networks. No such fine structure is found on the random array. For a quasi-crystal whose quasi-periodic long-range order is characterized by the irrational number τ, the commensurate vortex lattices are all found at H = H_0 |n + mτ| (n, m integers). It was found that the commensurate superlattices on quasicrystalline as well as on crystalline networks are related to the inflation symmetry. A general definition of commensurability is proposed.
Chern structure in the Bose-insulating phase of Sr2RuO4 nanofilms
Nobukane, Hiroyoshi; Matsuyama, Toyoki; Tanda, Satoshi
2017-01-01
The quantum anomaly that breaks the symmetry, for example the parity and the chirality, in the quantization leads to a physical quantity with a topological Chern invariant. We report the observation of a Chern structure in the Bose-insulating phase of Sr2RuO4 nanofilms by employing electric transport. We observed the superconductor-to-insulator transition by reducing the thickness of Sr2RuO4 single crystals. The appearance of a gap structure in the insulating phase implies local superconductivity. Fractional quantized conductance was observed without an external magnetic field. We found an anomalous induced voltage with temperature and thickness dependence, and the induced voltage exhibited switching behavior when we applied a magnetic field. We suggest that there was fractional magnetic-field-induced electric polarization in the interlayer. These anomalous results are related to topological invariance. The fractional axion angle Θ = π/6 was determined by observing the topological magneto-electric effect in the Bose-insulating phase of Sr2RuO4 nanofilms. PMID:28112269
Optimum Array Processing for Detecting Binary Signals Corrupted by Directional Interference.
1972-12-01
specific cases. Two different series representations of a vector random process are discussed in Van Trees [3]. These two methods both require the... spacing d, etc.) its detection error represents a lower bound for the performance that might be obtained with other types of array processing (such... Middleton, Introduction to Statistical Communication Theory, New York: McGraw-Hill, 1960. 3. H.L. Van Trees, Detection, Estimation, and Modulation Theory
Instant-Form and Light-Front Quantization of Field Theories
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James
2018-05-01
In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantization at appropriate places.
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
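The stabilizing effect of quantized feedback can be illustrated with a toy discrete-time map; this is a zero-delay caricature with illustrative parameter values, not the Hayes equation studied in the paper:

```python
def simulate_quantized_control(a=1.2, k=1.25, h=0.1, steps=200, x0=0.35):
    """Unstable scalar map x' = a*x (a > 1) with quantized feedback
    u = -k*h*round(x/h). Without feedback the origin is repelling;
    with quantization the trajectory settles into a bounded oscillation
    whose amplitude scales with the quantization step h."""
    x = x0
    traj = []
    for _ in range(steps):
        u = -k * h * round(x / h)  # quantizer with step size h
        x = a * x + u
        traj.append(x)
    return traj
```

Shrinking `h` shrinks the residual oscillation, mirroring the paper's point that a small quantization step practically approximates a stable fixed point.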
Sarkar, Sujit
2018-04-12
An attempt is made to study and understand the quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to topological quantization are presented here. A symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of a system with long-range interaction. We also present quite a few exact solutions with physical explanations. Finally, we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.
EEG feature selection method based on decision tree.
Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun
2015-01-01
This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on a decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
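The idea of letting tree splits rank features can be sketched with decision stumps; a minimal pure-Python illustration (single-threshold information gain per feature), not the paper's full DT procedure:

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def stump_gain(xs, ys):
    """Best information gain obtainable by a single threshold on this
    feature, i.e. what a decision tree would gain by splitting on it."""
    base = entropy(ys)
    best = 0.0
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        gain = (base
                - (len(left) / len(ys)) * entropy(left)
                - (len(right) / len(ys)) * entropy(right))
        best = max(best, gain)
    return best

def select_features(data, ys, k):
    """Rank features by stump gain and return the indices of the top k;
    the selected columns would then feed a downstream SVM."""
    gains = [stump_gain([row[i] for row in data], ys)
             for i in range(len(data[0]))]
    return sorted(range(len(gains)), key=lambda i: -gains[i])[:k]
```

A feature that cleanly separates the classes at some threshold gets the maximal gain and is kept; uninformative features rank low and are dropped.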
A data management system for engineering and scientific computing
NASA Technical Reports Server (NTRS)
Elliot, L.; Kunii, H. S.; Browne, J. C.
1978-01-01
Data elements and relationship definition capabilities for this data management system are explicitly tailored to the needs of engineering and scientific computing. System design was based upon studies of data management problems currently being handled through explicit programming. The system-defined data element types include real scalar numbers, vectors, arrays and special classes of arrays such as sparse arrays and triangular arrays. The data model is hierarchical (tree structured). Multiple views of data are provided at two levels. Subschemas provide multiple structural views of the total data base and multiple mappings for individual record types are supported through the use of a REDEFINES capability. The data definition language and the data manipulation language are designed as extensions to FORTRAN. Examples of the coding of real problems taken from existing practice in the data definition language and the data manipulation language are given.
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge-invariant BRST formulation of a particle constrained to move on a general conic. The model constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier, leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian; the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space, introducing fermionic ghost variables, exhibiting the BRST symmetry transformations, and writing the Green's function generating functional for the BRST-quantized model.
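The Dirac brackets referred to in this abstract follow the standard construction for second-class constraints; as a reminder (the textbook formula, not a result of the paper itself):

\[
\{A, B\}_D = \{A, B\} - \{A, \chi_a\}\, (C^{-1})^{ab}\, \{\chi_b, B\},
\qquad C_{ab} = \{\chi_a, \chi_b\},
\]

where the $\chi_a$ are the second-class constraints and $C_{ab}$ is their (invertible) constraint matrix.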
NASA Astrophysics Data System (ADS)
Didiş Körhasan, Nilüfer; Eryılmaz, Ali; Erkoç, Şakir
2016-01-01
Mental models are coherently organized knowledge structures used to explain phenomena. They interact with social environments and evolve through that interaction. When learners lack daily experience with the phenomena, social interaction gains much greater importance. In this part of our multiphase study, we investigate how instructional interactions influenced students' mental models of the quantization of physical observables. Class observations and interviews were analysed to study the mental models students constructed in a modern physics course during an academic semester. The research revealed that students' mental models were influenced by (1) the manner of teaching, including the instructional methodologies and content-specific techniques used by the instructor, (2) the order of topics and familiarity with concepts, and (3) peers.
From N=4 Galilean superparticle to three-dimensional non-relativistic N=4 superfields
NASA Astrophysics Data System (ADS)
Fedoruk, Sergey; Ivanov, Evgeny; Lukierski, Jerzy
2018-05-01
We consider the general N=4, d=3 Galilean superalgebra with arbitrary central charges and study its dynamical realizations. Using nonlinear realization techniques, we introduce a class of actions for the N=4 three-dimensional non-relativistic superparticle that are linear in the central-charge Maurer-Cartan one-forms. As a prerequisite to quantization, we analyze the phase-space constraint structure of our model for various choices of the central charges. The first-class constraints generate gauge transformations, involving fermionic κ-gauge transformations. Quantization of the model gives rise to a collection of free N=4, d=3 Galilean superfields, which can be further employed, e.g., for the description of three-dimensional non-relativistic N=4 supersymmetric theories.
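The statement that first-class constraints generate gauge transformations is the standard Dirac picture; schematically (general formula, not specific to this model):

\[
\delta_\varepsilon F = \{F,\ \varepsilon^a \phi_a\},
\]

where the $\phi_a$ are the first-class constraints and the $\varepsilon^a$ are arbitrary gauge parameters, fermionic in the case of the κ-transformations mentioned above.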
Terahertz spectroscopy on Faraday and Kerr rotations in a quantum anomalous Hall state.
Okada, Ken N; Takahashi, Youtarou; Mogi, Masataka; Yoshimi, Ryutaro; Tsukazaki, Atsushi; Takahashi, Kei S; Ogawa, Naoki; Kawasaki, Masashi; Tokura, Yoshinori
2016-07-20
Electrodynamic responses of three-dimensional topological insulators are characterized by a universal magnetoelectric term in the Lagrangian formalism. The quantized magnetoelectric coupling, generally referred to as the topological magnetoelectric effect, has been predicted to induce exotic phenomena, including universal low-energy magneto-optical effects. Here we report an experimental indication of the topological magnetoelectric effect, exemplified by magneto-optical Faraday and Kerr rotations in the quantum anomalous Hall states of magnetic topological insulator surfaces, measured by terahertz magneto-optics. A universal relation composed of the observed Faraday and Kerr rotation angles, but not of any material parameters (for example, dielectric constant and magnetic susceptibility), clearly exhibits the trajectory towards the fine-structure constant in the quantized limit.
Dantur Juri, María J; Moreno, Marta; Prado Izaguirre, Mónica J; Navarro, Juan C; Zaidenberg, Mario O; Almirón, Walter R; Claps, Guillermo L; Conn, Jan E
2014-09-04
Anopheles pseudopunctipennis is an important malaria vector in the Neotropical region and the only species involved in Plasmodium transmission in the Andean foothills. Its wide geographical distribution in America, high preference for biting humans, and capacity to rest inside dwellings after feeding all contribute to its vector status. Previous reports have tried to elucidate its taxonomic status, distinguishing populations from North, Central and South America. In the present study we used a mitochondrial marker to examine the demographic history of An. pseudopunctipennis in northwestern Argentina. Twelve localities were sampled across 550 km of the species' distribution in Argentina, including two near the Bolivian border and several in southern Tucumán. A fragment of the cytochrome oxidase I (COI) gene was sequenced, and haplotype relationships were analyzed with a statistical parsimony network and a Neighbor-Joining (NJ) tree. Genetic differentiation was estimated with FST. Historical demographic processes were evaluated using diversity measures, neutrality tests and the mismatch distribution. Forty-one haplotypes were identified, of which haplotype A was the most common and widely distributed. Neither the network nor the NJ tree showed any geographic differentiation between northern and southern populations. Haplotype diversities, Tajima's D and Fu & Li's F and D neutrality tests, and the mismatch distribution supported a scenario of Holocene demographic expansion. The demographic pattern suggests that An. pseudopunctipennis has undergone a single colonization process, and the ancestral haplotype is shared by specimens from all localities, indicating mitochondrial gene flow. Genetic differentiation was minimal, observed only between one northern and one southern locality. The estimated time of the population expansion was during the Holocene. These data suggest that regional vector control measures would be equally effective in both the northern and southern localities sampled, but also that insecticide-resistance genes may spread rapidly within this region.
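Among the diversity measures used in studies like this one, nucleotide diversity (π) is the simplest: the mean number of pairwise differences per site. A minimal sketch on toy aligned sequences (illustrative stand-ins, not the actual COI haplotypes):

```python
import numpy as np

# Toy aligned COI fragments, one row per sequence
seqs = np.array([list("ACGTACGT"),
                 list("ACGTACGA"),
                 list("ACCTACGA")])
n, L = seqs.shape

# Nucleotide diversity: average pairwise Hamming distance per site
diffs = []
for i in range(n):
    for j in range(i + 1, n):
        diffs.append(np.sum(seqs[i] != seqs[j]))
pi = np.mean(diffs) / L
```

Real analyses (e.g. Tajima's D) compare π against the diversity expected from the number of segregating sites; an excess of rare variants, as reported here, signals recent demographic expansion.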