Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
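The Frequency-Sensitive Competitive Learning rule named above scales each codeword's distance by how often that codeword has won, so underused codewords stay competitive and the whole codebook is exercised. The following is a minimal numpy sketch of this idea, not the authors' VLSI implementation; the learning rate and epoch count are assumed values.

```python
import numpy as np

def fscl_codebook(data, num_codewords, epochs=10, lr=0.05, rng=None):
    """Frequency-sensitive competitive learning (FSCL) codebook design.

    Each codeword's squared distance is scaled by its win count, so
    rarely chosen codewords become more competitive over time.
    """
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    wins = np.ones(num_codewords)  # win counts: the "fairness" term
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.sum((codebook - x) ** 2, axis=1) * wins  # frequency-scaled distance
            j = np.argmin(d)                                # winning codeword
            codebook[j] += lr * (x - codebook[j])           # move winner toward input
            wins[j] += 1
    return codebook
```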
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
Curvilinear component analysis: a self-organizing neural network for nonlinear mapping of data sets.
Demartines, P; Herault, J
1997-01-01
We present a new strategy called "curvilinear component analysis" (CCA) for dimensionality reduction and representation of multidimensional data sets. The principle of CCA is a self-organized neural network performing two tasks: vector quantization (VQ) of the submanifold in the data set (input space); and nonlinear projection (P) of these quantizing vectors toward an output space, providing a revealing unfolding of the submanifold. After learning, the network has the ability to continuously map any new point from one space into another: forward mapping of new points in the input space, or backward mapping of an arbitrary position in the output space.
NASA Astrophysics Data System (ADS)
Liu, Tuo; Chen, Changshui; Shi, Xingzhe; Liu, Chengyong
2016-05-01
The Raman spectra of tissue from 20 brain tumor patients were recorded using a confocal microlaser Raman spectroscope with 785 nm excitation in vitro. A total of 133 spectra were investigated. Spectral peaks from normal white matter tissue and tumor tissue were analyzed. Algorithms such as principal component analysis, linear discriminant analysis, and the support vector machine are commonly used to analyze spectral data. However, in this study, we employed the learning vector quantization (LVQ) neural network, which is typically used for pattern recognition. By applying the proposed method, a normal diagnosis accuracy of 85.7% and a glioma diagnosis accuracy of 89.5% were achieved. The LVQ neural network is a recent approach to mining Raman spectral information. Moreover, it is fast and convenient, does not require spectral peak assignment, and achieves a relatively high accuracy. It can be used in brain tumor prognostics and in helping to optimize the cutting margins of gliomas.
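Since LVQ recurs throughout these abstracts, a compact sketch of the standard LVQ1 training rule may be useful. This is the generic textbook rule in numpy, not the configuration used in the study above; the learning rate and epoch count are assumptions.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.01, epochs=20, rng=None):
    """LVQ1: move the nearest prototype toward a sample when their labels
    match, and away from it when they do not."""
    rng = np.random.default_rng(rng)
    P = np.asarray(prototypes, dtype=float).copy()
    pl = np.asarray(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((P - X[i]) ** 2, axis=1))  # nearest prototype
            sign = 1.0 if pl[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P, pl

def lvq1_predict(X, P, pl):
    """Classify each row of X by the label of its nearest prototype."""
    return pl[np.argmin(((P[None] - X[:, None]) ** 2).sum(-1), axis=1)]
```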
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases, being a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known mel-frequency cepstral coefficient (MFCC) parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw
2008-01-01
To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.
Application of Classification Models to Pharyngeal High-Resolution Manometry
ERIC Educational Resources Information Center
Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.
2012-01-01
Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…
Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins
Helweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.
2006-01-01
The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.
Cross-entropy embedding of high-dimensional data using the neural gas model.
Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi
2005-01-01
A cross-entropy approach to mapping high-dimensional data into a low-dimensional embedding space is presented. The method allows the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, to be projected simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).
NASA Astrophysics Data System (ADS)
Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong
2018-06-01
The development of fuel cell electric vehicles can to a certain extent alleviate worldwide energy and environmental issues. Since a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, comprising a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. This multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions, according to the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified using a dynamometer test bench. Simulation results show that the proposed strategy obtains better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.
Modulated error diffusion CGHs for neural nets
NASA Astrophysics Data System (ADS)
Vermeulen, Pieter J. E.; Casasent, David P.
1990-05-01
New modulated error diffusion CGHs (computer generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample and hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method (that devotes attention to quantization noise effects).
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
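The abstract's point that the coder needs only addition and subtraction suggests a sum-of-absolute-differences (L1) codebook search. The sketch below is a hypothetical illustration of that operation, not the paper's algorithm.

```python
import numpy as np

def l1_encode(vectors, codebook):
    """Nearest-codeword search under the L1 metric.

    The per-codeword cost is built entirely from subtractions and
    additions (absolute differences, then a sum), which is what makes
    this search attractive for simple hardware.
    """
    # d[i, j] = sum_k |vectors[i, k] - codebook[j, k]|
    d = np.abs(vectors[:, None, :] - codebook[None, :, :]).sum(axis=2)
    return np.argmin(d, axis=1)  # index of the best-matching codeword
```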
Wavelet Transforms in Parallel Image Processing
1994-01-27
[Report documentation and table-of-contents residue; the recoverable keywords are: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, 2D and 3D wavelet transforms, vector quantization of wavelet transform coefficients, adaptive image halftoning based on wavelets. A surviving abstract fragment: one application has been directed to adaptive image halftoning, in which the gray information at a pixel, including its gray value and gradient, is represented by …]
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs the codebook as the data is being encoded or quantized. This is done by computing each centroid as a recursive moving average, so that the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to the batch centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer is changing definition, or state, after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
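The recursive moving-average update described here is the standard running-mean recursion; a minimal sketch follows (names are illustrative):

```python
import numpy as np

def recursive_centroid_update(centroid, count, x):
    """Running-mean update: after vector x is encoded into a cell, move
    that cell's centroid so it equals the mean of all vectors seen so far."""
    count += 1
    centroid = centroid + (x - centroid) / count
    return centroid, count
```

Applied to a fixed set of vectors, this recursion reproduces the batch centroid, which is the property the abstract notes; in the adaptive coder, the decoder must receive the same updates as side information to stay synchronized with the encoder.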
The research of "blind" spot in the LVQ network
NASA Astrophysics Data System (ADS)
Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli
2017-04-01
Nowadays the competitive neural network is widely used in pattern recognition, classification, and other areas, and shows great advantages compared with traditional clustering methods. But competitive neural networks still have shortcomings in many aspects and need to be further improved. Based on the learning vector quantization network proposed by Kohonen [1], this paper resolves the issue of large training error when there are "blind" spots in a network by introducing threshold-value learning rules, and finally implements the method in Matlab.
Application of two neural network paradigms to the study of voluntary employee turnover.
Somers, M J
1999-04-01
Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The 1st was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The 2nd was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.
Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.
Karayiannis, N B; Pai, P I
1999-02-01
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
Identifying images of handwritten digits using deep learning in H2O
NASA Astrophysics Data System (ADS)
Sadhasivam, Jayakumar; Charanya, R.; Kumar, S. Harish; Srinivasan, A.
2017-11-01
Automatic digit recognition is of popular interest today. Deep learning techniques make object recognition in image data possible. Recognizing digits has become a fundamental task in real-world applications. Since digits are written in various styles, identifying a digit requires recognizing and classifying it with the help of machine learning methods. This work is based on a supervised learning vector quantization neural network, a class of artificial neural network. Images of digits are recognized, trained, and tested: after the network is created, it is trained using training dataset vectors, and testing is applied to images of digits that are separated from one another by segmenting the image and resizing the digit image accordingly for better accuracy.
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
Condition monitoring of 3G cellular networks through competitive neural models.
Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo
2005-09-01
We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
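The global normality-profile test described above reduces to thresholding quantization errors at a percentile estimated from normal training data. A schematic sketch follows; the percentile level is an assumed value, and the codebook stands in for prototypes from any of the four competitive models.

```python
import numpy as np

def fit_global_np(train_states, codebook, upper_pct=99.0):
    """Build a global normality profile: the distribution of quantization
    errors of normal training state vectors, summarized by a percentile."""
    err = np.min(np.linalg.norm(train_states[:, None] - codebook[None], axis=2), axis=1)
    return np.percentile(err, upper_pct)

def is_abnormal(state, codebook, threshold):
    """Flag a new state vector whose quantization error falls outside
    the normal profile's confidence interval."""
    err = np.min(np.linalg.norm(codebook - state, axis=1))
    return err > threshold
```

Local profiles work the same way, but on per-component quantization errors, which is what lets the method point at the abnormal state variables once the global test fires.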
NASA Technical Reports Server (NTRS)
Lin, Paul P.; Jules, Kenol
2002-01-01
An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
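Why a binary word assignment matters can be made concrete by measuring the distortion caused by single-bit index errors under a given assignment. The toy sketch below is not the paper's algorithms; it assumes single bit flips dominate and that the assignment is a permutation of all `2**bits` binary words.

```python
import numpy as np

def single_bit_error_distortion(codebook, assignment, bits):
    """Average squared-error distortion when one bit of a transmitted
    index flips.

    assignment[i] is the binary word (an int) assigned to codeword i; a
    good assignment maps nearby codewords to words differing in one bit,
    so a single channel bit error decodes to a similar codeword.
    """
    inverse = {int(w): i for i, w in enumerate(assignment)}
    total = 0.0
    for i, w in enumerate(assignment):
        for b in range(bits):
            j = inverse[w ^ (1 << b)]  # codeword received if bit b flips
            total += np.sum((codebook[i] - codebook[j]) ** 2)
    return total / (len(assignment) * bits)
```

Comparing this figure for a random permutation against a carefully chosen one is one way to see gains of the kind the abstract reports.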
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand global methods as those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using an unsupervised Vector Quantization algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were tested on eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
Broad Absorption Line Quasar catalogues with Supervised Neural Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.
2008-12-05
We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.
Quantization of Electromagnetic Fields in Cavities
NASA Technical Reports Server (NTRS)
Kakazu, Kiyotaka; Oshiro, Kazunori
1996-01-01
A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.
Detection of Road Surface States from Tire Noise Using Neural Network Analysis
NASA Astrophysics Data System (ADS)
Kongrattanaprasert, Wuttiwat; Nomura, Hideyuki; Kamakura, Tomoo; Ueda, Koji
This report proposes a new processing method for automatically detecting the state of road surfaces from the tire noise of passing vehicles. In addition to multiple indicators of the signal features in the frequency domain, we propose a few feature indicators in the time domain to successfully classify the road states into four categories: snowy, slushy, wet, and dry. The method is based on artificial neural networks. The proposed classification is carried out in multiple neural networks using learning vector quantization, and the outcomes of the networks are then integrated by a voting decision-making scheme. Experimental results obtained from signals recorded over ten days in the snowy season demonstrate that an accuracy of approximately 90% can be attained in predicting road surface states using only tire noise data.
Neural networks to classify speaker independent isolated words recorded in radio car environments
NASA Astrophysics Data System (ADS)
Alippi, C.; Simeoni, M.; Torri, V.
1993-02-01
Many applications, in particular those requiring nonlinear signal processing, have proved Artificial Neural Networks (ANNs) to be invaluable tools for model-free estimation. The classifying abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring the implementation of compact integrated devices is taken into account: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. The data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished by using cepstral coefficient representations, log area ratios, and filter-bank techniques. Multilayer perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for the implementation of compact real-time neural devices.
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, which directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server - apply k-means based adaptive quantization on learned network weights and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
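The two quantizers the abstract contrasts can be sketched in a few lines: an adaptive, k-means-style quantizer for the server-side weights, and a cheap uniform quantizer for client-side inputs and feature maps. This is a minimal illustration under assumed parameters (16 weight levels, 5-bit activations), not the paper's implementation; the retraining step between quantization rounds is omitted.

```python
import numpy as np

def kmeans_quantize_weights(w, num_levels=16, iters=20):
    """Cluster weight values with 1-D k-means (Lloyd iterations) and snap
    each weight to its cluster center: the adaptive server-side step."""
    flat = w.ravel()
    centers = np.linspace(flat.min(), flat.max(), num_levels)
    for _ in range(iters):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for k in range(num_levels):
            if np.any(labels == k):          # keep empty clusters in place
                centers[k] = flat[labels == k].mean()
    return centers[labels].reshape(w.shape)

def uniform_quantize(x, bits=5):
    """Uniform quantization for inputs/feature maps: cheap enough to run
    on the client device at inference time."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (2 ** bits - 1)
    if step == 0:                            # constant input: nothing to do
        return x.copy()
    return lo + np.round((x - lo) / step) * step
```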
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization
NASA Astrophysics Data System (ADS)
Binz, Ernst; Pods, Sonja
2006-01-01
In these notes we associate a natural Heisenberg group bundle H^a with a singularity-free smooth vector field X = (id, a) on a submanifold M in a Euclidean three-space. This bundle naturally yields an infinite-dimensional Heisenberg group H_X^∞. A representation of the C*-group algebra of H_X^∞ is a quantization. It causes a natural Weyl deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside H^a.
Application of heterogeneous pulse coupled neural network in image quantization
NASA Astrophysics Data System (ADS)
Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun
2016-11-01
On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models and have different parameters corresponding to different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, an HPCNN also satisfies human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results are optimal and more suitable for humans to observe. At the same time, experimental results on natural images from the standard image library show the validity and efficiency of our proposed quantization method.
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
Improved Autoassociative Neural Networks
NASA Technical Reports Server (NTRS)
Hand, Charles
2003-01-01
Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
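The update rule the passage describes, where each neuron's next state is a threshold on the inner product of the current binary state with its weight row, can be sketched directly. The weights below are a generic Hebbian illustration, not the nexus bit-weight scheme proposed in the entry.

```python
import numpy as np

def autoassoc_step(state, W, threshold=0.0):
    """One synchronous time step of a fully connected autoassociative net:
    each neuron fires iff the inner product of the current binary state
    vector with its weight row exceeds the threshold."""
    return (W @ state > threshold).astype(np.int8)

# A stored pattern becomes a fixed point under Hebbian (outer-product) weights:
pattern = np.array([1, 0, 1, 1, 0], dtype=np.int8)
bipolar = 2 * pattern.astype(float) - 1       # map {0,1} -> {-1,+1}
W = np.outer(bipolar, bipolar)
np.fill_diagonal(W, 0.0)                      # no self-connection
assert np.array_equal(autoassoc_step(pattern, W), pattern)
```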
Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.
Wan, Ying; Cao, Jinde; Wen, Guanghui
In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization processes, and communication time delays is investigated. The information communication channel between the master chaotic neural network and the slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of the output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, the allowable length of the sampling intervals, and the upper bound of the network-induced delays are derived to ensure the quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.
Combining Vector Quantization and Histogram Equalization.
ERIC Educational Resources Information Center
Cosman, Pamela C.; And Others
1992-01-01
Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real-time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
NASA Astrophysics Data System (ADS)
Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo
2010-01-01
With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reductions in data storage and computational effort. One of the most recent VQ techniques that handles the poor estimation of vector centroids due to biased data from undersampling is fuzzy declustering-based vector quantization (FDVQ). In this paper, we are therefore motivated to propose an FDVQ-based hidden Markov model (HMM) and to investigate its effectiveness and efficiency in the classification of genotype-image phenotypes. A performance evaluation and comparison of the recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) is carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we have justified the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using hypothesis t-tests. As a result, we have validated that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
A neural net based architecture for the segmentation of mixed gray-level and binary pictures
NASA Technical Reports Server (NTRS)
Tabatabai, Ali; Troudet, Terry P.
1991-01-01
A neural-net-based architecture is proposed to perform segmentation in real time for mixed gray-level and binary pictures. In this approach, the composite picture is divided into 16 x 16 pixel blocks, which are identified as character blocks or image blocks on the basis of a dichotomy measure computed by an adaptive 16 x 16 neural net. For compression purposes, each image block is further divided into 4 x 4 subblocks; a one-bit nonparametric quantizer is used to encode 16 x 16 character and 4 x 4 image blocks; and the binary map and quantizer levels are obtained through a neural net segmentor over each block. The efficiency of the neural segmentation in terms of computational speed, data compression, and quality of the compressed picture is demonstrated. The effect of weight quantization is also discussed. VLSI implementations of such adaptive neural nets in CMOS technology are described and simulated in real time for a maximum block size of 256 pixels.
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco
2016-01-01
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
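One widely used "efficient nearest neighbor search" of the kind these authors mention is partial distance elimination, which abandons a codeword as soon as its accumulating squared distance exceeds the best found so far. The sketch below is a generic illustration of that technique, not the specific accelerations proposed in the paper.

```python
import numpy as np

def pde_nearest(x, codebook):
    """Partial distance elimination (PDE) nearest-neighbor search."""
    best_j, best_d = 0, float("inf")
    for j, c in enumerate(codebook):
        d = 0.0
        for xk, ck in zip(x, c):
            d += (xk - ck) ** 2
            if d >= best_d:       # cannot beat the current best: abandon early
                break
        else:                     # loop finished: full distance beats the best
            best_j, best_d = j, d
    return best_j, best_d
```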
Applying cybernetic technology to diagnose human pulmonary sounds.
Chen, Mei-Yung; Chou, Cheng-Han
2014-06-01
Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are lower than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah
2016-09-01
Brain computer interface (BCI) is a new communication pathway between man and machine. It identifies mental task patterns stored in the electroencephalogram (EEG): it extracts brain electrical activities recorded by EEG and transforms them into machine control commands. The main goal of BCI is to make assistive environmental devices, such as computers, available to paralyzed people and to make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared on alpha-beta frequency bands with only mental imagination of cursor movements. First of all, power spectral density (PSD) features of the EEG signals are extracted, and the high dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN), and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross-validation technique.
Quantized Vector Potential and the Photon Wave-function
NASA Astrophysics Data System (ADS)
Meis, C.; Dahoo, P. R.
2017-12-01
The vector potential function $\vec{\alpha}_{k\lambda}(\vec{r},t)$ for a $k$-mode and $\lambda$-polarization photon, with the quantized amplitude $\alpha_{0k}(\omega_k) = \xi\omega_k$, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian $\tilde{H} = -i\hbar c\,\vec{\nabla}$ …
Ellipsoidal fuzzy learning for smart car platoons
NASA Astrophysics Data System (ADS)
Dickerson, Julie A.; Kosko, Bart
1993-12-01
A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids, or more certain rules. Sparse data gave larger ellipsoids, or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches; it locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart-car controller.
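A minimal sketch of the core geometric step, assuming synthetic data: the covariance around a cluster's centroid (the quantization vector) defines the rule ellipsoid's axes, so tight clusters yield small, certain patches.

```python
# Sketch of turning a data cluster into an ellipsoidal fuzzy-rule
# patch via its centroid and covariance. Purely illustrative.
import numpy as np

def rule_ellipsoid(cluster):
    """Return centroid, axis directions, and axis lengths of the
    ellipsoid covering one data cluster."""
    c = cluster.mean(axis=0)                # centroid = quantization vector
    cov = np.cov(cluster, rowvar=False)     # local scatter
    evals, evecs = np.linalg.eigh(cov)
    radii = np.sqrt(evals)                  # tight cluster -> small radii
    return c, evecs, radii

pts = np.random.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 0.5]], 500)
c, axes, radii = rule_ellipsoid(pts)
print(c, radii)   # smaller radii mean a more certain rule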
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
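The forward-adaptive variant lends itself to a compact sketch: estimate a gain, normalize, quantize against a gain-normalized codebook, and rescale at the decoder. The RMS gain estimator below is one simple choice; the paper studies several optimized estimators, which are not reproduced here.

```python
# Minimal sketch of forward gain-adaptive VQ with an RMS gain
# estimator. Codebook and dimensions are illustrative.
import numpy as np

def encode(x, codebook, eps=1e-12):
    g = np.sqrt(np.mean(x ** 2)) + eps      # gain estimate (RMS)
    xn = x / g                              # normalized input vector
    i = np.argmin(((codebook - xn) ** 2).sum(axis=1))
    return i, g

def decode(i, g, codebook):
    return g * codebook[i]                  # rescale decoder output

codebook = np.random.randn(256, 8)          # stands in for a trained,
x = 5.0 * np.random.randn(8)                # gain-normalized codebook
i, g = encode(x, codebook)
print(np.mean((x - decode(i, g, codebook)) ** 2))
```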
Vector Quantization Algorithm Based on Associative Memories
NASA Astrophysics Data System (ADS)
Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo
This paper presents a vector quantization algorithm for image compression based on extended associative memories (EAM). The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are its high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
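For reference, the LBG codebook that seeds the first stage is designed by iterative splitting and Lloyd updates. A minimal sketch follows; the perturbation factor and iteration count are illustrative choices.

```python
# Sketch of LBG codebook design: split the codebook, then refine
# with Lloyd iterations. Assumes the target size is a power of two.
import numpy as np

def lbg(train, size, iters=20, eps=1e-3):
    cb = train.mean(axis=0, keepdims=True)            # start from centroid
    while len(cb) < size:
        cb = np.vstack([cb * (1 + eps), cb * (1 - eps)])  # split step
        for _ in range(iters):                        # Lloyd refinement
            d = ((train[:, None, :] - cb[None]) ** 2).sum(-1)
            idx = d.argmin(axis=1)
            for k in range(len(cb)):
                members = train[idx == k]
                if len(members):
                    cb[k] = members.mean(axis=0)
    return cb

train = np.random.randn(2000, 4)
print(lbg(train, 16).shape)   # (16, 4)
```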
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in full-searched or large tree-searched VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
Consciousness of Unification: The Mind-Matter Phallacy Bites the Dust
NASA Astrophysics Data System (ADS)
Beichler, James E.
A complete theoretical model of how consciousness arises in neural nets can be developed based on a mixed quantum/classical basis. Both mind and consciousness are multi-leveled scalar and vector electromagnetic complexity patterns, respectively, which emerge within all living organisms through the process of evolution. Like life, the mind and consciousness patterns extend throughout living organisms (bodies), but the neural nets and higher level groupings that distinguish higher levels of consciousness only exist in the brain so mind and consciousness have been traditionally associated with the brain alone. A close study of neurons and neural nets in the brain shows that the microtubules within axons are classical bio-magnetic inductors that emit and absorb electromagnetic pulses from each other. These pulses establish interference patterns that influence the quantized vector potential patterns of interstitial water molecules within the neurons as well as create the coherence within neurons and neural nets that scientists normally associate with more complex memories, thought processes and streams of thought. Memory storage and recall are guided by the microtubules and the actual memory patterns are stored as magnetic vector potential complexity patterns in the points of space at the quantum level occupied by the water molecules. This model also accounts for the plasticity of the brain and implies that mind and consciousness, like life itself, are the result of evolutionary processes. However, consciousness can evolve independent of an organism's birth genetics once it has evolved by normal bottom-up genetic processes and thus force a new type of top-down evolution on living organisms and species as a whole that can be explained by expanding the laws of thermodynamics to include orderly systems.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of BSIFT as a code word, the generated BSIFT naturally adapts to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
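A hedged sketch of the general idea: binarize each SIFT dimension by thresholding against the descriptor's own median, giving a data-independent bit-vector in the spirit of BSIFT. The paper's exact binarization rule may differ; this is one plausible reading.

```python
# Sketch of data-independent SIFT binarization; the median threshold
# is an assumption, not necessarily the paper's exact rule.
import numpy as np

def binarize_sift(desc):
    """desc: (128,) SIFT descriptor -> (128,) bits; the first 32
    bits could then serve as the inverted-file code word."""
    return (desc > np.median(desc)).astype(np.uint8)

desc = np.random.rand(128)
bits = binarize_sift(desc)
codeword = bits[:32]          # index key; remaining bits can verify matches
print(codeword)
```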
Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.
Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann
2017-01-01
Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
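A minimal sketch of the subset-quantization idea under a single distance threshold, in the spirit of the density-dependent scheme: a sample joins the retained subset only if it is farther than a threshold from every kept sample, so dense regions collapse to a few representatives. The threshold value and data are illustrative.

```python
# Sketch of quantizing a large data set down to a small representative
# subset with one shrinkage-style distance threshold.
import numpy as np

def quantize_subset(X, delta):
    kept = [X[0]]
    for x in X[1:]:
        d = np.linalg.norm(np.asarray(kept) - x, axis=1)
        if d.min() > delta:        # sparse region -> keep the sample
            kept.append(x)
    return np.asarray(kept)

X = np.random.randn(5000, 3)
S = quantize_subset(X, delta=0.8)
print(len(X), "->", len(S))        # S could then feed Nystrom features
```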
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set for the codebooks, and IP-HMM performs the recognition. The novelty here lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Where state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows itself to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and box ID where the SIFT feature is located inside, is compact and can be loaded into the main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling
NASA Astrophysics Data System (ADS)
Barcelos-Neto, J.; Silva, M. B. D.
We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure like the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with the ones found in the literature by using the Dirac method.
Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H
2018-05-17
This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save limited communication resources. Second, a novel communication framework is employed by the logarithmic quantizer that quantizes and reduces the data transmission rate in the network, which appreciably improves the communication efficiency of the network. Third, a stabilization criterion is derived, based on a sufficient condition which guarantees a prescribed H∞ performance level in the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme.
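For concreteness, a standard logarithmic quantizer of the kind referred to above can be sketched as follows: quantization levels form a geometric ladder $u_i = \rho^i u_0$, so the relative (rather than absolute) error is bounded. The parameter values are illustrative.

```python
# Sketch of a standard logarithmic quantizer with density rho.
import numpy as np

def log_quantize(v, rho=0.8, u0=1.0):
    if v == 0.0:
        return 0.0
    # snap |v| to the nearest level u0 * rho**i in the log domain
    i = np.round(np.log(abs(v) / u0) / np.log(rho))
    return np.sign(v) * u0 * rho ** i

for v in [3.7, 0.05, -1.2]:
    q = log_quantize(v)
    print(v, "->", q, "relative error", abs(q - v) / abs(v))
```

Because the levels are geometric, the relative error stays below a fixed sector bound, which is what makes such quantizers attractive for feedback and estimation loops.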
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus, this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
Radial quantization of the 3d CFT and the higher spin/vector model duality
NASA Astrophysics Data System (ADS)
Hu, Shan; Li, Tianjun
2014-10-01
We study the radial quantization of the 3d O(N) vector model. We calculate the higher spin charges whose commutation relations give the higher spin algebra. The Fock states of higher spin gravity in AdS4 are realized as states in the 3d CFT. The dynamical information is encoded in their inner products. This serves as the simplest explicit demonstration of the CFT definition for the quantum gravity.
Wavelet subband coding of computer simulation output using the A++ array class library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be both robust against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
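The multistage structure at the heart of this design admits a compact sketch: each stage quantizes the residual left by the previous stage, so K stages of N vectors give N^K effective codevectors at only K·log2(N) bits. The random codebooks below are stand-ins for the jointly designed ones described above.

```python
# Sketch of multistage (residual) VQ encoding and decoding.
import numpy as np

def msvq_encode(x, stages):
    idxs, r = [], x.copy()
    for cb in stages:                              # one codebook per stage
        i = np.argmin(((cb - r) ** 2).sum(axis=1))
        idxs.append(i)
        r -= cb[i]                                 # pass residual onward
    return idxs, r                                 # r = final quantization error

def msvq_decode(idxs, stages):
    return sum(cb[i] for cb, i in zip(stages, idxs))

stages = [np.random.randn(32, 10) * s for s in (1.0, 0.5, 0.25)]
x = np.random.randn(10)
idxs, err = msvq_encode(x, stages)
print(idxs, np.sum(err ** 2))
```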
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
Classification of postural profiles among mouth-breathing children by learning vector quantization.
Mancini, F; Sousa, F S; Hummel, A D; Falcão, A E J; Yi, L C; Ortolani, C F; Sigulem, D; Pisa, I T
2011-01-01
Mouth breathing is a chronic syndrome that may bring about postural changes. Finding characteristic patterns of the changes occurring in the complex musculoskeletal system of mouth-breathing children has been a challenge. Learning vector quantization (LVQ) is an artificial neural network model that can be applied for this purpose. The aim of the present study was to apply LVQ to determine the characteristic postural profiles shown by mouth-breathing children, in order to further understand abnormal posture among mouth breathers. Postural training data on 52 children (30 mouth breathers and 22 nose breathers) and postural validation data on 32 children (22 mouth breathers and 10 nose breathers) were used. The performance of LVQ was compared with that of other classification models: self-organizing maps, back-propagation applied to multilayer perceptrons, Bayesian networks, naive Bayes, J48 decision trees, and k-nearest-neighbor classifiers. Classifier accuracy was assessed by means of leave-one-out cross-validation, area under the ROC curve (AUC), and inter-rater agreement (Kappa statistics). By using the LVQ model, five postural profiles for mouth-breathing children could be determined. LVQ showed satisfactory results for mouth-breathing and nose-breathing classification: sensitivity and specificity rates of 0.90 and 0.95, respectively, when using the training dataset, and 0.95 and 0.90, respectively, when using the validation dataset. The five postural profiles for mouth-breathing children suggested by LVQ were incorporated into application software for classifying the severity of mouth breathers' abnormal posture.
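For readers unfamiliar with the classifier, a minimal LVQ1 sketch follows: labeled prototypes are pulled toward correctly classified samples and pushed away otherwise. The learning rate, epoch count, and synthetic data are illustrative choices, not the study's settings.

```python
# Minimal LVQ1 training and prediction sketch.
import numpy as np

def lvq1(X, y, protos, plabels, lr=0.05, epochs=30):
    for _ in range(epochs):
        for x, t in zip(X, y):
            j = np.argmin(((protos - x) ** 2).sum(axis=1))   # winner
            sign = 1.0 if plabels[j] == t else -1.0
            protos[j] += sign * lr * (x - protos[j])
    return protos

def predict(x, protos, plabels):
    return plabels[np.argmin(((protos - x) ** 2).sum(axis=1))]

X = np.vstack([np.random.randn(50, 2) + m for m in ([0, 0], [3, 3])])
y = np.array([0] * 50 + [1] * 50)
plabels = np.array([0, 1])
protos = lvq1(X, y, X[[0, 99]].astype(float).copy(), plabels)
print(predict(np.array([2.8, 3.1]), protos, plabels))   # likely 1
```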
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice, and that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
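A hedged sketch of the select-then-binarize idea follows. Variance is used here as an unsupervised stand-in for the paper's importance score; the dimensions and data are synthetic.

```python
# Sketch of feature selection plus 1-bit quantization: rank dimensions
# by an importance score, keep the top-k, binarize by sign.
import numpy as np

def select_and_binarize(X, k):
    score = X.var(axis=0)                  # unsupervised importance proxy
    top = np.argsort(score)[::-1][:k]      # keep k most informative dims
    return (X[:, top] > 0).astype(np.uint8), top

X = np.random.randn(1000, 8192)            # e.g. VLAD vectors
B, top = select_and_binarize(X, k=1024)
# One byte per bit here for clarity; pack 8 bits/byte in practice.
print(B.shape, B.nbytes, "bytes vs", 8192 * 4, "bytes per float vector")
```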
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that can occur with pre-trained VQ codebooks, and it eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities; they are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.
Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua
2017-01-01
The number of Alzheimer's disease patients is increasing rapidly every year, and scholars tend to use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using the pseudo Zernike moment. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, resilient backpropagation, and scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs the pseudo Zernike moment with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches; therefore, it can be used to detect Alzheimer's disease.
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper proposes a method of feature weighting for classification tasks with the competitive-learning artificial neural network LVQ. The feature weighting method searches for attribute weights using PSO so as to tune each attribute's effect on the resulting output. The method is then applied to the LVQ classifier and tested on 3 datasets obtained from the UCI Machine Learning repository. An accuracy analysis is then generated for two approaches: the first, using LVQ1, is referred to as LVQ-Classifier, and the second, the proposed model, is referred to as PSOFW-LVQ. The results show that the PSO algorithm is capable of finding attribute weights that increase the LVQ classifier's accuracy.
Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-04-01
We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code-modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
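A minimal sketch of the quantization idea, assuming a synthetic waveform: neighboring samples are grouped into d-dimensional vectors and quantized with k-means centroids, so only centroid indices need to be transmitted. The vector dimension and bit budget below are illustrative, not the experiment's values.

```python
# Sketch of k-means-based adaptive vector quantization of a waveform.
import numpy as np

def kmeans(X, k, iters=25):
    C = X[np.random.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        idx = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(idx == j):
                C[j] = X[idx == j].mean(axis=0)
    return C

d, nbits = 4, 6                                # vector dim, bits per vector
x = np.sin(np.linspace(0, 200, 4096)) + 0.1 * np.random.randn(4096)
V = x.reshape(-1, d)                           # correlated neighboring samples
C = kmeans(V, 2 ** nbits)
idx = ((V[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
print("rate:", nbits / d, "bits/sample; MSE:", np.mean((C[idx] - V) ** 2))
```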
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimulus. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into six sections. The first briefly highlights the importance of having newer and better compression algorithms, and also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision under ocular stimulus. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview for the image vector quantizer compressor actually constructed in the fifth section. The third section gathers the most important data on brightness models; the building of so-called brightness maps (quantification of the human perception of visible-object reflectance) in a bi-dimensional model is addressed here. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous work, the Hermite transform has proved to be a useful and practical solution for efficiently coding the energy within an image block, deciding which kind of quantization (whether scalar or vector) is to be used upon it; it is also a unique tool for structurally classifying the image block within a given lattice. This particular operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of the three main topics addressed in the previous sections in order to propose an image compression model that takes advantage of vector quantizers inside the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows some results obtained while testing the coding-decoding model. The guidelines for evaluating image compression performance were the compression ratio, SNR, and psycho-visual quality. Some conclusions derived from the research, and possible unexplored paths, are shown in this section as well.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
Visual data mining for quantized spatial data
NASA Technical Reports Server (NTRS)
Braverman, Amy; Kahn, Brian
2004-01-01
In previous papers we have shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inomata, A.; Junker, G.; Wilson, R.
1993-08-01
The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are structured to deal with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations simultaneously. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results.
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during the recompression to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.
Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie
2008-01-01
Reliability of classification performance is important for many biomedical applications. A classification model which considers reliability in its development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG)-determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. A learning vector quantization (LVQ) neural network, a multilayer perceptron (MLP) neural network, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
NASA Astrophysics Data System (ADS)
Jurčo, B.; Schlieker, M.
1995-07-01
In this paper, explicitly natural (from the geometrical point of view) Fock-space representations (contragredient Verma modules) of the quantized enveloping algebras are constructed. To do so, one starts from the Gauss decomposition of the quantum group and introduces differential operators on the corresponding q-deformed flag manifold (considered as a left comodule for the quantum group) by projecting onto it the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Distance learning in discriminative vector quantization.
Schneider, Petra; Biehl, Michael; Hammer, Barbara
2009-10-01
Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
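A minimal sketch following the abstract's description: each new input either updates the coefficient of its nearest center (if within the quantization radius) or is added as a new center. The Gaussian kernel, step size, and radius below are illustrative choices.

```python
# Sketch of the quantized kernel least mean square (QKLMS) update.
import numpy as np

def gauss(u, v, sigma=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def qklms(U, d, eta=0.5, eps=0.5):
    centers, alphas = [U[0]], [eta * d[0]]
    for u, dn in zip(U[1:], d[1:]):
        y = sum(a * gauss(c, u) for c, a in zip(centers, alphas))
        e = dn - y                              # prediction error
        j = np.argmin([np.linalg.norm(c - u) for c in centers])
        if np.linalg.norm(centers[j] - u) <= eps:
            alphas[j] += eta * e                # merge into closest center
        else:
            centers.append(u)                   # grow network only when
            alphas.append(eta * e)              # input is far from all centers
    return centers, alphas

U = np.random.uniform(-2, 2, (500, 1))
d = np.sin(2 * U[:, 0]) + 0.05 * np.random.randn(500)
C, A = qklms(U, d)
print("network size:", len(C), "of", len(U), "inputs")
```

The quantization radius eps directly trades network growth against accuracy, which is the mechanism the convergence analysis above characterizes.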
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a subband coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with subband image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
NASA Technical Reports Server (NTRS)
Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)
1995-01-01
A system for data compression utilizing a systolic array architecture for vector quantization (VQ) is disclosed, for both full-search and tree-search VQ. For tree-search VQ, the special case of a binary tree-search VQ (BTSVQ) is disclosed, with identical processing elements (PEs) in the array for both a raw-codebook VQ (RCVQ) and a difference-codebook VQ (DCVQ) algorithm. A fault-tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
A fingerprint key binding algorithm based on vector quantization and error correction
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g., fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. In order to avoid the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template before binding it with the key, after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
Accelerating simulation for the multiple-point statistics algorithm using vector quantization
NASA Astrophysics Data System (ADS)
Zuo, Chen; Pan, Zhibin; Liang, Hao
2018-03-01
Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables through a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation method based on vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables amenable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
The wavelet, as a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is the quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantizing Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics is used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the samples observed at Bob.
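One reason PM is computationally cheap is that the nearest signed permutation of a seed vector can be found with a sort rather than an exhaustive codebook search. A minimal Python sketch, assuming Slepian-style permutation codes with sign changes (the seed values below are arbitrary examples, not a designed code):

    import numpy as np

    def pm_quantize(x, seed):
        # Codebook = all signed permutations of `seed`. The nearest
        # codeword assigns the largest seed magnitude to the largest |x_i|,
        # so a single argsort replaces the exhaustive search.
        seed = np.sort(np.asarray(seed, float))[::-1]   # largest first
        order = np.argsort(-np.abs(x))                  # ranks of |x_i|
        q = np.empty_like(x, dtype=float)
        q[order] = seed                                 # match by rank
        return np.sign(x) * q                           # re-attach signs

    x = np.random.randn(8)
    print(pm_quantize(x, [2.0, 1.5, 1.0, 1.0, 0.5, 0.5, 0.0, 0.0]))

The rate is log2 of the number of distinct signed permutations divided by the dimension d, which is how fractional bit rates per sample arise.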
Hao, Li-Ying; Yang, Guang-Hong
2013-09-01
This paper is concerned with the problem of robust fault-tolerant compensation control for uncertain linear systems subject to both state and input signal quantization. By successfully incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be coped with under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of design parameters. Compared with existing results, the derived inequality condition yields stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances, and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing.
NASA Astrophysics Data System (ADS)
Qi, K.; Qingfeng, G.
2017-12-01
With the widespread use of High-Resolution Satellite (HRS) images, more and more research effort has been devoted to land-use scene classification. The task is difficult for HRS images, however, because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Face recognition algorithm using extended vector quantization histogram features.
Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu
2018-01-01
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
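The codevector-histogram feature itself is simple to state. A minimal Python sketch (the block size, the raster tiling, and the assumption of a codebook trained elsewhere, e.g. by k-means, are ours; the MSF extension on top of this histogram is not shown):

    import numpy as np

    def vq_histogram(image, codebook, block=4):
        # Tile the image into block x block patches, assign each patch to
        # its nearest codevector, and return the normalized histogram of
        # winning indices as the feature vector.
        h, w = image.shape
        counts = np.zeros(len(codebook))
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                patch = image[i:i + block, j:j + block].ravel().astype(float)
                idx = int(np.argmin(((codebook - patch) ** 2).sum(axis=1)))
                counts[idx] += 1
        return counts / counts.sum()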
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
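For orientation, the crisp LVQ1 rule that such soft and fuzzy schemes generalize can be sketched in a few lines of Python (learning rate, epoch count, and prototype initialization are illustrative assumptions):

    import numpy as np

    def lvq1_train(X, y, prototypes, proto_labels, eta=0.05, epochs=20):
        # Move the winning prototype toward same-class inputs and away
        # from different-class inputs; soft LVQ replaces this hard
        # winner-take-all step with weighted updates of all prototypes.
        W = prototypes.astype(float).copy()
        for _ in range(epochs):
            for x, label in zip(X, y):
                j = int(np.argmin(((W - x) ** 2).sum(axis=1)))
                if proto_labels[j] == label:
                    W[j] += eta * (x - W[j])   # attract
                else:
                    W[j] -= eta * (x - W[j])   # repel
        return W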
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
2011-04-15
In 1-bit compressive sensing, the quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization.
Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun
2015-10-01
This paper studies the global synchronization of complex dynamical networks (CDNs) under digital communication with limited bandwidth. To realize the digital communication, so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the bandwidth constraint, a scaling function is utilized to guarantee that the quantizers have bounded inputs and thus achieve bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. An optimization method is employed to relax the requirements on the network topology and to determine the minimum of such a lower bound for each case. Simulation examples are also presented to illustrate the established results.
An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming
2016-01-01
We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using a Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which is significantly better than previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method is obviously better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool for future proteomics research.
NASA Astrophysics Data System (ADS)
An, Fengwei; Akazawa, Toshinobu; Yamasaki, Shogo; Chen, Lei; Jürgen Mattausch, Hans
2015-04-01
This paper reports a VLSI realization of learning vector quantization (LVQ) with high flexibility for different applications. It is based on a hardware/software (HW/SW) co-design concept for on-chip learning and recognition and designed as a SoC in 180 nm CMOS. The time-consuming nearest Euclidean distance search in the LVQ algorithm's competition layer is efficiently implemented as a pipeline with parallel p-word input. Since the neuron number in the competition layer, the weight values, and the input and output numbers are scalable, the requirements of many different applications can be satisfied without hardware changes. Classification of a d-dimensional input vector is completed in n × ⌈d/p⌉ + R clock cycles, where R is the pipeline depth and n is the number of reference feature vectors (FVs). Adjustment of stored reference FVs during learning is done by the embedded 32-bit RISC CPU, because this operation is not time critical. The high flexibility is verified by the application of human detection with different numbers for the dimensionality of the FVs.
Neural network classification technique and machine vision for bread crumb grain evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.; Caley, M.
1995-10-01
Bread crumb grain was studied to develop a model for pattern recognition of bread baked at the Hard Winter Wheat Quality Laboratory (HWWQL), Grain Marketing and Production Research Center (GMPRC). Images of bread slices were acquired with a scanner in a 512 × 512 format. Subimages in the central part of the slices were evaluated by several features such as mean, determinant, eigenvalues, shape of a slice, and other crumb features. Derived features were used to describe slices and loaves. Neural network programs of the MATLAB package were used for data analysis. The learning vector quantization method and multivariate discriminant analysis were applied to bread slices from wheat of different sources. Training and test sets of different bread crumb texture classes were obtained. The ranking of subimages was well correlated with visual judgement. The performance of different models on slice recognition rate was studied to choose the best model. The recognition of classes created according to human judgement with image features was low. Recognition of arbitrarily created classes, according to porosity patterns, with several feature patterns was approximately 90%. The correlation coefficient was approximately 0.7 between slice shape features and loaf volume.
Demirhan, Ayşe; Toru, Mustafa; Guler, Inan
2015-07-01
Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using a self-organizing map (SOM) that is trained with an unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. The input feature vector is constructed from features obtained from stationary wavelet transform (SWT) coefficients. The results showed that the average Dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is proposed first. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!
NASA Astrophysics Data System (ADS)
Nutku, Yavuz
2003-07-01
Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.
Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms
NASA Astrophysics Data System (ADS)
Fu, Haoyu; Chi, Yuejie
2018-06-01
Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g. only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
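The "quadrant information" measurement model amounts to keeping one sign bit for the real part and one for the imaginary part of each measurement. A minimal Python sketch with a toy spectrally sparse signal of our own choosing:

    import numpy as np

    def quadrant_quantize(y):
        # 2 bits per complex sample: signs of real and imaginary parts.
        return np.sign(y.real) + 1j * np.sign(y.imag)

    # Toy example: quantized random linear measurements of a sparse signal.
    rng = np.random.default_rng(0)
    A = (rng.standard_normal((64, 128))
         + 1j * rng.standard_normal((64, 128))) / np.sqrt(2)
    x = np.zeros(128, dtype=complex)
    x[[5, 40]] = [1.0, 0.5]                       # spectral sparsity K = 2
    noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    z = quadrant_quantize(A @ x + noise)          # what the recovery algorithm sees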
Efficient boundary hunting via vector quantization
NASA Astrophysics Data System (ADS)
Diamantini, Claudia; Panti, Maurizio
2001-03-01
A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In this paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees an accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such an approximation. Comparisons to Support Vector Machines are presented.
NASA Astrophysics Data System (ADS)
Thibes, Ronaldo
2017-02-01
We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.
Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu
2018-06-01
This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.
Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J
2011-02-26
Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex involving feedback and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that is only now being addressed in the literature, since the problems of storage and transmission of color images are becoming more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by using vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, namely AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained down to approximately 0.48 bpp (0.16 bpp for each color plane, or for a monochrome image; CR 50:1) using the RGB color space. Further tuning of the AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good quality reconstructed images. Our recent study also reveals that for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or the HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit rate reductions.
NASA Astrophysics Data System (ADS)
Martinez, Dominique; Clément, Maxime; Messaoudi, Belkacem; Gervasoni, Damien; Litaudon, Philippe; Buonviso, Nathalie
2018-04-01
Objective. Modern neuroscience research requires electrophysiological recording of local field potentials (LFPs) in moving animals. Wireless transmission has the advantage of removing the wires between the animal and the recording equipment but is hampered by the large number of data to be sent at a relatively high rate. Approach. To reduce transmission bandwidth, we propose an encoder/decoder scheme based on adaptive non-uniform quantization. Our algorithm uses the current transmitted codeword to adapt the quantization intervals to changing statistics in LFP signals. It is thus backward adaptive and does not require the sending of side information. The computational complexity is low and similar at the encoder and decoder sides. These features allow for real-time signal recovery and facilitate hardware implementation with low-cost commercial microcontrollers. Main results. As proof-of-concept, we developed an open-source neural recording device called NeRD. The NeRD prototype digitally transmits eight channels encoded at 10 kHz with 2 bits per sample. It occupies a volume of 2 × 2 × 2 cm3 and weighs 8 g with a small battery allowing for 2 h 40 min of autonomy. The power dissipation is 59.4 mW for a communication range of 8 m and transmission losses below 0.1%. The small weight and low power consumption offer the possibility of mounting the entire device on the head of a rodent without resorting to a separate head-stage and battery backpack. The NeRD prototype is validated in recording LFPs in freely moving rats at 2 bits per sample while maintaining an acceptable signal-to-noise ratio (>30 dB) over a range of noisy channels. Significance. Adaptive quantization in neural implants allows for lower transmission bandwidths while retaining high signal fidelity and preserving fundamental frequencies in LFPs.
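The key property of the proposed encoder/decoder is that both sides update the quantizer from the transmitted codeword alone, so they stay synchronized without side information. A classic backward-adaptive scheme in the same spirit is Jayant's adaptive step-size quantizer, sketched here in Python at 2 bits per sample (the multiplier values and clipping bounds are illustrative assumptions, not the NeRD parameters):

    import numpy as np

    def jayant_2bit_encode(signal, delta0=1.0, mult=(0.8, 1.6),
                           dmin=1e-3, dmax=1e3):
        # 2 bits/sample: one sign bit plus one inner/outer magnitude bit.
        # The step size is scaled by a codeword-dependent multiplier, so a
        # decoder can replay the same recursion from the codewords alone.
        delta, codes, recon = delta0, [], []
        for s in signal:
            inner = abs(s) < delta
            mag = 0.5 * delta if inner else 1.5 * delta
            codes.append((int(s < 0), int(not inner)))
            recon.append(-mag if s < 0 else mag)
            delta = float(np.clip(delta * mult[0 if inner else 1], dmin, dmax))
        return codes, np.array(recon)

Because the step-size recursion depends only on transmitted codewords, the matching decoder is obtained by running the same loop driven by the codes instead of the signal.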
Coherent states for the relativistic harmonic oscillator
NASA Technical Reports Server (NTRS)
Aldaya, Victor; Guerrero, J.
1995-01-01
Recently we have obtained, on the basis of a group approach to quantization, a Bargmann-Fock-like realization of the Relativistic Harmonic Oscillator, as well as a generalized Bargmann transform relating Fock wave functions and a set of relativistic Hermite polynomials. Nevertheless, the relativistic creation and annihilation operators satisfy typical relativistic commutation relations of the form [ẑ, ẑ†] ≈ Energy (an SL(2,R) algebra). Here we find higher-order polarization operators on the SL(2,R) group, providing canonical creation and annihilation operators satisfying [â, â†] = 1, the eigenstates of which are 'true' coherent states.
Yousef, Abdulaziz; Moghadam Charkari, Nasrollah
2013-11-07
Protein-Protein interaction (PPI) data are among the most important for understanding cellular processes. Many interesting methods have been proposed to predict PPIs; however, methods based on protein sequences as the only prior knowledge are more universal. In this paper, a sequence-based, fast, and adaptive PPI prediction method is introduced to assign a protein pair to an interaction class (yes or no). First, in order to improve the representation of the sequences, twelve physicochemical properties of amino acids are used, through different representation methods, to transform the sequences of protein pairs into feature vectors. Then, to speed up the learning process and reduce the effect of noisy PPI data, principal component analysis (PCA) is carried out as a feature extraction algorithm. Finally, a new adaptive Learning Vector Quantization (LVQ) predictor is designed to deal with different models of datasets, both balanced and imbalanced. Accuracies of 93.88%, 90.03%, and 89.72% were obtained on the S. cerevisiae, H. pylori, and independent datasets, respectively. The results of various experiments indicate the efficiency and validity of the method.
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and the identification of the objects using LVQNN. In this current paper, we extend the previous approach for recognition of targets varying in rotation, translation, scale, and combination of all three distortions. We obtain the analytical results of the system level design to show that the approach performs well with some constraints. The first constraint determines the size of the input images and input filters. The second constraint shows the limits on amount of rotation, translation, and scale of input objects. We present the simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system level design.
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms
1995-07-01
The decoder would have to be aware of changes in the dictionary, in the spirit of the Lempel-Ziv algorithm. Along with exploring general properties of matching pursuit, the report considers its application to compressing data vectors in R^N, including a general vector compression algorithm based on frames and its design considerations.
Associative Pattern Recognition In Analog VLSI Circuits
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1995-01-01
Winner-take-all circuit selects best-match stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips built and tested to demonstrate concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal oxide semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components), vectors representing known patterns as diverse as spectra, histograms, graphs, or brightnesses of pixels in images. Chips exploit parallel nature of vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, cells classify input pattern in fraction of microsecond while consuming power of few microwatts.
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network. They are the following: (1) a constant output resolution associated with the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results have demonstrated that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performances with smaller memory usages.
Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1998-01-01
In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences never repeat; rather, they lie in a chaotic region, yet past, present, and future data are correlated in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high order correlation between past and present data to predict future data under limited weight quantization constraints. This helps predict future information that provides better timely estimation for intelligent control systems. In our earlier work, it was shown that CEP can learn the 5-8 bit parity problem with 4 or more bits of weight quantization, and a color segmentation problem with 7 or more bits. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as little as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more bits of weight quantization are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
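A minimal Python sketch of the two weight-quantization techniques compared here (the fixed-point scaling and the assumed weight range of [-1, 1) are ours):

    import numpy as np

    def quantize_weights(w, bits=4, mode="round"):
        # Fixed-point quantization: 1 sign bit + (bits - 1) fractional bits.
        scale = 2 ** (bits - 1)
        x = np.asarray(w, float) * scale
        q = np.round(x) if mode == "round" else np.trunc(x)
        return np.clip(q, -scale, scale - 1) / scale

    w = np.random.uniform(-1, 1, 5)
    print(quantize_weights(w, bits=4, mode="round"))
    print(quantize_weights(w, bits=4, mode="trunc"))

Truncation always maps toward zero, biasing the quantization error to one side, while rounding keeps the error symmetric around zero, consistent with the error-surface observation above.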
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is the quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The difficulty is that most of the samples, assuming the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit per sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
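A minimal sketch of the corrective-feedback idea in Python. This is our illustrative simplification, not the patented procedure: the readout is updated by a simple delta rule instead of the full gradient descent over the interval, and the error is injected into the recurrent state with a gain that is relaxed as training proceeds.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 50, 8                                   # time steps, hidden units
    target = np.sin(np.linspace(0, 2 * np.pi, T))  # time-dependent target

    W = rng.normal(0, 0.3, (n, n))                 # recurrent weights
    w_out = rng.normal(0, 0.3, n)                  # readout weights
    eta, lam = 0.01, 1.0                           # learning rate, forcing gain

    for epoch in range(200):
        h = np.zeros(n)
        for k in range(T):
            e = target[k] - w_out @ h              # output error element
            h = np.tanh(W @ h + lam * e * w_out)   # teacher-forced state update
            w_out += eta * e * h                   # delta-rule readout update
        lam *= 0.98                                # relax forcing over cycles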
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
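The core trick, carrying a matrix-vector product through several small moduli and reconstructing the full-range result, can be sketched in Python with the Chinese Remainder Theorem (the moduli are illustrative, and values are assumed nonnegative and within the dynamic range):

    import numpy as np
    from math import prod

    def crt(residues, moduli):
        # Reconstruct the integer from its residue digits (CRT).
        M, total = prod(moduli), 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
        return total % M

    moduli = [5, 7, 9, 11]                     # pairwise coprime; range 3465
    A = np.array([[2, 3], [1, 4]])
    x = np.array([5, 6])
    # Each modulus gives an independent small-dynamic-range channel.
    res = [((A % m) @ (x % m)) % m for m in moduli]
    y = [crt([int(r[i]) for r in res], moduli) for i in range(len(x))]
    print(y, A @ x)                            # both give [28, 29]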
2002-01-01
Gene expression profiles are used for classification of cells into tumorous and non-tumorous classes, and a parallel tree method is presented for this task: tree-structured classifiers with multi-resolution analysis are used to separate cancerous from non-cancerous cells. The dataset contains the expression of 4096 genes from 98 different cell types; of these 98, 72 are cancerous while 26 are non-cancerous.
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm takes both the mean values and the VQ indices as encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can still be improved to reach a much lower bit rate for LASIS interference patterns, whose special optical characteristics stem from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked for whether it has the same mean value as the current processing block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
NASA Astrophysics Data System (ADS)
Sadrzadeh, Mehrnoosh
2017-07-01
Compact closed categories and Frobenius and bialgebras have been applied to model and reason about quantum protocols. The same constructions have also been applied to reason about natural language semantics under the name "categorical distributional compositional" semantics, or in short, the "DisCoCat" model. This model combines statistical vector models of word meaning with compositional models of grammatical structure. It has been applied to natural language tasks such as disambiguation, paraphrasing, and entailment of phrases and sentences. The passage from grammatical structure to vectors is provided by a functor, similar to the quantization functor of Quantum Field Theory. The original DisCoCat model used only compact closed categories. Later, Frobenius algebras were added to model long-distance dependencies such as relative pronouns. Recently, bialgebras have been added to reason about quantifiers. This paper reviews these constructions and their application to natural language semantics. We go over the theory and present some of the core experimental results.
NASA Astrophysics Data System (ADS)
Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.
2010-10-01
The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominant quantization noise as the time span of displacement vectors is shortened. To allow investigation of shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows for retrieving spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors as well as to optimally merge several polarization channels of a given instrument are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated with and compared to GPS positions of in situ drifters. Error statistics are shown to range from 2.5 to 4.5 km (standard deviation for components of the vectors) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
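The abstract does not spell out the continuous optimization step; one common way to obtain sub-grid displacement estimates from a correlation surface, sketched here under that assumption (function name and patch conventions are illustrative), is to locate the integer peak and refine it with a parabolic fit along each axis:

```python
import numpy as np
from scipy.signal import correlate2d

def mcc_displacement(a, b):
    """Displacement of patch b relative to patch a via cross-correlation,
    refined to sub-pixel precision by a parabolic fit along each axis
    (assumes the peak does not sit on the border of the surface)."""
    c = correlate2d(b - b.mean(), a - a.mean(), mode='same')
    iy, ix = np.unravel_index(c.argmax(), c.shape)

    def vertex(f, i):
        # Vertex of the parabola through f[i-1], f[i], f[i+1].
        denom = f[i - 1] - 2.0 * f[i] + f[i + 1]
        return i + 0.5 * (f[i - 1] - f[i + 1]) / denom if denom else float(i)

    # Zero lag sits at index shape//2 for mode='same'.
    return (vertex(c[:, ix], iy) - a.shape[0] // 2,
            vertex(c[iy, :], ix) - a.shape[1] // 2)
```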
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model by using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters corresponding to and operating on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
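For reference, a minimal sketch of the basic L-filter operation (a weighted sum of the order statistics in each window); the paper's contribution is to design the weights per segmented region for minimum mean-squared error, which is not reproduced here:

```python
import numpy as np

def l_filter(image, weights, size=3):
    """L-filter: each output pixel is a weighted sum of the *sorted*
    samples in its size x size window; e.g. the median filter puts all
    weight on the middle order statistic."""
    w = np.asarray(weights, dtype=float)      # size*size weights
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode='reflect')
    out = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sort(padded[i:i+size, j:j+size], axis=None) @ w
    return out
```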
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Vector adaptive predictive coder for speech and audio
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)
1990-01-01
A real-time vector adaptive predictive coder approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
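Stripped of the synthesis filtering, the core operation of such a coder is the exhaustive minimum-distortion codebook search; a minimal sketch, where `codebook` is an M x K array and the decoder simply reproduces `codebook[index]`:

```python
import numpy as np

def vq_encode(x, codebook):
    """Exhaustive codebook search: return the index of the codeword
    with minimum squared error to the input vector x."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
```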
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored to specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
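The forward direction of the scheme, re-encoding with coarser quantization, is straightforward; a hedged sketch (parameter names are illustrative, and the paper's prior-based reverse mapping is the hard part not shown here):

```python
import numpy as np

def requantize(fine_indices, q_fine, q_coarse):
    """Map fine quantization bin indices of DCT coefficients to coarser
    bins for storage. Recovering the fine indices on download is the
    reverse mapping the paper solves with sparsity and graph-smoothness
    priors."""
    values = fine_indices * q_fine                    # dequantize
    return np.round(values / q_coarse).astype(int)    # coarser bins
```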
Vectorized algorithms for spiking neural network simulation.
Brette, Romain; Goodman, Dan F M
2011-06-01
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
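A minimal illustration of the vectorization idea (not Brian's actual implementation), assuming a population of leaky integrate-and-fire neurons driven by noisy input with hypothetical parameter values; the whole population is advanced with NumPy array operations instead of a per-neuron Python loop:

```python
import numpy as np

# Vectorized leaky integrate-and-fire update for N neurons.
N, dt, tau, v_th, v_reset = 1000, 1e-4, 20e-3, 1.0, 0.0
v = np.zeros(N)
rng = np.random.default_rng(0)
for _ in range(1000):
    I = rng.normal(1.1, 0.5, N)       # stand-in input current
    v += dt / tau * (I - v)           # leaky integration, all neurons at once
    spiking = v >= v_th               # boolean spike vector
    v[spiking] = v_reset              # vectorized reset
```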
Physics-based Detection of Subpixel Targets in Hyperspectral Imagery
2007-01-01
Learning Vector Quantization ... LWIR ... Long-Wave Infrared (LWIR) from 7.0 to 15.0 microns regions as well. At these wavelengths, emissivity dominates the spectral signature. Emissivity is ... object emits instead of reflects. Initial work has already been finished applying the hybrid detectors to LWIR sensors [13]. However, target ...
Li, Shuhui; Fairbank, Michael; Johnson, Cameron; Wunsch, Donald C; Alonso, Eduardo; Proaño, Julio L
2014-04-01
Three-phase grid-connected converters are widely used in renewable and electric power system applications. Traditionally, grid-connected converters are controlled with standard decoupled d-q vector control mechanisms. However, recent studies indicate that such mechanisms show limitations in their applicability to dynamic systems. This paper investigates how to mitigate such restrictions using a neural network to control a grid-connected rectifier/inverter. The neural network implements a dynamic programming algorithm and is trained by using back-propagation through time. To enhance performance and stability under disturbance, additional strategies are adopted, including the use of integrals of error signals to the network inputs and the introduction of grid disturbance voltage to the outputs of a well-trained network. The performance of the neural-network controller is studied under typical vector control conditions and compared against conventional vector control methods, which demonstrates that the neural vector control strategy proposed in this paper is effective. Even in dynamic and power converter switching environments, the neural vector controller shows strong ability to trace rapidly changing reference commands, tolerate system disturbances, and satisfy control requirements for a faulted power system.
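For context, here is a sketch of the first step of the standard decoupled d-q vector control that the neural controller is compared against: the amplitude-invariant Park transform projecting measured three-phase currents onto the rotating d-q frame (function name and arguments are illustrative):

```python
import numpy as np

def abc_to_dq(i_a, i_b, i_c, theta):
    """Amplitude-invariant Park transform at electrical angle theta."""
    i_d = 2/3 * (i_a*np.cos(theta) + i_b*np.cos(theta - 2*np.pi/3)
                 + i_c*np.cos(theta + 2*np.pi/3))
    i_q = -2/3 * (i_a*np.sin(theta) + i_b*np.sin(theta - 2*np.pi/3)
                  + i_c*np.sin(theta + 2*np.pi/3))
    return i_d, i_q
```

Conventional vector control then regulates i_d and i_q with separate PI loops; the paper replaces those loops with a trained neural network.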
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performance of these coders and compares them both with conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrations. The paper also reports on results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime
NASA Astrophysics Data System (ADS)
Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata
2014-04-01
The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.
Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.
Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan
2018-04-01
In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
Optoelectronic Inner-Product Neural Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1993-01-01
Optoelectronic apparatus acts as an artificial neural network performing associative recall of binary images. The recall process is an iterative one involving optical computation of inner products between a binary input vector and one or more reference binary vectors in memory. The inner-product method requires far less memory space than the matrix-vector method.
Music Signal Processing Using Vector Product Neural Networks
NASA Astrophysics Data System (ADS)
Fan, Z. C.; Chan, T. S.; Yang, Y. H.; Jang, J. S. R.
2017-05-01
We propose a novel neural network model for music signal processing using vector product neurons and dimensionality transformations. Here, the inputs are first mapped from real values into three-dimensional vectors then fed into a three-dimensional vector product neural network where the inputs, outputs, and weights are all three-dimensional values. Next, the final outputs are mapped back to the reals. Two methods for dimensionality transformation are proposed, one via context windows and the other via spectral coloring. Experimental results on the iKala dataset for blind singing voice separation confirm the efficacy of our model.
identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple ... discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized
Image segmentation using fuzzy LVQ clustering networks
NASA Technical Reports Server (NTRS)
Tsao, Eric Chen-Kuo; Bezdek, James C.; Pal, Nikhil R.
1992-01-01
In this note we formulate image segmentation as a clustering problem. Feature vectors extracted from a raw image are clustered into subregions, thereby segmenting the image. A fuzzy generalization of the Kohonen learning vector quantization (LVQ) network, which integrates the Fuzzy c-Means (FCM) model with the learning rate and updating strategies of the LVQ, is used for this task. This network, which segments images in an unsupervised manner, is thus related to the FCM optimization problem. Numerical examples on photographic and magnetic resonance images are given to illustrate this approach to image segmentation.
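The fuzzy generalization itself is not spelled out in the abstract; as a reference point, here is a minimal NumPy sketch of one Fuzzy c-Means update, the optimization problem the network is related to (X holds the feature vectors, V the cluster prototypes, m the fuzzifier):

```python
import numpy as np

def fcm_step(X, V, m=2.0):
    """One Fuzzy c-Means update: memberships u from current prototypes V,
    then prototypes from memberships. X: (n, dim), V: (c, dim)."""
    p = 2.0 / (m - 1.0)
    d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
    u = d**(-p) / (d**(-p)).sum(axis=1, keepdims=True)    # (n, c) memberships
    V_new = (u.T**m @ X) / (u.T**m).sum(axis=1, keepdims=True)
    return u, V_new
```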
Wavelet-based higher-order neural networks for mine detection in thermal IR imagery
NASA Astrophysics Data System (ADS)
Baertlein, Brian A.; Liao, Wen-Jiao
2000-08-01
An image processing technique is described for the detection of mines in IR imagery. The proposed technique is based on a third-order neural network, which processes the output of a wavelet packet transform. The technique is inherently invariant to changes in signature position, rotation and scaling. The well-known memory limitations that arise with higher-order neural networks are addressed by (1) the data compression capabilities of wavelet packets, (2) projections of the image data into a space of similar triangles, and (3) quantization of that 'triangle space'. Using these techniques, image chips of size 28 by 28, which would require O(10^9) neural net weights, are processed by a network having O(10^2) weights. ROC curves are presented for mine detection in real and simulated imagery.
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED, we observe improvements in kNN classification accuracy over traditional distance functions.
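A hypothetical sketch of the equi-depth idea on a single dimension (the paper's actual QED scheme and its bit-sliced index integration differ in detail): bin edges are quantiles of the data, so every bin holds roughly the same number of points, and both the data and the query are mapped to bin codes for similarity scoring.

```python
import numpy as np

def qed_codes(column, query_value, n_bins=8):
    """Equi-depth quantization of one dimension: quantile bin edges,
    then bin codes for the data column and for the query value."""
    edges = np.quantile(column, np.linspace(0.0, 1.0, n_bins + 1))
    code = lambda v: np.clip(np.searchsorted(edges, v, side='right') - 1,
                             0, n_bins - 1)
    return code(column), code(np.asarray(query_value))
```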
Spin dynamics of paramagnetic centers with anisotropic g tensor and spin of 1/2
NASA Astrophysics Data System (ADS)
Maryasov, Alexander G.; Bowman, Michael K.
2012-08-01
The influence of g tensor anisotropy on spin dynamics of paramagnetic centers having real or effective spin of 1/2 is studied. The g anisotropy affects both the excitation and the detection of EPR signals, producing noticeable differences between conventional continuous-wave (cw) EPR and pulsed EPR spectra. The magnitudes and directions of the spin and magnetic moment vectors are generally not proportional to each other, but are related to each other through the g tensor. The equilibrium magnetic moment direction is generally parallel to neither the magnetic field nor the spin quantization axis due to the g anisotropy. After excitation with short microwave pulses, the spin vector precesses around its quantization axis, in a plane that is generally not perpendicular to the applied magnetic field. Paradoxically, the magnetic moment vector precesses around its equilibrium direction in a plane exactly perpendicular to the external magnetic field. In the general case, the oscillating part of the magnetic moment is elliptically polarized and the direction of precession is determined by the sign of the g tensor determinant (g tensor signature). Conventional pulsed and cw EPR spectrometers do not allow determination of the g tensor signature or the ellipticity of the magnetic moment trajectory. It is generally impossible to set a uniform spin turning angle for simple pulses in an unoriented or 'powder' sample when g tensor anisotropy is significant.
Skog, Johan; Mei, Ya-Fang; Wadell, Göran
2002-06-01
Most currently used adenovirus vectors are based upon adenovirus serotypes 2 and 5 (Ad2 and Ad5), which have limited efficiencies for gene transfer to human neural cells. Both serotypes bind to the known adenovirus receptor, CAR (coxsackievirus and adenovirus receptor), and have restricted cell tropism. The purpose of this study was to find vector candidates that are superior to Ad5 in infecting human neural tumours. Using flow cytometry, the vector candidates Ad4p, Ad11p and Ad17p were compared to the commonly used adenovirus vector Ad5v for their binding capacity to neural cell lines derived from glioblastoma, medulloblastoma and neuroblastoma. The production of viral structural proteins and the CAR-binding properties of the different serotypes were also assessed in these cells. Computer-based models of the fibre knobs of Ad4p and Ad17 were created based upon the crystallized fibre knob structure of adenoviruses and analysed for putative receptor-interacting regions that differed from the fibre knob of Ad5. The non-CAR-binding vector candidate Ad11p showed clearly the best binding capacity, binding more than 90% of the cells in all of the neural cell lines tested, in contrast to 20% or less for the commonly used vector Ad5v. Ad4p and Ad11p were also internalized and produced viral proteins more successfully than Ad5. Ad4p showed a low binding ability but a very efficient capacity for infection in cell culture. Ad17p virions neither bound to nor efficiently infected any of the neural cell lines studied.
NASA Astrophysics Data System (ADS)
Krippner, Wolfgang; Wagner, Felix; Bauer, Sebastian; Puente León, Fernando
2017-06-01
Using appropriately designed spectral filters allows material abundances to be determined optically. Although infinitely many choices of spectral filter exist, we take advantage of neural networks to derive spectral filters that lead to precise estimates. To overcome some drawbacks that regularly affect the determination of material abundances from hyperspectral data, we incorporate the spectral variability of the raw materials into the training of the considered neural networks. As a main result, we successfully classify quantized material abundances optically; the main part of the high computational load that comes with the use of neural networks is thus avoided. In addition, the derived material abundances become invariant to spatially varying illumination intensity, a remarkable benefit in comparison with spectral filters based on, for instance, the Moore-Penrose pseudoinverse.
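For reference, the pseudoinverse baseline mentioned above amounts to a linear least-squares unmixing step; a minimal sketch, assuming a linear mixing model with an `endmembers` matrix of shape (n_bands, n_materials):

```python
import numpy as np

def abundances_pinv(spectrum, endmembers):
    """Least-squares abundance estimate a with spectrum ~= endmembers @ a
    (no nonnegativity or sum-to-one constraints in this sketch)."""
    return np.linalg.pinv(endmembers) @ spectrum
```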
NASA Technical Reports Server (NTRS)
Jules, Kenol; Lin, Paul P.
2001-01-01
This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.
Can Selforganizing Maps Accurately Predict Photometric Redshifts?
NASA Technical Reports Server (NTRS)
Way, Michael J.; Klose, Christian
2012-01-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
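A minimal SOM training sketch, assuming photometric inputs X (colors/magnitudes) as rows; hyperparameters and the decay schedule are illustrative. Redshift estimation would then, for example, average the spectroscopic redshifts of the training samples mapped to each unit:

```python
import numpy as np

def train_som(X, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Self-organizing map: move the best-matching unit and its grid
    neighbours toward each sample, with shrinking rate and radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))      # unit weight vectors
    gy, gx = np.divmod(np.arange(h * w), w)       # unit grid coordinates
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
        W += lr * np.exp(-d2 / (2 * sigma**2))[:, None] * (x - W)
    return W.reshape(h, w, -1)
```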
Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances
Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...
2016-10-14
Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
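A simplified sketch of the quantized-pair-distance Debye evaluation, reduced to a single atom type with constant scattering factor f (the paper keeps one histogram per atom-type pair so that the f_a f_b factors move outside the loop): binning the pair distances means iterating over ~bins values instead of ~N^2 pairs.

```python
import numpy as np

def debye_from_histogram(pair_distances, q, n_atoms, f=1.0, bins=2048):
    """I(q) = f^2 * (N + 2 * sum_{j<k} sin(q r_jk)/(q r_jk)), evaluated
    from a histogram of the j<k pair distances instead of every pair."""
    counts, edges = np.histogram(pair_distances, bins=bins)
    r = 0.5 * (edges[:-1] + edges[1:])           # bin centres
    qr = np.outer(np.atleast_1d(q), r)
    sinc = np.where(qr != 0, np.sin(qr) / qr, 1.0)
    return f * f * (n_atoms + 2.0 * (sinc @ counts))
```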
Quaternionic Kähler Detour Complexes and N = 2 Supersymmetric Black Holes
NASA Astrophysics Data System (ADS)
Cherney, D.; Latini, E.; Waldron, A.
2011-03-01
We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four-dimensional N = 2 supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost-extended Hilbert space.) The resulting quantized models are gauge-invariant field theories whose fields equal sections of special quaternionic vector bundles. They underlie and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston's complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.
Master equation for open two-band systems and its applications to Hall conductance
NASA Astrophysics Data System (ADS)
Shen, H. Z.; Zhang, S. S.; Dai, C. M.; Yi, X. X.
2018-02-01
Hall conductivity in the presence of a dephasing environment has recently been investigated with a dissipative term introduced phenomenologically. In this paper, we study the dissipative topological insulator (TI) and its topological transition in the presence of quantized electromagnetic environments. A Lindblad-type equation is derived to determine the dynamics of a two-band system. When the two-band model describes TIs, the environment may be the fluctuations of radiation that surround the TIs. We find the dependence of decay rates in the master equation on Bloch vectors in the two-band system, which leads to a mixing of the band occupations. Hence the environment-induced current is in general not perfectly topological in the presence of coupling to the environment, although deviations are small in the weak limit. As an illustration, we apply the Bloch-vector-dependent master equation to TIs and calculate the Hall conductance of tight-binding electrons in a two-dimensional lattice. The influence of environments on the Hall conductance is presented and discussed. The calculations show that the phase transition points of the TIs are robust against the quantized electromagnetic environment. The results might bridge the gap between quantum optics and topological photonic materials.
Musical sound analysis/synthesis using vector-quantized time-varying spectra
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Beauchamp, James W.
2002-11-01
A fundamental goal of computer music sound synthesis is accurate, yet efficient resynthesis of musical sounds, with the possibility of extending the synthesis into new territories using control of perceptually intuitive parameters. A data clustering technique known as vector quantization (VQ) is used to extract a globally optimum set of representative spectra from phase vocoder analyses of instrument tones. This set of spectra, called a Codebook, is used for sinusoidal additive synthesis or, more efficiently, for wavetable synthesis. Instantaneous spectra are synthesized by first determining the Codebook indices corresponding to the best least-squares matches to the original time-varying spectrum. Spectral index versus time functions are then smoothed, and interpolation is employed to provide smooth transitions between Codebook spectra. Furthermore, spectral frames are pre-flattened and their slope, or tilt, extracted before clustering is applied. This allows spectral tilt, closely related to the perceptual parameter ''brightness,'' to be independently controlled during synthesis. The result is a highly compressed format consisting of the Codebook spectra and time-varying tilt, amplitude, and Codebook index parameters. This technique has been applied to a variety of harmonic musical instrument sounds with the resulting resynthesized tones providing good matches to the originals.
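A minimal sketch of the codebook construction and index-track extraction, using scipy's k-means as a stand-in for the paper's VQ design; the spectral pre-flattening, tilt extraction, and index smoothing steps are omitted:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def build_spectral_codebook(frames, k=64, seed=0):
    """Cluster phase-vocoder amplitude spectra (frames: n_frames x n_bins)
    into k Codebook spectra and return the per-frame index track; frame t
    is then resynthesized from codebook[indices[t]] by additive or
    wavetable synthesis (requires n_frames >= k)."""
    codebook, _ = kmeans2(frames, k, minit='++', seed=seed)
    indices, _ = vq(frames, codebook)
    return codebook, indices
```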
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavrilenko, V. I.; Krishtopenko, S. S.; Goiran, M.
2011-01-15
The effect of electron-electron interaction on the spectrum of two-dimensional electron states is studied in InAs/AlSb (001) heterostructures with a GaSb cap layer and one filled size-quantization subband. The energy spectrum of two-dimensional electrons is calculated in the Hartree and Hartree-Fock approximations. It is shown that the exchange interaction, by decreasing the electron energy in the subbands, increases the energy gap between subbands and the spin-orbit splitting of the spectrum in the entire region of electron concentrations at which only the lower size-quantization subband is filled. The nonlinear dependence of the Rashba splitting constant at the Fermi wave vector on the concentration of two-dimensional electrons is demonstrated.
Feature Vector Construction Method for IRIS Recognition
NASA Astrophysics Data System (ADS)
Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.
2017-05-01
One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant for subsequent comparison. Thorough investigation of feature vectors obtained from irises showed that not all the vector elements are equally relevant. There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source of instability to be investigated independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separate, preliminarily optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
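A hedged sketch of the Gabor-filter-plus-quantization pattern the paper builds on (the filter frequency and the `thresholds` stand-in are illustrative; the paper's contribution is optimizing per-element fragility thresholds, which is not reproduced here):

```python
import numpy as np
from skimage.filters import gabor

def iris_code(normalized_iris, frequency=0.15, thresholds=(0.0, 0.0)):
    """Filter the unrolled iris texture with a Gabor filter, then
    quantize the real and imaginary responses into a binary code."""
    real, imag = gabor(normalized_iris, frequency=frequency)
    return np.concatenate([(real > thresholds[0]).ravel(),
                           (imag > thresholds[1]).ravel()]).astype(np.uint8)
```

Two codes are then typically compared with the normalized Hamming distance, e.g. `np.mean(code_a != code_b)`.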
Motamarri, Srinivas; Boccelli, Dominic L
2012-09-15
Users of recreational waters may be exposed to elevated pathogen levels through various point/non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ), a direct classification approach, for comparison with MLR and ANN approaches, and integrates input selection for model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables, while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates when classifying violated samples (41-62% vs 13-43% (ANN) and <16% (LVQ)) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables were utilized, but the ANN performance degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards.
Comparison of SOM point densities based on different criteria.
Kohonen, T
1999-11-15
Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.
Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha
2017-01-01
Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.
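The path integration idea reduces to a very small computation; a toy sketch of the 2-vector abstraction (the model above instead encodes this state as neural activity patterns in circular arrays, and the headings and step lengths here are made up):

```python
import numpy as np

# Accumulate heading and odometry into a home vector; steering along
# -home_vector at any time leads straight back to the nest.
home_vector = np.zeros(2)
for heading, step_length in [(0.1, 1.0), (0.5, 2.0), (1.2, 0.5)]:
    home_vector += step_length * np.array([np.cos(heading), np.sin(heading)])
homing_direction = np.arctan2(-home_vector[1], -home_vector[0])
```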
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
Vector control of wind turbine on the basis of the fuzzy selective neural net
NASA Astrophysics Data System (ADS)
Engel, E. A.; Kovalev, I. V.; Engel, N. E.
2016-04-01
This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are carried out to clarify the applicability and advantages of the proposed fuzzy-selective-neural-net vector control of the wind turbine. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance, as compared to a classical control model with PID controllers based on the traditional maximum torque control strategy.
The Coulomb problem on a 3-sphere and Heun polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellucci, Stefano; Yeghikyan, Vahagn
2013-08-15
The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.
Deformation quantization with separation of variables of an endomorphism bundle
NASA Astrophysics Data System (ADS)
Karabegov, Alexander
2014-01-01
Given a holomorphic Hermitian vector bundle E and a star-product with separation of variables on a pseudo-Kähler manifold, we construct a star product on the sections of the endomorphism bundle of the dual bundle E∗ which also has the appropriately generalized property of separation of variables. For this star product we prove a generalization of Gammelgaard's graph-theoretic formula.
Optical implementation of inner product neural associative memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1995-01-01
An optical implementation of an inner-product neural associative memory is realized with two spatial light modulators. The first spatial light modulator enters an initial two-dimensional N-tuple vector, and then the thresholded output vector image after each iteration, until convergence is reached. The second spatial light modulator enters M weighted vectors, obtained by multiplying inner-product scalars with each of the M stored vectors; the inner-product scalars themselves are produced by multiplying the input vector (the initial vector in the first cycle, the thresholded vector in subsequent cycles) with each of the M stored vectors. A Hughes liquid crystal light valve performs the dual function of summing the weighted vectors and thresholding the sum vector. The thresholded vector is then entered through the first spatial light modulator for reiteration of the process cycle until convergence is reached.
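A minimal digital sketch of the recall loop the optics implements, assuming bipolar (+1/-1) vectors: weight each stored vector by its inner product with the current state, sum, threshold, and iterate until the state stops changing.

```python
import numpy as np

def recall(x0, memories, iters=10):
    """Inner-product associative recall. memories: (M, N) bipolar array.
    Mirrors the optical loop: SLM1 holds the state, SLM2 the weighted
    stored vectors, and the light valve sums and thresholds."""
    x = x0.copy()
    for _ in range(iters):
        x_new = np.sign(memories.T @ (memories @ x))  # sum_i (m_i . x) m_i
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```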
Design of thrust vectoring exhaust nozzles for real-time applications using neural networks
NASA Technical Reports Server (NTRS)
Prasanth, Ravi K.; Markin, Robert E.; Whitaker, Kevin W.
1991-01-01
Thrust vectoring continues to be an important issue in military aircraft system designs. A recently developed concept of vectoring aircraft thrust makes use of flexible exhaust nozzles. Subtle modifications in the nozzle wall contours produce a non-uniform flow field containing a complex pattern of shock and expansion waves. The end result, due to the asymmetric velocity and pressure distributions, is vectored thrust. Specification of the nozzle contours required for a desired thrust vector angle (an inverse design problem) has been achieved with genetic algorithms. This approach is computationally intensive and prevents the nozzles from being designed in real time, which is necessary for an operational aircraft system. An investigation was conducted into using genetic algorithms to train a neural network in an attempt to obtain two-dimensional nozzle contours in real time. Results show that genetic-algorithm-trained neural networks provide a viable, real-time alternative for designing thrust vectoring nozzle contours. Thrust vector angles up to 20 deg were obtained within an average error of 0.0914 deg. The error surfaces encountered were highly degenerate, and thus the robustness of genetic algorithms was well suited to minimizing global errors.
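A toy sketch of the general pattern, evolving a flat network weight vector with selection, crossover, and mutation (the operators, rates, and `loss` function are illustrative, not the paper's GA; `loss` would score a candidate network's contour predictions against training data):

```python
import numpy as np

def ga_train(loss, dim, pop=60, gens=200, sigma=0.1, seed=0):
    """Minimize loss(w) over weight vectors w of length dim with a
    simple genetic algorithm: truncation selection, uniform crossover,
    Gaussian mutation."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([loss(w) for w in P])
        parents = P[np.argsort(fitness)[:pop // 2]]      # keep best half
        mates = parents[rng.permutation(len(parents))]
        mask = rng.random(parents.shape) < 0.5           # uniform crossover
        children = np.where(mask, parents, mates)
        children += sigma * rng.normal(size=children.shape)  # mutation
        P = np.vstack([parents, children])
    return P[np.argmin([loss(w) for w in P])]
```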
On the Problem of Bandwidth Partitioning in FDD Block-Fading Single-User MISO/SIMO Systems
NASA Astrophysics Data System (ADS)
Ivrlač, Michel T.; Nossek, Josef A.
2008-12-01
We report on our research on the problem of how to optimally partition the available bandwidth of frequency division duplex, multi-input single-output communication systems into subbands for the uplink, the downlink, and the feedback. In the downlink, the transmitter applies coherent beamforming based on quantized channel information which is obtained by feedback from the receiver. As feedback takes away resources from the uplink, which could otherwise be used to transfer payload data, it is highly desirable to reserve the "right" amount of uplink resources for the feedback. Under the assumption of random vector quantization, and a frequency-flat, independent and identically distributed block-fading channel, we derive closed-form expressions for both the feedback quantization and the bandwidth partitioning which jointly maximize the sum of the average payload data rates of the downlink and the uplink. While we do introduce some approximations to facilitate mathematical tractability, the analytical solution is asymptotically exact as the number of antennas approaches infinity, while for systems with few antennas it turns out to be a fairly accurate approximation. In this way, the obtained results are meaningful for practical communication systems, which usually can only employ a few antennas.
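For readers unfamiliar with random vector quantization, a minimal sketch of the feedback step it models: receiver and transmitter share a codebook of random unit vectors, the receiver feeds back the index of the codeword best aligned with the channel, and the transmitter beamforms along that codeword (codebook size and generation are illustrative):

```python
import numpy as np

def rvq_feedback(h, n_bits, seed=0):
    """Random vector quantization of the channel direction h (complex
    vector): return the index and codeword maximizing |h^H c| over a
    shared codebook of 2**n_bits random unit vectors."""
    rng = np.random.default_rng(seed)
    C = (rng.normal(size=(2**n_bits, h.size))
         + 1j * rng.normal(size=(2**n_bits, h.size)))
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    idx = int(np.argmax(np.abs(C.conj() @ h)))
    return idx, C[idx]
```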
Radiation and matter: Electrodynamics postulates and Lorenz gauge
NASA Astrophysics Data System (ADS)
Bobrov, V. B.; Trigger, S. A.; van Heijst, G. J.; Schram, P. P.
2016-11-01
We consider matter, in general terms, as a system of charged particles and a quantized electromagnetic field. For a consistent description of the thermodynamic properties of matter, especially in an extreme state, the problem of quantization of the longitudinal and scalar potentials should be solved. In this connection, we point out that the traditional postulate of electrodynamics, that only the electric and magnetic fields are observable, is given up by denying the validity of the Maxwell equations for microscopic fields. The Maxwell equations, as a generalization of experimental data, are valid only for averaged values. We show that microscopic electrodynamics may be based on postulating the d'Alembert equations for the four-vector of the electromagnetic field potential. The Lorenz gauge is valid for the averaged potentials (and ensures that the Maxwell equations hold for the averages). The suggested concept overcomes difficulties in the electromagnetic field quantization procedure while remaining in accordance with the results of quantum electrodynamics. As a result, longitudinal and scalar photons become real rather than virtual and may in principle be observed. The longitudinal and scalar photons provide not only the Coulomb interaction of charged particles, but also allow the electrical Aharonov-Bohm effect.
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
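A hedged sketch of the wavelet-plus-VQ pattern, assuming PyWavelets for the DWT and scipy's k-means as a stand-in codebook design; the paper's optimizer would instead assign the codebook size and block shape per subband under rate and complexity constraints, which is not modeled here:

```python
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2, vq

def wvq_encode(image, block=2, k=32, wavelet='haar', level=2, seed=0):
    """Transform the image, split each detail subband into block x block
    vectors, and vector-quantize them with a per-subband codebook.
    Returns the untouched approximation band plus, per detail band,
    (codebook, indices, original band shape)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    encoded = []
    for detail in coeffs[1:]:                  # skip the approximation band
        for band in detail:                    # (cH, cV, cD) at this level
            h, w = (d - d % block for d in band.shape)
            vecs = (band[:h, :w]
                    .reshape(h // block, block, w // block, block)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, block * block))
            cb, _ = kmeans2(vecs, k, minit='++', seed=seed)
            idx, _ = vq(vecs, cb)
            encoded.append((cb, idx, band.shape))
    return coeffs[0], encoded
```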
Direct Images, Fields of Hilbert Spaces, and Geometric Quantization
NASA Astrophysics Data System (ADS)
Lempert, László; Szőke, Róbert
2014-04-01
Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H_s of Hilbert spaces, and the question arises whether the spaces H_s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H_s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path-independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M, but not all, the direct image is even flat, which means that in those cases quantization is unique.
Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets
NASA Technical Reports Server (NTRS)
Braverman, Amy; Fetzer, Eric
2007-01-01
Empirical probability distributions derived from data are the signatures of the physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data that preserve information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
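As a rough illustration of entropy-constrained vector quantization, the following sketch minimizes a Lagrangian of distortion plus lambda times codeword rate, with rate estimated from empirical codeword probabilities. It is a generic ECVQ loop under stated assumptions, not the authors' implementation; all parameter values are illustrative.

```python
import numpy as np

def ecvq_assign(x, codebook, p, lam):
    """Entropy-constrained assignment: minimize distortion + lam * codelength,
    where the codelength of codeword j is -log2 of its empirical probability."""
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) distortions
    rate = -np.log2(np.maximum(p, 1e-12))                      # per-codeword rate
    return np.argmin(d + lam * rate[None, :], axis=1)

def ecvq_train(x, k=8, lam=0.5, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = x[rng.choice(len(x), k, replace=False)]
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        idx = ecvq_assign(x, codebook, p, lam)
        for j in range(k):                      # centroid + probability update
            if np.any(idx == j):
                codebook[j] = x[idx == j].mean(0)
        p = np.bincount(idx, minlength=k) / len(x)
    return codebook, p

rng = np.random.default_rng(1)
data = rng.standard_normal((2000, 4))           # stand-in for space-time windows
cb, p = ecvq_train(data)
print("codeword usage:", np.round(p, 3))
```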
Direct Volume Rendering with Shading via Three-Dimensional Textures
NASA Technical Reports Server (NTRS)
VanGelder, Allen; Kim, Kwansik
1996-01-01
A new and easy-to-implement method for direct volume rendering that uses 3D texture maps for acceleration, and incorporates directional lighting, is described. The implementation, called Voltx, produces high-quality images at nearly interactive speeds on workstations with hardware support for three-dimensional texture maps. Previously reported methods did not incorporate a light model and did not address issues of multiple texture maps for large volumes. Our research shows that these extensions impact performance by about a factor of ten. Voltx supports orthographic, perspective, and stereo views. This paper describes the theory and implementation of this technique, and compares it to the shear-warp factorization approach. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information. Quantized normal vectors and a lookup table provide efficiency. A new tessellation of the sphere is described, which serves as the basis for normal-vector quantization. A new gradient-based shading criterion is described, in which the gradient magnitude is interpreted in the context of the field-data value and the material classification parameters, and not in isolation. In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slabs. The slabs are composited to form an image.
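The normal-vector quantization step can be pictured with a small sketch: build a table of unit normals from a sphere tessellation and map each gradient to its nearest table entry by maximum dot product. The paper describes a sphere tessellation of its own design; the latitude/longitude grid below is only a stand-in.

```python
import numpy as np

def normal_table(n_theta=16, n_phi=32):
    """Build a table of unit normals on a latitude/longitude tessellation;
    Voltx uses its own sphere tessellation, so this grid is only a
    stand-in for the idea."""
    theta = np.linspace(0, np.pi, n_theta)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1).reshape(-1, 3)

def quantize_normals(gradients, table):
    """Map each gradient to the index of the closest table normal; shading
    can then be done once per table entry and applied via a lookup."""
    g = gradients / np.maximum(np.linalg.norm(gradients, axis=-1, keepdims=True), 1e-9)
    return np.argmax(g @ table.T, axis=-1)      # max dot product = min angle

table = normal_table()
grads = np.random.default_rng(2).standard_normal((1000, 3))
codes = quantize_normals(grads, table)
print("table size:", len(table), "example code:", codes[0])
```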
Image Classification of Ribbed Smoked Sheet using Learning Vector Quantization
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Pulungan, A. F.; Faza, S.; Budiarto, R.
2017-01-01
Natural rubber is an important export commodity in Indonesia and a major contributor to national economic development. One type of rubber used as an export material is Ribbed Smoked Sheet (RSS). The quantity of RSS exports depends on the quality of the RSS. RSS rubber quality is specified in SNI 06-001-1987 and the International Standards of Quality and Packing for Natural Rubber Grades (The Green Book). The determination of RSS quality is also known as the sorting process. In rubber factories, the sorting process is still done manually, by visually inspecting the levels of air bubbles on the surface of the rubber sheet, so the result is subjective and unreliable. Therefore, a method is required to classify RSS rubber automatically and precisely. We propose image processing techniques for pre-processing, a zoning method for feature extraction, and the Learning Vector Quantization (LVQ) method for classifying RSS rubber into two grades, namely RSS1 and RSS3. We used 120 RSS images as the training dataset and 60 RSS images as the testing dataset. The results show that our proposed method achieves 89% accuracy, with the best performance reached at the fifteenth epoch.
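For readers unfamiliar with LVQ, a minimal LVQ1 training loop looks roughly like the following; the feature dimensions, learning rate, and synthetic data are illustrative stand-ins for the paper's zoning features, and only the 15-epoch setting echoes the abstract.

```python
import numpy as np

def lvq1_train(x, y, prototypes, proto_labels, lr=0.05, epochs=15):
    """LVQ1: pull the winning prototype toward same-class samples and push
    it away from other-class samples. The epoch count echoes the paper's
    observation that epoch 15 performed best; the rest is illustrative."""
    w = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            j = np.argmin(((w - xi) ** 2).sum(1))        # winning prototype
            sign = 1.0 if proto_labels[j] == yi else -1.0
            w[j] += sign * lr * (xi - w[j])
    return w

def lvq_predict(x, w, proto_labels):
    return proto_labels[np.argmin(((x[:, None] - w[None]) ** 2).sum(-1), axis=1)]

rng = np.random.default_rng(3)
# Stand-in zoning features for two grades (RSS1 vs RSS3)
x = np.vstack([rng.normal(0, 1, (60, 8)), rng.normal(2, 1, (60, 8))])
y = np.array([0] * 60 + [1] * 60)
protos = lvq1_train(x, y, x[[0, 60]].copy(), np.array([0, 1]))
acc = (lvq_predict(x, protos, np.array([0, 1])) == y).mean()
print(f"training accuracy: {acc:.2f}")
```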
An optimization method for speech enhancement based on deep neural network
NASA Astrophysics Data System (ADS)
Sun, Haixia; Li, Sikun
2017-06-01
This paper puts forward a deep neural network (DNN) model with a more credible data set and a more robust structure. First, we apply two regularization techniques, dropout and a sparsity constraint, to strengthen the generalization ability of the model. In this way, the model not only achieves consistency between the pre-training and fine-tuning stages but also reduces resource consumption. Network compression by weight sharing and quantization then reduces storage cost. Finally, we evaluate the quality of the reconstructed speech according to different criteria. The results show that the improved framework performs well on speech enhancement and meets the requirements of speech processing.
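The weight-sharing-plus-quantization compression mentioned above can be sketched as 1-D k-means over a layer's weights, after which only a small centroid table and per-weight cluster indices need to be stored. This is a generic illustration of the technique, not the authors' code; the cluster count and layer sizes are assumptions.

```python
import numpy as np

def share_weights(w, n_clusters=16, iters=10, seed=0):
    """Weight sharing by 1-D k-means: every weight is replaced by its
    cluster centroid, so only the centroid table plus per-weight indices
    (log2(n_clusters) bits each) need to be stored."""
    flat = w.ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(flat, n_clusters, replace=False)
    for _ in range(iters):
        idx = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for j in range(n_clusters):
            if np.any(idx == j):
                centers[j] = flat[idx == j].mean()
    return centers[idx].reshape(w.shape), centers, idx

w = np.random.default_rng(4).standard_normal((256, 128))  # a dense layer's weights
wq, table, codes = share_weights(w)
bits = codes.size * 4 + table.size * 32                   # 4-bit codes + fp32 table
print(f"compression ~ {w.size * 32 / bits:.1f}x, "
      f"mse = {np.mean((w - wq) ** 2):.4f}")
```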
Wigner functions on non-standard symplectic vector spaces
NASA Astrophysics Data System (ADS)
Dias, Nuno Costa; Prata, João Nuno
2018-01-01
We consider the Weyl quantization on a flat non-standard symplectic vector space. We focus mainly on the properties of the Wigner functions defined therein. In particular we show that the sets of Wigner functions on distinct symplectic spaces are different but have non-empty intersections. This extends previous results to arbitrary dimension and arbitrary (constant) symplectic structure. As a by-product we introduce several concepts and prove several results on non-standard symplectic spaces which generalize those on the standard symplectic space, namely the symplectic spectrum, Williamson's theorem, and the Narcowich-Wigner spectra. We also show how Wigner functions on non-standard symplectic spaces behave under the action of an arbitrary linear coordinate transformation.
The BRST complex of homological Poisson reduction
NASA Astrophysics Data System (ADS)
Müller-Lennert, Martin
2017-02-01
BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.
Observation of Landau levels in potassium-intercalated graphite under a zero magnetic field
Guo, Donghui; Kondo, Takahiro; Machida, Takahiro; Iwatake, Keigo; Okada, Susumu; Nakamura, Junji
2012-01-01
The charge carriers in graphene are massless Dirac fermions and exhibit a relativistic Landau-level quantization in a magnetic field. Recently, it has been reported that, without any external magnetic field, quantized energy levels have also been observed from strained graphene nanobubbles on a platinum surface; these were attributed to the Landau levels of massless Dirac fermions in graphene formed by a strain-induced pseudomagnetic field. Here we show the generation of the Landau levels of massless Dirac fermions on a partially potassium-intercalated graphite surface without applying an external magnetic field. The Landau levels of massless Dirac fermions indicate the graphene character of partially potassium-intercalated graphite. The generation of the Landau levels is ascribed to a vector potential induced by the perturbation of nearest-neighbour hopping, which may originate from a strain or a gradient of on-site potentials at the perimeters of potassium-free domains. PMID:22990864
Chen, Guangyao; Li, Yang; Maris, Pieter; ...
2017-04-14
Using the charmonium light-front wavefunctions obtained by diagonalizing an effective Hamiltonian with the one-gluon exchange interaction and a confining potential inspired by light-front holography in the basis light-front quantization formalism, we compute production of charmonium states in diffractive deep inelastic scattering and ultra-peripheral heavy ion collisions within the dipole picture. Our method allows us to predict yields of all vector charmonium states below the open flavor thresholds in high-energy deep inelastic scattering, proton-nucleus and ultra-peripheral heavy ion collisions, without introducing any new parameters in the light-front wavefunctions. The obtained charmonium cross section is in reasonable agreement with experimental data at HERA, RHIC and LHC. We observe that the cross-section ratio σΨ(2s)/σJ/Ψ reveals significant independence of model parameters.
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates have become necessary. Boundaries and edges in tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
Agerskov, Claus
2016-04-01
A neural network model of novelty detection in the CA1 subdomain of the hippocampal formation is presented from the perspective of information flow. This computational model is constrained at several levels by both anatomical information about hippocampal circuitry and behavioral data from studies done in rats. Several studies report that the CA1 area broadcasts a generalized novelty signal in response to changes in the environment. Using the neural engineering framework developed by Eliasmith et al., a spiking neural network architecture is created that is able to compare high-dimensional vectors, symbolizing semantic information, according to the semantic pointer hypothesis. This model then computes the similarity between the vectors, received as both direct inputs and as a recalled memory from a long-term memory network, by performing the dot-product operation in a novelty neural network architecture. The developed CA1 model agrees with available neuroanatomical data, as well as the presented behavioral data, and so it is a biologically realistic model of novelty detection in the hippocampus, which can provide a feasible explanation for experimentally observed dynamics.
NASA Astrophysics Data System (ADS)
Valizadeh, Maryam; Sohrabi, Mahmoud Reza
2018-03-01
In the present study, artificial neural networks (ANNs) and support vector regression (SVR), as intelligent methods coupled with UV spectroscopy, were applied to the simultaneous quantitative determination of Dorzolamide (DOR) and Timolol (TIM) in eye drops. Several synthetic mixtures were analyzed to validate the proposed methods. First, a neural network time series model, one type of artificial neural network, was employed and its efficiency evaluated. Afterwards, a radial basis network was applied as another neural network. Results showed that the performance of this method is suitable for prediction. Finally, support vector regression was used to construct the Zilomole prediction model. Root mean square error (RMSE) and mean recovery (%) were also calculated for the SVR method. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the results of the suggested and reference methods, showed no significant differences between them. The effect of interferences was also investigated in spiked solutions.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
NASA Astrophysics Data System (ADS)
Myrheim, J.
Contents:
1 Introduction: 1.1 The concept of particle statistics; 1.2 Statistical mechanics and the many-body problem; 1.3 Experimental physics in two dimensions; 1.4 The algebraic approach: Heisenberg quantization; 1.5 More general quantizations
2 The configuration space: 2.1 The Euclidean relative space for two particles; 2.2 Dimensions d = 1, 2, 3; 2.3 Homotopy; 2.4 The braid group
3 Schroedinger quantization in one dimension
4 Heisenberg quantization in one dimension: 4.1 The coordinate representation
5 Schroedinger quantization in dimension d ≥ 2: 5.1 Scalar wave functions; 5.2 Homotopy; 5.3 Interchange phases; 5.4 The statistics vector potential; 5.5 The N-particle case; 5.6 Chern-Simons theory
6 The Feynman path integral for anyons: 6.1 Eigenstates for position and momentum; 6.2 The path integral; 6.3 Conjugation classes in S_N; 6.4 The non-interacting case; 6.5 Duality of Feynman and Schroedinger quantization
7 The harmonic oscillator: 7.1 The two-dimensional harmonic oscillator; 7.2 Two anyons in a harmonic oscillator potential; 7.3 More than two anyons; 7.4 The three-anyon problem
8 The anyon gas: 8.1 The cluster and virial expansions; 8.2 First and second order perturbative results; 8.3 Regularization by periodic boundary conditions; 8.4 Regularization by a harmonic oscillator potential; 8.5 Bosons and fermions; 8.6 Two anyons; 8.7 Three anyons; 8.8 The Monte Carlo method; 8.9 The path integral representation of the coefficients G_P; 8.10 Exact and approximate polynomials; 8.11 The fourth virial coefficient of anyons; 8.12 Two polynomial theorems
9 Charged particles in a constant magnetic field: 9.1 One particle in a magnetic field; 9.2 Two anyons in a magnetic field; 9.3 The anyon gas in a magnetic field
10 Interchange phases and geometric phases: 10.1 Introduction to geometric phases; 10.2 One particle in a magnetic field; 10.3 Two particles in a magnetic field; 10.4 Interchange of two anyons in potential wells; 10.5 Laughlin's theory of the fractional quantum Hall effect
Design of a universal two-layered neural network derived from the PLI theory
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
2004-05-01
The if-and-only-if (IFF) condition that a set of M analog-to-digital vector-mapping relations can be learned by a one-layered feed-forward neural network (OLNN) is that all the input analog vectors dichotomized by the i-th output bit must be positively, linearly independent, or PLI. If they are not PLI, then the OLNN simply cannot learn, no matter what learning rule is employed, because the solution for the connection matrix does not exist mathematically. However, in this case, one can still design a parallel-cascaded, two-layered perceptron (PCTLP) to achieve this general mapping goal. The design principle of this "universal" neural network is derived from the major mathematical properties of the PLI theory - changing the output bits of the dependent relations existing among the dichotomized input vectors to make the PLD relations PLI. Then, with a vector concatenation technique, the required mapping can still be learned by this PCTLP system with very high efficiency. This paper reports in detail the mathematical derivation of the general design principle and the design procedures of the PCTLP neural network system. It is then verified in general by a practical numerical example.
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computational complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
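A much-simplified version of distribution-driven quantizer design: fit a Gaussian to a layer's values and choose a uniform step covering plus/minus three sigma for a given bit width. The paper fits piecewise Gaussian models and optimizes intervals per layer; the single-Gaussian fit and 3-sigma range below are simplifying assumptions.

```python
import numpy as np

def fit_step(values, bits):
    """Pick a uniform quantizer step for one layer from the empirical
    distribution of its weights/activations. A single Gaussian fit plus a
    3-sigma clipping range stands in for the paper's piecewise models."""
    mu, sigma = values.mean(), values.std()
    levels = 2 ** bits
    step = 2 * 3 * sigma / levels            # cover mu +/- 3 sigma
    return mu, step

def quantize(values, mu, step, bits):
    half = 2 ** (bits - 1)
    q = np.clip(np.round((values - mu) / step), -half, half - 1)
    return mu + q * step                      # fixed-point friendly reconstruction

w = np.random.default_rng(5).standard_normal(10000) * 0.1
for bits in (4, 8):
    mu, step = fit_step(w, bits)
    err = np.mean((w - quantize(w, mu, step, bits)) ** 2)
    print(f"{bits}-bit: step={step:.4f}, mse={err:.2e}")
```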
On families of differential equations on two-torus with all phase-lock areas
NASA Astrophysics Data System (ADS)
Glutsyuk, Alexey; Rybnikov, Leonid
2017-01-01
We consider two-parametric families of non-autonomous ordinary differential equations on the two-torus with coordinates (x, t) of the type $\dot{x} = v(x) + A + Bf(t)$. We study the rotation number as a function of the parameters (A, B). The phase-lock areas are those level sets of the rotation number function $\rho = \rho(A, B)$ that have non-empty interiors. Buchstaber, Karpov and Tertychnyi studied the case $v(x) = \sin x$ in their joint paper. They observed the quantization effect: for every smooth periodic function f(t) the family of equations may have phase-lock areas only for integer rotation numbers. Another proof of this quantization statement was later obtained in a joint paper by Ilyashenko, Filimonov and Ryzhov. This implies a similar quantization effect for every $v(x) = a\sin(mx) + b\cos(mx) + c$ and rotation numbers that are multiples of $1/m$. We show that for every other analytic vector field v(x) (i.e. having at least two Fourier harmonics with non-zero non-opposite degrees and nonzero coefficients) there exists an analytic periodic function f(t) such that the corresponding family of equations has phase-lock areas for all rational values of the rotation number.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
An Example of Unsupervised Networks Kohonen's Self-Organizing Feature Map
NASA Technical Reports Server (NTRS)
Niebur, Dagmar
1995-01-01
Kohonen's self-organizing feature map belongs to a class of unsupervised artificial neural networks commonly referred to as topographic maps. It serves two purposes, the quantization and dimensionality reduction of data. A short description of its history and its biological context is given. We show that the inherent classification properties of the feature map make it a suitable candidate for solving the classification task in power system areas like load forecasting, fault diagnosis and security assessment.
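A compact sketch of Kohonen SOM training, showing how the same weight update performs both vector quantization (the weights form the codebook) and dimensionality reduction (each input maps to 2-D grid coordinates). The grid size and the learning-rate and neighborhood schedules below are illustrative choices.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Kohonen SOM: the winning unit and its grid neighbours move toward
    each input, so the map both quantizes the data (codebook = weights)
    and reduces its dimension (2-D grid coordinates)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((grid[0] * grid[1], data.shape[1]))
    gy, gx = np.divmod(np.arange(len(w)), grid[1])
    coords = np.stack([gy, gx], axis=1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in rng.permutation(data):
            win = np.argmin(((w - x) ** 2).sum(1))
            h = np.exp(-((coords - coords[win]) ** 2).sum(1) / (2 * sigma**2))
            w += lr * h[:, None] * (x - w)
    return w, coords

data = np.random.default_rng(6).standard_normal((500, 5))  # e.g. load-flow vectors
weights, coords = train_som(data)
print("codebook shape:", weights.shape)
```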
2001-10-25
[Extraction-damaged abstract; recoverable fragments:] ...form (1), where A is a scaling factor, t is time, and r is a coordinate vector describing the limb configuration. We [...] combination of limb state and EMG. In our early examination of EMG we detected underlying groups of muscles and phases of activity by inspection and [...] representations of EEG or other biological signals have been thoroughly explored. Such components might be used as a basis for neuroprosthetic control.
Hsieh, Chung-Ho; Lu, Ruey-Hwa; Lee, Nai-Hsin; Chiu, Wen-Ta; Hsu, Min-Huei; Li, Yu-Chuan Jack
2011-01-01
Diagnosing acute appendicitis clinically is still difficult. We developed random forests, support vector machines, and artificial neural network models to diagnose acute appendicitis. Between January 2006 and December 2008, patients who had a consultation session with surgeons for suspected acute appendicitis were enrolled. Seventy-five percent of the data set was used to construct models including random forest, support vector machines, artificial neural networks, and logistic regression. Twenty-five percent of the data set was withheld to evaluate model performance. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance, which was compared with that of the Alvarado score. Data from a total of 180 patients were collected, 135 used for training and 45 for testing. The mean age of patients was 39.4 years (range, 16-85). Final diagnosis revealed 115 patients with and 65 without appendicitis. The AUC of random forest, support vector machines, artificial neural networks, logistic regression, and Alvarado was 0.98, 0.96, 0.91, 0.87, and 0.77, respectively. The sensitivity, specificity, positive, and negative predictive values of random forest were 94%, 100%, 100%, and 87%, respectively. Random forest performed better than artificial neural networks, logistic regression, and Alvarado. We demonstrated that random forest can predict acute appendicitis with good accuracy and, deployed appropriately, can be an effective tool in clinical decision making. Copyright © 2011 Mosby, Inc. All rights reserved.
Methods of Contemporary Gauge Theory
NASA Astrophysics Data System (ADS)
Makeenko, Yuri
2002-08-01
Preface; Part I. Path Integrals: 1. Operator calculus; 2. Second quantization; 3. Quantum anomalies from path integral; 4. Instantons in quantum mechanics; Part II. Lattice Gauge Theories: 5. Observables in gauge theories; 6. Gauge fields on a lattice; 7. Lattice methods; 8. Fermions on a lattice; 9. Finite temperatures; Part III. 1/N Expansion: 10. O(N) vector models; 11. Multicolor QCD; 12. QCD in loop space; 13. Matrix models; Part IV. Reduced Models: 14. Eguchi-Kawai model; 15. Twisted reduced models; 16. Non-commutative gauge theories.
Vector/Matrix Quantization for Narrow-Bandwidth Digital Speech Compression.
1982-09-01
[OCR-damaged report page; recoverable citation fragments:] 1. ...Prediction of the Speech Wave, JASA Vol. 50, pp. 637-655, April 1971. 2. I. Itakura and S. Saito, Analysis Synthesis Telephony Based Upon the Maximum...
Quantum angular momentum diffusion of rigid bodies
NASA Astrophysics Data System (ADS)
Papendell, Birthe; Stickler, Benjamin A.; Hornberger, Klaus
2017-12-01
We show how to describe the diffusion of the quantized angular momentum vector of an arbitrarily shaped rigid rotor as induced by its collisional interaction with an environment. We present the general form of the Lindblad-type master equation and relate it to the orientational decoherence of an asymmetric nanoparticle in the limit of small anisotropies. The corresponding diffusion coefficients are derived for gas particles scattering off large molecules and for ambient photons scattering off dielectric particles, using the elastic scattering amplitudes.
Ba, Yutao; Zhang, Wei; Peng, QiJia; Salvendy, Gavriel; Crundall, David
2016-01-01
Drivers' risk-taking is a key issue of road safety. This study explored individual differences in drivers' decision-making, linking external behaviours to internal neural activity, to reveal the cognitive mechanisms of risky driving. Twenty-four male drivers were split into two groups (risky vs. safe drivers) via the violation score of the Driver Behaviour Questionnaire. The risky drivers demonstrated a higher preference for the risky choices in the paradigms of the Iowa Gambling Task and the Balloon Analogue Risk Task. More importantly, the risky drivers showed lower amplitudes of feedback-related negativity (FRN) and loss-minus-gain FRN in both paradigms, which indexed their neural processing of error-detection. A significant difference in P300 amplitudes was also reported between groups, which indexed their neural processing of reward-evaluation and was modified by the specific paradigm and feedback. These results suggest that the neural basis of risky driving is a decision pattern less revised by losses and more motivated by rewards. Risk-taking on the road is largely determined by inherent cognitive mechanisms, which can be indicated by the behavioural and neural patterns of decision-making. In this regard, it is feasible to quantize drivers' riskiness in the cognitive stage before actual risky driving or accidents, and to intervene accordingly.
The neural network classification of false killer whale (Pseudorca crassidens) vocalizations.
Murray, S O; Mercado, E; Roitblat, H L
1998-12-01
This study reports the use of an unsupervised, self-organizing neural network to categorize the repertoire of false killer whale vocalizations. Self-organizing networks are capable of detecting patterns in their input and partitioning those patterns into categories without requiring that the number or types of categories be predefined. The inputs for the neural networks were two-dimensional characterizations of false killer whale vocalizations, where each vocalization was characterized by a sequence of short-time measurements of duty cycle and peak frequency. The first neural network used competitive learning, where units in a competitive layer distributed themselves to recognize frequently presented input vectors. This network resulted in classes representing typical patterns in the vocalizations. The second network was a Kohonen feature map which organized the outputs topologically, providing a graphical organization of pattern relationships. The networks performed well as measured by (1) the average correlation between the input vectors and the weight vectors for each category, and (2) the ability of the networks to classify novel vocalizations. The techniques used in this study could easily be applied to other species and facilitate the development of objective, comprehensive repertoire models.
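The competitive-learning step described above can be sketched as a plain winner-take-all update; the flattened duty-cycle/peak-frequency vectors, unit count, and learning rate here are illustrative assumptions, not the study's settings.

```python
import numpy as np

def competitive_learning(patterns, n_units=10, lr=0.1, epochs=30, seed=0):
    """Plain competitive learning: only the winning unit's weight vector
    moves toward the input, so units distribute themselves over frequently
    presented patterns (here: duty-cycle / peak-frequency sequences
    flattened into fixed-length vectors)."""
    rng = np.random.default_rng(seed)
    w = patterns[rng.choice(len(patterns), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(patterns):
            j = np.argmin(((w - x) ** 2).sum(1))   # winner-take-all
            w[j] += lr * (x - w[j])
    return w

def category(x, w):
    return np.argmin(((w - x) ** 2).sum(1))

rng = np.random.default_rng(7)
calls = rng.standard_normal((200, 40))             # 20 time steps x 2 features
units = competitive_learning(calls)
print("novel call assigned to category", category(calls[0], units))
```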
Strategies for targeting primate neural circuits with viral vectors
El-Shamayleh, Yasmine; Ni, Amy M.
2016-01-01
Understanding how the brain works requires understanding how different types of neurons contribute to circuit function and organism behavior. Progress on this front has been accelerated by optogenetics and chemogenetics, which provide an unprecedented level of control over distinct neuronal types in small animals. In primates, however, targeting specific types of neurons with these tools remains challenging. In this review, we discuss existing and emerging strategies for directing genetic manipulations to targeted neurons in the adult primate central nervous system. We review the literature on viral vectors for gene delivery to neurons, focusing on adeno-associated viral vectors and lentiviral vectors, their tropism for different cell types, and prospects for new variants with improved efficacy and selectivity. We discuss two projection targeting approaches for probing neural circuits: anterograde projection targeting and retrograde transport of viral vectors. We conclude with an analysis of cell type-specific promoters and other nucleotide sequences that can be used in viral vectors to target neuronal types at the transcriptional level. PMID:27052579
Chern Numbers Hiding in Time of Flight Images
NASA Astrophysics Data System (ADS)
Satija, Indubala; Zhao, Erhai; Ghosh, Parag; Bray-Ali, Noah
2011-03-01
Since the experimental realization of synthetic magnetic fields in neutral ultracold atoms, transport measurements such as quantized Hall conductivity remain an open challenge. Here we propose a novel and feasible scheme to measure the topological invariants, namely the Chern numbers, in time-of-flight images. We study both commensurate and incommensurate flux, with the latter being the main focus here. The central concept underlying our proposal is the mapping between the Chern numbers and the size of the dimerized states that emerge when the two-dimensional hopping is tuned to the highly anisotropic limit. In an uncoupled double quantum Hall system exhibiting time reversal invariance, only odd-sized dimer correlation functions are non-zero and hence encode quantized spin current. Finally, we illustrate that in spite of a highly fragmented spectrum, a finite set of Chern numbers are meaningful. Our results are supported by direct numerical computation of transverse conductivity. NBA acknowledges support from a National Research Council postdoctoral research associateship.
Becchi-Rouet-Stora-Tyutin formalism and zero locus reduction
NASA Astrophysics Data System (ADS)
Grigoriev, M. A.; Semikhatov, A. M.; Tipunin, I. Yu.
2001-08-01
In the Becchi-Rouet-Stora-Tyutin (BRST) quantization of gauge theories, the zero locus ZQ of the BRST differential Q carries an (anti)bracket whose parity is opposite to that of the fundamental bracket. Observables of the BRST theory are in a 1:1 correspondence with Casimir functions of the bracket on ZQ. For any constrained dynamical system with the phase space N0 and the constraint surface Σ, we prove its equivalence to the constrained system on the BFV-extended phase space with the constraint surface given by ZQ. Reduction to the zero locus of the differential gives rise to relations between bracket operations and differentials arising in different complexes (the Gerstenhaber, Schouten, Berezin-Kirillov, and Sklyanin brackets); the equation ensuring the existence of a nilpotent vector field on the reduced manifold can be the classical Yang-Baxter equation. We also generalize our constructions to the bi-QP manifolds which from the BRST theory viewpoint correspond to the BRST-anti-BRST-symmetric quantization.
NASA Astrophysics Data System (ADS)
Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro
2010-02-01
In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been reliably applied to human face recognition, is utilized as the feature vector of the VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as dramas, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show the proposed algorithm can detect a similar video clip in merely 80 ms, and an equal error rate (EER) of 2% is achieved in the drama and news categories, more accurate and robust than conventional fast video search algorithms.
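A rough sketch of an APIDQ-style histogram feature: quantize neighbour intensity differences into bins and normalize the counts. The exact quantizer used in the paper differs; the bin count and clipping range below are assumptions.

```python
import numpy as np

def apidq_histogram(img, n_bins=32, max_diff=64):
    """Adjacent-pixel-intensity-difference quantization, roughly as the
    name suggests: take horizontal and vertical neighbour differences,
    quantize them into n_bins levels, and histogram the result. Bin count
    and clipping range are illustrative, not the paper's values."""
    img = img.astype(np.int32)
    dx = np.abs(np.diff(img, axis=1)).ravel()
    dy = np.abs(np.diff(img, axis=0)).ravel()
    d = np.clip(np.concatenate([dx, dy]), 0, max_diff - 1)
    q = (d * n_bins) // max_diff               # uniform quantization of differences
    h = np.bincount(q, minlength=n_bins).astype(float)
    return h / h.sum()                          # normalized feature vector

frame = np.random.default_rng(8).integers(0, 256, (64, 64))
print("APIDQ feature:", np.round(apidq_histogram(frame)[:8], 3))
```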
Wilson, Patricia G; Payne, Tiffany
2014-01-01
The promise of genetic reprogramming has prompted initiatives to develop banks of induced pluripotent stem cells (iPSCs) from diverse sources. Sentinel assays for pluripotency could maximize available resources for generating iPSCs. Neural rosettes represent a primitive neural tissue that is unique to differentiating PSCs and commonly used to identify derivative neural/stem progenitors. Here, neural rosettes were used as a sentinel assay for pluripotency in selection of candidates to advance to validation assays. Candidate iPSCs were generated from independent populations of amniotic cells with episomal vectors. Phase imaging of living back up cultures showed neural rosettes in 2 of the 5 candidate populations. Rosettes were immunopositive for the Sox1, Sox2, Pax6 and Pax7 transcription factors that govern neural development in the earliest stage of development and for the Isl1/2 and Otx2 transcription factors that are expressed in the dorsal and ventral domains, respectively, of the neural tube in vivo. Dissociation of rosettes produced cultures of differentiation competent neural/stem progenitors that generated immature neurons that were immunopositive for βIII-tubulin and glia that were immunopositive for GFAP. Subsequent validation assays of selected candidates showed induced expression of endogenous pluripotency genes, epigenetic modification of chromatin and formation of teratomas in immunodeficient mice that contained derivatives of the 3 embryonic germ layers. Validated lines were vector-free and maintained a normal karyotype for more than 60 passages. The credibility of rosette assembly as a sentinel assay for PSCs is supported by coordinate loss of nuclear-localized pluripotency factors Oct4 and Nanog in neural rosettes that emerge spontaneously in cultures of self-renewing validated lines. Taken together, these findings demonstrate value in neural rosettes as sentinels for pluripotency and selection of promising candidates for advance to validation assays.
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness against common post-processing operations and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and the SVD is applied to each sub-block; features are then extracted to reduce the dimension of each block using its largest singular values. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image was distorted by Gaussian blurring, AWGN, JPEG compression and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
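The block pipeline can be sketched as follows: quantized 2D-DCT per overlapping block, largest singular values of sub-blocks as features, then lexicographic sorting so duplicated blocks become adjacent. Block size, quantization step, and the duplicate test below are illustrative; the paper's actual quantization matrix and thresholds differ.

```python
import numpy as np
from scipy.fft import dctn

def block_features(img, bsize=8, qstep=16):
    """Per-block features in the spirit of the DCT+SVD method: 2D-DCT each
    overlapping block, quantize the coefficients, then take the largest
    singular value of each quadrant as a low-dimensional signature.
    Parameters (block size, quantization step) are illustrative."""
    feats, positions = [], []
    for i in range(img.shape[0] - bsize + 1):
        for j in range(img.shape[1] - bsize + 1):
            block = img[i:i + bsize, j:j + bsize].astype(float)
            coeffs = np.round(dctn(block, norm="ortho") / qstep)   # quantized DCT
            quads = [coeffs[:4, :4], coeffs[:4, 4:], coeffs[4:, :4], coeffs[4:, 4:]]
            sv = [np.linalg.svd(q, compute_uv=False)[0] for q in quads]
            feats.append(sv)
            positions.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # lexicographic sort of features
    return feats[order], [positions[k] for k in order]

img = np.random.default_rng(9).integers(0, 256, (32, 32))
f, pos = block_features(img)
dups = [(pos[k], pos[k + 1]) for k in range(len(f) - 1)
        if np.allclose(f[k], f[k + 1])]
print(f"{len(dups)} candidate duplicate block pairs")
```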
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
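The flavor of the bit-allocation rule can be sketched by seeding each RF chain with roughly log2(SNR^(1/3)) bits and then greedily shedding bits to meet a total ADC power budget, assuming per-ADC power grows as 2^b. Both the power model and the greedy refinement are common assumptions standing in for the paper's closed-form solution, not its exact algorithm.

```python
import numpy as np

def allocate_bits(snr, total_power, b_min=1, b_max=12):
    """Greedy bit allocation under sum(2**b) <= total_power, seeded by the
    paper's observation that the optimal b grows like log2(SNR**(1/3)).
    The power model (ADC power proportional to 2**b) and the greedy
    refinement are assumptions, not the paper's exact algorithm."""
    b = np.clip(np.round(np.log2(snr ** (1 / 3))), b_min, b_max).astype(int)
    while np.sum(2.0 ** b) > total_power and b.max() > b_min:
        b[np.argmax(b)] -= 1                   # shed bits from the richest chain
    return b

snr = np.array([10.0, 100.0, 1000.0, 10000.0])  # per-RF-chain SNRs (linear)
print("allocated ADC bits:", allocate_bits(snr, total_power=64))
```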
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the greatest extent. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme attains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.
Tao Song; Linqiang Pan
2015-06-01
Spiking neural P systems (SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computational power of the systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing computable natural numbers and the sets of vectors of positive integers computed by k-output register machines; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems can characterize semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by sequential monotonic counter machines, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.
Wang, Li; Xu, Huiren; Song, Yilin; Luo, Jinping; Wei, Wenjing; Xu, Shengwei; Cai, Xinxia
2015-04-15
For measuring dopamine (DA) release events as well as the coordinating neurotransmission in the nervous system, a neural microelectrode array (nMEA) directionally electrodeposited with polypyrrole graphene (PG) nanocomposites was fabricated. The deposited graphene significantly increased the surface area of the working electrode, giving the nMEA (with a diameter of 20 μm) excellent selectivity and sensitivity to DA. Furthermore, the PG film modification exhibited a low detection limit (4 nM, S/N = 3.21), high sensitivity, and good linearity in the presence of ascorbic acid (e.g., 13933.12 μA mM(-1) cm(-2) in the range of 0.8-10 μM). In particular, the nMEA combined with the patch-clamp system was used to detect quantized DA release from pheochromocytoma cells under 100 mM K(+) stimulation. The nMEA, which integrates 60 microelectrodes, is novel in detecting a large number of samples simultaneously and has potential for neural communication research.
Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.
NASA Astrophysics Data System (ADS)
Czerwinski, Michael Joseph
The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and interpretation of these quantized values for the sub-band conductances is made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for a explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis states and (ii) the coefficients of these basis states, respectively. In this way a general Diophantine equation, independent of the periodic potential, is obtained. Finally, the use of the 'parallel transport' of state vectors in the determination of an overall phase convention for these states is described. This is seen to lead to a simple and straightforward method for determining the Hall conductance. This method is based on the states directly, without reference to the particular component wavefunctions of these states. Mention is made of the generality of calculations of this type, within the context of the geometric (or Berry) phases acquired by systems under an adiabatic modification of their environment.
NASA Technical Reports Server (NTRS)
Niebur, D.; Germond, A.
1993-01-01
This report investigates the classification of power system states using an artificial neural network model, Kohonen's self-organizing feature map. The ultimate goal of this classification is to assess power system static security in real-time. Kohonen's self-organizing feature map is an unsupervised neural network which maps N-dimensional input vectors to an array of M neurons. After learning, the synaptic weight vectors exhibit a topological organization which represents the relationship between the vectors of the training set. This learning is unsupervised, which means that the number and size of the classes are not specified beforehand. In the application developed in this report, the input vectors used as the training set are generated by off-line load-flow simulations. The learning algorithm and the results of the organization are discussed.
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
NASA Astrophysics Data System (ADS)
Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.
2017-05-01
Artificial neural network (ANN)-based models are efficient for source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Learning Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
Feature detection in satellite images using neural network technology
NASA Technical Reports Server (NTRS)
Augusteijn, Marijke F.; Dimalanta, Arturo S.
1992-01-01
A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
The canonical quantization of chaotic maps on the torus
NASA Astrophysics Data System (ADS)
Rubin, Ron Shai
In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ħ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ₊, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from position to momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here, which is found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator. We present this connection explicitly for several examples.
Vector neural network signal integration for radar application
NASA Astrophysics Data System (ADS)
Bierman, Gregory S.
1994-07-01
The Litton Data Systems Vector Neural Network (VNN) is a unique multi-scan integration algorithm currently in development. The target of interest is a low-flying cruise missile. Current tactical radar cannot detect and track the missile in ground clutter at tactically useful ranges. The VNN solves this problem by integrating the energy from multiple frames to effectively increase the target's signal-to-noise ratio. The implementation plan is addressing the APG-63 radar. Real-time results will be available by March 1994.
T-wave end detection using neural networks and Support Vector Machines.
Suárez-León, Alexander Alexeis; Varon, Carolina; Willems, Rik; Van Huffel, Sabine; Vázquez-Seisdedos, Carlos Román
2018-05-01
In this paper we propose a new approach for detecting the end of the T-wave in the electrocardiogram (ECG) using Neural Networks and Support Vector Machines. Both Multilayer Perceptron (MLP) neural networks and Fixed-Size Least-Squares Support Vector Machines (FS-LSSVM) were used as regression algorithms to determine the end of the T-wave. Different strategies for selecting the training set, such as random selection, k-means, robust clustering and maximum quadratic (Rényi) entropy, were evaluated. Individual parameters were tuned for each method during training and the results are given for the evaluation set. A comparison between the MLP and FS-LSSVM approaches was performed. Finally, a fair comparison of the FS-LSSVM method with other state-of-the-art algorithms for detecting the end of the T-wave was included. The experimental results show that FS-LSSVM approaches are more suitable as regression algorithms than MLP neural networks. Despite the small training sets used, the FS-LSSVM methods outperformed the state-of-the-art techniques. FS-LSSVM can be successfully used as a T-wave end detection algorithm in ECG even with small training set sizes.
Model-based VQ for image data archival, retrieval and distribution
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1995-01-01
An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and Human Visual System (HVS) models. The error model assumed is the Laplacian distribution with mean λ, computed from a sample of the input image. Laplacian-distributed random numbers with mean λ are generated with a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in codebook generation is the mean λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
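A minimal sketch of this codebook-generation idea, assuming SciPy is available. The weight matrix here is a placeholder low-pass shape, not the paper's HVS-optimal weights, and the fixed seed stands in for whatever deterministic generator the encoder and decoder would share.

```python
import numpy as np
from scipy.fft import dctn, idctn   # assumes SciPy is available

def mvq_codebook(lam, n_codevectors=256, block=(4, 4), seed=1):
    """Generate a codebook from the model instead of training data:
    Laplacian residual vectors, perceptually shaped in the DCT domain."""
    rng = np.random.default_rng(seed)
    vecs = rng.laplace(0.0, lam, size=(n_codevectors,) + block)
    u, v = np.indices(block)
    weight = 1.0 / (1.0 + u + v)          # hypothetical HVS-like low-pass weights
    shaped = dctn(vecs, axes=(1, 2), norm='ortho') * weight
    return idctn(shaped, axes=(1, 2), norm='ortho').reshape(n_codevectors, -1)

# Encoder and decoder call mvq_codebook with the same lam (sent in the coded
# file) and the same deterministic generator, so no codebook is ever stored.
```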
Sentence alignment using feed forward neural network.
Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo
2006-12-01
Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more useful than non-aligned parallel corpora for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to align sentences in bilingual parallel corpora based on a feed-forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration. This vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed-forward neural network; another set of data was used for testing. Using this new approach, we achieved an error reduction of 60% over the length-based approach when applied to English-Arabic parallel documents. Moreover, this new approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those used in our system, such as a lexical match feature.
Iterative free-energy optimization for recurrent neural networks (INFERNO).
Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias
2017-01-01
The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or replaced to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory that initiates flexible goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.
NASA Astrophysics Data System (ADS)
Lohani, A. K.; Kumar, Rakesh; Singh, R. D.
2012-06-01
Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs) and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over the ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of reservoirs.
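The abstract does not spell out the form of the cyclic terms; a common realization is a sin/cos encoding of the month index appended to the lagged-inflow input vector, as in this hypothetical NumPy sketch.

```python
import numpy as np

def inflow_inputs(q, months, lags=3):
    """Build input vectors from previous inflows plus cyclic terms that
    encode the monthly periodicity (sin/cos of the month index 1..12)."""
    rows = []
    for t in range(lags, len(q)):
        cyc = [np.sin(2 * np.pi * months[t] / 12.0),
               np.cos(2 * np.pi * months[t] / 12.0)]
        rows.append(list(q[t - lags:t]) + cyc)      # lagged flows + cyclic terms
    return np.asarray(rows), np.asarray(q[lags:])   # X, y for any regressor
```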
Research on conceptual/innovative design for the life cycle
NASA Technical Reports Server (NTRS)
Cagan, Jonathan; Agogino, Alice M.
1990-01-01
The goal of this research is to develop and integrate qualitative and quantitative methods for life cycle design. The problem is defined by three observations: formal computer-based methods are limited to the final detailing stages of design; CAD databases do not capture design intent or design history; and life cycle issues are ignored during the early stages of design. Viewgraphs outline research in conceptual design; the SYMON (SYmbolic MONotonicity analyzer) algorithm; a multistart vector quantization optimization algorithm; intelligent manufacturing: IDES - Influence Diagram Architecture; and 1st PRINCE (FIRST PRINciple Computational Evaluator).
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium-sized data sets. The contribution of this article is twofold: on the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
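A minimal sketch of the Nyström idea for a symmetric (dis)similarity matrix: approximate the full n x n matrix from an n x m slice against m landmark points, so only linear-size blocks are ever stored. Variable names are illustrative, not the paper's notation.

```python
import numpy as np

def nystroem(K_nm, K_mm, eps=1e-10):
    """Nystroem approximation K ~= K_nm K_mm^+ K_nm^T from m << n landmarks."""
    U, s, Vt = np.linalg.svd(K_mm)              # K_mm: landmark-vs-landmark block
    s_inv = np.where(s > eps, 1.0 / s, 0.0)     # pseudoinverse of the small block
    W = (Vt.T * s_inv) @ U.T                    # K_mm^+
    return K_nm @ W @ K_nm.T                    # dense product shown for clarity

# A linear-time method never forms the n x n product; matrix-vector products
# K @ x are instead computed as K_nm @ (W @ (K_nm.T @ x)).
```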
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
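The efficiency comes from the vector-sum structure: every codevector is a +/-1 combination of a few basis vectors, so search work done once on the basis vectors is reused across the whole codebook. A small NumPy sketch of the construction, with sizes chosen for illustration:

```python
import numpy as np

def vselp_codebook(basis):
    """Vector-sum codebook: each codevector is a +/-1 combination of the M
    basis vectors, giving 2**M codevectors from M stored vectors."""
    M, dim = basis.shape
    bits = (np.arange(2 ** M)[:, None] >> np.arange(M)) & 1   # binary codes
    signs = 1 - 2 * bits                                      # map 0/1 -> +1/-1
    return signs @ basis                                      # shape (2**M, dim)

# With M = 7 basis vectors of excitation samples, 128 codevectors result;
# filtering the 7 basis vectors once lets a search score all 128 codevectors
# by sign flips, which is the source of the efficient search procedure.
```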
Computing Generalized Matrix Inverse on Spiking Neural Substrate.
Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen
2018-01-01
Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.
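As a numerical stand-in (not the paper's Hopfield-network formulation), the classical Newton-Schulz iteration computes the Moore-Penrose inverse with the same kind of simple repeated matrix arithmetic that maps well onto range- and precision-constrained hardware:

```python
import numpy as np

def pinv_iterative(A, iters=60):
    """Newton-Schulz iteration X <- X (2I - A X), converging to the
    Moore-Penrose inverse from the safe start X0 = A^T / (||A||_1 ||A||_inf)."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

# Sanity check against the direct computation:
# A = np.random.randn(6, 4)
# assert np.allclose(pinv_iterative(A), np.linalg.pinv(A))
```

On a substrate like TrueNorth, the normalization of A and quantization of the weights in each product would be governed by the range/precision framework the paper derives.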
Locally connected neural network with improved feature vector
NASA Technical Reports Server (NTRS)
Thomas, Tyson (Inventor)
2004-01-01
A pattern recognizer is described which uses neuromorphs with a fixed amount of energy that is distributed among the elements. The distribution of the energy is used to form a histogram, which serves as a feature vector.
NASA Technical Reports Server (NTRS)
Garay, Michael J.; Mazzoni, Dominic; Davies, Roger; Wagstaff, Kiri
2004-01-01
Support Vector Machines (SVMs) are a type of supervised learning algorithm; other examples are Artificial Neural Networks (ANNs), Decision Trees, and Naive Bayesian Classifiers. Supervised learning algorithms are used to classify objects labeled by a 'supervisor,' typically a human 'expert.'
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, the vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators, which substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
NASA Astrophysics Data System (ADS)
Landsman, N. P. Klaas
2016-09-01
We reconsider the (non-relativistic) quantum theory of indistinguishable particles on the basis of Rieffel’s notion of C∗-algebraic (“strict”) deformation quantization. Using this formalism, we relate the operator approach of Messiah and Greenberg (1964) to the configuration space approach pioneered by Souriau (1967), Laidlaw and DeWitt-Morette (1971), Leinaas and Myrheim (1977), and others. In dimension d > 2, the former yields bosons, fermions, and paraparticles, whereas the latter seems to leave room for bosons and fermions only, apparently contradicting the operator approach as far as the admissibility of parastatistics is concerned. To resolve this, we first prove that in d > 2 the topologically non-trivial configuration spaces of the second approach are quantized by the algebras of observables of the first. Secondly, we show that the irreducible representations of the latter may be realized by vector bundle constructions, among which the line bundles recover the results of the second approach. Mathematically speaking, representations on higher-dimensional bundles (which define parastatistics) cannot be excluded, which renders the configuration space approach incomplete. Physically, however, we show that the corresponding particle states may always be realized in terms of bosons and/or fermions with an unobserved internal degree of freedom (although based on non-relativistic quantum mechanics, this conclusion is analogous to the rigorous results of the Doplicher-Haag-Roberts analysis in algebraic quantum field theory, as well as to the heuristic arguments which led Gell-Mann and others to QCD (i.e. Quantum Chromodynamics)).
Pseudotyped Lentiviral Vectors for Retrograde Gene Delivery into Target Brain Regions
Kobayashi, Kenta; Inoue, Ken-ichi; Tanabe, Soshi; Kato, Shigeki; Takada, Masahiko; Kobayashi, Kazuto
2017-01-01
Gene transfer through retrograde axonal transport of viral vectors offers a substantial advantage for analyzing roles of specific neuronal pathways or cell types forming complex neural networks. This genetic approach may also be useful in gene therapy trials by enabling delivery of transgenes into a target brain region distant from the injection site of the vectors. Pseudotyping of a lentiviral vector based on human immunodeficiency virus type 1 (HIV-1) with various fusion envelope glycoproteins composed of different combinations of rabies virus glycoprotein (RV-G) and vesicular stomatitis virus glycoprotein (VSV-G) enhances the efficiency of retrograde gene transfer in both rodent and nonhuman primate brains. The most recently developed lentiviral vector is a pseudotype with fusion glycoprotein type E (FuG-E), which demonstrates highly efficient retrograde gene transfer in the brain. The FuG-E–pseudotyped vector permits powerful experimental strategies for more precisely investigating the mechanisms underlying various brain functions. It also contributes to the development of new gene therapy approaches for neurodegenerative disorders, such as Parkinson’s disease, by delivering genes required for survival and protection into specific neuronal populations. In this review article, we report the properties of the FuG-E–pseudotyped vector, and we describe the application of the vector to neural circuit analysis and the potential use of the FuG-E vector in gene therapy for Parkinson’s disease. PMID:28824385
Escobar, W A
2013-01-01
The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, which likely exist across the kingdom Animalia. Establishing qualia as the fundamental nature of visual awareness will thus not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
Poisson traces, D-modules, and symplectic resolutions
NASA Astrophysics Data System (ADS)
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Prediction of Human Intestinal Absorption of Compounds Using Artificial Intelligence Techniques.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2017-01-01
Information about the pharmacokinetics of compounds is an essential component of drug design and development. Modeling pharmacokinetic properties requires identification of the factors affecting the absorption, distribution, metabolism and excretion of compounds. There have been continuous attempts at predicting the intestinal absorption of compounds using various artificial intelligence methods, in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, there are large numbers of individual predictive models available for absorption using machine learning approaches. Six artificial intelligence methods, namely Support Vector Machine, k-nearest neighbor, Probabilistic Neural Network, Artificial Neural Network, Partial Least Squares and Linear Discriminant Analysis, were used for prediction of the absorption of compounds. The prediction accuracies of these six methods for intestinal absorption were found to be 91.54%, 88.33%, 84.30%, 86.51%, 79.07% and 80.08%, respectively. Comparative analysis of all six prediction models suggested that a Support Vector Machine with a radial basis function kernel is comparatively better for binary classification of compounds by human intestinal absorption and may be useful at the preliminary stages of drug design and development.
Efficient sensor network vehicle classification using peak harmonics of acoustic emissions
NASA Astrophysics Data System (ADS)
William, Peter E.; Hoffman, Michael W.
2008-04-01
An application is proposed for detection and classification of battlefield ground vehicles using the emitted acoustic signal captured at individual sensor nodes of an ad hoc Wireless Sensor Network (WSN). We make use of the harmonic characteristics of the acoustic emissions of battlefield vehicles to reduce both the computation carried out on the sensor node and the data transmitted to the fusion center, for reliable and efficient classification of targets. Previous approaches focus on the lower frequency band of the acoustic emissions, up to 500 Hz; however, we show in the proposed application how efficient discrimination between battlefield vehicles is performed using features extracted from higher frequency bands (50-1500 Hz). The application shows that selective time-domain acoustic features surpass equivalent spectral features. Collaborative signal processing is utilized, such that estimation of certain signal model parameters is carried out by the sensor node, in order to reduce the communication between the sensor node and the fusion center, while the remaining model parameters are estimated at the fusion center. The data transmitted from the sensor node to the fusion center amounts to 1-5% of the sampled acoustic signal at the node. A variety of classification schemes were examined, such as maximum likelihood, vector quantization and artificial neural networks. Evaluation of the proposed application, through processing of an acoustic data set with comparison to previous results, shows improvement not only in the number of computations but also in the detection and false alarm rates.
Neural Network-Based Sensor Validation for Turboshaft Engines
NASA Technical Reports Server (NTRS)
Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei
1998-01-01
Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Face biometrics with renewable templates
NASA Astrophysics Data System (ADS)
van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei
2006-02-01
In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
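A toy NumPy sketch of the reliable-component idea behind a helper-data system: select the feature components that are most stable for a user, publish their indices as helper data, and binarize each by its sign relative to a population mean. The reliability measure, bit count, and thresholds are illustrative assumptions, not the paper's exact HDS.

```python
import numpy as np

def enroll(user_samples, pop_mean, n_bits=64):
    """Select the user's most reliable components and binarize them."""
    m = user_samples.mean(axis=0)                 # user's mean feature vector
    s = user_samples.std(axis=0) + 1e-12
    reliability = np.abs(m - pop_mean) / s        # well-separated, stable bits
    helper = np.sort(np.argsort(reliability)[-n_bits:])   # public helper data
    bits = (m[helper] > pop_mean[helper]).astype(np.uint8)
    return helper, bits

def query_bits(sample, pop_mean, helper):
    """Recompute the binary vector from a fresh measurement."""
    return (sample[helper] > pop_mean[helper]).astype(np.uint8)
```

Renewability comes from selecting a different component subset (and, in a full HDS, different key-binding data) for each issued template; small intra-class variation keeps the recomputed bits close to the enrolled ones.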
Detecting double compression of audio signal
NASA Astrophysics Data System (ADS)
Yang, Rui; Shi, Yun Q.; Huang, Jiwu
2010-01-01
MP3 is the most popular audio format nowadays; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Audio recordings in digital recorders can also be doctored easily by pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
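A minimal sketch of the first-digit feature: a normalized histogram of the first significant digits (1..9) of the nonzero quantized MDCT coefficients, computed here in NumPy under the assumption that the coefficients are already available as an array.

```python
import numpy as np

def first_digit_feature(qmdct):
    """Normalized histogram of first significant digits (1..9) of the
    nonzero quantized MDCT coefficients."""
    mag = np.abs(qmdct[qmdct != 0]).astype(float)
    first = (mag / 10.0 ** np.floor(np.log10(mag))).astype(int)   # digits 1..9
    hist = np.bincount(first, minlength=10)[1:10].astype(float)
    return hist / hist.sum()

# The 9-dimensional vectors from singly and doubly compressed training MP3s
# would then be fed to an SVM, e.g. sklearn.svm.SVC(kernel='rbf').
```

Single compression tends to leave a Benford-like first-digit distribution; requantization in a second compression disturbs it, which is what the classifier picks up.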
Three learning phases for radial-basis-function networks.
Schwenker, F; Kestler, H A; Palm, G
2001-05-01
In this paper, learning algorithms for radial basis function (RBF) networks are discussed. Whereas multilayer perceptrons (MLPs) are typically trained with backpropagation algorithms, starting the training procedure with a random initialization of the MLP's parameters, an RBF network may be trained in many different ways. We categorize these RBF training methods into one-, two-, and three-phase learning schemes. Two-phase RBF learning is a very common learning scheme. The two layers of an RBF network are learnt separately; first the RBF layer is trained, including the adaptation of centers and scaling parameters, and then the weights of the output layer are adapted. RBF centers may be trained by clustering, vector quantization and classification tree algorithms, and the output layer by supervised learning (through gradient descent or the pseudoinverse solution). Results from numerical experiments with RBF classifiers trained by two-phase learning are presented for three completely different pattern recognition applications: (a) the classification of 3D visual objects; (b) the recognition of hand-written digits (2D objects); and (c) the categorization of high-resolution electrocardiograms given as a time series (1D objects) and as a set of features extracted from these time series. In these applications, it can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third backpropagation-like training phase of the RBF network, adapting the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. This we call three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase. Support vector (SV) learning in RBF networks is a different learning approach. SV learning can be considered, in this context, as a special type of one-phase learning, where only the output layer weights of the RBF network are calculated, and the RBF centers are restricted to be a subset of the training data. Numerical experiments with several classifier schemes, including k-nearest-neighbor, learning vector quantization and RBF classifiers trained through two-phase, three-phase and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning, but SV learning often leads to complex network structures, since the number of support vectors is not a small fraction of the total number of data points.
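A minimal sketch of two-phase RBF learning, assuming SciPy and a common mean-distance heuristic for the scale parameter; the paper itself considers several alternatives for each phase.

```python
import numpy as np
from scipy.cluster.vq import kmeans2    # any clustering / VQ method works here

def train_rbf_two_phase(X, Y, n_centers=20):
    """Two-phase RBF learning: unsupervised center placement by clustering,
    then supervised output weights via the pseudoinverse solution."""
    centers, _ = kmeans2(X, n_centers, minit='++')            # phase 1
    d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    sigma = d[d > 0].mean()                                   # heuristic scale
    H = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2
               / (2.0 * sigma ** 2))                          # hidden activations
    W = np.linalg.pinv(H) @ Y                                 # phase 2
    return centers, sigma, W

def rbf_predict(x, centers, sigma, W):
    h = np.exp(-np.linalg.norm(centers - x, axis=1) ** 2 / (2.0 * sigma ** 2))
    return h @ W
```

A third phase, in the paper's terminology, would fine-tune centers, scales, and output weights jointly by gradient descent starting from this two-phase solution.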
Polar exponential sensor arrays unify iconic and Hough space representation
NASA Technical Reports Server (NTRS)
Weiman, Carl F. R.
1990-01-01
The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, including artificial neural networks, support vector machines and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
NASA Astrophysics Data System (ADS)
Mann, Kulwinder S.; Kaur, Sukhpreet
2017-06-01
There are various eye diseases in patients suffering from diabetes, including diabetic retinopathy, glaucoma, hypertension, etc. These are among the most common sight-threatening eye diseases, caused by changes in the blood vessel structure. The proposed supervised method shows that segmentation of the retinal blood vessels can be performed accurately by training neural networks. Feature vectors are computed from gray-level features, moment-invariant-based features, Gabor filter responses, intensity features, and vesselness features; the final feature vector retains only the most prominent features.
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in the measurement space is processed by DTW, the error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
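For reference, the classic DTW recurrence the pipeline's first stage relies on, in a minimal NumPy form for 1-D sequences (the paper applies it to multivariate process vectors, so the per-step cost would be a vector distance there):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences, aligning
    signals that evolve at different speeds before feature extraction."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```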
Real-time object-to-features vectorisation via Siamese neural networks
NASA Astrophysics Data System (ADS)
Fedorenko, Fedor; Usilin, Sergey
2017-03-01
Object-to-features vectorisation is a hard problem to solve for objects that can be hard to distinguish. Siamese and triplet neural networks are among the more recent tools used for such tasks. However, most networks used are very deep networks that prove to be hard to compute in the Internet of Things setting. In this paper, a computationally efficient neural network is proposed for real-time object-to-features vectorisation into a Euclidean metric space. We use the L2 distance to reflect feature vector similarity during both training and testing. In this way, the feature vectors we develop can be easily classified using a K-Nearest Neighbours classifier. Such an approach can be used to train networks to vectorise such 'problematic' objects as images of human faces and keypoint image patches, for example keypoints on Arctic maps and surrounding marine areas.
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, a back-propagation neural network and a generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person, apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method, and for training the neural networks, comprise all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
Diagnostic methodology for incipient system disturbance based on a neural wavelet approach
NASA Astrophysics Data System (ADS)
Won, In-Ho
Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from the system disturbance can be neglected or identified as noise. Because the knowledge and information available from the measurements is incomplete or inexact, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic effects of wavelets and neural networks present more strength and less weakness than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features that apply the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Comparisons among the three proposed feature vectors and with statistical techniques show that the variance feature extractor provides the better approach in these applications.
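A minimal sketch of a wavelet feature extractor of this kind, assuming the PyWavelets package; per-subband variance is used here because the abstract singles out the variance feature, while the wavelet family and decomposition depth are illustrative choices.

```python
import numpy as np
import pywt   # assumes the PyWavelets package is installed

def wavelet_feature_vector(signal, wavelet='db4', level=5):
    """Concise feature vector for a classifier: the variance of the wavelet
    coefficients in each subband of a multilevel decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.var(c) for c in coeffs])   # one feature per subband
```

How the variance scales across subbands is what ties this to the fractal notion mentioned above: self-similar signals show a characteristic power-law decay of coefficient variance with scale.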
Advances in reprogramming somatic cells to induced pluripotent stem cells.
Patel, Minal; Yang, Shuying
2010-09-01
Traditionally, nuclear reprogramming of cells has been performed by transferring somatic cell nuclei into oocytes, by combining somatic and pluripotent cells through cell fusion, and by genetic integration of factors into somatic cell chromatin. All of these techniques change gene expression, which in turn leads to a change in cell fate. Here we discuss recent advances in generating induced pluripotent stem cells, different reprogramming methods and clinical applications of iPS cells. Viral vectors have been used to transfer transcription factors (Oct4, Sox2, c-myc, Klf4, and nanog) to induce reprogramming of mouse fibroblasts, neural stem cells, neural progenitor cells, keratinocytes, B lymphocytes and meningeal membrane cells towards pluripotency. Human fibroblasts, neural cells, blood and keratinocytes have also been reprogrammed towards pluripotency. In this review we discuss the use of viral vectors for reprogramming both animal and human stem cells. Currently, many studies are also searching for alternatives to viral vectors carrying transcription factors for reprogramming cells. These include plasmid transfection, the piggyBac transposon system, and the piggyBac transposon system combined with a non-viral vector system. Applications of these techniques are discussed in detail, including their advantages and disadvantages. Finally, current clinical applications of induced pluripotent stem cells and their limitations are also reviewed. Thus, this review is a summary of current research advances in reprogramming cells into induced pluripotent stem cells.
Memory-efficient decoding of LDPC codes
NASA Technical Reports Server (NTRS)
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
NASA Technical Reports Server (NTRS)
Mitchell, Paul H.
1991-01-01
F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.
Support vector machine (SVM) was applied for land-cover characterization using MODIS time-series data. Classification performance was examined with respect to training sample size, sample variability, and landscape homogeneity (purity). The results were compared to two convention...
Chakravarthi, Srikant; Monroy-Sosa, Alejandro; Gonen, Lior; Fukui, Melanie; Rovin, Richard; Kojis, Nathaniel; Lindsay, Mark; Khalili, Sammy; Celix, Juanita; Corsten, Martin; Kassam, Amin B
2018-06-01
Endoscopic endonasal access to the jugular foramen and occipital condyle - the transcondylar-transtubercular approach - is anatomically complex and requires detailed knowledge of the relative position of critical neurovascular structures, in order to avoid inadvertent injury and resultant complications. However, access to this region can be confusing, as the orientation and relationships of osseous, vascular, and neural structures are very different from those in traditional dorsal approaches. This review aims at providing an organizational construct for a more understandable framework in accessing the transcondylar-transtubercular window. The region can be conceptualized using a three-vector coordinate system: vector 1 represents a dorsal or ventral corridor; vector 2 represents the outer and inner circumferential anatomical limits, along which key osseous, vascular, and neural landmarks are organized in an "onion-skin" fashion based on a 360-degree skull base model; and vector 3 represents the final core or target of the surgical corridor. The creation of an organized "global-positioning system" may better guide the surgeon in accessing the far-medial transcondylar-transtubercular region, and related pathologies, and help understand the surgical limits to the occipital condyle and jugular foramen - the ventral posterolateral corridor - via the endoscopic endonasal approach.
A feedforward artificial neural network based on quantum effect vector-matrix multipliers.
Levy, H J; McGill, T C
1993-01-01
The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.
A Software Package for Neural Network Applications Development
NASA Technical Reports Server (NTRS)
Baran, Robert H.
1993-01-01
Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, creates a TSR routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
NASA Astrophysics Data System (ADS)
Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.
2015-06-01
The performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function network, and a neuro-fuzzy network with a local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps, each with a 20 s plateau, is applied to the micro-heater of the sensor in the presence of 12 different target gases, each at 11 concentration levels. In each test, the hidden-layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from the local linear neuro-fuzzy and radial-basis-function networks, with recognition rates of 96.27% and 90.74%, respectively.
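A minimal sketch of the evaluation step, assuming scikit-learn: project the weight-derived feature vectors to 3D with LDA and score the projection with a simple per-dimension Fisher ratio (between-class variance of class means over mean within-class variance). Names such as `W` and `gas_labels` are placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisher_ratio(Z, y):
    """Per-dimension Fisher discriminant ratio of a labelled embedding."""
    classes = np.unique(y)
    means = np.array([Z[y == c].mean(axis=0) for c in classes])
    within = np.mean([Z[y == c].var(axis=0) for c in classes], axis=0)
    return means.var(axis=0) / (within + 1e-12)

# W: one hidden-layer weight vector per test, labelled by target gas.
# lda = LinearDiscriminantAnalysis(n_components=3)
# Z = lda.fit_transform(W, gas_labels)
# print(fisher_ratio(Z, gas_labels))   # higher = more separable features
```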
Object recognition of real targets using modelled SAR images
NASA Astrophysics Data System (ADS)
Zherdev, D. A.
2017-12-01
In this work the problem of recognition is studied using SAR images. The recognition algorithm is based on the computation of conjugation indices with class vectors. The support subspaces for each class are constructed by excluding the most and least correlated vectors in a class. In the study we examine the possibility of significantly reducing the feature vector size, which decreases recognition time. The images of targets form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).
An Alternative to the Gauge Theoretic Setting
NASA Astrophysics Data System (ADS)
Schroer, Bert
2011-10-01
The standard formulation of quantum gauge theories results from the Lagrangian (functional integral) quantization of classical gauge theories. A more intrinsic quantum theoretical access in the spirit of Wigner's representation theory shows that there is a fundamental clash between the pointlike localization of zero mass (vector, tensor) potentials and the Hilbert space (positivity, unitarity) structure of QT. The quantization approach has no other way than to stay with pointlike localization and sacrifice the Hilbert space whereas the approach built on the intrinsic quantum concept of modular localization keeps the Hilbert space and trades the conflict creating pointlike generation with the tightest consistent localization: semiinfinite spacelike string localization. Whereas these potentials in the presence of interactions stay quite close to associated pointlike field strengths, the interacting matter fields to which they are coupled bear the brunt of the nonlocal aspect in that they are string-generated in a way which cannot be undone by any differentiation. The new stringlike approach to gauge theory also revives the idea of a Schwinger-Higgs screening mechanism as a deeper and less metaphoric description of the Higgs spontaneous symmetry breaking and its accompanying tale about "God's particle" and its mass generation for all the other particles.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
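For context, a minimal NumPy sketch of error diffusion against a fixed palette. A real SSQ palette would reduce the nearest-color step to a chain of scalar table lookups; a brute-force nearest neighbour stands in here, and the Floyd-Steinberg weights are the standard choice rather than anything specific to the paper.

```python
import numpy as np

def error_diffuse(img, palette):
    """Floyd-Steinberg error diffusion of an RGB float image (H, W, 3)
    onto a fixed palette (K, 3); returns palette indices (H, W)."""
    work = img.astype(float).copy()
    h, w, _ = work.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            k = np.argmin(((palette - old) ** 2).sum(axis=1))  # nearest color
            out[y, x] = k
            err = old - palette[k]                             # diffuse error
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out
```

In the paper's setting, both the palette design and the error metric would live in an opponent or visually uniform color space rather than raw RGB.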
Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.
1982-03-01
This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.
Covariant open bosonic string field theory on multiple D-branes in the proper-time gauge
NASA Astrophysics Data System (ADS)
Lee, Taejin
2017-12-01
We construct a covariant open bosonic string field theory on multiple D-branes, which reduces to a non-Abelian Yang-Mills gauge theory in the zero-slope limit. Making use of the first-quantized open bosonic string in the proper-time gauge, we convert the string amplitudes given by the Polyakov path integrals on string world sheets into those of the second-quantized theory. The world sheet diagrams generated by the constructed open string field theory are planar, in contrast to those of Witten's cubic string field theory; the constructed theory is nevertheless equivalent to Witten's cubic string field theory. Having obtained planar diagrams, we may adopt the light-cone string field theory technique to calculate multi-string scattering amplitudes with an arbitrary number of external strings. We examine in detail the three-string vertex diagram and the effective four-string vertex diagrams generated perturbatively by the three-string vertex at tree level. In the zero-slope limit, the string scattering amplitudes are identified precisely as those of non-Abelian Yang-Mills gauge theory if the external states are chosen to be massless vector particles.
NASA Astrophysics Data System (ADS)
Vaughan, Jennifer
2015-03-01
In the classical Kostant-Souriau prequantization procedure, the Poisson algebra of a symplectic manifold (M,ω) is realized as the space of infinitesimal quantomorphisms of the prequantization circle bundle. Robinson and Rawnsley developed an alternative to the Kostant-Souriau quantization process in which the prequantization circle bundle and metaplectic structure for (M,ω) are replaced by a metaplectic-c prequantization. They proved that metaplectic-c quantization can be applied to a larger class of manifolds than the classical recipe. This paper presents a definition for a metaplectic-c quantomorphism, which is a diffeomorphism of metaplectic-c prequantizations that preserves all of their structures. Since the structure of a metaplectic-c prequantization is more complicated than that of a circle bundle, we find that the definition must include an extra condition that does not have an analogue in the Kostant-Souriau case. We then define an infinitesimal quantomorphism to be a vector field whose flow consists of metaplectic-c quantomorphisms, and prove that the space of infinitesimal metaplectic-c quantomorphisms exhibits all of the same properties that are seen for the infinitesimal quantomorphisms of a prequantization circle bundle. In particular, this space is isomorphic to the Poisson algebra C^∞(M).
Classical Field Theory and the Stress-Energy Tensor
NASA Astrophysics Data System (ADS)
Swanson, Mark S.
2015-09-01
This book is a concise introduction to the key concepts of classical field theory for beginning graduate students and advanced undergraduate students who wish to study the unifying structures and physical insights provided by classical field theory without dealing with the additional complication of quantization. In that regard, there are many important aspects of field theory that can be understood without quantizing the fields. These include the action formulation, Galilean and relativistic invariance, traveling and standing waves, spin angular momentum, gauge invariance, subsidiary conditions, fluctuations, spinor and vector fields, conservation laws and symmetries, and the Higgs mechanism, all of which are often treated briefly in a course on quantum field theory. The variational form of classical mechanics and continuum field theory are both developed in the time-honored graduate level text by Goldstein et al (2001). An introduction to classical field theory from a somewhat different perspective is available in Soper (2008). Basic classical field theory is often treated in books on quantum field theory. Two excellent texts where this is done are Greiner and Reinhardt (1996) and Peskin and Schroeder (1995). Green's function techniques are presented in Arfken et al (2013).
Stewart, Terrence C; Eliasmith, Chris
2013-06-01
Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).
Optical computing and image processing using photorefractive gallium arsenide
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen; Liu, Duncan T. H.
1990-01-01
Recent experimental results on matrix-vector multiplication and multiple four-wave mixing using GaAs are presented. Attention is given to a simple concept of using two overlapping holograms in GaAs to do two matrix-vector multiplication processes operating in parallel with a common input vector. This concept can be used to construct high-speed, high-capacity, reconfigurable interconnection and multiplexing modules, important for optical computing and neural-network applications.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm based on multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals are recorded based on a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of the other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
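A minimal sketch of the entropy feature construction described above, assuming the wavelet packet decomposition coefficients have already been computed (the node arrays and background map below are hypothetical placeholders, not the paper's exact formulation):

```python
import numpy as np

def shannon_entropy(coeffs, eps=1e-12):
    """Shannon entropy of one node's coefficients over its normalized energy."""
    energy = coeffs ** 2
    p = energy / (energy.sum() + eps)      # energy distribution within the node
    return -np.sum(p * np.log(p + eps))

def entropy_feature_vector(nodes, background_map):
    """Per-node entropies minus the background-signal entropy map."""
    ent = np.array([shannon_entropy(c) for c in nodes])
    return ent - background_map
```

The components with the strongest discriminating power would then be selected from this vector before it is passed to the RBF network.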
Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J
2011-01-01
Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for exploring its deployment on high-throughput computing platforms, with the hypothesis that such an exercise would yield performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the receiver operating characteristic (ROC) transfer function, with each assessment yielding an associated area-under-the-curve (AUC) figure of merit. The automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful integration with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
NASA Astrophysics Data System (ADS)
Patkin, M. L.; Rogachev, G. N.
2018-02-01
A method for constructing a multi-agent control system for mobile robots based on reinforcement learning with deep neural networks is considered. The control system is synthesized via reinforcement learning with a modified Actor-Critic method, in which the Actor module is divided into an Action Actor and a Communication Actor so as to simultaneously control the mobile robots and communicate with partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The Actor and Critic functions are approximated by deep neural networks. The Critic's value function is trained using the TD-error method and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and the method was evaluated in computer simulation on the control problem of two robots pursuing two goals.
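A minimal sketch of the Actor split described above, with hypothetical dimensions and tiny two-layer networks standing in for the deep networks (the TD-error/DDPG training loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    # Stand-in for a deep network: two tanh layers
    return np.tanh(W2 @ np.tanh(W1 @ x))

obs_dim, msg_dim, act_dim, hid = 8, 3, 2, 16  # hypothetical sizes
W = {k: rng.normal(scale=0.1, size=s) for k, s in {
    "a1": (hid, obs_dim + msg_dim), "a2": (act_dim, hid),  # Action Actor
    "c1": (hid, obs_dim), "c2": (msg_dim, hid),            # Communication Actor
}.items()}

def step(own_obs, partner_msg):
    # Communication Actor emits a real-valued message for the partner
    msg_out = mlp(own_obs, W["c1"], W["c2"])
    # Action Actor acts on the observation augmented with the received message
    action = mlp(np.concatenate([own_obs, partner_msg]), W["a1"], W["a2"])
    return action, msg_out

action, msg = step(rng.normal(size=obs_dim), np.zeros(msg_dim))
```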
Analyzing neural responses with vector fields.
Buneo, Christopher A
2011-04-15
Analyzing changes in the shape and scale of single cell response fields is a key component of many neurophysiological studies. Typical analyses of shape change involve correlating firing rates between experimental conditions or "cross-correlating" single cell tuning curves by shifting them with respect to one another and correlating the overlapping data. Such shifting results in a loss of data, making interpretation of the resulting correlation coefficients problematic. The problem is particularly acute for two dimensional response fields, which require shifting along two axes. Here, an alternative method for quantifying response field shape and scale based on correlation of vector field representations is introduced. The merits and limitations of the methods are illustrated using both simulated and experimental data. It is shown that vector correlation provides more information on response field changes than scalar correlation without requiring field shifting and concomitant data loss. An extension of this vector field approach is also demonstrated which can be used to identify the manner in which experimental variables are encoded in studies of neural reference frames.
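As a sketch, one simple vector correlation index can be computed without any field shifting; this is one of several definitions in the literature, and the paper's exact index may differ:

```python
import numpy as np

def vector_correlation(U, V):
    """Correlation between two vector fields U, V of shape (n, 2):
    mean-centred componentwise products normalized by field magnitudes."""
    U0 = U - U.mean(axis=0)
    V0 = V - V.mean(axis=0)
    return np.sum(U0 * V0) / np.sqrt(np.sum(U0 ** 2) * np.sum(V0 ** 2))
```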
Mathematics of Quantization and Quantum Fields
NASA Astrophysics Data System (ADS)
Dereziński, Jan; Gérard, Christian
2013-03-01
Preface; 1. Vector spaces; 2. Operators in Hilbert spaces; 3. Tensor algebras; 4. Analysis in L2(Rd); 5. Measures; 6. Algebras; 7. Anti-symmetric calculus; 8. Canonical commutation relations; 9. CCR on Fock spaces; 10. Symplectic invariance of CCR in finite dimensions; 11. Symplectic invariance of the CCR on Fock spaces; 12. Canonical anti-commutation relations; 13. CAR on Fock spaces; 14. Orthogonal invariance of CAR algebras; 15. Clifford relations; 16. Orthogonal invariance of the CAR on Fock spaces; 17. Quasi-free states; 18. Dynamics of quantum fields; 19. Quantum fields on space-time; 20. Diagrammatics; 21. Euclidean approach for bosons; 22. Interacting bosonic fields; Subject index; Symbols index.
Perceptual distortion analysis of color image VQ-based coding
NASA Astrophysics Data System (ADS)
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer to precisely control color. We then obtained psychophysical judgements to measure how well these methods work to minimize perceptual distortion in a variety of color spaces.
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
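A serial sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion; the parallel algorithm in the second paper distributes exactly these independent per-block searches, and the block and search-range sizes below are illustrative assumptions:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """For each block of `curr`, find the displacement into `prev`
    (within +/- search pixels) minimizing the SAD matching error."""
    H, W = curr.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            ref = curr[by:by+block, bx:bx+block].astype(np.int32)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        cand = prev[y:y+block, x:x+block].astype(np.int32)
                        sad = np.abs(cand - ref).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_dv
    return vectors
```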
NASA Astrophysics Data System (ADS)
Rost, E.; Shephard, J. R.
1992-08-01
This report discusses the following topics: exact 1-loop vacuum polarization effects in 1 + 1 dimensional QHD; exact 1-fermion loop contributions in 1 + 1 dimensional solitons; exact scalar 1-loop contributions in 1 + 3 dimensions; exact vacuum calculations in a hyper-spherical basis; relativistic nuclear matter with self-consistent correlation energy; consistent RHA-RPA for finite nuclei; transverse response functions in the Δ-resonance region; hadronic matter in a nontopological soliton model; scalar and vector contributions to the p̄p → Λ̄Λ reaction; 0+ and 2+ strengths in pion double-charge exchange to double giant-dipole resonances; and nucleons in a hybrid sigma model including a quantized pion field.
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
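A sketch of the mean-removal plus vector quantization step in the encoding chain above (the codebook and block sizes are hypothetical; threshold truncation and the modified Huffman stage are omitted):

```python
import numpy as np

def encode_block(block, codebook):
    """Mean-removed VQ: subtract the block mean, pick the nearest
    codeword by squared distance, and transmit (mean, index)."""
    mean = block.mean()
    residual = (block - mean).ravel()
    d = np.sum((codebook - residual) ** 2, axis=1)  # distance to each codeword
    return mean, int(np.argmin(d))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 16))       # 256 codewords for 4x4 blocks
mean, idx = encode_block(rng.normal(size=(4, 4)), codebook)
```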
Distorted Character Recognition Via An Associative Neural Network
NASA Astrophysics Data System (ADS)
Messner, Richard A.; Szu, Harold H.
1987-03-01
The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in on-going neural network architecture modeling, and secondly, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing the machine an ability to recognize distorted objects within the same object class. It is the authors' belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a 2-dimensional Walsh transform of an input cartesian image field, then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors. These vector patterns are then stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's cross-correlation approach [1]). The second approach is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage, and a discussion of the use of the proposed neural architectures, are included.
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
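The O(log P) global summation mentioned above corresponds to a pairwise tree reduction over the processors' partial sums; a minimal serial sketch of the communication pattern:

```python
def tree_sum(partial_sums):
    """Pairwise (tree) reduction: O(log P) parallel depth for P values."""
    vals = list(partial_sums)
    while len(vals) > 1:
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries over to the next round
            paired.append(vals[-1])
        vals = paired
    return vals[0]

assert tree_sum(range(64)) == sum(range(64))
```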
A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets
NASA Astrophysics Data System (ADS)
Porwal, A.; Carranza, J.; Hale, M.
2004-12-01
A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
Development of the disable software reporting system on the basis of the neural network
NASA Astrophysics Data System (ADS)
Gavrylenko, S.; Babenko, O.; Ignatova, E.
2018-04-01
The PE structure of malicious and benign software is analyzed, features are highlighted, and binary feature vectors are obtained and used as inputs for training the neural network. A software model for detecting malware based on the ART-1 neural network was developed, optimal similarity coefficients were found, and testing was performed. The results show that the developed system can be used to identify malicious software within computer protection systems.
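A minimal ART-1-style sketch for binary feature vectors, with a vigilance (similarity) threshold and fast learning by prototype intersection; the paper's exact choice function and optimal coefficients are not given here, so the values below are assumptions:

```python
import numpy as np

def art1(inputs, rho=0.7):
    """Cluster binary vectors; rho is the vigilance (similarity) threshold."""
    prototypes, labels = [], []
    for x in inputs:
        x = np.asarray(x, dtype=bool)
        assigned = -1
        # try categories in order of decreasing overlap with the input
        order = np.argsort([-(p & x).sum() for p in prototypes]) if prototypes else []
        for j in order:
            if (prototypes[j] & x).sum() / max(x.sum(), 1) >= rho:  # vigilance test
                prototypes[j] = prototypes[j] & x   # fast learning: intersect
                assigned = j
                break
        if assigned < 0:                            # mismatch: create a new class
            prototypes.append(x.copy())
            assigned = len(prototypes) - 1
        labels.append(assigned)
    return labels, prototypes
```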
BRST quantization of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armendariz-Picon, Cristian; Şengör, Gizem
2016-11-08
BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.
Cascade Error Projection: A Learning Algorithm for Hardware Implementation
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1996-01-01
In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm operates on only one layer, whereas the other set of weights can be calculated deterministically. In association with the dynamical stepsize-change concept for converting the weight update from an infinite space into a finite space, the relation between the current stepsize and the previous energy level is also given, and the estimation procedure for the optimal stepsize is used to validate our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select from other methods (such as conjugate gradient descent or Newton's second-order method) one which will be a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis establishes the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or greater weight quantization is sufficient for training a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower bit weight quantization.
Deformation of second and third quantization
NASA Astrophysics Data System (ADS)
Faizal, Mir
2015-03-01
In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.
Application of neural based estimation algorithm for gait phases of above knee prosthesis.
Tileylioğlu, E; Yilmaz, A
2015-01-01
In this study, two gait phase estimation methods, which utilize a rule-based quantization and an artificial neural network model respectively, are developed and applied to a microcontroller-based semi-active knee prosthesis in order to respond to user demands and adapt to environmental conditions. In this context, an experimental environment has been set up in which gait data are collected synchronously from both inertial and image-based measurement systems. The inertial measurement system, which incorporates MEMS accelerometers and gyroscopes, is used to perform direct motion measurement through the microcontroller, while the image-based measurement system is employed for producing the verification data and assessing the success of the prosthesis. Embedded algorithms dynamically normalize the input data prior to gait phase estimation. Real-time analyses of the two methods revealed that the embedded ANN-based approach performs slightly better than the rule-based algorithm and has the advantage of being easily scalable, thus able to accommodate additional input parameters within the microcontroller constraints.
Research on bearing fault diagnosis of large machinery based on mathematical morphology
NASA Astrophysics Data System (ADS)
Wang, Yu
2018-04-01
To study automatic fault diagnosis of large machinery based on support vector machines (SVM), four common fault types of large machinery are considered and an SVM is used to classify and identify them. The extracted feature vectors serve as inputs, and the classifier is trained and evaluated with a multi-class method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation. The SVM is then compared with a BP neural network. The results show that the SVM trains quickly and achieves high classification accuracy, making it well suited to fault diagnosis research in large machinery. It can therefore be concluded that the training speed of support vector machines is fast and their performance is good.
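A sketch of the cross-validated parameter search using scikit-learn, with synthetic stand-in data (the real inputs would be the extracted fault feature vectors; the parameter grid is an assumption):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # placeholder feature vectors
y = rng.integers(0, 4, size=200)          # four fault classes
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```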
HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2005-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
Hybrid Neural Network and Support Vector Machine Method for Optimization
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2007-01-01
System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
NASA Astrophysics Data System (ADS)
Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.
2017-12-01
The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q≤-1, and 1/2α ln | q|
Detection of laryngeal function using speech and electroglottographic data.
Childers, D G; Bae, K S
1992-01-01
The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ) and 2) analysis of the EGG signal using time interval and amplitude difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity waveform by inverse filtering techniques. The EGG procedure was to evaluate data that was "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for providing an excellent assessment of laryngeal function. A threshold-based procedure gave 75.9% and 69.0% probability of pathological detection using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.
Meson effective mass in the isospin medium in hard-wall AdS/QCD model
NASA Astrophysics Data System (ADS)
Mamedov, Shahin
2016-02-01
We study the mass splitting of the light vector, axial-vector, and pseudoscalar mesons in the isospin medium in the framework of the hard-wall model. We write an effective mass definition for the interacting gauge fields and scalar field introduced in gauge field theory in the bulk of AdS space-time. Relying on holographic duality, we obtain a formula for the effective mass of a boundary meson in terms of a derivative operator over the extra bulk coordinate. The effective mass found in this way coincides with the one obtained by finding the poles of the two-point correlation function. In order to avoid introducing distinct infrared boundaries in the quantization formula for the different mesons of the same isotriplet, we introduce extra action terms at this boundary, which reduce the distinct boundary values to the same value. Profile function solutions and effective mass expressions are found for the in-medium ρ, a_1, and π mesons.
Quantization selection in the high-throughput H.264/AVC encoder based on the RD
NASA Astrophysics Data System (ADS)
Pastuszak, Grzegorz
2013-10-01
In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before the fractional part is discarded after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
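The rate-distortion selection amounts to minimizing the Lagrangian J = D + λR over the candidate quantization parameters; a sketch with hypothetical measurements:

```python
def best_qp(candidates, lam):
    """Pick the QP minimizing J = D + lambda * R, where `candidates`
    maps QP -> (distortion, rate) measured by re-quantizing the residuals."""
    return min(candidates, key=lambda qp: candidates[qp][0] + lam * candidates[qp][1])

measured = {24: (1500.0, 420), 26: (2100.0, 300), 28: (3000.0, 210)}  # QP -> (SSD, bits)
print(best_qp(measured, lam=4.0))
```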
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
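A sketch of a loss with the quantization-error term described above, assuming tanh-bounded continuous embeddings u and a binary similarity matrix s; the weighting alpha and the exact similarity term are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def hashing_loss(u, s, alpha=1.0):
    """u: (n, k) continuous embeddings in (-1, 1); s: (n, n) with 1 = similar."""
    b = np.sign(u)                                   # discrete binary codes
    inner = u @ u.T / u.shape[1]                     # scaled pairwise inner products
    semantic = np.mean((inner - (2 * s - 1)) ** 2)   # +1 targets for similar pairs
    quantization = np.mean((u - b) ** 2)             # binarization (quantization) error
    return semantic + alpha * quantization
```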
NASA Technical Reports Server (NTRS)
Kosko, Bart
1991-01-01
Mappings between fuzzy cubes are discussed. This level of abstraction provides a surprising and fruitful alternative to the propositional and predicate-calculus reasoning techniques used in expert systems. It allows one to reason with sets instead of propositions. Discussed here are fuzzy and neural function estimators, neural vs. fuzzy representation of structured knowledge, fuzzy vector-matrix multiplication, and fuzzy associative memory (FAM) system architecture.
USDA-ARS?s Scientific Manuscript database
Nepeta essential oil (Neo) (catnip) and its major component, nepetalactone, have long been known to repel insects including mosquitoes. However, the neural mechanisms through which these repellents are detected by mosquitoes, including the yellow fever mosquito Aedes aegypti, an important vector of...
Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation
ERIC Educational Resources Information Center
Hinton, Geoffrey; Osindero, Simon; Welling, Max; Teh, Yee-Whye
2006-01-01
We describe a way of modeling high-dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron-like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of…
Full Spectrum Conversion Using Traveling Pulse Wave Quantization
2017-03-01
Full Spectrum Conversion Using Traveling Pulse Wave Quantization. Michael S. Kappes; Mikko E. Waltari. IQ-Analog Corporation, San Diego, California. … temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete … pulse width measurements that are continuously generated, hence the name "traveling" pulse wave quantization. Our TPWQ-based ADC is composed of a …
Kim, Seung U; Nagai, Atsushi; Nakagawa, Eiji; Choi, Hyun B; Bang, Jung H; Lee, Hong J; Lee, Myung A; Lee, Yong B; Park, In H
2008-01-01
We document the protocols and methods for the production of immortalized cell lines of human neural stem cells from the human fetal central nervous system (CNS) cells by using a retroviral vector encoding v-myc oncogene. One of the human neural stem cell lines (HB1.F3) was found to express nestin and other specific markers for human neural stem cells, giving rise to three fundamental cell types of the CNS: neurons, astrocytes, and oligodendrocytes. After transplantation into the brain of mouse model of stroke, implanted human neural stem cells were observed to migrate extensively from the site of implantation into other anatomical sites and to differentiate into neurons and glial cells.
Zou, Lingyun; Wang, Zhengzhi; Huang, Jiaomin
2007-12-01
Subcellular location is one of the key biological characteristics of proteins. Position-specific profiles (PSP) have been introduced as important characteristics of proteins in this article. In this study, to obtain position-specific profiles, the Position Specific Iterative-Basic Local Alignment Search Tool (PSI-BLAST) has been used to search for protein sequences in a database. Position-specific scoring matrices are extracted from the profiles as one class of characteristics. Four-part amino acid compositions and 1st-7th order dipeptide compositions have also been calculated as the other two classes of characteristics. Therefore, twelve characteristic vectors are extracted from each of the protein sequences. Next, the characteristic vectors are weighted by a simple weighting function and input into a BP neural network predictor named PSP-Weighted Neural Network (PSP-WNN). The Levenberg-Marquardt algorithm is employed to adjust the weight matrices and thresholds during network training instead of the error back-propagation algorithm. With a jackknife test on the RH2427 dataset, PSP-WNN achieved a higher overall prediction accuracy of 88.4% than the predictions of a general BP neural network, a Markov model, and the fuzzy k-nearest neighbors algorithm on this dataset. In addition, the prediction performance of PSP-WNN was evaluated with a five-fold cross-validation test on the PK7579 dataset, and the prediction results were consistently better than those of the previous method based on several support vector machines using compositions of both amino acids and amino acid pairs. These results indicate that PSP-WNN is a powerful tool for subcellular localization prediction. At the end of the article, the influence on prediction accuracy of different weighting proportions among the three characteristic vector categories is discussed, and an appropriate proportion that increases the prediction accuracy is identified.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
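The uniform white-sequence noise model can be checked numerically: for a b-bit uniform quantizer the error power is Δ²/12, giving roughly 6 dB per bit of effective SNR. A short sketch (the test signal and bit depths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)          # full-scale test signal
for b in (4, 8, 12):
    delta = 2.0 / 2**b                   # quantizer step size
    xq = np.round(x / delta) * delta     # uniform quantization
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
    print(b, round(snr_db, 1))           # close to 6.02*b dB for this input
```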
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and the propagation delay of SET inverter. A new analytical model for the noise margin of SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter quantization threshold which is introduced for the first time in this paper. Quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that SET inverter designed with CT:CG=1/3 (where CT and CG are tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas
2008-02-01
Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to different (approximately 60° diverging) e-vector orientations. Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, coding any e-vector orientation by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons into compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass-neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus from 99% down to at least 18% polarization and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
An emergence of coordinated communication in populations of agents.
Kvasnicka, V; Pospichal, J
1999-01-01
The purpose of this article is to demonstrate that coordinated communication spontaneously emerges in a population composed of agents that are capable of specific cognitive activities. Internal states of agents are characterized by meaning vectors. Simple neural networks composed of one layer of hidden neurons perform cognitive activities of agents. An elementary communication act consists of the following: (a) two agents are selected, where one of them is declared the speaker and the other the listener; (b) the speaker codes a selected meaning vector onto a sequence of symbols and sends it to the listener as a message; and finally, (c) the listener decodes this message into a meaning vector and adapts his or her neural network such that the differences between speaker and listener meaning vectors are decreased. A Darwinian evolution enlarged by ideas from the Baldwin effect and Dawkins' memes is simulated by a simple version of an evolutionary algorithm without crossover. The agent fitness is determined by success of the mutual pairwise communications. It is demonstrated that agents in the course of evolution gradually do a better job of decoding received messages (they are closer to meaning vectors of speakers) and all agents gradually start to use the same vocabulary for the common communication. Moreover, if agent meaning vectors contain regularities, then these regularities are manifested also in messages created by agent speakers, that is, similar parts of meaning vectors are coded by similar symbol substrings. This observation is considered a manifestation of the emergence of a grammar system in the common coordinated communication.
Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds
NASA Astrophysics Data System (ADS)
Schlichenmaier, Martin
2018-04-01
For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product) are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star product of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization are treated. It turns out that all three are equivalent, but different.
NASA Astrophysics Data System (ADS)
Jaithwa, Ishan
Deployment of smart grid technologies is accelerating. Smart grids enable bidirectional flows of energy and energy-related communications. The future electricity grid will look very different from today's power system. Large variable renewable energy sources will provide a greater portion of electricity, small DERs and energy storage systems will become more common, and utilities will operate many different kinds of energy efficiency programs. All of these changes will add complexity to the grid and require operators to respond to fast dynamic changes to maintain system stability and security. This thesis investigates advanced control technology for grid integration of renewable energy sources and STATCOM systems, verifying it in real-time hardware experiments on two different platforms: dSPACE and OPAL-RT. Three controllers were first simulated in MATLAB to check the stability and safety of the system: conventional control, direct vector control, and an intelligent neural network control. They were then implemented in real time on the dSPACE and OPAL-RT hardware. The thesis then shows how the dynamic-programming (DP) methods employed to train the neural networks outperform the other controllers: an optimal control strategy is developed to ensure effective power delivery and to improve system stability. Through real-time hardware implementation it is shown that the neural vector control approach produces the fastest response time, low overshoot, and the best overall performance compared to the conventional standard vector control method and the DCC vector control technique. Finally, the entrepreneurial approach taken to drive the technologies from the lab to market via ORANGE ELECTRIC is discussed in brief.
Neural Network Target Identification System for False Alarm Reduction
NASA Technical Reports Server (NTRS)
Ye, David; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feed-forward back-propagation neural network (NN) is then trained to classify each feature vector and remove false positives. This paper discusses system performance testing and the parameter optimization process that adapts the system to various targets and datasets. The test results show that the system was successful in substantially reducing the false positive rate when tested on a sonar image dataset.
CNN universal machine as classification platform: an ART-like clustering algorithm.
Bálya, David
2003-12-01
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of ART networks, such as robust, plastic, and fault-tolerant behavior. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new-class creation. The algorithm is extended for supervised classification. The presented binary feature vector classification is implemented on existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
Capowski, Elizabeth E; Schneider, Bernard L; Ebert, Allison D; Seehus, Corey R; Szulc, Jolanta; Zufferey, Romain; Aebischer, Patrick; Svendsen, Clive N
2007-07-30
Human neural progenitor cells (hNPC) hold great potential as an ex vivo system for delivery of therapeutic proteins to the central nervous system. When cultured as aggregates, termed neurospheres, hNPC are capable of significant in vitro expansion. In the current study, we present a robust method for lentiviral vector-mediated gene delivery into hNPC that maintains the differentiation and proliferative properties of neurosphere cultures while minimizing the amount of viral vector used and controlling the number of insertion sites per population. This method results in long-term, stable expression even after differentiation of the hNPC to neurons and astrocytes and allows for generation of equivalent transgenic populations of hNPC. In addition, the in vitro analysis presented predicts the behavior of transgenic lines in vivo when transplanted into a rodent model of Parkinson's disease. The methods presented provide a powerful tool for assessing the impact of factors such as promoter systems or different transgenes on the therapeutic utility of these cells.
An artificial neural network model for periodic trajectory generation
NASA Astrophysics Data System (ADS)
Shankar, S.; Gander, R. E.; Wood, H. C.
A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
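A sketch of the error-pooling step: DCT quantization errors are scaled by the (adapted) visibility thresholds and pooled nonlinearly over the image. The Minkowski exponent beta=4 is a typical value in such models, assumed here rather than taken from the paper:

```python
import numpy as np

def perceptual_error(dct_errors, thresholds, beta=4.0):
    """dct_errors, thresholds: arrays over all blocks and DCT frequencies.
    Returns the total perceptual error via beta-norm (Minkowski) pooling."""
    jnd = np.abs(dct_errors) / thresholds        # errors in threshold units
    return np.sum(jnd ** beta) ** (1.0 / beta)   # nonlinear pooling over the image
```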
Sadeque, Farig; Xu, Dongfang; Bethard, Steven
2017-01-01
The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines when using the same feature sets. PMID:29075167
On-line determination of transient stability status using multilayer perceptron neural network
NASA Astrophysics Data System (ADS)
Frimpong, Emmanuel Asuming; Okyere, Philip Yaw; Asumadu, Johnson
2018-01-01
A scheme to predict transient stability status following a disturbance is presented. The scheme is activated upon the tripping of a line or bus and operates as follows: Two samples of frequency deviation values at all generator buses are obtained. At each generator bus, the maximum frequency deviation within the two samples is extracted. A vector is then constructed from the extracted maximum frequency deviations. The Euclidean norm of the constructed vector is calculated and then fed as input to a trained multilayer perceptron neural network which predicts the stability status of the system. The scheme was tested using data generated from the New England test system. The scheme successfully predicted the stability status of all two hundred and five disturbance test cases.
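The feature construction in this scheme (per-bus maximum frequency deviation, collapsed to a Euclidean norm and fed to an MLP) is simple enough to sketch. In the fragment below the training data, the stability labels, and the network size are all synthetic stand-ins; the paper trains on cases generated from the New England test system, and taking the per-bus maximum over absolute deviations is an assumption here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def stability_feature(freq_dev_samples):
    """freq_dev_samples: array of shape (2, n_buses), two post-disturbance
    samples of frequency deviation at every generator bus.
    Returns the Euclidean norm of the per-bus maximum deviations."""
    max_dev = np.abs(freq_dev_samples).max(axis=0)  # max deviation per bus
    return np.linalg.norm(max_dev)                  # single scalar input to the MLP

# Illustrative training on synthetic cases (real inputs would come from
# simulations of the New England test system)
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 2.0, size=(200, 1))            # norms from 200 disturbance cases
y = (X[:, 0] > 1.0).astype(int)                     # 1 = unstable (synthetic rule)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)

case = rng.normal(scale=0.5, size=(2, 10))          # two samples at 10 generator buses
print(clf.predict([[stability_feature(case)]]))
```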
Dielectric properties of classical and quantized ionic fluids.
Høye, Johan S
2010-06-01
We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions, where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we focus upon a time-dependent electromagnetic pair interaction, in which the electromagnetic vector potential, which depends upon currents, is present. Thus both density and current correlations are needed to evaluate the influence of this interaction. We then utilize the fact that densities and currents can be expressed through polarizations, by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode to the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules, where retardation effects are now also taken into account.
NASA Astrophysics Data System (ADS)
Visinescu, M.
2012-10-01
Hidden symmetries in a covariant Hamiltonian framework are investigated. The special role of the Stäckel-Killing and Killing-Yano tensors is pointed out. The covariant phase space is extended to include external gauge fields and scalar potentials. We investigate the possibility for a higher-order symmetry to survive when electromagnetic interactions are taken into account. A concrete realization of this possibility is given by the Killing-Maxwell system. The classical conserved quantities do not generally transfer to the quantized systems, producing quantum gravitational anomalies. As a rule the conformal extension of the Killing vectors and tensors does not produce symmetry operators for the Klein-Gordon operator.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).
High performance compression of science data
NASA Technical Reports Server (NTRS)
Storer, James A.; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
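As a point of reference for the second paper, the core of block-matching motion estimation can be sketched in a few lines. The version below is a serial, exhaustive search minimizing the sum of absolute differences (SAD) over a +/-7-pixel window; the paper's contribution is a parallel formulation, which this sketch does not attempt, and the search range and SAD criterion are illustrative assumptions.

```python
import numpy as np

def best_match(block, ref_frame, top, left, search=7):
    """Exhaustive block matching: find the displacement (dy, dx) within
    +/-search pixels that minimizes the sum of absolute differences."""
    h, w = block.shape
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue                            # candidate falls outside the frame
            sad = np.abs(ref_frame[y:y+h, x:x+w] - block).sum()
            if sad < best[2]:
                best = (dy, dx, sad)
    return best                                     # displacement with minimum error

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
cur_block = ref[20:28, 32:40]           # a block that moved by (0, 0) for the demo
print(best_match(cur_block, ref, 20, 32))
```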
Coherent distributions for the rigid rotator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigorescu, Marius
2016-06-15
Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
SAR data compression: Application, requirements, and designs
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.
Quantum theory of structured monochromatic light
NASA Astrophysics Data System (ADS)
Punnoose, Alexander; Tu, J. J.
2017-08-01
Applications that envisage utilizing the orbital angular momentum (OAM) at the single photon level assume that the OAM degrees of freedom of the photons are orthogonal. To test this critical assumption, we quantize the beam-like solutions of the vector Helmholtz equation from first principles. We show that although the photon operators of a diffracting monochromatic beam do not in general satisfy the canonical commutation relations, implying that the photon states in Fock space are not orthogonal, the states are bona fide eigenstates of the number and Hamiltonian operators. As a result, the representation for the photon operators presented in this work forms a natural basis to study structured monochromatic light at the single photon level.
NASA Astrophysics Data System (ADS)
Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.
2012-04-01
This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating the current quantization effects.
MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion
NASA Astrophysics Data System (ADS)
Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong
This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
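The Kronecker-product vectorization and ODE-solver workflow described here translates directly to other environments. The sketch below, a minimal re-creation in Python, solves the gradient-network dynamics dX/dt = -gamma A^T (A X - I) by vectorizing the matrix differential equation with a Kronecker product and handing the resulting ODE to scipy's RK45 integrator (roughly the counterpart of MATLAB's ode45); the test matrix and the gain gamma are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 1.0], [2.0, 3.0]])
n = A.shape[0]
gamma = 100.0                                     # design (learning-rate) parameter

# Vectorized dynamics: d vec(X)/dt = -gamma (I kron A^T A) vec(X) + gamma vec(A^T),
# using column-major vec so that vec(A^T A X) = (I kron A^T A) vec(X).
M = np.kron(np.eye(n), A.T @ A)                   # Kronecker product: MDE -> VDE (an ODE)
b = gamma * A.T.flatten(order="F")                # gamma * vec(A^T)

def rhs(t, x):
    return -gamma * M @ x + b

sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(n * n), method="RK45")  # RK45 ~ ode45
X = sol.y[:, -1].reshape((n, n), order="F")       # un-vectorize the final state
print(np.round(X @ A, 3))                         # should be close to the identity
```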
Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei
2017-11-07
Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance as a result of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only slightly. Its tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios
2017-01-01
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387
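The central idea that eRBP inherits from random BP, replacing the transposed weight matrix in the backward pass with a fixed random matrix, can be shown in a rate-based toy form. The sketch below is feedback alignment on a tiny regression task; it deliberately omits the spiking dynamics, the two-compartment neuron, and the event-driven updates that define eRBP itself, and the task and layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hid, n_out = 20, 15, 5
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
B = rng.normal(scale=0.1, size=(n_hid, n_out))  # fixed random feedback matrix
lr = 0.05

for _ in range(1000):                           # toy regression task
    x = rng.normal(size=n_in)
    target = np.sin(x[:n_out])                  # arbitrary target function
    h = np.tanh(W1 @ x)                         # hidden layer
    y = W2 @ h                                  # linear output layer
    e = y - target                              # output error
    # Random BP: the hidden error uses the fixed random B instead of W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer((B @ e) * (1 - h**2), x)
```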
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior.
Detection of potential mosquito breeding sites based on community sourced geotagged images
NASA Astrophysics Data System (ADS)
Agarwal, Ankit; Chaudhuri, Usashi; Chaudhuri, Subhasis; Seetharaman, Guna
2014-06-01
Various initiatives have been taken all over the world to involve citizens in the collection and reporting of data to make better and informed data-driven decisions. Our work shows how geotagged images collected through the general population can be used to combat Malaria and Dengue by identifying and visualizing localities that contain potential mosquito breeding sites. Our method first employs image quality assessment on the client side to reject images with distortions such as blur and artifacts. Each geotagged image received on the server is converted into a feature vector using the bag-of-visual-words model. We train an SVM classifier on a histogram-based feature vector, obtained after vector quantization of SIFT features, to discriminate images containing either a small stagnant water body such as a puddle, or open containers, tires, bushes, etc., from those that contain flowing water, manicured lawns, tires attached to a vehicle, etc. A geographical heat map is generated by assigning each location a probability of being a potential mosquito breeding ground, using feature-level fusion or the max approach presented in the paper. The heat map thus generated can be used by the concerned health authorities to take appropriate action and to promote civic awareness.
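The quantization step in this pipeline (clustering SIFT descriptors into a visual vocabulary, then histogramming each image's descriptors against it) can be sketched compactly. In the fragment below the descriptors are random stand-ins for real SIFT output, and the vocabulary size, the k-means codebook, and the RBF-kernel SVM are illustrative choices rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bovw_histogram(descriptors, codebook):
    """Vector-quantize local descriptors (e.g., 128-D SIFT) against a
    learned codebook and return a normalized visual-word histogram."""
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(5)
# Stand-ins for SIFT descriptors extracted from 20 training images
train_desc = [rng.normal(size=(rng.integers(50, 100), 128)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)            # 1 = potential breeding site

codebook = KMeans(n_clusters=64, n_init=4).fit(np.vstack(train_desc))
X = np.array([bovw_histogram(d, codebook) for d in train_desc])
clf = SVC(probability=True).fit(X, labels)      # probabilities feed the heat map
```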
NASA Astrophysics Data System (ADS)
Wiederkehr, A. W.; Schmutz, H.; Motsch, M.; Merkt, F.
2012-08-01
Cold samples of oxygen molecules in supersonic beams have been decelerated from initial velocities of 390 and 450 m s-1 to final velocities in the range between 150 and 280 m s-1 using a 90-stage Zeeman decelerator. (2 + 1) resonance-enhanced-multiphoton-ionization (REMPI) spectra of the 3sσg 3Πg (C) ← X 3Σg− two-photon transition of O2 have been recorded to characterize the state selectivity of the deceleration process. The decelerated molecular sample was found to consist exclusively of molecules in the J″ = 2 spin-rotational component of the X 3Σg− ground state of O2. Measurements of the REMPI spectra using linearly polarized laser radiation with the polarization vector parallel to the decelerator axis, and thus to the magnetic-field vector of the deceleration solenoids, further showed that only the MJ″ = 2 magnetic sublevel of the N″ = 1, J″ = 2 spin-rotational level is populated in the decelerated sample, which therefore is characterized by a fully oriented total-angular-momentum vector. By maintaining a weak quantization magnetic field beyond the decelerator, the polarization of the sample could be maintained over the 5 cm distance separating the last deceleration solenoid and the detection region.
Two generalizations of Kohonen clustering
NASA Technical Reports Server (NTRS)
Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.
1993-01-01
The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution; these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
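The contrast the paper draws, winner-only updates versus updating every prototype, is easy to see side by side. Below, lvq_step is the winner-take-all update shared by LVQ and SHCM, while flvq_step moves every prototype with a fuzzy-membership weight; the membership formula is borrowed from FCM as an illustration and is not the paper's exact FLVQ learning rule.

```python
import numpy as np

def lvq_step(prototypes, x, lr):
    """Winner-take-all update (LVQ/SHCM style): only the closest
    prototype moves toward the input vector x."""
    i = np.argmin(((prototypes - x) ** 2).sum(axis=1))
    prototypes[i] += lr * (x - prototypes[i])
    return prototypes

def flvq_step(prototypes, x, lr, m=2.0):
    """FLVQ-flavored update: every prototype moves, weighted by an
    FCM-style fuzzy membership (illustrative form)."""
    d = ((prototypes - x) ** 2).sum(axis=1) + 1e-12
    u = (d[:, None] / d[None, :]) ** (1.0 / (m - 1.0))
    memberships = 1.0 / u.sum(axis=1)           # memberships sum to 1 over prototypes
    prototypes += lr * memberships[:, None] * (x - prototypes)
    return prototypes

rng = np.random.default_rng(14)
protos = rng.normal(size=(3, 2))
for x in rng.normal(size=(100, 2)):
    protos = flvq_step(protos, x, lr=0.05)
```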
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds
NASA Astrophysics Data System (ADS)
Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert
2014-06-01
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, electronic system, A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture described below is particularly devised for solving single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. In the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, for the decision class output. The network consists of three sequential functional modules. The first module is for feature extraction: it maps the input cluster to a set of singular-value features, or a feature vector. The feature vector is then input into the feature normalization module to normalize and balance it before being fed to the neural net classifier for classification. The neural net can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training resumes until the neural net has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. A back-propagation algorithm or support vector machine can be utilized for the classification and recognition.
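A minimal stand-in for the first two modules, singular-value feature extraction followed by normalization, is sketched below. Using the singular values of a centered point cluster as shape features is one plausible reading of the "singular value features" named above; the classifier stage is omitted, and all sizes are illustrative.

```python
import numpy as np

def sv_features(cluster_xyz):
    """Feature extraction module: singular values of a centered 3D point
    cluster summarize its shape (linear, planar, or volumetric)."""
    centered = cluster_xyz - cluster_xyz.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)   # 3 singular values
    return s / s.max()                              # normalization module

rng = np.random.default_rng(6)
pole = rng.normal(size=(200, 3)) * np.array([0.05, 0.05, 2.0])  # elongated cluster
wall = rng.normal(size=(200, 3)) * np.array([2.0, 0.05, 2.0])   # planar cluster
print(sv_features(pole), sv_features(wall))   # feature vectors fed to the classifier
```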
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
NASA Astrophysics Data System (ADS)
Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.
2018-01-01
We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.
Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan
2016-01-01
Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, Community Health Activities Model Program for Seniors questionnaire, six minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
Scalable hybrid computation with spikes.
Sarpeshkar, Rahul; O'Halloran, Micah
2002-09-01
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
Nonlinear calibration for petroleum water content measurement using PSO
NASA Astrophysics Data System (ADS)
Li, Mingbao; Zhang, Jiawei
2008-10-01
A new algorithm for strapdown inertial navigation system (SINS) state estimation based on neural networks is introduced. In the training strategy, the error vector and its delay are introduced. This error vector is made up of the position and velocity differences between the estimates of the system and the outputs of GPS. After state prediction and state update, the states of the system are estimated. After off-line training, the network can approximate the state switching of the SINS, and after on-line training, the state estimation precision can be improved further by reducing network output errors. Then the network convergence is discussed. In the end, several simulations with different noise levels are given. The results show that the neural network state estimator has lower noise sensitivity and better noise immunity than a Kalman filter.
Software tool for data mining and its applications
NASA Astrophysics Data System (ADS)
Yang, Jie; Ye, Chenzhou; Chen, Nianyi
2002-03-01
A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher, clustering, hyperenvelop, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function models: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, Hyper Envelop, support vector machines, and visualization. The principles and knowledge representation of some function models of data mining are described. The software tool is implemented in Visual C++ under Windows 2000. Nonmonotony in data mining is dealt with by concept hierarchy and layered mining. The software tool has been satisfactorily applied to the prediction of regularities of the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.
Classification of subsurface objects using singular values derived from signal frames
Chambers, David H; Paglieroni, David W
2014-05-06
The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N.times.N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
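A rough sketch of the described feature pipeline, FFT of the return signals, an N x N complex matrix per designated sub-band, and singular values as features, follows. How the multiple frequency samples of a sub-band are collapsed into a single N x N matrix is not specified here, so the mean over the sub-band is an assumption of this sketch, as are the array size and band edges.

```python
import numpy as np

def subband_sv_features(returns, bands):
    """returns: (N, N, T) real-valued return signals for every
    transmitter-receiver pair of an N-transceiver multistatic array.
    For each designated sub-band, form an NxN complex matrix of
    sub-band samples and take its singular values as features."""
    spectra = np.fft.rfft(returns, axis=-1)         # complex-valued spectra
    feats = []
    for lo, hi in bands:                            # user-designated sub-bands
        sub = spectra[:, :, lo:hi].mean(axis=-1)    # NxN complex matrix (assumed reduction)
        feats.extend(np.linalg.svd(sub, compute_uv=False))
    return np.array(feats)                          # feature vector of singular values

rng = np.random.default_rng(7)
signals = rng.normal(size=(4, 4, 256))              # N=4 transceivers, 256 samples
print(subband_sv_features(signals, bands=[(2, 10), (10, 30)]))
```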
Equivalent Skin Analysis of Wing Structures Using Neural Networks
NASA Technical Reports Server (NTRS)
Liu, Youhua; Kapania, Rakesh K.
2000-01-01
An efficient method of modeling trapezoidal built-up wing structures is developed by coupling, in an indirect way, an Equivalent Plate Analysis (EPA) with Neural Networks (NN). Assumed to behave like a Mindlin plate, the wing is solved using the Ritz method with Legendre polynomials employed as the trial functions. This analysis method can be made more efficient by avoiding most of the computational effort spent on calculating contributions to the stiffness and mass matrices from each spar and rib. This is accomplished by replacing the wing inner structure with an "equivalent" material that is combined with the skin and whose properties are simulated by neural networks. The constitutive matrix, which relates the stress vector to the strain vector, and the density of the equivalent material are obtained by enforcing mass and stiffness matrix equalities with regard to the EPA in a least-squares sense. Neural networks for the material properties are trained in terms of the design variables of the wing structure. Examples show that the present method, which can be called an Equivalent Skin Analysis (ESA) of the wing structure, is more efficient than the EPA while still giving fairly good results. The present ESA is very promising for use at the early stages of wing structure design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serwer, Philip, E-mail: serwer@uthscsa.edu; Wright, Elena T.; Liu, Zheng
DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.
Dimensional quantization effects in the thermodynamics of conductive filaments
NASA Astrophysics Data System (ADS)
Niraula, D.; Grice, C. R.; Karpov, V. G.
2018-06-01
We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.
Nearly associative deformation quantization
NASA Astrophysics Data System (ADS)
Vassilevich, Dmitri; Oliveira, Fernando Martins Costa
2018-04-01
We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process more efficient, we extract SIFT keypoints in hexagonal-sampling images. Instead of matching SIFT features, first the sparse representation of each SIFT keypoint is computed according to the constructed dictionary; second, these sparse vectors are quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave a higher face recognition rate than other methods on the ORL and Yale B face databases; also, the effectiveness of hexagonal sampling in the proposed method is verified.
NASA Astrophysics Data System (ADS)
Faghihi, M. J.; Tavassoly, M. K.
2012-02-01
In this paper, we study the interaction between a three-level atom and a quantized single-mode field with ‘intensity-dependent coupling’ in a ‘Kerr medium’. The three-level atom is considered to be in a Λ-type configuration. Under particular initial conditions, which may be prepared for the atom and the field, the dynamical state vector of the entire system will be explicitly obtained, for the arbitrary nonlinearity function f(n) associated with any physical system. Then, after evaluating the variation of the field entropy against time, we will investigate the quantum statistics as well as some of the nonclassical properties of the introduced state. During our calculations we investigate the effects of intensity-dependent coupling, Kerr medium and detuning parameters on the depth and domain of the nonclassicality features of the atom-field state vector. Finally, we compare our obtained results with those of V-type three-level atoms.
Measuring and Modeling Shared Visual Attention
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Gontar, Patrick
2016-01-01
Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test testing the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
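Both comparison methods described above are straightforward to prototype. The sketch below builds the one-hot AOI time series for two synthetic gaze sequences, averages them over non-overlapping windows, and applies the correlation and chi-square comparisons; the number of AOIs, the window length, and the similarity level of the second sequence are arbitrary, and the chi-square step assumes every AOI is fixated at least once in the window.

```python
import numpy as np
from scipy.stats import chi2_contingency

def aoi_proportions(fixations, n_aois, win):
    """Turn a sequence of fixated AOI indices into one-hot vectors and
    average over non-overlapping windows of length win."""
    onehot = np.eye(n_aois)[fixations]              # (T, N) indicator vectors
    T = (len(fixations) // win) * win
    return onehot[:T].reshape(-1, win, n_aois).mean(axis=1)

rng = np.random.default_rng(8)
a = rng.integers(0, 5, size=600)                    # crew member 1: AOI sequence
b = np.where(rng.random(600) < 0.7, a,              # crew member 2: similar sequence
             rng.integers(0, 5, size=600))
pa, pb = aoi_proportions(a, 5, 100), aoi_proportions(b, 5, 100)

# Method 1: time-varying correlation of the window-averaged vectors
r = [np.corrcoef(x, y)[0, 1] for x, y in zip(pa, pb)]

# Method 2: chi-square test on fixation counts within one window
# (assumes no AOI column is all-zero in the counts table)
counts = np.vstack([np.bincount(a[:100], minlength=5),
                    np.bincount(b[:100], minlength=5)])
chi2, p, _, _ = chi2_contingency(counts)
```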
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
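The jackknife evaluation described above, holding out each catchment in turn and predicting its quantile from the remaining sites, can be sketched as follows. sklearn has no least-squares SVR, so epsilon-SVR stands in for LS-SVR here, and the catchment descriptors, the linear target rule, and the kernel settings are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVR

rng = np.random.default_rng(9)
X = rng.normal(size=(32, 3))                    # descriptors for 32 catchments
y = X @ [1.5, -0.7, 0.3] + rng.normal(scale=0.2, size=32)  # severity quantiles

# Jackknife: hold out each catchment in turn, treating it as "ungauged"
errs = []
for tr, te in LeaveOneOut().split(X):
    model = SVR(kernel="rbf", C=10.0).fit(X[tr], y[tr])  # epsilon-SVR as LS-SVR stand-in
    errs.append(model.predict(X[te])[0] - y[te][0])
print("RMSE:", np.sqrt(np.mean(np.square(errs))))
```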
NASA Astrophysics Data System (ADS)
Yoshida, Yuki; Karakida, Ryo; Okada, Masato; Amari, Shun-ichi
2017-04-01
Weight normalization, an optimization method for neural networks recently proposed by Salimans and Kingma (2016), decomposes the weight vector of a neural network into a radial length and a direction vector, and the decomposed parameters follow their steepest-descent updates. They reported that learning with weight normalization achieves faster convergence in several tasks, including image recognition and reinforcement learning, than learning with the conventional parameterization. However, it has remained theoretically unexplained how weight normalization improves convergence speed. In this study, we applied a statistical-mechanical technique to analyze on-line learning in single-layer linear and nonlinear perceptrons with weight normalization. By deriving order parameters of the learning dynamics, we confirmed quantitatively that weight normalization realizes fast convergence by automatically tuning the effective learning rate, regardless of the nonlinearity of the neural network. This property is realized when the initial value of the radial length is near the global minimum; therefore, our theory suggests that it is important to choose the initial value of the radial length appropriately when using weight normalization.
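The decomposition itself, w = g v / ||v|| with steepest descent on g and v, amounts to two chain-rule gradients. The sketch below applies it to a toy quadratic loss; the loss, step size, and dimension are arbitrary, and the gradient expressions follow the standard weight-normalization chain rule rather than any code from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
d = 10
v = rng.normal(size=d)        # direction parameter
g = 1.0                       # radial length parameter
lr = 0.1

def grad_w(w):
    """Gradient of a toy quadratic loss ||w - w*||^2 / 2 in w-space."""
    return w - np.ones(d)     # w* = (1, ..., 1) is the toy target

for _ in range(100):
    w = g * v / np.linalg.norm(v)        # weight-normalized parameterization
    gw = grad_w(w)
    # Chain rule for the decomposed parameters:
    grad_g = (gw @ v) / np.linalg.norm(v)
    grad_v = (g / np.linalg.norm(v)) * (gw - (gw @ v) * v / (v @ v))
    g -= lr * grad_g
    v -= lr * grad_v
print(np.round(g * v / np.linalg.norm(v), 2))   # approaches the target weights
```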
A structural and a functional aspect of stable information processing by the brain
2007-01-01
The brain is an expert at producing the same output from a particular set of inputs, even in a very noisy environment. In this article a model of a neural circuit in the brain is proposed which is composed of cyclic sub-circuits. A big loop is defined as consisting of a feed-forward path from the sensory neurons to the highest processing area of the brain and feedback paths from that region back to close to the same sensory neurons. It is shown mathematically how some smaller cycles can amplify a signal. A big loop processes information by a contrast-and-amplify principle. How a pair of presynaptic and postsynaptic neurons can be identified by an exact synchronization detection method is also described. It is assumed that the spike train coming out of a firing neuron encodes all the information produced by it as output. It is possible to extract this information over a period of time by Fourier transforms. The Fourier coefficients, arranged in vector form, uniquely represent the neural spike train over a period of time. The information emanating from all the neurons in a given neural circuit over a period of time can then be represented by a collection of points in a multidimensional vector space. This cluster of points represents the functional or behavioral form of the neural circuit. It is proposed that a particular cluster of vectors, as the representation of a new behavior, is chosen by the brain interactively with respect to the memory stored in that circuit and the amount of emotion involved. It is proposed that in this situation a Coulomb-force-like expression governs the dynamics of the functioning of the circuit, and stability of the system is reached at the minimum of all the minima of a potential function derived from the force-like expression. The calculations have been done with respect to a pseudometric defined on a multidimensional vector space. PMID:19003500
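The representation step described here, turning a spike train over a time window into a fixed-length vector of Fourier coefficients, can be sketched directly. Binning the spikes before the FFT and keeping only the leading coefficients are assumptions of this sketch, as are the bin and coefficient counts.

```python
import numpy as np

def spike_train_vector(spike_times, t_max, n_bins=256, n_coeffs=16):
    """Represent a neural spike train over [0, t_max] by the leading
    Fourier coefficients of its binned firing signal."""
    binned, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    coeffs = np.fft.rfft(binned)[:n_coeffs]            # leading coefficients
    return np.concatenate([coeffs.real, coeffs.imag])  # real-valued vector

rng = np.random.default_rng(11)
spikes = np.sort(rng.uniform(0.0, 1.0, size=120))      # a 1-s spike train
v = spike_train_vector(spikes, t_max=1.0)
print(v.shape)                                         # a point in a 32-D vector space
```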
Topological quantization in units of the fine structure constant.
Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng
2010-10-15
Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.
On the Dequantization of Fedosov's Deformation Quantization
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
2003-08-01
To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M × M̃, where M̃ is a copy of M with the opposite Poisson structure. We call it the dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated as a binary Vector Quantization (VQ) histogram of DCT coefficients in low-frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
Autonomous Environment-Monitoring Networks
NASA Technical Reports Server (NTRS)
Hand, Charles
2004-01-01
Autonomous environment-monitoring networks (AEMNs) are artificial neural networks that are specialized for recognizing familiarity and, conversely, novelty. Like a biological neural network, an AEMN receives a constant stream of inputs. For purposes of computational implementation, the inputs are vector representations of the information of interest. As long as the most recent input vector is similar to the previous input vectors, no action is taken. Action is taken only when a novel vector is encountered. Whether a given input vector is regarded as novel depends on the previous vectors; hence, the same input vector could be regarded as familiar or novel, depending on the context of previous input vectors. AEMNs have been proposed as means to enable exploratory robots on remote planets to recognize novel features that could merit closer scientific attention. AEMNs could also be useful for processing data from medical instrumentation for automated monitoring or diagnosis. The primary substructure of an AEMN is called a spindle. In its simplest form, a spindle consists of a central vector (C), a scalar (r), and algorithms for changing C and r. The vector C is constructed from all the vectors in a given continuous stream of inputs, such that it is minimally distant from those vectors. The scalar r is the distance between C and the most remote vector in the same set. The construction of a spindle involves four vital parameters: setup size, spindle-population size, and the radii of two novelty boundaries. The setup size is the number of vectors that are taken into account before computing C. The spindle-population size is the total number of input vectors used in constructing the spindle counting both those that arrive before and those that arrive after the computation of C. The novelty-boundary radii are distances from C that partition the neighborhood around C into three concentric regions (see Figure 1). During construction of the spindle, the changing spindle radius is denoted by h. It is the final value of h, reached before beginning construction on the next spindle, that is denoted by r. During construction of a spindle, if a new vector falls between C and the inner boundary, the vector is regarded as completely familiar and no action is taken. If the new vector falls into the region between the inner and outer boundaries, it is considered unusual enough to warrant the adjustment of C and r by use of the aforementioned algorithms, but not unusual enough to be considered novel. If a vector falls outside the outer boundary, it is considered novel, in which case one of several appropriate responses could be initiation of construction of a new spindle.
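A compact rendering of the spindle logic described above follows. The use of the mean as the "minimally distant" central vector, the incremental center update, and the fixed boundary radii are simplifying assumptions of this sketch; the article's actual algorithms for adjusting C and r are not reproduced here.

```python
import numpy as np

class Spindle:
    """Minimal spindle: central vector C, radius h, and two novelty
    boundaries at r_inner and r_outer (parameter values illustrative)."""
    def __init__(self, setup_vectors, r_inner, r_outer):
        self.C = np.mean(setup_vectors, axis=0)       # minimally distant center
        self.h = max(np.linalg.norm(v - self.C) for v in setup_vectors)
        self.r_inner, self.r_outer = r_inner, r_outer
        self.n = len(setup_vectors)

    def observe(self, x):
        d = np.linalg.norm(x - self.C)
        if d <= self.r_inner:
            return "familiar"                          # no action taken
        if d <= self.r_outer:                          # unusual: adjust C and h
            self.C += (x - self.C) / (self.n + 1)
            self.h = max(self.h, d)
            self.n += 1
            return "adjusted"
        return "novel"                                 # trigger a new spindle

rng = np.random.default_rng(12)
s = Spindle(rng.normal(size=(50, 8)), r_inner=2.0, r_outer=4.0)
print(s.observe(rng.normal(size=8)), s.observe(np.full(8, 10.0)))
```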
New adaptive color quantization method based on self-organizing maps.
Chang, Chip-Hong; Xu, Pengfei; Xiao, Rui; Srikanthan, Thambipillai
2005-01-01
Color quantization (CQ) is an image processing task popularly used to convert true color images to palletized images for limited color display devices. To minimize the contouring artifacts introduced by the reduction of colors, a new competitive learning (CL) based scheme called the frequency sensitive self-organizing maps (FS-SOMs) is proposed to optimize the color palette design for CQ. FS-SOM harmonically blends the neighborhood adaptation of the well-known self-organizing maps (SOMs) with the neuron dependent frequency sensitive learning model, the global butterfly permutation sequence for input randomization, and the reinitialization of dead neurons to harness effective utilization of neurons. The net effect is an improvement in adaptation, a well-ordered color palette, and the alleviation of the underutilization problem, which is the main cause of visually perceivable artifacts of CQ. Extensive simulations have been performed to analyze and compare the learning behavior and performance of FS-SOM against other vector quantization (VQ) algorithms. The results show that the proposed FS-SOM outperforms classical CL, Linde, Buzo, and Gray (LBG), and SOM algorithms. More importantly, FS-SOM achieves its superiority in reconstruction quality and topological ordering with a much greater robustness against variations in network parameters than the current state-of-the-art SOM algorithm for CQ. A most significant bit (MSB) biased encoding scheme is also introduced to reduce the number of parallel processing units. By mapping the pixel values as sign-magnitude numbers and biasing the magnitudes according to their sign bits, eight lattice points in the color space are condensed into one common point density function. Consequently, the same processing element can be used to map several color clusters and the entire FS-SOM network can be substantially scaled down without severely sacrificing the quality of the displayed image. The drawback of this encoding scheme is the additional storage overhead, which can be cut down by leveraging an existing encoder in an overall lossy compression scheme.
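The frequency-sensitive ingredient of FS-SOM, scaling each neuron's distortion by its win count so that over-used neurons yield to under-used ones, is shown in isolation below. This sketch omits the SOM neighborhood adaptation, the butterfly permutation sequence, and the dead-neuron reinitialization that the full FS-SOM combines, and all parameter values are illustrative.

```python
import numpy as np

def fscl_quantize(pixels, n_colors=16, lr=0.05, epochs=2, seed=0):
    """Frequency-sensitive competitive learning for palette design:
    each neuron's distance is scaled by its win count, so heavily used
    neurons become harder to win and dead neurons get recruited."""
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)].astype(float)
    wins = np.ones(n_colors)
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))]:
            d = wins * ((palette - x) ** 2).sum(axis=1)  # frequency-sensitive distortion
            i = np.argmin(d)
            palette[i] += lr * (x - palette[i])          # move the winner toward x
            wins[i] += 1
    return palette

rng = np.random.default_rng(13)
img_pixels = rng.integers(0, 256, size=(5000, 3)).astype(float)  # RGB pixels
print(fscl_quantize(img_pixels).round())
```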
Quantum Computing and Second Quantization
Makaruk, Hanna Ewa
2017-02-10
Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture presents the general idea of second quantization and briefly discusses some of its most important formulations.
Implicitly-Defined Neural Networks for Sequence Labeling
2016-09-09
The goals are to improve performance on long-range dependencies and to improve stability (solution drift) in NLP tasks; we choose an implicit neural network for this purpose. In the context of HMMs, the Forward-Backward algorithm is among the many effective approaches to such tasks. Reported WSJ tagger accuracy includes 0.9626 using word vectors only (single model).
Zhao, Chunnian; Sun, GuoQiang; Li, Shengxiu; Shi, Yanhong
2009-04-01
MicroRNAs have been implicated as having important roles in stem cell biology. MicroRNA-9 (miR-9) is expressed specifically in neurogenic areas of the brain and may be involved in neural stem cell self-renewal and differentiation. We showed previously that the nuclear receptor TLX is an essential regulator of neural stem cell self-renewal. Here we show that miR-9 suppresses TLX expression to negatively regulate neural stem cell proliferation and accelerate neural differentiation. Introducing a TLX expression vector that is not prone to miR-9 regulation rescued miR-9-induced proliferation deficiency and inhibited precocious differentiation. In utero electroporation of miR-9 in embryonic brains led to premature differentiation and outward migration of the transfected neural stem cells. Moreover, TLX represses expression of the miR-9 pri-miRNA. By forming a negative regulatory loop with TLX, miR-9 provides a model for controlling the balance between neural stem cell proliferation and differentiation.
Zhao, Chunnian; Sun, GuoQiang; Li, Shengxiu; Shi, Yanhong
2009-01-01
MicroRNAs are important players in stem cell biology. Among them, microRNA-9 (miR-9) is expressed specifically in neurogenic areas of the brain. Whether miR-9 plays a role in neural stem cell self-renewal and differentiation is unknown. We showed previously that nuclear receptor TLX is an essential regulator of neural stem cell self-renewal. Here we show that miR-9 suppresses TLX expression to negatively regulate neural stem cell proliferation and accelerate neural differentiation. Introducing a TLX expression vector lacking the miR-9 recognition site rescued miR-9-induced proliferation deficiency and inhibited precocious differentiation. In utero electroporation of miR-9 in embryonic brains led to premature differentiation and outward migration of the transfected neural stem cells. Moreover, TLX represses miR-9 pri-miRNA expression. MiR-9, by forming a negative regulatory loop with TLX, establishes a model for controlling the balance between neural stem cell proliferation and differentiation. PMID:19330006
Comparison between sparsely distributed memory and Hopfield-type neural network models
NASA Technical Reports Server (NTRS)
Keeler, James D.
1986-01-01
The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
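A toy implementation conveys the SDM mechanics referenced above: data are written in bipolar form into counters at all hard locations within a Hamming radius of the address, and reads take a majority vote over the activated locations. The dimensions, location count, and radius below are illustrative assumptions.

```python
import numpy as np

class SDM:
    """Minimal Kanerva-style sparse distributed memory over binary vectors.
    Address dimension n, M hard locations, Hamming activation radius r;
    the parameter values are illustrative, not Kanerva's."""

    def __init__(self, n=256, M=1000, r=115, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(M, n))  # fixed random locations
        self.counters = np.zeros((M, n), dtype=int)
        self.r = r

    def _active(self, addr):
        return np.sum(self.addresses != addr, axis=1) <= self.r  # Hamming ball

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data - 1  # add data in bipolar form

    def read(self, addr):
        sums = self.counters[self._active(addr)].sum(axis=0)
        return (sums > 0).astype(int)                      # majority vote per bit

rng = np.random.default_rng(1)
mem = SDM()
pattern = rng.integers(0, 2, size=256)
mem.write(pattern, pattern)                 # autoassociative storage
noisy = pattern.copy(); noisy[:20] ^= 1     # flip 20 bits of the cue
print(np.mean(mem.read(noisy) == pattern))  # fraction of bits recalled correctly
```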
NASA Astrophysics Data System (ADS)
Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza
2017-07-01
In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural networks, with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, the radial basis function network was utilized and the results were compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%), and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
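The core DCT quantization step that the patent builds on can be illustrated briefly. The quantization matrix below is a generic frequency ramp, a stand-in assumption rather than the invention's image-adapted, visually weighted matrix.

```python
import numpy as np
from scipy.fft import dctn, idctn   # type-II DCT and its inverse

def quantize_block(block, Q):
    """Quantize one 8x8 image block in the DCT domain: larger Q entries
    (coarser steps) discard less visible high-frequency detail."""
    coeffs = dctn(block, norm='ortho')
    quantized = np.round(coeffs / Q)            # the lossy step
    return idctn(quantized * Q, norm='ortho')   # reconstruction

# A generic quantization matrix growing with frequency: an illustrative
# placeholder, not the patent's visually weighted, image-adapted matrix.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
Q = 8.0 + 4.0 * (i + j)

block = np.random.default_rng(0).uniform(0, 255, size=(8, 8))
rec = quantize_block(block, Q)
print(float(np.abs(rec - block).mean()))        # mean absolute reconstruction error
```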
System-Level Design of a 64-Channel Low Power Neural Spike Recording Sensor.
Delgado-Restituto, Manuel; Rodriguez-Perez, Alberto; Darie, Angela; Soto-Sanchez, Cristina; Fernandez-Jover, Eduardo; Rodriguez-Vazquez, Angel
2017-04-01
This paper reports an integrated 64-channel neural spike recording sensor, together with all the circuitry to process and configure the channels, process the neural data, transmit the information via a wireless link, and receive the required instructions. Neural signals are acquired, filtered, digitized, and compressed in the channels. Additionally, each channel implements an auto-calibration algorithm which individually configures the transfer characteristics of the recording site. The system has two transmission modes: in one, the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by the embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.
Generic absence of strong singularities in loop quantum Bianchi-IX spacetimes
NASA Astrophysics Data System (ADS)
Saini, Sahil; Singh, Parampreet
2018-03-01
We study the generic resolution of strong singularities in loop quantized effective Bianchi-IX spacetime in two different quantizations: the connection operator based 'A' quantization and the extrinsic curvature based 'K' quantization. We show that in the effective spacetime description with arbitrary matter content, it is necessary to include inverse triad corrections to resolve all the strong singularities in the 'A' quantization, whereas in the 'K' quantization these results can be obtained without including inverse triad corrections. Under these conditions, the energy density, expansion and shear scalars for both of the quantization prescriptions are bounded. Notably, both quantizations can result in potentially curvature divergent events if the matter content allows divergences in the partial derivatives of the energy density with respect to the triad variables at a finite energy density. Such events are found to be weak curvature singularities beyond which geodesics can be extended in the effective spacetime. Our results show that all potential strong curvature singularities of the classical theory are forbidden in Bianchi-IX spacetime in loop quantum cosmology and that geodesic evolution never breaks down for such events.
Pseudo-Kähler Quantization on Flag Manifolds
NASA Astrophysics Data System (ADS)
Karabegov, Alexander V.
A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.
Instant-Form and Light-Front Quantization of Field Theories
NASA Astrophysics Data System (ADS)
Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James
2018-05-01
In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences in the instant-form and light-front quantization at appropriate places.
Quantization improves stabilization of dynamical systems with delayed feedback
NASA Astrophysics Data System (ADS)
Stepan, Gabor; Milton, John G.; Insperger, Tamas
2017-11-01
We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
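The flavor of this result can be reproduced with a micro-chaotic map: although the open loop is linearly unstable, quantized feedback keeps trajectories bounded, with an oscillation amplitude set by the quantization step. The sketch below drops the paper's feedback delay for brevity, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def microchaos(a=1.2, b=1.1, step=1.0, n=10000, x0=0.3):
    """Micro-chaotic map x[k+1] = a*x[k] - b*Q(x[k]) with a floor-type
    quantizer Q of the given step. Since a > 1 the fixed point is locally
    repelling, yet the quantized feedback keeps trajectories bounded, in
    oscillations whose size scales with the quantization step."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = a * x[k] - b * step * np.floor(x[k] / step)
    return x

for step in (1.0, 0.1):
    tail = microchaos(step=step)[-5000:]
    print(step, float(tail.min()), float(tail.max()))  # bounded, ~ step-sized
```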
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.
Sarkar, Sujit
2018-04-12
An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long range interaction. We show the existence of integer and fractional topological characterization for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to the topological quantization are presented here. The symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of systems with long range interaction. We also present quite a few exact solutions with physical explanations. Finally we present the relation between duality, symmetry and topological characterization. Our work provides a new perspective on topological quantization.
Spacetime algebra as a powerful tool for electromagnetism
NASA Astrophysics Data System (ADS)
Dressel, Justin; Bliokh, Konstantin Y.; Nori, Franco
2015-08-01
We present a comprehensive introduction to spacetime algebra that emphasizes its practicality and power as a tool for the study of electromagnetism. We carefully develop this natural (Clifford) algebra of the Minkowski spacetime geometry, with a particular focus on its intrinsic (and often overlooked) complex structure. Notably, the scalar imaginary that appears throughout the electromagnetic theory properly corresponds to the unit 4-volume of spacetime itself, and thus has physical meaning. The electric and magnetic fields are combined into a single complex and frame-independent bivector field, which generalizes the Riemann-Silberstein complex vector that has recently resurfaced in studies of the single photon wavefunction. The complex structure of spacetime also underpins the emergence of electromagnetic waves, circular polarizations, the normal variables for canonical quantization, the distinction between electric and magnetic charge, complex spinor representations of Lorentz transformations, and the dual (electric-magnetic field exchange) symmetry that produces helicity conservation in vacuum fields. This latter symmetry manifests as an arbitrary global phase of the complex field, motivating the use of a complex vector potential, along with an associated transverse and gauge-invariant bivector potential, as well as complex (bivector and scalar) Hertz potentials. Our detailed treatment aims to encourage the use of spacetime algebra as a readily available and mature extension to existing vector calculus and tensor methods that can greatly simplify the analysis of fundamentally relativistic objects like the electromagnetic field.
miR-137 forms a regulatory loop with nuclear receptor TLX and LSD1 in neural stem cells
Sun, GuoQiang; Ye, Peng; Murai, Kiyohito; Lang, Ming-Fei; Li, Shengxiu; Zhang, Heying; Li, Wendong; Fu, Chelsea; Yin, Jason; Wang, Allen; Ma, Xiaoxiao; Shi, Yanhong
2012-01-01
miR-137 is a brain-enriched microRNA. Its role in neural development remains unknown. Here we show that miR-137 plays an essential role in controlling embryonic neural stem cell fate determination. miR-137 negatively regulates cell proliferation and accelerates neural differentiation of embryonic neural stem cells. In addition, we show that histone demethylase LSD1, a transcriptional co-repressor of nuclear receptor TLX, is a downstream target of miR-137. In utero electroporation of miR-137 in embryonic mouse brains led to premature differentiation and outward migration of the transfected cells. Introducing a LSD1 expression vector lacking the miR-137 recognition site rescued miR-137-induced precocious differentiation. Furthermore, we demonstrate that TLX, an essential regulator of neural stem cell self-renewal, represses the expression of miR-137 by recruiting LSD1 to the genomic regions of miR-137. Thus, miR-137 forms a feedback regulatory loop with TLX and LSD1 to control the dynamics between neural stem cell proliferation and differentiation during neural development. PMID:22068596
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally utilizes the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.
Controlling basins of attraction in a neural network-based telemetry monitor
NASA Technical Reports Server (NTRS)
Bell, Benjamin; Eilbert, James L.
1988-01-01
The size of the basins of attraction around fixed points in recurrent neural nets (NNs) can be modified by a training process. Controlling these attractive regions by presenting training data with various amounts of noise added to the prototype signal vectors is discussed. Application of this technique to signal processing results in a classification system whose sensitivity can be controlled. This new technique is applied to the classification of temporal sequences in telemetry data.
Invariant object recognition based on the generalized discrete radon transform
NASA Astrophysics Data System (ADS)
Easley, Glenn R.; Colonna, Flavia
2004-04-01
We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.
Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition
2016-09-08
We evaluated this approach in the NIST 2015 language recognition evaluation. The LID experiments, performed on the evaluation (eval) set of NIST LRE 2015, compare the proposed approach, and its combination with a state-of-the-art i-vector system [3, 10, 11], against the activations used in direct DNN-LID; the results support our hypothesis.
Fernandes, Alinda R; Chari, Divya M
2016-09-28
Genetically engineered neural stem cell (NSC) transplant populations offer key benefits in regenerative neurology, for release of therapeutic biomolecules in ex vivo gene therapy. NSCs are 'hard-to-transfect' but amenable to 'magnetofection'. Despite the high clinical potential of this approach, the low and transient transfection associated with the large size of therapeutic DNA constructs is a critical barrier to translation. We demonstrate for the first time that DNA minicircles (small DNA vectors encoding essential gene expression components but devoid of a bacterial backbone, thereby reducing construct size versus conventional plasmids) deployed with magnetofection achieve the highest, safe non-viral DNA transfection levels (up to 54%) reported so far for primary NSCs. Minicircle-functionalized magnetic nanoparticle (MNP)-mediated gene delivery also resulted in sustained gene expression for up to four weeks. All daughter cell types of engineered NSCs (neurons, astrocytes and oligodendrocytes) were transfected (in contrast to conventional plasmids, which usually yield transfected astrocytes only), offering advantages for targeted cell engineering. In addition to enhancing MNP functionality as gene delivery vectors, minicircle technology provides key benefits from safety/scale-up perspectives. Therefore, we consider that the proof-of-concept fusion of technologies used here offers high potential as a clinically translatable genetic modification strategy for cell therapy.
Magnetically enhanced adeno-associated viral vector delivery for human neural stem cell infection.
Kim, Eunmi; Oh, Ji-Seon; Ahn, Ik-Sung; Park, Kook In; Jang, Jae-Hyung
2011-11-01
Gene therapy technology is a powerful tool to elucidate the molecular cues that precisely regulate stem cell fates, but developing safe vehicles or mechanisms that are capable of delivering genes to stem cells with high efficiency remains a challenge. In this study, we developed a magnetically guided adeno-associated virus (AAV) delivery system for gene delivery to human neural stem cells (hNSCs). Magnetically guided AAV delivery resulted in rapid accumulation of vectors on target cells followed by forced penetration of the vectors across the plasma membrane, ultimately leading to fast and efficient cellular transduction. To combine AAV vectors with the magnetically guided delivery, AAV was genetically modified to display hexa-histidine (6xHis) on the physically exposed loop of the AAV2 capsid (6xHis AAV), which interacted with nickel ions chelated on NTA-biotin conjugated to streptavidin-coated superparamagnetic iron oxide nanoparticles (NiStNPs). NiStNP-mediated 6xHis AAV delivery under magnetic fields led to significantly enhanced cellular transduction in a non-permissive cell type (i.e., hNSCs). In addition, this delivery method reduced the viral exposure times required to induce a high level of transduction to as little as 2-10 min of hNSC infection, thus demonstrating the great potential of magnetically guided AAV delivery for numerous gene therapy and stem cell applications.
Noncommutative gerbes and deformation quantization
NASA Astrophysics Data System (ADS)
Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter
2010-11-01
We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.
Quantized discrete space oscillators
NASA Technical Reports Server (NTRS)
Uzes, C. A.; Kapuscik, Edward
1993-01-01
A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.
Obstacle detection by recognizing binary expansion patterns
NASA Technical Reports Server (NTRS)
Baram, Yoram; Barniv, Yair
1993-01-01
This paper describes a technique for obstacle detection, based on the expansion of the image-plane projection of a textured object, as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that essential information is conveyed by single-bit vector components, representing the signs of the relevant derivatives. We use two recently developed, high capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns.
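The single-bit pattern vectors described above are easy to sketch: each pixel contributes the signs of its first-order temporal and spatial intensity derivatives. The derivative estimators and array shapes below are assumptions; the neural classifiers that label such patterns as 'safe' or 'dangerous' are not shown.

```python
import numpy as np

def sign_bit_features(frame_prev, frame_next):
    """Single-bit pattern vector from an image pair: the signs of the
    first-order temporal and spatial intensity derivatives at each pixel.
    A minimal sketch of the representation, not the paper's full pipeline."""
    It = frame_next - frame_prev               # temporal derivative
    Ix, Iy = np.gradient(frame_next)           # spatial derivatives
    bits = np.stack([It > 0, Ix > 0, Iy > 0], axis=-1)
    return bits.reshape(-1).astype(np.uint8)   # one bit per derivative per pixel

rng = np.random.default_rng(0)
f0 = rng.uniform(0, 1, size=(16, 16))
f1 = f0 + 0.05 * rng.standard_normal((16, 16))  # stand-in for expanding texture
print(sign_bit_features(f0, f1).shape)          # (16*16*3,)
```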
An accelerated training method for back propagation networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors are determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting from the set of input data a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time/expense of training the system.
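The preprocessing idea reads like an SVD/PCA projection, and a hedged sketch along those lines is shown below; the centering step and the use of numpy's SVD are assumptions about details the abstract leaves open.

```python
import numpy as np

def svd_preprocess(X):
    """Project training inputs onto the singular vectors of the (centered)
    input matrix, so the projections' standard deviations are maximized
    as a set: a PCA-style sketch of the preprocessing idea above."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = (X - mu) @ Vt.T        # decorrelated coordinates, ordered by variance
    return Z, mu, Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # correlated inputs
Z, mu, Vt = svd_preprocess(X)
print(np.round(np.std(Z, axis=0), 2))  # decreasing spread along components
```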
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
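A parametric threshold model in the spirit described (thresholds rising rapidly with wavelet spatial frequency and varying with orientation) might look as follows. The log-parabola form, the orientation factors, and every constant are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

G = {'low': 1.0, 'hv': 1.0, 'diag': 0.7}   # orientation shift factors (assumed)

def dwt_threshold(level, orientation, r=32.0, a=0.5, k=2.0, f0=4.0):
    """Detection threshold for DWT uniform quantization noise as a function
    of wavelet level and orientation, at display resolution r (pixels/deg).
    Log-parabola: log10 T = log10 a + k*(log10 f - log10(g*f0))**2, where
    f = r * 2**(-level). Form and constants are illustrative assumptions."""
    f = r * 2.0 ** (-level)                 # wavelet spatial frequency (cyc/deg)
    return a * 10.0 ** (k * (np.log10(f) - np.log10(G[orientation] * f0)) ** 2)

# A 'perceptually lossless' quantization step could be tied to the threshold,
# e.g., step = 2*T (this amplitude-to-step mapping is itself an assumption):
for lam in (1, 2, 3, 4):
    print(lam, round(2 * dwt_threshold(lam, 'diag'), 3))
```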
Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N
2015-08-01
A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and feature derivation from the two spike sorting methods. Spike sorting includes a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework; we present a joint prediction scheme to determine the class of neural spikes in the dictionary learning framework. The second approach is a modified OSort algorithm, implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control while also meeting the challenges of system power constraints and concurrent development with ongoing scientific research designed to define brain network connectivity and neural network dynamics that vary at the individual patient level and vary over time.
CNN: a speaker recognition system using a cascaded neural network.
Zaki, M; Ghalwash, A; Elkouny, A A
1996-05-01
The main emphasis of this paper is to present an approach for combining supervised and unsupervised neural network models to the issue of speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques, forming one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approach. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Due to several tests that have been conducted, and in order to enhance the performance of this model when dealing with noisy patterns, we have preceded it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes both the design and implementation of the conventional and neural network approaches to recognize the speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in the recognition. The conclusion indicates that the system performance in the case of the neural network is better than that of the conventional one, achieving a smooth degradation with respect to noisy patterns, and higher performance with respect to noise-free patterns.
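The conventional distance metric described, which weights down directions of large intraspeaker variance, is Mahalanobis-like and can be sketched as follows; the eigendecomposition route and the regularization constant are assumptions about details the abstract leaves open.

```python
import numpy as np

def weighted_distance(x, ref, intra_cov):
    """Distance that weights down components along directions of large
    intraspeaker variance (a Mahalanobis-style sketch of the conventional
    metric described above; intra_cov is the pooled within-speaker
    covariance of the feature vectors)."""
    evals, evecs = np.linalg.eigh(intra_cov)
    diff = evecs.T @ (x - ref)                # rotate into the eigenbasis
    return float(np.sum(diff ** 2 / evals))   # high-variance directions count less

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 12))            # toy intraspeaker training features
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(12)   # regularized covariance
print(weighted_distance(rng.normal(size=12), feats.mean(axis=0), cov))
```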
Thermal field theory and generalized light front quantization
NASA Astrophysics Data System (ADS)
Weldon, H. Arthur
2003-04-01
The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ^{μν} is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount -i/(T·ḡ^{00}). Light-front quantization fails since ḡ^{00} = 0; however, various related quantizations are possible.
PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, Zhouping
2017-12-01
Velocity estimation (extracting the displacement vector information) from the particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation and the high-level networks are devoted to improving the accuracy. Outlier replacement and symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with the standard PIV methods (one-pass cross-correlation method, three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by the application to a diversity of synthetic and experimental PIV images.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it converges slowly and easily falls into local optima. So in this paper, the velocity vector and position vector of GSA are adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
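One common way to adjust GSA's velocity with PSO terms, in the spirit described above (e.g., PSOGSA-style hybrids), is to blend the gravitational acceleration with a social pull toward the global best. The update below is such a sketch; the coefficient values, and the exact blend, are illustrative assumptions rather than this paper's formulation.

```python
import numpy as np

def hybrid_velocity(v, accel_gsa, x, gbest, w=0.6, c1=0.5, c2=1.5, rng=None):
    """One velocity update of a PSO/GSA hybrid: GSA's gravitational
    acceleration supplies exploration, while a PSO-style social term pulls
    agents toward the global best to speed convergence. Coefficients are
    illustrative assumptions."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.uniform(size=2)
    return w * v + c1 * r1 * accel_gsa + c2 * r2 * (gbest - x)

# Usage inside an optimizer loop (accel_gsa would come from the GSA
# mass/force computation, omitted here): x_next = x + hybrid_velocity(...)
v, x = np.zeros(3), np.ones(3)
gbest = np.array([0.2, -0.1, 0.5])
print(hybrid_velocity(v, accel_gsa=np.array([0.1, 0.0, -0.2]), x=x,
                      gbest=gbest, rng=np.random.default_rng(0)))
```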
A new feature constituting approach to detection of vocal fold pathology
NASA Astrophysics Data System (ADS)
Hariharan, M.; Polat, Kemal; Yaacob, Sazali
2014-08-01
In the last two decades, non-invasive methods based on acoustic analysis of the voice signal have proved to be excellent and reliable tools for diagnosing vocal fold pathologies. This paper proposes a new feature vector based on the wavelet packet transform and singular value decomposition for the detection of vocal fold pathology. k-means clustering based feature weighting is proposed to increase the distinguishing performance of the proposed features. In this work, two databases, the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database and the MAPACI speech pathology database, are used. Four different supervised classifiers, k-nearest neighbour (k-NN), least-square support vector machine, probabilistic neural network and general regression neural network, are employed for testing the proposed features. The experimental results show that the proposed features give a very promising classification accuracy of 100% for both the MEEI database and the MAPACI speech pathology database.
Generalized radiation-field quantization method and the Petermann excess-noise factor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305
2003-10-01
We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.
Simultaneous fault detection and control design for switched systems with two quantized signals.
Li, Jian; Park, Ju H; Ye, Dan
2017-01-01
The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method.
BFV approach to geometric quantization
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1994-12-01
A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space, with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.
Deformation quantization of fermi fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.
2008-04-15
Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to fermionic systems with an infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates the procedure considerably. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally, the Dirac propagator have been found with the use of these variables.
Polymer-Fourier quantization of the scalar field revisited
NASA Astrophysics Data System (ADS)
Garcia-Chung, Angel; Vergara, J. David
2016-10-01
The polymer quantization of the Fourier modes of the real scalar field is studied within an algebraic scheme. We replace the positive linear functional of the standard Poincaré invariant quantization by a singular one. This singular positive linear functional is constructed by mimicking the singular limit of the complex structure of the Poincaré invariant Fock quantization. The resulting symmetry group of such a polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré invariant Fock vacuum with the polymer Fourier vacuum.
Quantized Rabi oscillations and circular dichroism in quantum Hall systems
NASA Astrophysics Data System (ADS)
Tran, D. T.; Cooper, N. R.; Goldman, N.
2018-06-01
The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.
Zhao, Chunnian; Sun, GuoQiang; Li, Shengxiu; Lang, Ming-Fei; Yang, Su; Li, Wendong; Shi, Yanhong
2010-01-01
Neural stem cell self-renewal and differentiation is orchestrated by precise control of gene expression involving nuclear receptor TLX. Let-7b, a member of the let-7 microRNA family, is expressed in mammalian brains and exhibits increased expression during neural differentiation. However, the role of let-7b in neural stem cell proliferation and differentiation remains unknown. Here we show that let-7b regulates neural stem cell proliferation and differentiation by targeting the stem cell regulator TLX and the cell cycle regulator cyclin D1. Overexpression of let-7b led to reduced neural stem cell proliferation and increased neural differentiation, whereas antisense knockdown of let-7b resulted in enhanced proliferation of neural stem cells. Moreover, in utero electroporation of let-7b to embryonic mouse brains led to reduced cell cycle progression in neural stem cells. Introducing an expression vector of Tlx or cyclin D1 that lacks the let-7b recognition site rescued let-7b-induced proliferation deficiency, suggesting that both TLX and cyclin D1 are important targets for let-7b-mediated regulation of neural stem cell proliferation. Let-7b, by targeting TLX and cyclin D1, establishes an efficient strategy to control neural stem cell proliferation and differentiation. PMID:20133835
Novel method of finding extreme edges in a convex set of N-dimension vectors
NASA Astrophysics Data System (ADS)
Hu, Chia-Lun J.
2001-11-01
As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {U_m → V_m, m = 1 to M}, where U_m is an N-dimension analog (pattern) vector and V_m is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Y_mi, m = 1 to M} (where Y_mi ≡ V_mi·U_m and V_mi = +1 or -1 is the i-th bit of V_m; i = 1 to P, so there are P sets included here) is POSITIVELY LINEARLY INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-D analog vectors.
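The geometric test implied above, whether a vector is an extreme edge of the polyhedral cone spanned by the set, can be phrased as a feasibility linear program: a vector fails to be extreme exactly when it is a nonnegative combination of the others. The sketch below checks that condition with scipy's LP solver; it illustrates the geometric idea, not the paper's new algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def is_extreme_edge(vectors, idx):
    """vectors[idx] is an extreme edge of the cone spanned by the set iff it
    CANNOT be written as a nonnegative combination of the remaining vectors,
    i.e., the feasibility LP  A x = v, x >= 0  has no solution."""
    v = vectors[idx]
    A = np.delete(vectors, idx, axis=0).T          # columns: the other vectors
    res = linprog(c=np.zeros(A.shape[1]),          # any feasible point will do
                  A_eq=A, b_eq=v,
                  bounds=[(0, None)] * A.shape[1],
                  method="highs")
    return not res.success                         # infeasible => extreme edge

V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # [1,1] = [1,0] + [0,1]
print([is_extreme_edge(V, i) for i in range(3)])    # [True, True, False]
```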
Sparks, Jackson T; Bohbot, Jonathan D; Ristic, Mihailo; Mišic, Danijela; Skoric, Marijana; Mattoo, Autar; Dickens, Joseph C
2017-07-01
Nepeta essential oil (Neo; catnip) and its major component, nepetalactone, have long been known to repel insects including mosquitoes. However, the neural mechanisms through which these repellents are detected by mosquitoes, including the yellow fever mosquito Aedes aegypti (L.), an important vector of Zika virus, were poorly understood. Here we show that Neo volatiles activate olfactory receptor neurons within the basiconic sensilla on the maxillary palps of female Ae. aegypti. A gustatory receptor neuron sensitive to the feeding deterrent quinine and housed within sensilla on the labella of females was activated by both Neo and nepetalactone. Activity of a second gustatory receptor neuron sensitive to the feeding stimulant sucrose was suppressed by both repellents. Our results provide neural pathways for the reported spatial repellency and feeding deterrence of these repellents. A better understanding of the neural input through which female mosquitoes make decisions to feed will facilitate design of new repellents and management strategies involving their use.
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin
2016-01-01
Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments to modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), showing that with as little as 0.5s an accuracy of over 50% can be achieved.
Instabilities caused by floating-point arithmetic quantization.
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1972-01-01
It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability is treated in which only one quantizer operates.
Prediction of Drug-Plasma Protein Binding Using Artificial Intelligence Based Algorithms.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2018-01-01
Plasma protein binding (PPB) has vital importance in the characterization of drug distribution in the systemic circulation. Unfavorable PPB can have a negative effect on the clinical development of promising drug candidates. The drug distribution properties should be considered at the initial phases of drug design and development. Therefore, PPB prediction models are receiving increased attention. In the current study, we present a systematic approach using Support vector machine, Artificial neural network, k-nearest neighbor, Probabilistic neural network, Partial least squares and Linear discriminant analysis to relate various in vitro and in silico molecular descriptors to a diverse dataset of 736 drugs/drug-like compounds. The overall accuracy of the Support vector machine with Radial basis function kernel came out to be comparatively better than the rest of the applied algorithms. The training set accuracy, validation set accuracy, precision, sensitivity, specificity and F1 score for the Support vector machine were found to be 89.73%, 89.97%, 92.56%, 87.26%, 91.97% and 0.898, respectively. This model can potentially be useful in screening relevant drug candidates at the preliminary stages of drug design and development.
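The modeling pipeline suggested by the abstract (descriptors in, binary PPB class out, RBF-kernel SVM on top) can be sketched with scikit-learn. The data below are synthetic stand-ins, and the split, scaling, and hyperparameters are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, f1_score

# Synthetic stand-ins for molecular descriptors and binary PPB classes
# (the real study used 736 drugs/drug-like compounds with in vitro and
# in silico descriptors).
rng = np.random.default_rng(0)
X = rng.normal(size=(736, 20))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(736) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # RBF-kernel SVM, as in the study
clf.fit(scaler.transform(X_tr), y_tr)
pred = clf.predict(scaler.transform(X_te))
print(accuracy_score(y_te, pred), f1_score(y_te, pred))
```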
High precision computing with charge domain devices and a pseudo-spectral method therefor
NASA Technical Reports Server (NTRS)
Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)
1997-01-01
The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
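The bit-partitioned multiply can be checked numerically: writing A = Σ_i 2^i A_i and x = Σ_j 2^j x_j gives A·x = Σ_{i,j} 2^(i+j) (A_i·x_j), so single-bit partial products combine exactly into the full product. A numpy sketch of that identity, leaving the charge-domain hardware aside:

```python
import numpy as np

def bit_sliced_matvec(A, x, bits=8):
    """Matrix-vector multiply built from single-bit partial products, in the
    spirit of storing each bit of each matrix element separately:
    A @ x = sum over (i, j) of 2**(i+j) * (A_i @ x_j), where A_i and x_j
    are the i-th and j-th bit planes. A numerical sketch only."""
    A_planes = [(A >> i) & 1 for i in range(bits)]   # one binary plane per bit
    x_planes = [(x >> j) & 1 for j in range(bits)]
    out = np.zeros(A.shape[0], dtype=np.int64)
    for i, Ai in enumerate(A_planes):
        for j, xj in enumerate(x_planes):
            out += (Ai @ xj) << (i + j)              # combine partial products
    return out

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(4, 5))
x = rng.integers(0, 256, size=5)
assert np.array_equal(bit_sliced_matvec(A, x), A @ x)  # exact reconstruction
```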
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.
2010-04-01
The advantages of equivalence models (EMs) of neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing and storing large-size and strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous logic photocurrent reflections and sample-storage devices with pulse-width photoconverters have allowed us to design generalized structures for realization of the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed units of microseconds. The circuits are simple, have low supply voltage (1-3 V), low power consumption (milliwatts), and low levels of input signals (microwatts), allow integrated construction, and satisfy the problems of interconnection and cascading.
Velez, Mariel M.; Wernet, Mathias F.; Clark, Damon A.
2014-01-01
Understanding the mechanisms that link sensory stimuli to animal behavior is a central challenge in neuroscience. The quantitative description of behavioral responses to defined stimuli has led to a rich understanding of different behavioral strategies in many species. One important navigational cue perceived by many vertebrates and insects is the e-vector orientation of linearly polarized light. Drosophila manifests an innate orientation response to this cue (‘polarotaxis’), aligning its body axis with the e-vector field. We have established a population-based behavioral paradigm for the genetic dissection of neural circuits guiding polarotaxis to both celestial as well as reflected polarized stimuli. However, the behavioral mechanisms by which flies align with a linearly polarized stimulus remain unknown. Here, we present a detailed quantitative description of Drosophila polarotaxis, systematically measuring behavioral parameters that are modulated by the stimulus. We show that angular acceleration is modulated during alignment, and this single parameter may be sufficient for alignment. Furthermore, using monocular deprivation, we show that each eye is necessary for modulating turns in the ipsilateral direction. This analysis lays the foundation for understanding how neural circuits guide these important visual behaviors. PMID:24810784
Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.
Hu, Liang; Wang, Zidong; Liu, Xiaohui
2016-08-01
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
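The logarithmic quantizer and the covariance-inflation idea lend themselves to a compact sketch. The following is a minimal illustration, not the paper's algorithm: it treats the bounded multiplicative quantization error as extra measurement noise inside an EKF-style update, whereas the paper derives and minimizes a rigorous upper bound on the error covariance. All function names and the parameter delta are hypothetical.

```python
import numpy as np

def log_quantize(y, rho=0.1):
    """Logarithmic quantizer: snap y to the nearest level u0 * b**i in the
    log domain, where b = (1 - rho) / (1 + rho). The relative quantization
    error is then sector-bounded: |q(y) - y| <= rho * |y| (approximately)."""
    u0 = 1.0
    sign = np.sign(y)
    y_abs = np.maximum(np.abs(y), 1e-12)
    base = (1 - rho) / (1 + rho)
    i = np.round(np.log(y_abs / u0) / np.log(base))
    return sign * u0 * base**i

def filter_update(x_pred, P_pred, z_q, h, H, R, delta=0.1):
    """One EKF-style measurement update in which the quantization error is
    folded into the measurement covariance (a simplification of the
    norm-bounded-uncertainty treatment described in the abstract)."""
    z_hat = h(x_pred)
    # Inflate R to cover the multiplicative quantization error |e| <= delta*|z|
    R_eff = R + np.diag((delta * np.abs(z_q)) ** 2)
    S = H @ P_pred @ H.T + R_eff
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z_q - z_hat)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```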
Direct comparison of fractional and integer quantized Hall resistance
NASA Astrophysics Data System (ADS)
Ahlers, Franz J.; Götz, Martin; Pierz, Klaus
2017-08-01
We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl
In the paper an invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism onto the non-flat case and related ambiguities of the process of quantization are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of presented formalism onto non-flat case and related ambiguities of the quantization process are discussed.
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 dB cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 dB cutoff of 2000 Hz.
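The compression-amplifier/expansion-network pair described here is the classic companding idea. A minimal sketch follows, assuming μ-law companding as an illustrative stand-in for the report's compression amplifier; the level count matches the eight quantization levels quoted above.

```python
import numpy as np

def mu_compress(x, mu=255.0):
    """Compress: boost low-amplitude (consonant) components before quantization."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    """Expand: invert the compression after digital-to-analog conversion."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def uniform_quantize(x, levels=8):
    """Uniform mid-rise quantizer on [-1, 1] with the given number of levels."""
    step = 2.0 / levels
    return np.clip(np.floor(x / step) * step + step / 2,
                   -1 + step / 2, 1 - step / 2)

# Signal in [-1, 1]: quantize the companded signal with 8 levels, then expand.
x = np.linspace(-1, 1, 11)
x_hat = mu_expand(uniform_quantize(mu_compress(x), levels=8))
```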
Gene delivery with viral vectors for cerebrovascular diseases
Gan, Yu; Jing, Zheng; Stetler, R. Anne; Cao, Guodong
2017-01-01
Recent achievements in the understanding of molecular events involved in the pathogenesis of central nervous system (CNS) injury have made gene transfer a promising approach for various neurological disorders, including cerebrovascular diseases. However, special obstacles, including the post-mitotic nature of neurons and the blood-brain barrier (BBB), constitute key challenges for gene delivery to the CNS. Despite the various limitations in current gene delivery systems, a spectrum of viral vectors has been successfully used to deliver genes to the CNS. Furthermore, recent advancements in vector engineering have improved the safety and delivery of viral vectors. Numerous viral vector-based clinical trials for neurological disorders have been initiated. This review will summarize the current implementation of viral gene delivery in the context of cerebrovascular diseases including ischemic stroke, hemorrhagic stroke and subarachnoid hemorrhage (SAH). In particular, we will discuss the potentially feasible ways in which viral vectors can be manipulated and exploited for use in neural delivery and therapy. PMID:23276981
Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on the combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
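One common way to realize a fuzzy support vector machine is to give each training sample a fuzzy membership and weight its error penalty accordingly. The sketch below illustrates that idea with scikit-learn's sample_weight; the distance-to-class-mean membership heuristic is an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y):
    """Assign each sample a membership in (0, 1] that decays with its
    distance from its class mean -- a common fuzzy-SVM heuristic that
    de-emphasizes outliers and noisy samples."""
    m = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        m[idx] = 1.0 - d / (d.max() + 1e-9)  # far samples get small weight
    return np.clip(m, 1e-3, 1.0)

# Membership-weighted SVM: errors on low-membership samples cost less.
X = np.random.randn(100, 20)          # e.g., 20 selected microarray features
y = np.random.randint(0, 2, 100)      # binary diagnosis labels
clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
```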
Research on the adaptation of skeletal muscle to hypogravity Past and future directions
NASA Technical Reports Server (NTRS)
Riley, D. A.; Ellis, S.
1983-01-01
The results of previous research on the cellular effects of microgravity on rat tissue are reviewed and areas of necessary future research are identified. The rats were flown on board Cosmos 605, 782, and 936. Postflight tissue analyses revealed increases in connective tissue cells and focal disruption of muscle fibers due to the microgravity environment of space. Evidence has been found for muscular and neural changes occurring as a result of reentry stresses. It is suggested that a database be established for quantizing muscle function with electromyography, measurements of force output, and length measurement. The data can serve as a reference for comparisons with data obtained in orbiting laboratories such as the Spacelab. The experiments will have the goal of defining the mechanism of neuromuscular atrophy and preventing it.
Which coordinate system for modelling path integration?
Vickerstaff, Robert J; Cheung, Allen
2010-03-21
Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that, on balance, the class of home vectors which includes the geocentric Cartesian coordinate system appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently, it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors. Copyright 2009 Elsevier Ltd. All rights reserved.
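A home vector in geocentric Cartesian coordinates admits a particularly simple update rule, which may help make the classification concrete. This is a minimal sketch under an assumed allothetic (compass) heading cue; the function names are illustrative.

```python
import numpy as np

def update_home_vector(home, speed, heading, dt):
    """Geocentric Cartesian path integration: the home vector h points from
    the animal back to the nest, so each step subtracts the displacement."""
    step = speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return home - step

# Random excursion: integrate steps, then check the home vector points back.
rng = np.random.default_rng(0)
pos, home = np.zeros(2), np.zeros(2)
for _ in range(1000):
    heading = rng.uniform(0, 2 * np.pi)   # allothetic compass reading
    pos += 1.0 * 0.1 * np.array([np.cos(heading), np.sin(heading)])
    home = update_home_vector(home, speed=1.0, heading=heading, dt=0.1)
assert np.allclose(home, -pos)            # home vector = nest - position
```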
Conductance dips and spin precession in a nonuniform waveguide with spin–orbit coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, A. I., E-mail: malyshev@phys.unn.ru; Kozulin, A. S.
An infinite waveguide with a nonuniformity, a segment of finite length with spin–orbit coupling, is considered in the case when the Rashba and Dresselhaus parameters are identical. Analytical expressions have been derived in the single-mode approximation for the conductance of the system for an arbitrary initial spin state. Based on numerical calculations with several size quantization modes, we have detected and described the conductance dips arising when the waves are localized in the nonuniformity due to the formation of an effective potential well in it. We show that allowance for the evanescent modes under carrier spin precession in an effective magnetic field does not lead to a change in the direction of the average spin vector at the output of the system.
Heavy and Heavy-Light Mesons in the Covariant Spectator Theory
NASA Astrophysics Data System (ADS)
Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.
2018-05-01
The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum was calculated, and we discuss what conclusions can be drawn from them, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.
Information extraction from multivariate images
NASA Technical Reports Server (NTRS)
Park, S. K.; Kegley, K. A.; Schiess, J. R.
1986-01-01
An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage in any of various formats has a multivariate pixel value associated with each pixel location, which has been scaled and quantized into a gray level vector; bivariate statistics describe the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage to reduce its dimensionality and reveals its intercomponent dependencies when some off-diagonal elements of the covariance matrix are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
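Computationally, the PCT described here is an eigendecomposition of the band-to-band covariance matrix. A minimal sketch, with hypothetical function names:

```python
import numpy as np

def principal_component_transform(multiimage):
    """PCT of a multiimage of shape (rows, cols, bands): decorrelate the
    band components and order them by decreasing variance."""
    r, c, b = multiimage.shape
    X = multiimage.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)           # band-to-band covariance
    vals, vecs = np.linalg.eigh(cov)        # symmetric matrix -> eigh
    order = np.argsort(vals)[::-1]          # largest variance first
    pcs = X @ vecs[:, order]
    return pcs.reshape(r, c, b), vals[order]

# Intrinsic dimensionality: count components holding, say, 99% of the variance.
img = np.random.rand(64, 64, 6)
pc_img, variances = principal_component_transform(img)
k = np.searchsorted(np.cumsum(variances) / variances.sum(), 0.99) + 1
```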
Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro
2010-08-01
In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
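The two vectorizing procedures reduce to two different ways of binning the same spike counts. A minimal sketch, assuming spike trains are given as arrays of spike times within the 1 s response period:

```python
import numpy as np

def multineuronal_vector(spike_trains, n_neurons):
    """Response vector whose components are the 1-s spike counts of
    the first n_neurons neurons (dimension = n_neurons)."""
    return np.array([len(spike_trains[i]) for i in range(n_neurons)])

def time_segmental_vector(spike_train, n_segments, period=1.0):
    """Response vector of one neuron's spike counts in n_segments equal
    time segments of the response period (dimension = n_segments)."""
    edges = np.linspace(0.0, period, n_segments + 1)
    counts, _ = np.histogram(spike_train, bins=edges)
    return counts

# Five neurons' spike times (seconds) in a 1-s window after stimulus onset.
trains = [np.sort(np.random.uniform(0, 1, np.random.poisson(20)))
          for _ in range(5)]
v_multi = multineuronal_vector(trains, n_neurons=5)      # one component/neuron
v_seg = time_segmental_vector(trains[0], n_segments=5)   # one component/segment
```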
Sauter, Monica M.; Brandt, Curtis R.
2016-01-01
Injection of herpes simplex virus vectors into the vitreous of primate eyes induces an acute, transient uveitis. The purpose of this study was to characterize innate immune responses of macaque neural retina tissue to the herpes simplex virus type 1-based gene delivery vector hrR3. PCR array analysis demonstrated the induction of the pro-inflammatory cytokine IL-6, as well as the anti-inflammatory cytokine IL-10, following hrR3 exposure. Secretion of IL-6 was detected by ELISA and cone photoreceptors and Muller cells were the predominant IL-6 positive cell types. RNA in situ hybridization confirmed that IL-6 was expressed in photoreceptor and Muller cells. The IL-10 positive cells in the inner nuclear layer were identified as amacrine cells by immunofluorescence staining with calretinin antibody. hrR3 challenge resulted in activation of NFκB (p65) in Muller glial cells, but not in cone photoreceptors, suggesting a novel regulatory mechanism for IL-6 expression in cone cells. hrR3 replication was not required for IL-6 induction or NFκB (p65) activation. These data suggest a pro-inflammatory (IL-6)/anti-inflammatory (IL-10) axis exists in neural retina and the severity of acute posterior uveitis may be determined by this interaction. Further studies are needed to identify the trigger for IL-6 and IL-10 induction and the mechanism of IL-6 induction in cone cells. PMID:27170050
Coherent state quantization of quaternions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com; Thirulogasanthar, K., E-mail: bbmuraleetharan@jfn.ac.lk, E-mail: santhar@gmail.com
Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic version of the harmonic oscillator and Weyl-Heisenberg algebra are also obtained.
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but at present has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
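The comparison protocol maps naturally onto standard tooling. A minimal sketch of 5-fold cross-validated accuracy and sensitivity for a few of the classifiers, using scikit-learn and synthetic stand-in data (the study used 10 neuropsychological test scores per patient):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# X: one row per patient, 10 test scores; y: progressed to dementia or not.
X = np.random.randn(200, 10)
y = np.random.randint(0, 2, 200)

classifiers = {
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=500),
    "LDA": LinearDiscriminantAnalysis(),
    "MLP": MLPClassifier(max_iter=1000),
    "LogReg": LogisticRegression(max_iter=1000),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    sens = cross_val_score(clf, X, y, cv=cv, scoring="recall")  # sensitivity
    print(f"{name}: accuracy={np.median(acc):.2f} "
          f"sensitivity={np.median(sens):.2f}")
```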
Educational Information Quantization for Improving Content Quality in Learning Management Systems
ERIC Educational Resources Information Center
Rybanov, Alexander Aleksandrovich
2014-01-01
The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…
BFV quantization on hermitian symmetric spaces
NASA Astrophysics Data System (ADS)
Fradkin, E. S.; Linetsky, V. Ya.
1995-02-01
Gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge invariant quantization on the Lobachevski plane and sphere is carried out. Due to the presence of symmetry, master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation and Berezin quantization approaches.
Bleul, Christiane; Baumann-Klausener, Franziska; Labhart, Thomas; Dickinson, Michael H.
2016-01-01
Many insects exploit skylight polarization as a compass cue for orientation and navigation. In the fruit fly, Drosophila melanogaster, photoreceptors R7 and R8 in the dorsal rim area (DRA) of the compound eye are specialized to detect the electric vector (e-vector) of linearly polarized light. These photoreceptors are arranged in stacked pairs with identical fields of view and spectral sensitivities, but mutually orthogonal microvillar orientations. As in larger flies, we found that the microvillar orientation of the distal photoreceptor R7 changes in a fan-like fashion along the DRA. This anatomical arrangement suggests that the DRA constitutes a detector for skylight polarization, in which different e-vectors maximally excite different positions in the array. To test our hypothesis, we measured responses to polarized light of varying e-vector angles in the terminals of R7/8 cells using genetically encoded calcium indicators. Our data confirm a progression of preferred e-vector angles from anterior to posterior in the DRA, and a strict orthogonality between the e-vector preferences of paired R7/8 cells. We observed decreased activity in photoreceptors in response to flashes of light polarized orthogonally to their preferred e-vector angle, suggesting reciprocal inhibition between photoreceptors in the same medullar column, which may serve to increase polarization contrast. Together, our results indicate that the polarization-vision system relies on a spatial map of preferred e-vector angles at the earliest stage of sensory processing. SIGNIFICANCE STATEMENT The fly's visual system is an influential model system for studying neural computation, and much is known about its anatomy, physiology, and development. The circuits underlying motion processing have received the most attention, but researchers are increasingly investigating other functions, such as color perception and object recognition. In this work, we investigate the early neural processing of a somewhat exotic sense, called polarization vision. Because skylight is polarized in an orientation that is rigidly determined by the position of the sun, this cue provides compass information. Behavioral experiments have shown that many species use the polarization pattern in the sky to direct locomotion. Here we describe the input stage of the fly's polarization-vision system. PMID:27170135
Wei, Q; Hu, Y
2009-01-01
The major hurdle for segmenting lung lobes in computed tomographic (CT) images is to identify fissure regions, which encase lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy utilizing 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the neural network. Tested on isotropic CT image stacks of 24 patients with the pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons/radiologists' accuracy of 80% for identifying fissure regions in clinical settings. This shows promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
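A minimal sketch of the feature pipeline: quantize a patch to 16 gray levels, extract GLCM texture features, and feed them to a feed-forward network. scikit-image provides the GLCM (a GLRLM would need custom code); the property selection and network size here are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(patch, levels=16):
    """Quantize a CT patch to `levels` gray levels and extract GLCM texture
    features (contrast, correlation, energy, homogeneity) at 4 orientations."""
    span = patch.max() - patch.min() + 1e-9
    q = (patch.astype(float) - patch.min()) / span
    q = np.minimum((q * levels).astype(np.uint8), levels - 1)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Classify patches as fissure / non-fissure with a feed-forward network.
patches = [np.random.rand(32, 32) for _ in range(100)]   # stand-in CT patches
labels = np.random.randint(0, 2, 100)
X = np.array([glcm_features(p) for p in patches])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, labels)
```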
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-04-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.
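The encoding pipeline (prediction, residual calculation, entropy coding) can be sketched with a trivial stand-in predictor; in the AIC method the prediction block is a trained neural network and the remaining blocks are more elaborate.

```python
import numpy as np

def predict(image):
    """Stand-in predictor: each pixel predicted from its left neighbor.
    (In the AIC method this block is a trained neural network.)"""
    pred = np.zeros_like(image)
    pred[:, 1:] = image[:, :-1]
    return pred

def encode(image, bit_depth=12):
    """Lossless pipeline: prediction -> residual -> entropy of the residual
    as a lower bound on the entropy-coded bits per pixel."""
    residual = image.astype(np.int32) - predict(image).astype(np.int32)
    _, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    bits_per_pixel = -(p * np.log2(p)).sum()
    return residual, bit_depth / bits_per_pixel

# Smooth 12-bit CT-like rows so the predictor has something to exploit.
rows = np.random.randint(-5, 6, (256, 256)).cumsum(axis=1) + 2048
img = rows.astype(np.uint16)
residual, ratio = encode(img)
print(f"compression ratio ~= {ratio:.2f}:1")
# The decoder reverses the steps, reconstructing each column from residuals.
```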
Spin wave modes in out-of-plane magnetized nanorings
NASA Astrophysics Data System (ADS)
Zhou, X.; Tartakovskaya, E. V.; Kakazei, G. N.; Adeyeye, A. O.
2017-07-01
We investigated the spin wave modes in flat circular permalloy rings with a canted external bias field using ferromagnetic resonance spectroscopy. The external magnetic field H was large enough to saturate the samples. For θ = 0° (perpendicular geometry), three distinct resonance peaks were observed experimentally. In the case of cylindrical symmetry violation due to inclination of H from the normal to the ring plane (the inclination angle θ was varied in the 0°–6° range), all of the initial peaks split. The distance between neighboring split peaks increased as θ increased. Unexpectedly, the biggest splitting was observed for the mode with the smallest radial wave vector. This special feature of the splitting behavior is determined by the topology of the ring shape. The developed analytical theory revealed that in perpendicular geometry each observed peak is a combination of signals from a set of radially quantized spin wave excitations with almost the same radial wave vectors, radial profiles, and frequencies, but with different azimuthal dependencies. This degeneracy is a consequence of the circular symmetry of the system and can be removed by inclining H from the normal. Our findings were further supported by micromagnetic simulations.
Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.
Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao
2018-02-01
Frame rate up conversion (FRUC) can improve the visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC are confronted with higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, the frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and the visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework can achieve 21%–42% reduction in BDBR, when compared with traditional methods of FRUC cascaded with coding.
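At its core, the rate-distortion formulation means each block chooses between coding a residual and signaling FRUC interpolation by minimizing the Lagrangian cost J = D + λR. A minimal sketch with hypothetical mode names and numbers:

```python
def rd_cost(distortion, bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def choose_mode(block_modes, lam):
    """Pick the coding mode with minimum RD cost.
    block_modes: list of (name, distortion, bits) candidates, e.g.
    ('code_residual', D1, R1) vs ('fruc_interpolate', D2, R2)."""
    return min(block_modes, key=lambda m: rd_cost(m[1], m[2], lam))

# Hypothetical numbers: interpolation costs almost no bits but more distortion.
modes = [("code_residual", 120.0, 2000), ("fruc_interpolate", 300.0, 8)]
print(choose_mode(modes, lam=0.1))  # lambda derived from the quantization parameter
```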
An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.
NASA Astrophysics Data System (ADS)
Tate, Ranjeet Shekhar
1992-01-01
General relativity has two features in particular, which make it difficult to apply to it existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non -canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact) general relativity, the Hamiltonian is constrained to vanish. I present various approaches one can take to obtain an interpretation of the quantum theory of such "dynamically constrained" systems. I apply some of these ideas to the Bianchi I cosmology, and analyze the issue of the initial singularity in quantum theory.
A hypercube compact neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostykus, P.L.; Somani, A.K.
1988-09-01
A major problem facing implementation of neural networks is the connection problem. One popular tradeoff is to remove connections. Random disconnection severely degrades the capabilities. The hypercube-based Compact Neural Network (CNN) has a structured architecture which, combined with a rearrangement of the memory vectors, gives a larger input space and more graceful degradation than a cost-equivalent network with more connections. The CNNs are based on a Hopfield network. The changes from the Hopfield net include states of -1 and +1; when a node evaluates to 0, it is not biased either positive or negative but instead resumes its previous state. Here L denotes the number of PEs, N the number of memories, and t_ij the weight between nodes i and j.
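The described update rule differs from a textbook Hopfield net only in how a zero net input is handled. A minimal sketch, with the weights t_ij built by the usual Hebbian rule:

```python
import numpy as np

def hopfield_step(state, weights):
    """Asynchronous Hopfield update with states in {-1, +1}; a node whose
    input sums to exactly 0 keeps its previous state instead of being biased."""
    s = state.copy()
    for i in np.random.permutation(len(s)):
        h = weights[i] @ s
        if h != 0:
            s[i] = 1 if h > 0 else -1   # h == 0 -> resume previous state
    return s

def store_memories(patterns):
    """Hebbian weights t_ij for the stored memory vectors (zero diagonal)."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

memories = [np.random.choice([-1, 1], 64) for _ in range(8)]  # N memories, L PEs
W = store_memories(memories)
recalled = hopfield_step(np.sign(np.random.randn(64)).astype(int), W)
```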
Lu, Wenlian; Zheng, Ren; Chen, Tianping
2016-03-01
In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
An efficient optical architecture for sparsely connected neural networks
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Downie, John D.; Reid, Max B.
1990-01-01
An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby making use of the space-bandwidth products of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on such updatable media as thermoplastic materials, in order to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.
A 64-channel ultra-low power system-on-chip for local field and action potentials recording
NASA Astrophysics Data System (ADS)
Rodríguez-Pérez, Alberto; Delgado-Restituto, Manuel; Darie, Angela; Soto-Sánchez, Cristina; Fernández-Jover, Eduardo; Rodríguez-Vázquez, Ángel
2015-06-01
This paper reports an integrated 64-channel neural recording sensor. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration mechanism which configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by an embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.
miR-137 forms a regulatory loop with nuclear receptor TLX and LSD1 in neural stem cells.
Sun, GuoQiang; Ye, Peng; Murai, Kiyohito; Lang, Ming-Fei; Li, Shengxiu; Zhang, Heying; Li, Wendong; Fu, Chelsea; Yin, Jason; Wang, Allen; Ma, Xiaoxiao; Shi, Yanhong
2011-11-08
miR-137 is a brain-enriched microRNA. Its role in neural development remains unknown. Here we show that miR-137 has an essential role in controlling embryonic neural stem cell fate determination. miR-137 negatively regulates cell proliferation and accelerates neural differentiation of embryonic neural stem cells. In addition, we show that the histone lysine-specific demethylase 1 (LSD1), a transcriptional co-repressor of nuclear receptor TLX, is a downstream target of miR-137. In utero electroporation of miR-137 in embryonic mouse brains led to premature differentiation and outward migration of the transfected cells. Introducing a LSD1 expression vector lacking the miR-137 recognition site rescued miR-137-induced precocious differentiation. Furthermore, we demonstrate that TLX, an essential regulator of neural stem cell self-renewal, represses the expression of miR-137 by recruiting LSD1 to the genomic regions of miR-137. Thus, miR-137 forms a feedback regulatory loop with TLX and LSD1 to control the dynamics between neural stem cell proliferation and differentiation during neural development.
Real-Time Adaptive Color Segmentation by Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2004-01-01
Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units. As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
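A minimal sketch of the constructive idea, not NASA's exact CEP formulas: each new hidden unit is trained by gradient descent on the quadratic error and then frozen, with its output weight set in closed form. The exact CEP output-weight expression involving the transfer-function derivatives is more involved than this residual-fitting simplification.

```python
import numpy as np

def cascade_train(X, Y, n_hidden, lr=0.1, epochs=200):
    """Constructive training in the spirit of cascade error projection:
    hidden units are added one at a time; each new unit's input weights are
    trained by gradient descent on the quadratic output error, then frozen.
    X: (n, d) inputs; Y: (n,) targets."""
    n, d = X.shape
    hidden_w, out_w = [], []
    residual = Y.astype(float).copy()       # error the next unit must reduce
    for _ in range(n_hidden):
        w = np.random.randn(d) * 0.1
        for _ in range(epochs):
            h = np.tanh(X @ w)              # candidate unit's activation
            a = (h @ residual) / (h @ h + 1e-9)  # optimal output scale
            # gradient of ||residual - a*h||^2 w.r.t. w (a held fixed)
            grad = -2 * a * ((residual - a * h) * (1 - h**2)) @ X
            w -= lr * grad / n
        h = np.tanh(X @ w)
        a = (h @ residual) / (h @ h + 1e-9)
        hidden_w.append(w)
        out_w.append(a)
        residual = residual - a * h         # freeze unit, project error forward
    return hidden_w, out_w

X = np.random.randn(200, 3)                  # e.g., RGB pixel values
Y = (X[:, 0] > X[:, 1]).astype(float)        # stand-in segmentation target
hw, ow = cascade_train(X, Y, n_hidden=5)
```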
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” is the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneuron may be a key element in our hypothesis providing the basis for this computation. PMID:27807408
Selective neuronal differentiation of neural stem cells induced by nanosecond microplasma agitation.
Xiong, Z; Zhao, S; Mao, X; Lu, X; He, G; Yang, G; Chen, M; Ishaq, M; Ostrikov, K
2014-03-01
An essential step for therapeutic and research applications of stem cells is their ability to differentiate into specific cell types. Neuronal cells are of great interest for medical treatment of neurodegenerative diseases and traumatic injuries of the central nervous system (CNS), but efforts to produce these cells have been met with only modest success. In an attempt to find new approaches, atmospheric-pressure room-temperature microplasma jets (MPJs) are shown to effectively direct in vitro differentiation of neural stem cells (NSCs) predominantly into the neuronal lineage. Murine neural stem cells (C17.2-NSCs) treated with MPJs exhibit rapid proliferation and differentiation with longer neurites and cell bodies eventually forming neuronal networks. MPJs regulate ~75% of NSCs to differentiate into neurons, which is a higher efficiency compared to common protein- and growth factor-based differentiation. NSC exposure to quantized and transient (~150 ns) micro-plasma bullets up-regulates expression of different cell lineage markers such as β-Tubulin III (for neurons) and O4 (for oligodendrocytes), while the expression of GFAP (for astrocytes) remains unchanged, as evidenced by quantitative PCR, immunofluorescence microscopy and Western Blot assay. It is shown that the plasma-increased nitric oxide (NO) production is a factor in the fate choice and differentiation of NSCs followed by axonal growth. The differentiated NSC cells matured and produced mostly cholinergic and motor neuronal progeny. It is also demonstrated that exposure of primary rat NSCs to the microplasma leads to quite similar differentiation effects. This suggests that the observed effect may potentially be generic and applicable to other types of neural progenitor cells. The application of this new in vitro strategy to selectively differentiate NSCs into neurons represents a step towards reproducible and efficient production of the desired NSC derivatives. Published by Elsevier B.V.
Quantization Distortion in Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Boden, A. F.
1995-01-01
The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
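A generic block transform model is easy to sketch: tile, transform, quantize, invert, and measure the quantization distortion. The following assumes an 8×8 DCT and a uniform quantizer step, JPEG-style:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_transform_quantize(image, block=8, step=16):
    """Generic block-transform coding model: tile the image into blocks,
    DCT-transform each, uniformly quantize the coefficients, and invert.
    Returns the reconstruction and the mean-squared quantization distortion."""
    h, w = image.shape
    recon = np.zeros_like(image, dtype=float)
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = image[r:r+block, c:c+block].astype(float)
            coef = dctn(blk, norm="ortho")
            q = np.round(coef / step) * step        # uniform quantizer
            recon[r:r+block, c:c+block] = idctn(q, norm="ortho")
    mse = np.mean((image.astype(float) - recon) ** 2)
    return recon, mse

img = np.random.randint(0, 256, (64, 64))
recon, mse = block_transform_quantize(img)
```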
Quantized impedance dealing with the damping behavior of the one-dimensional oscillator
NASA Astrophysics Data System (ADS)
Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping
2015-11-01
A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to the same as the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results show that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider an iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives for distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithm of the quantized posterior distribution on average, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and argue that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance as compared with typical designs and novel design techniques previously published.
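The design idea can be sketched on a discretized scalar posterior: the quantized posterior is constant on each cell, and the cell boundaries are refined cyclically, Lloyd-style, to minimize the KL divergence. This is an illustrative simplification of the paper's algorithm, with hypothetical function names.

```python
import numpy as np

def kl_of_partition(p, edges):
    """KL divergence between a discretized posterior p (on a uniform grid)
    and its quantized version, which is constant on each partition cell."""
    kl = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        cell = p[a:b]
        q = cell.mean()                   # cell-averaged (quantized) posterior
        kl += np.sum(cell * np.log((cell + 1e-12) / (q + 1e-12)))
    return kl

def design_quantizer(p, n_cells, iters=50):
    """Cyclic (Lloyd-style) coordinate descent on the cell boundaries."""
    n = len(p)
    edges = np.linspace(0, n, n_cells + 1).astype(int)
    for _ in range(iters):
        for k in range(1, n_cells):       # move one interior boundary at a time
            best = min(range(edges[k-1] + 1, edges[k+1]),
                       key=lambda e: kl_of_partition(
                           p, np.r_[edges[:k], e, edges[k+1:]]))
            edges[k] = best
    return edges

grid = np.linspace(-5, 5, 200)
p = np.exp(-0.5 * grid**2)
p /= p.sum()                              # stand-in posterior on the grid
edges = design_quantizer(p, n_cells=4)
```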
Quantization and Superselection Sectors I: Transformation Group C*-Algebras
NASA Astrophysics Data System (ADS)
Landsman, N. P.
Quantization is defined as the act of assigning an appropriate C*-algebra A to a given configuration space Q, along with a prescription mapping self-adjoint elements of A into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here A is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of A. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of A.
Light-cone quantization of two dimensional field theory in the path integral approach
NASA Astrophysics Data System (ADS)
Cortés, J. L.; Gamboa, J.
1999-05-01
A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x⁻ is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of the implementation of this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.
Relational symplectic groupoid quantization for constant Poisson structures
NASA Astrophysics Data System (ADS)
Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin
2017-09-01
As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full details. This allows focussing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.
NASA Astrophysics Data System (ADS)
Tavassoly, M. K.; Daneshmand, R.; Rustaee, N.
2018-06-01
In this paper we study the linear and nonlinear (intensity-dependent) interactions of two two-level atoms with a single-mode quantized field far from resonance, while the phase-damping effect is also taken into account. To find the analytical solution of the atom-field state vector corresponding to the considered model, after deducing the effective Hamiltonian we evaluate the time-dependent elements of the density operator using the master equation approach and the superoperator method. Consequently, we are able to study the influences of the special nonlinearity function f(n) = √n, the intensity of the initial coherent state field, and the phase-damping parameter on the degree of entanglement of the whole system as well as of the field and the atoms. It is shown that in the presence of damping, as time passes, the amount of entanglement of each subsystem with the rest of the system asymptotically reaches its stationary and maximum value. Also, the nonlinear interaction does not have any effect on the entanglement of one of the atoms with the rest of the system, but it changes the amplitude and time period of the entanglement oscillations of the field and the other atom. Moreover, the degree of entanglement, which may be low (high) at some moments of time, may become high (low) when the intensity-dependent function enters the atom-field coupling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindstrom, P; Cohen, J D
We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
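A minimal sketch of the per-patch scheme: a linear predictor, residual coding, and optional per-patch uniform quantization of the residuals for the lossy variant. The MED-like predictor is an assumption; note that a real lossy codec predicts from reconstructed values to avoid drift.

```python
import numpy as np

def encode_patch(height, step=None):
    """Per-patch terrain coding: linear (left + top - topleft) prediction,
    then residual coding; optional uniform quantization of the residuals
    gives the lossy variant (step=None means lossless)."""
    h = height.astype(np.int64)
    pred = np.zeros_like(h)
    pred[1:, 1:] = h[1:, :-1] + h[:-1, 1:] - h[:-1, :-1]  # MED-like predictor
    pred[0, 1:] = h[0, :-1]
    pred[1:, 0] = h[:-1, 0]
    residual = h - pred
    if step:  # per-patch adaptive quantization (simplified: open loop)
        residual = np.round(residual / step).astype(np.int64) * step
    return residual

patch = np.cumsum(np.cumsum(np.random.randint(-2, 3, (17, 17)), 0), 1)
res_lossless = encode_patch(patch)
res_lossy = encode_patch(patch, step=4)
```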
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barone, Fiorella; Graffi, Sandro
We consider on $L^2(\mathbb{T}^2)$ the Schrödinger operator family $\{L_\varepsilon : \varepsilon \in \mathbb{R}\}$ with domain and action defined as $D(L_\varepsilon) = H^2(\mathbb{T}^2)$, $L_\varepsilon u = -\tfrac{1}{2}\hbar^2(\alpha_1 \partial_{\varphi_1}^2 + \alpha_2 \partial_{\varphi_2}^2)u - i\hbar(\gamma_1 \partial_{\varphi_1} + \gamma_2 \partial_{\varphi_2})u + \varepsilon V u$. Here $\varepsilon \in \mathbb{R}$, $\alpha = (\alpha_1, \alpha_2)$ and $\gamma = (\gamma_1, \gamma_2)$ are vectors of complex non-real frequencies, and $V$ is a pseudodifferential operator of order zero. $L_\varepsilon$ represents the Weyl quantization of the Hamiltonian family $L_\varepsilon(\xi, x) = \tfrac{1}{2}(\alpha_1 \xi_1^2 + \alpha_2 \xi_2^2) + \gamma_1 \xi_1 + \gamma_2 \xi_2 + \varepsilon V(\xi, x)$ defined on the phase space $\mathbb{R}^2 \times \mathbb{T}^2$, where $V(\xi, x) \in C^2(\mathbb{R}^2 \times \mathbb{T}^2; \mathbb{R})$. We prove the uniform convergence with respect to $\hbar \in [0, 1]$ of the quantum normal form, which reduces to the classical one for $\hbar = 0$. This result simultaneously entails an exact quantization formula for the quantum spectrum as well as a convergence criterion for the classical Birkhoff normal form, generalizing a well-known theorem of Cherry.
Lidar detection of underwater objects using a neuro-SVM-based architecture.
Mitra, Vikramjit; Wang, Chia-Jiu; Banerjee, Satarupa
2006-05-01
This paper presents a neural network architecture using a support vector machine (SVM) as an inference engine (IE) for the classification of light detection and ranging (Lidar) data. Lidar data gives a sequence of laser backscatter intensities obtained from laser shots generated from an airborne object at various altitudes above the earth's surface. The Lidar data is pre-filtered to remove high-frequency noise. Because the Lidar shots are taken from above the earth's surface, the data contain air backscatter information, which is of no use for detecting underwater objects; this information is therefore eliminated, and a segment of the remaining data is selected to extract features for classification. This segment is then encoded using linear predictive coding (LPC) and polynomial approximation. The coefficients thus generated are used as inputs to the two branches of a parallel neural architecture. The decisions obtained from the two branches are vector multiplied, and the result is fed to an SVM-based IE that produces the final inference. Two parallel neural architectures, using a multilayer perceptron (MLP) and a hybrid radial basis function (HRBF), are considered in this paper. The proposed structure fits the Lidar data classification task well, owing to the inherent classification efficiency of neural networks and the accurate decision-making capability of SVMs. A Bayesian classifier and a quadratic classifier were also considered for the Lidar data classification task, but they failed to offer high prediction accuracy; a single-layered artificial neural network (ANN) classifier likewise failed to offer good accuracy. The parallel ANN architecture proposed in this paper offers high prediction accuracy (98.9%) and is found to be the most suitable architecture for the proposed task of Lidar data classification.
Some new classification methods for hyperspectral remote sensing
NASA Astrophysics Data System (ADS)
Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia
2006-10-01
Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level, and decision-level fusion, are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. To advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P
2017-08-14
The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest-neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other, more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
Evaluation of helper-dependent canine adenovirus vectors in a 3D human CNS model
Simão, Daniel; Pinto, Catarina; Fernandes, Paulo; Peddie, Christopher J.; Piersanti, Stefania; Collinson, Lucy M.; Salinas, Sara; Saggio, Isabella; Schiavo, Giampietro; Kremer, Eric J.; Brito, Catarina; Alves, Paula M.
2017-01-01
Gene therapy is a promising approach with enormous potential for treatment of neurodegenerative disorders. Viral vectors derived from canine adenovirus type 2 (CAV-2) present attractive features for gene delivery strategies in the human brain: they preferentially transduce neurons, are capable of efficient axonal transport to afferent brain structures, have a 30-kb cloning capacity, and have low innate and induced immunogenicity in pre-clinical tests. For clinical translation, in-depth pre-clinical evaluation of efficacy and safety in a human setting is primordial. Stem cell-derived human neural cells have great potential as complementary tools by bridging the gap between animal models, which often diverge considerably from the human phenotype, and clinical trials. Herein, we explore helper-dependent CAV-2 (hd-CAV-2) efficacy and safety for gene delivery in a human stem cell-derived 3D neural in vitro model. Assessment of hd-CAV-2 vector efficacy was performed at different multiplicities of infection, by evaluating transgene expression and impact on cell viability, ultrastructural cellular organization, and neuronal gene expression. Under optimized conditions, hd-CAV-2 transduction led to stable long-term transgene expression with minimal toxicity. hd-CAV-2 preferentially transduced neurons, while human adenovirus type 5 (HAdV5) showed increased tropism towards glial cells. This work demonstrates, in a physiologically relevant 3D model, that hd-CAV-2 vectors are efficient tools for gene delivery to human neurons, with stable long-term transgene expression and minimal cytotoxicity. PMID:26181626
Dengue virus type 2: replication and tropisms in orally infected Aedes aegypti mosquitoes.
Salazar, Ma Isabel; Richardson, Jason H; Sánchez-Vargas, Irma; Olson, Ken E; Beaty, Barry J
2007-01-30
To be transmitted by its mosquito vector, dengue virus (DENV) must infect midgut epithelial cells, replicate and disseminate into the hemocoel, and finally infect the salivary glands, which is essential for transmission. The extrinsic incubation period (EIP), the time required from ingestion of the virus until it can be transmitted to the next vertebrate host, is epidemiologically very relevant and is conditioned by the kinetics and tropisms of virus replication in the vector. Here we document the virogenesis of DENV-2 in newly colonized Aedes aegypti mosquitoes from Chetumal, Mexico, in order to better understand the effect of vector-virus interactions on dengue transmission. After ingestion of DENV-2, midgut infections in Chetumal mosquitoes were characterized by a peak in virus titers between 7 and 10 days post-infection (dpi). The amount of viral antigen and viral titers in the midgut then declined, but viral RNA levels remained stable. The presence of DENV-2 antigen in the trachea was positively correlated with virus dissemination from the midgut. DENV-2 antigen was found in salivary gland tissue in more than a third of mosquitoes at 4 dpi. Unlike in the midgut, the amount of viral antigen (as well as the percentage of infected salivary glands) increased with time. DENV-2 antigen also accumulated and increased in neural tissue throughout the EIP. DENV-2 antigen was detected in multiple tissues of the vector but, unlike some other arboviruses, was not detected in muscle. Our results suggest that the EIP of DENV-2 in its vector may be shorter than previously reported and that the tracheal system may facilitate DENV-2 dissemination from the midgut. Mosquito organs (e.g. midgut, neural tissue, and salivary glands) differed in their response to DENV-2 infection.
Vector-based navigation using grid-like representations in artificial agents.
Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan
2018-05-01
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.
2006-09-15
Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependence of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and the time dependence of the trapping potential alone suffices to split the doubly quantized vortex on time scales that are in good agreement with the experiments.
Response of two-band systems to a single-mode quantized field
NASA Astrophysics Data System (ADS)
Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.
2016-03-01
The response of topological insulators (TIs) to a weak external classical field can be expressed in terms of the Kubo formula, which predicts the quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to obtain an estimate of the sender's state. Quantized communication is incorporated into the iterative learning process through this encoding and decoding. A sufficient condition is then presented for achieving consensus tracking over a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
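As a rough illustration of the encode-transmit-decode round described in this abstract, the sketch below uses a plain uniform quantizer as a stand-in for the paper's (unspecified) symbolic encoding scheme; the step size and function names are illustrative only.

```python
import numpy as np

STEP = 0.1  # quantizer resolution (illustrative)

def encode(x, step=STEP):
    """Sender side: map a real-valued state to an integer symbol."""
    return int(np.round(x / step))

def decode(symbol, step=STEP):
    """Receiver side: reconstruct an estimate of the sender's state."""
    return symbol * step

# One communication round over the digital network
state = 3.14159
symbol = encode(state)       # transmitted as symbolic (integer) data
estimate = decode(symbol)    # receiver's estimate of the sender's state
print(symbol, estimate)      # 31 3.1 -- the error is bounded by step/2
```

In the paper this quantized channel sits inside an iterative learning loop, so the decoded estimate feeds the learning controller rather than being used directly.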
Universe creation from the third-quantized vacuum
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuigan, M.
1989-04-15
Third quantization leads to a Hilbert space containing both a third-quantized vacuum, in which no universes are present, and multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed in both the path-integral and operator formalisms.
4D Sommerfeld quantization of the complex extended charge
NASA Astrophysics Data System (ADS)
Bulyzhenkov, Igor E.
2017-12-01
Gravitational fields and accelerations cannot change the quantized magnetic flux in closed line contours, due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which quantitatively introduces the fine structure constant and the Planck length.
The coordinate coherent states approach revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, Yan-Gang, E-mail: miaoyg@nankai.edu.cn; Zhang, Shao-Jun, E-mail: sjzhang@mail.nankai.edu.cn
2013-02-15
We revisit the coordinate coherent states approach through two different quantization procedures in quantum field theory on the noncommutative Minkowski plane. The first procedure, which is based on the normal commutation relation between annihilation and creation operators, deduces that a point mass can be described by a Gaussian function instead of the usual Dirac delta function. We question this specific quantization by adopting the canonical one (based on the canonical commutation relation between a field and its conjugate momentum) and show that a point mass should still be described by the Dirac delta function, which implies that the concept of point particles remains valid when we deal with noncommutativity by following the coordinate coherent states approach. In order to investigate the dependence on quantization procedures, we apply the two quantization procedures to the Unruh effect and Hawking radiation and find that they give rise to significantly different results. Under the first quantization procedure, the Unruh temperature and Unruh spectrum are not deformed by noncommutativity, but the Hawking temperature is deformed by noncommutativity while the radiation spectrum is intact. Under the second quantization procedure, however, the Unruh temperature and Hawking temperature are intact but both spectra are modified by an effective (deformed) greybody factor. Highlights: we suggest a canonical quantization in the coordinate coherent states approach; prove the validity of the concept of point particles; apply the canonical quantization to the Unruh effect and Hawking radiation; find no deformations of the Unruh and Hawking temperatures; and provide the modified spectra of the Unruh effect and Hawking radiation.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without the Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove the wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For overall QBCS reconstruction performance, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms; it obtains better reconstruction quality at a relatively moderate computational cost, which makes it desirable for aerial imagery applications.
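The entropy-aware thresholding step operates on wavelet coefficients through a soft-thresholding rule. The minimal sketch below shows only the shrinkage operator itself; in QBCS-EPL the threshold is tied to entropy-based bitrates of the quantizer, whereas here tau is a fixed illustrative value.

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Soft-thresholding (shrinkage) operator for wavelet-domain denoising:
    pull every coefficient toward zero by tau and zero out the small ones."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

# Toy wavelet-domain coefficients
w = np.array([-3.2, -0.4, 0.05, 0.9, 2.7])
print(soft_threshold(w, tau=0.5))  # large coeffs shrink by 0.5; small ones become 0
```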
Hao, Li-Ying; Park, Ju H; Ye, Dan
2017-09-01
In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. The approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding-mode stability analysis; the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization-parameter adjustment strategy is then proposed. By utilizing the H∞ control analytical technique, once the system is in the sliding mode, disturbance attenuation and fault tolerance from the initial time can be established without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
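Under high-rate assumptions, the first-stage problem (minimize total quantization distortion for a fixed bit budget) has the classical Lagrange-multiplier solution b_i = B/N + 0.5·log2(σ_i² / geometric mean of the variances). The sketch below implements that closed form, with the KKT non-negativity handling simplified to iterative clipping; the variances and bit budget are illustrative, not values from the paper.

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """High-rate optimal bit allocation across sub-bands:
    b_i = B/n + 0.5*log2(var_i / geometric_mean(vars)).
    Sub-bands whose allocation turns negative are dropped (given 0 bits)
    and the budget is re-split among the rest -- a simplified KKT step."""
    active = np.ones(len(variances), dtype=bool)
    bits = np.zeros(len(variances))
    while True:
        v = variances[active]
        alloc = total_bits / active.sum() + 0.5 * np.log2(v / np.exp(np.log(v).mean()))
        if (alloc >= 0).all():
            bits[active] = alloc
            return bits
        active[np.where(active)[0][alloc < 0]] = False

# Wavelet sub-band variances of the sensed measurements (illustrative)
print(allocate_bits(np.array([10.0, 4.0, 1.0, 0.1]), total_bits=8.0))
```

A real coder would round the allocations to integers; the second-stage power allocation across bit layers follows the same Lagrangian pattern, weighted by the error sensitivity of each layer.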
Factor analysis of auto-associative neural networks with application in speaker verification.
Garimella, Sri; Hermansky, Hynek
2013-04-01
An auto-associative neural network (AANN) is a fully connected feed-forward neural network, trained to reconstruct its input at its output through a hidden compression layer, which has fewer nodes than the dimensionality of the input. AANNs are used to model speakers in speaker verification, where a speaker-specific AANN model is obtained by adapting (or retraining) the universal background model (UBM) AANN, an AANN trained on multiple held-out speakers, using the corresponding speaker's data. When the amount of speaker data is limited, this adaptation procedure may lead to overfitting, since all the parameters of the UBM-AANN are adapted. In this paper, we introduce and develop a factor analysis theory of AANNs to alleviate this problem. We hypothesize that only the weight matrix connecting the last nonlinear hidden layer and the output layer is speaker-specific, and further restrict it to a common low-dimensional subspace during adaptation. The subspace is learned using large amounts of development data and is held fixed during adaptation. Thus, only the coordinates in the subspace, also known as an i-vector, need to be estimated using speaker-specific data. The update equations are derived for learning both the common low-dimensional subspace and the i-vectors corresponding to speakers in the subspace. The resulting i-vector representation is used as a feature for the probabilistic linear discriminant analysis model. The proposed system shows promising results on the NIST-08 speaker recognition evaluation (SRE), and yields a 23% relative improvement in equal error rate over the previously proposed weighted-least-squares-based subspace AANN system. Experiments on NIST-10 SRE confirm that these improvements are consistent and generalize across datasets.
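The subspace-restricted adaptation can be pictured with a linear least-squares stand-in for the paper's update equations: hold the UBM output weights and the common subspace fixed, and estimate only the low-dimensional coordinate vector (the i-vector) from speaker data. All matrices and sizes below are toy stand-ins, and the basis T is random rather than learned from development data.

```python
import numpy as np

rng = np.random.default_rng(0)
h, d, q, n = 8, 5, 3, 50     # hidden dim, output dim, subspace dim, speaker frames

W0 = rng.standard_normal((h, d))      # output weights of the UBM-AANN
T = rng.standard_normal((h * d, q))   # common low-dimensional subspace (held fixed)
H = rng.standard_normal((n, h))       # last hidden-layer activations on speaker data
X = rng.standard_normal((n, d))       # reconstruction targets (the inputs themselves)

# Restrict adaptation to the subspace: vec(W) = vec(W0) + T @ w, so only the
# q-dimensional w is estimated. Least squares over the reconstruction residual:
A = np.kron(np.eye(d), H) @ T                 # maps w to vec(H @ dW) (column-major vec)
r = (X - H @ W0).flatten(order="F")           # residual left by the UBM weights
w, *_ = np.linalg.lstsq(A, r, rcond=None)     # the "i-vector" for this speaker
W = W0 + (T @ w).reshape(h, d, order="F")     # adapted, speaker-specific weights
print(w)                                      # w itself serves as the speaker feature
```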
Monthly evaporation forecasting using artificial neural networks and support vector machines
NASA Astrophysics Data System (ADS)
Tezel, Gulay; Buyukyildiz, Meral
2016-04-01
Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate, due to its complexity and the numerous factors that can influence it. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (SVR) methods was investigated for estimating monthly pan evaporation. To this end, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results, and both were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1), with R² = 0.905.
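The evaluation protocol (10-fold CV, R² scoring, MLP vs ɛ-SVR) can be reproduced in outline with scikit-learn, as sketched below on synthetic stand-in data; the actual study used Beysehir station records and training algorithms (GDX, LVM, SCG, RBP) that scikit-learn's MLP does not expose, so this shows only the shape of the comparison.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stand-in data: temperature, relative humidity, wind speed, precipitation
# as inputs, monthly pan evaporation as output (408 months ~ 34 years)
rng = np.random.default_rng(1)
X = rng.standard_normal((408, 4))
y = X @ np.array([1.5, -0.8, 0.6, -0.3]) + 0.1 * rng.standard_normal(408)

cv = KFold(n_splits=10, shuffle=True, random_state=1)
models = {
    "MLP (two small hidden layers)": make_pipeline(
        StandardScaler(), MLPRegressor((2, 2), max_iter=5000, random_state=1)),
    "eps-SVR": make_pipeline(StandardScaler(), SVR(epsilon=0.1)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```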
Induction of neural differentiation by electrically stimulated gene expression of NeuroD2.
Mie, Masayasu; Endoh, Tamaki; Yanagida, Yasuko; Kobatake, Eiry; Aizawa, Masuo
2003-02-13
Regulation of cell differentiation is an important task in cellular engineering. One technique for such regulation is gene transfection into undifferentiated cells. Transient expression of NeuroD2, one of the neural bHLH transcription factors, converted mouse N1E-115 neuroblastoma cells into differentiated neurons. The regulation of neural bHLH expression should therefore be a novel strategy for controlling cell differentiation. In this study, we attempted to regulate neural differentiation with a NeuroD2 gene inserted under the control of the heat shock protein-70 (HSP) promoter, which can be activated by electrical stimulation. The mouse neuroblastoma cell line N1E-115 was stably transfected with an expression vector containing mouse NeuroD2 cDNA under the HSP promoter. Transfected cells were cultured on an electrode surface, and electrical stimulation was applied. After stimulation, NeuroD2 expression was induced, and transfected cells adopted a neuronal morphology 3 days after stimulation. These results suggest that neural differentiation can be induced by electrically stimulated gene expression of NeuroD2.
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, with the RBF neural network representing the transformed system output. Initially, a fixed, moderate-sized RBF model base is derived via a rank-revealing orthogonal matrix triangularization (QR decomposition). A new fast identification algorithm is then introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix decomposition inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example, in comparison with support vector machine regression.
Neural network system for purposeful behavior based on foveal visual preprocessor
NASA Astrophysics Data System (ADS)
Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.
1996-10-01
A biologically plausible model of a system with adaptive behavior in an a priori environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with an associative rule. The second, a neural network, determines adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The developed system has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, in spite of impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested to form a kind of visual input to the system.
Sinkiewicz, Daniel; Friesen, Lendra; Ghoraani, Behnaz
2017-02-01
Cortical auditory evoked potentials (CAEPs) are used to evaluate the auditory pathways of cochlear implant (CI) patients, but the CI device produces an electrical artifact that obscures the relevant information in the neural response. Multiple methods currently attempt to recover the neural response from the contaminated CAEP, but there is no gold standard that can quantitatively confirm their effectiveness. To address this crucial shortcoming, we develop a wavelet-based method to quantify the amount of artifact energy in the neural response. In addition, a novel technique for extracting the neural response from single-channel CAEPs is proposed. The new method uses matching pursuit (MP)-based feature extraction to represent the contaminated CAEP in a feature space, and support vector machines (SVMs) to classify the components as normal hearing (NH) or artifact. The NH components are combined to recover the neural response without artifact energy, as verified using the evaluation tool. Although it needs further evaluation, this approach is a promising method of electrical artifact removal from CAEPs.
Zimmermann, Karel; Gibrat, Jean-François
2010-01-04
Sequence comparisons make use of a one-letter representation for amino acids, the necessary quantitative information being supplied by substitution matrices. This paper deals with the problem of finding a representation that provides a comprehensive description of amino acid intrinsic properties consistent with the substitution matrices. We present a Euclidean vector representation of the amino acids, obtained by the singular value decomposition of the substitution matrices. The substitution matrix entries correspond to the dot products of amino acid vectors. We apply this vector encoding to the study of the relative importance of various amino acid physicochemical properties for the substitution matrices. We also characterize and compare the PAM and BLOSUM series of substitution matrices. This vector encoding introduces a Euclidean metric in the amino acid space that is consistent with the substitution matrices. Such a numerical description of the amino acids is useful when intrinsic properties of amino acids are needed, for instance for building sequence profiles or finding consensus sequences using machine learning algorithms such as Support Vector Machine and Neural Network algorithms.
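The core construction is easy to state concretely: factor the substitution matrix with an SVD and scale the singular vectors so that dot products of the resulting amino acid vectors reproduce the matrix entries. A minimal sketch on a toy matrix (real BLOSUM/PAM matrices are 20x20):

```python
import numpy as np

# Toy symmetric "substitution matrix" standing in for BLOSUM/PAM entries
S = np.array([[ 4., -1., -2.,  0.],
              [-1.,  5., -3.,  1.],
              [-2., -3.,  6., -1.],
              [ 0.,  1., -1.,  4.]])

# SVD: S = U diag(s) V^T. Rows of U*sqrt(s) and V*sqrt(s) give vector
# representations whose dot products recover the substitution scores.
U, s, Vt = np.linalg.svd(S)
left = U * np.sqrt(s)
right = Vt.T * np.sqrt(s)
print(np.allclose(left @ right.T, S))   # True: S_ij = left_i . right_j
```

Truncating to the leading singular values then yields the low-dimensional Euclidean encoding used to compare matrices and to feed machine learning algorithms.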
Differential Effects of AAV.BDNF and AAV.Ntf3 in the Deafened Adult Guinea Pig Ear
Budenz, Cameron L.; Wong, Hiu Tung; Swiderski, Donald L.; Shibata, Seiji B.; Pfingst, Bryan E.; Raphael, Yehoash
2015-01-01
Cochlear hair cell loss results in secondary regression of peripheral auditory fibers (PAFs) and loss of spiral ganglion neurons (SGNs). The performance of cochlear implants (CI) in rehabilitating hearing depends on the survival of SGNs. Here we compare the effects of adeno-associated virus vectors with neurotrophin gene inserts, AAV.BDNF and AAV.Ntf3, on guinea pig ears deafened systemically (kanamycin and furosemide) or locally (neomycin). AAV.BDNF or AAV.Ntf3 was delivered to the guinea pig cochlea one week following deafening, and ears were assessed morphologically 3 months later. At that time, neurotrophin levels were not significantly elevated in the cochlear fluids, even though in vitro and shorter-term in vivo experiments demonstrate robust elevation of neurotrophins with these viral vectors. Nevertheless, animals receiving these vectors exhibited considerable re-growth of PAFs in the basilar membrane area. In systemically deafened animals there was a negative correlation between the presence of differentiated supporting cells and PAFs, suggesting that supporting cells influence the outcome of neurotrophin over-expression aimed at enhancing the cochlear neural substrate. Counts of SGNs in Rosenthal's canal indicate that BDNF was more effective than NT-3 in preserving SGNs. The results demonstrate that a transient elevation in neurotrophin levels can sustain the cochlear neural substrate in the long term. PMID:25726967
Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.
2013-01-01
Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a more recently developed machine learning method, the Support Vector Machine (SVM). A hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
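A minimal sketch of the GA-wrapped SVM idea, using scikit-learn on synthetic data: each chromosome is a binary mask over the input variables, and its fitness is the cross-validated SVM accuracy on the selected columns. The population size, generation count, and mutation rate are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=12,
                           n_informative=4, random_state=0)

def fitness(mask):
    """GA fitness: cross-validated SVM accuracy on the selected variables."""
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

pop = rng.random((20, X.shape[1])) < 0.5           # random binary masks
for _ in range(15):                                # generational loop
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.where(best)[0])
```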
Predicting healthcare associated infections using patients' experiences
NASA Astrophysics Data System (ADS)
Pratt, Michael A.; Chu, Henry
2016-05-01
Healthcare associated infections (HAI) are a major threat to patient safety and are costly to health systems. Our goal is to predict the HAI performance of a hospital using patients' experience responses as input. We use four classifiers, viz. random forest, naive Bayes, artificial feedforward neural networks, and the support vector machine, to predict six types of HAI, including bloodstream, urinary tract, surgical site, and intestinal infections. Experiments show that the random forest and the support vector machine perform well across all six types of HAI.
Associative memory - An optimum binary neuron representation
NASA Technical Reports Server (NTRS)
Awwal, A. A.; Karim, M. A.; Liu, H. K.
1989-01-01
The convergence mechanism of vectors in Hopfield's neural network is studied in terms of both weights (i.e., inner products) and Hamming distance. It is shown that Hamming distance should not always be used to determine the convergence of vectors. Instead, weights (which in turn depend on the neuron representation) are found to play a more dominant role in the convergence mechanism. Consequently, a new binary neuron representation for associative memory is proposed. With the new neuron representation, the associative memory responds unambiguously to partial input when retrieving the stored information.
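The role of the neuron representation is easy to demonstrate: with bipolar (-1/+1) coding, "off" neurons still contribute to the inner products that drive convergence, unlike 0/1 coding, where they vanish from the weighted sums. A minimal Hopfield sketch with toy patterns (not the representation proposed in the paper):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights with zeroed self-connections."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[1]

def recall(W, x, steps=10):
    """Synchronous sign updates; settles onto a stored pattern."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])   # bipolar stored vectors
W = train_hopfield(patterns)
cue = np.array([1, -1, 1, 1, 1, -1])           # partial/corrupted input
print(recall(W, cue))                          # recovers [ 1 -1  1 -1  1 -1]
```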
Adaptive filtering with the self-organizing map: a performance comparison.
Barreto, Guilherme A; Souza, Luís Gustavo M
2006-01-01
In this paper we provide an in-depth evaluation of the self-organizing map (SOM) as a feasible tool for nonlinear adaptive filtering. A comprehensive survey of existing SOM-based and related architectures for learning input-output mappings is carried out, and the application of these architectures to nonlinear adaptive filtering is formulated. We then introduce two simple procedures for building RBF-based nonlinear filters using the Vector-Quantized Temporal Associative Memory (VQTAM), a recently proposed method for learning dynamical input-output mappings using the SOM. The aforementioned SOM-based adaptive filters are compared with standard FIR/LMS and FIR/LMS-Newton linear transversal filters, as well as with powerful MLP-based filters, in nonlinear channel equalization and inverse modeling tasks. The results obtained in both tasks indicate that SOM-based filters can consistently outperform powerful MLP-based ones.
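For reference, the FIR/LMS transversal filter used as the linear baseline is a few lines of NumPy; the sketch below runs it on a toy system-identification task (signal lengths, step size, and tap count are illustrative).

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """FIR/LMS transversal filter: adapt tap weights w so that the filter
    output tracks the desired signal d from the input signal x."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # most recent sample first
        y[k] = w @ u
        w += mu * (d[k] - y[k]) * u         # stochastic-gradient update
    return y, w

# Toy task: identify an unknown 3-tap FIR channel
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.8, -0.3, 0.1])[:len(x)]
_, w = lms_filter(x, d)
print(np.round(w, 2))   # approaches [ 0.8 -0.3  0.1  0. ]
```

The SOM/VQTAM filters studied in the paper replace this single global linear regressor with a codebook of local models selected by vector quantization.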
Analysis of the Westland Data Set
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2001-01-01
The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters call improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior oil training data and is thus able to quantify probability of error in all exact manner, such that features may be discarded or coarsened appropriately.
Medical Image Retrieval Using Multi-Texton Assignment.
Tang, Qiling; Yang, Jirong; Xia, Xianfu
2018-02-01
In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes a locality constraint to encode each filter-bank response within its local coordinate system, consisting of the k nearest neighbors in the texton dictionary, and subsequently employs the spatial pyramid matching technique for feature vector representation. Compared with the traditional nearest-neighbor assignment followed by texton histogram statistics, our strategy reduces the quantization error of the mapping process and adds information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose appropriate ones for our datasets, and carry out experiments on the IRMA-2009 medical collection and a mammographic patch dataset. Extensive experimental results demonstrate that the proposed method has superior performance.
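This kind of locality-constrained encoding admits a small analytic solution (in the style of locality-constrained linear coding): project each filter-bank response onto its k nearest textons by solving a regularized least-squares problem with a sum-to-one constraint. The sketch below uses random stand-in data and an illustrative regularizer, not the paper's exact formulation.

```python
import numpy as np

def llc_encode(x, textons, k=5, lam=1e-4):
    """Encode x over its k nearest textons (its local coordinate system)
    via the analytic locality-constrained least-squares solution,
    instead of a hard nearest-neighbor assignment."""
    idx = np.argsort(np.linalg.norm(textons - x, axis=1))[:k]
    B = textons[idx] - x                  # shifted local basis
    C = B @ B.T                           # local covariance
    c = np.linalg.solve(C + lam * np.trace(C) * np.eye(k), np.ones(k))
    code = np.zeros(len(textons))
    code[idx] = c / c.sum()               # enforce the sum-to-one constraint
    return code

rng = np.random.default_rng(4)
textons = rng.standard_normal((64, 8))    # toy dictionary: 64 textons, 8-dim
response = rng.standard_normal(8)         # one filter-bank response
print(np.nonzero(llc_encode(response, textons))[0])   # 5 active textons
```

Pooling these sparse codes over a spatial pyramid then yields the final image feature vector.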
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
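The quantization step this invention builds on is the standard DCT-domain one: divide each 8x8 block's DCT coefficients by a quantization matrix and round. The sketch below uses a simple hypothetical matrix that grows with frequency (not the adaptive, perceptually optimized matrix of the invention) to show where that matrix enters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, Q):
    """Quantize an 8x8 block in the DCT domain: larger Q entries discard
    more of the (less visible) high-frequency content."""
    coeffs = dctn(block, norm="ortho")
    levels = np.round(coeffs / Q)            # the lossy, bit-rate-setting step
    return idctn(levels * Q, norm="ortho")   # reconstructed pixels

# Hypothetical quantization matrix: coarser steps at higher frequencies
u, v = np.meshgrid(np.arange(8), np.arange(8))
Q = 8.0 + 4.0 * (u + v)

block = np.outer(np.linspace(50, 200, 8), np.ones(8))   # smooth toy block
print(np.abs(quantize_block(block, Q) - block).max())    # small for smooth data
```

Adapting Q per image via luminance/contrast masking and error pooling is what yields the minimum-perceptual-error property claimed.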
Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity
NASA Astrophysics Data System (ADS)
Veraguth, Olivier J.; Wang, Charles H.-T.
2017-10-01
Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization, including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry allowing the reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing, as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.
Tribology of the lubricant quantized sliding state.
Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio
2009-11-07
In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary-lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered and shown to be robust against thermal fluctuations and quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, which sets the width of the propagating solitonic structures, is found to play a major role in promoting the in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs when the solitons are driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speeds.