Sample records for quantized DCT coefficients

  1. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
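
    As a rough, hypothetical illustration of the display-dependent quantities such a model takes as input, the Python sketch below (assuming NumPy; the pixel pitch, viewing distance, and helper name are illustrative choices, not the authors' model) converts display parameters into the radial spatial frequency of each 8x8 DCT basis function, the quantity from which frequency-dependent visibility thresholds, and hence quantization steps, would be derived.

      import numpy as np

      def dct_basis_frequencies(block_size=8, dot_pitch_mm=0.25, viewing_distance_mm=600.0):
          """Radial spatial frequency (cycles/degree) of each DCT basis function.

          Hypothetical helper: pixel pitch and viewing distance give pixels per degree of
          visual angle; basis index k of an N-point DCT then spans k/(2N) cycles per pixel.
          """
          pixels_per_degree = viewing_distance_mm * np.tan(np.radians(1.0)) / dot_pitch_mm
          f_1d = np.arange(block_size) / (2.0 * block_size) * pixels_per_degree  # cycles/degree
          fu, fv = np.meshgrid(f_1d, f_1d, indexing="ij")
          return np.sqrt(fu**2 + fv**2)

      if __name__ == "__main__":
          print(np.round(dct_basis_frequencies(), 2))  # the (0, 0) entry is DC, i.e. zero frequency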

  2. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
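
    The threshold-scaling and nonlinear pooling step can be sketched in a few lines (a minimal illustration in Python with NumPy/SciPy; the flat threshold matrix, the pooling exponent beta = 4, and the synthetic blocks are placeholders, and the light-adaptation and contrast-masking adjustments described above are omitted). Scaling the quantization matrix up or down trades the pooled perceptual error against bit rate, which is the trade-off the image-adaptive optimization searches over.

      import numpy as np
      from scipy.fft import dctn

      def pooled_perceptual_error(blocks, Q, T, beta=4.0):
          """Quantize the DCT of each 8x8 block with matrix Q, scale the errors by the
          visibility thresholds T, and pool them nonlinearly (Minkowski sum) over the image."""
          coeffs = np.stack([dctn(b, norm="ortho") for b in blocks])
          quantized = np.round(coeffs / Q) * Q
          scaled_err = np.abs(coeffs - quantized) / T        # errors in threshold units
          return float((scaled_err**beta).sum() ** (1.0 / beta))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          blocks = rng.normal(128, 30, size=(16, 8, 8))      # stand-in 8x8 image blocks
          T = np.full((8, 8), 2.0)                           # placeholder visibility thresholds
          for scale in (0.5, 1.0, 2.0):                      # coarser matrix -> larger pooled error
              Q = np.full((8, 8), 16.0) * scale
              print(scale, round(pooled_perceptual_error(blocks, Q, T), 2))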

  3. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  4. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93) but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
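
    A minimal way to compare a default table against a customized one is to measure a compression/quality proxy for each, as in the sketch below (Python with NumPy/SciPy; the familiar JPEG Annex K luminance table serves only as the default baseline, the halved "custom" table is a toy stand-in for a data-specific design, and the count of nonzero quantized coefficients is a crude substitute for the true entropy-coded bit rate).

      import numpy as np
      from scipy.fft import dctn, idctn

      # The familiar JPEG luminance table (Annex K), used here only as the default baseline.
      JPEG_LUMA_Q = np.array([
          [16, 11, 10, 16, 24, 40, 51, 61],
          [12, 12, 14, 19, 26, 58, 60, 55],
          [14, 13, 16, 24, 40, 57, 69, 56],
          [14, 17, 22, 29, 51, 87, 80, 62],
          [18, 22, 37, 56, 68, 109, 103, 77],
          [24, 35, 55, 64, 81, 104, 113, 92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

      def rate_distortion_proxy(blocks, Q):
          """Crude trade-off for one table: (nonzero quantized coefficients, reconstruction MSE).
          The nonzero count stands in for bit rate; a real entropy coder is needed for true rates."""
          coeffs = np.stack([dctn(b, norm="ortho") for b in blocks])
          q = np.round(coeffs / Q)
          recon = np.stack([idctn(c * Q, norm="ortho") for c in q])
          return int(np.count_nonzero(q)), float(np.mean((recon - blocks) ** 2))

      if __name__ == "__main__":
          blocks = np.random.default_rng(1).normal(0, 25, size=(64, 8, 8))  # stand-in image blocks
          custom = np.maximum(1.0, 0.5 * JPEG_LUMA_Q)                       # a toy "customized" table
          print("default:", rate_distortion_proxy(blocks, JPEG_LUMA_Q))
          print("custom :", rate_distortion_proxy(blocks, custom))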

  5. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.

  6. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods that are based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, which are the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimations of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm the efficacy of our method. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  8. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2017-10-01

    The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, save and disseminate them. Lossy compression is becoming more popular in such situations. But lossy compression has to be applied carefully, keeping the introduced distortions at an acceptable level so as not to lose valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, and this is problematic for many coders. In this paper, we analyze possibilities of predicting the mean square error or, equivalently, PSNR for coders based on the discrete cosine transform (DCT) applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. One more innovation deals with the possibility of employing only a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
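
    The core observation, that pixel-domain losses follow directly from the DCT-coefficient quantization errors, combined with sampling only a percentage of blocks, can be sketched as follows (Python with NumPy/SciPy; the quantization matrix, sampling fraction, and synthetic image are illustrative, not the coders or predictor studied in the paper). For an orthonormal block DCT, Parseval's relation makes the mean squared coefficient error equal to the pixel-domain MSE, so a small random subset of blocks already yields an MSE/PSNR prediction without running the coder.

      import numpy as np
      from scipy.fft import dctn

      def predict_mse_psnr(image, Q, block=8, fraction=0.1, peak=255.0, seed=0):
          """Predict MSE/PSNR of DCT-coefficient quantization from a random sample of blocks.

          Non-uniform quantization simply means Q varies per coefficient, as it does here.
          """
          h, w = image.shape
          ys, xs = np.meshgrid(np.arange(0, h - block + 1, block),
                               np.arange(0, w - block + 1, block), indexing="ij")
          coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
          rng = np.random.default_rng(seed)
          picked = coords[rng.choice(len(coords), max(1, int(fraction * len(coords))), replace=False)]
          errs = []
          for y, x in picked:
              c = dctn(image[y:y + block, x:x + block].astype(float), norm="ortho")
              errs.append(np.mean((c - np.round(c / Q) * Q) ** 2))   # transform-domain error = pixel MSE
          mse = float(np.mean(errs))
          return mse, 10.0 * np.log10(peak**2 / mse)

      if __name__ == "__main__":
          img = np.random.default_rng(2).integers(0, 256, size=(256, 256)).astype(float)
          Q = np.full((8, 8), 20.0)                                   # illustrative step sizes
          print(predict_mse_psnr(img, Q, fraction=0.05))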

  9. Recovering DC coefficients in block-based DCT.

    PubMed

    Uehara, Takeyuki; Safavi-Naini, Reihaneh; Ogunbona, Philip

    2006-11-01

    It is a common approach for JPEG and MPEG encryption systems to provide higher protection for dc coefficients and less protection for ac coefficients. Some authors have employed a cryptographic encryption algorithm for the dc coefficients and left the ac coefficients to techniques based on random permutation lists which are known to be weak against known-plaintext and chosen-ciphertext attacks. In this paper we show that in block-based DCT, it is possible to recover dc coefficients from ac coefficients with reasonable image quality and show the insecurity of image encryption methods which rely on the encryption of dc values using a cryptoalgorithm. The method proposed in this paper combines dc recovery from ac coefficients and the fact that ac coefficients can be recovered using a chosen ciphertext attack. We demonstrate that a method proposed by Tang to encrypt and decrypt MPEG video can be completely broken.
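
    A much simplified sketch of the underlying idea, that missing DC (block-mean) values can be inferred from AC-only blocks because natural images vary smoothly across block boundaries, is given below (Python with NumPy; the boundary-matching rule, scan order, and synthetic ramp image are assumptions for illustration, not the recovery algorithm of the paper).

      import numpy as np

      def recover_dc(blocks_ac, first_block_mean=128.0):
          """Estimate per-block DC (mean) values from AC-only blocks via boundary continuity.

          Each missing block mean is chosen so that the block's edge rows/columns line up
          with the already-recovered left/top neighbours; the first block's mean is a free
          constant fixed arbitrarily.
          """
          rows, cols = blocks_ac.shape[:2]
          recovered = np.zeros_like(blocks_ac)
          for i in range(rows):
              for j in range(cols):
                  ac = blocks_ac[i, j]
                  estimates = []
                  if j > 0:   # match the right edge of the left neighbour with our left edge
                      estimates.append(recovered[i, j - 1][:, -1].mean() - ac[:, 0].mean())
                  if i > 0:   # match the bottom edge of the top neighbour with our top edge
                      estimates.append(recovered[i - 1, j][-1, :].mean() - ac[0, :].mean())
                  offset = float(np.mean(estimates)) if estimates else first_block_mean
                  recovered[i, j] = ac + offset
          return recovered

      if __name__ == "__main__":
          # smooth synthetic image so the boundary-continuity assumption holds
          x = np.add.outer(np.linspace(0, 80, 64), np.linspace(0, 80, 64)) + 100
          blocks = x.reshape(8, 8, 8, 8).swapaxes(1, 2)
          ac_only = blocks - blocks.mean(axis=(2, 3), keepdims=True)   # strip the true DC
          rec = recover_dc(ac_only, first_block_mean=blocks[0, 0].mean())
          print("mean abs error vs. original:", round(float(np.abs(rec - blocks).mean()), 2))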

  10. Detection of shifted double JPEG compression by an adaptive DCT coefficient model

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua

    2014-12-01

    In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.

  11. Passive forensics for copy-move image forgery using a method based on DCT and SVD.

    PubMed

    Zhao, Jie; Guo, Jichang

    2013-12-10

    As powerful image editing tools are widely used, the demand for identifying the authenticity of an image is much increased. Copy-move forgery is one of the tampering techniques which are frequently used. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations and fail to precisely locate the tampering region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and 2D-DCT is applied to each block, then the DCT coefficients are quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block, then features are extracted to reduce the dimension of each block using its largest singular value. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks will be matched by a predefined shift frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image was distorted by Gaussian blurring, AWGN, JPEG compression and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
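
    The pipeline described above can be sketched compactly (Python with NumPy/SciPy; the quantization step, sub-block layout, matching tolerance, and shift-count threshold are illustrative guesses rather than the paper's tuned values): overlapping blocks are DCT-transformed and quantized, the largest singular value of each sub-block forms the feature vector, features are sorted lexicographically, and matching neighbours vote for their spatial shift.

      import numpy as np
      from scipy.fft import dctn

      def copy_move_candidates(img, block=8, q_step=16.0, shift_count_thresh=20):
          """Return spatial shifts voted for by many pairs of matching blocks (a forgery cue)."""
          h, w = img.shape
          feats, pos = [], []
          for y in range(h - block + 1):
              for x in range(w - block + 1):
                  c = np.round(dctn(img[y:y+block, x:x+block], norm="ortho") / q_step)
                  halves = [c[:4, :4], c[:4, 4:], c[4:, :4], c[4:, 4:]]       # four sub-blocks
                  feats.append([np.linalg.svd(s, compute_uv=False)[0] for s in halves])
                  pos.append((y, x))
          feats, pos = np.array(feats), np.array(pos)
          order = np.lexsort(feats.T[::-1])                                   # lexicographic sort
          shifts = {}
          for a, b in zip(order[:-1], order[1:]):
              if np.allclose(feats[a], feats[b]):
                  dy, dx = np.abs(pos[a] - pos[b])
                  if dy + dx > block:                                         # skip near-identical neighbours
                      shifts[(dy, dx)] = shifts.get((dy, dx), 0) + 1
          return {s: n for s, n in shifts.items() if n >= shift_count_thresh}

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          img = rng.normal(128, 20, size=(64, 64))
          img[40:56, 40:56] = img[8:24, 8:24]                                 # forge a copied patch
          print(copy_move_candidates(img))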

  12. Quantized correlation coefficient for measuring reproducibility of ChIP-chip data.

    PubMed

    Peng, Shouyong; Kuroda, Mitzi I; Park, Peter J

    2010-07-27

    Chromatin immunoprecipitation followed by microarray hybridization (ChIP-chip) is used to study protein-DNA interactions and histone modifications on a genome scale. To ensure data quality, these experiments are usually performed in replicates, and a correlation coefficient between replicates is often used to assess reproducibility. However, the correlation coefficient can be misleading because it is affected not only by the reproducibility of the signal but also by the amount of binding signal present in the data. We develop the quantized correlation coefficient (QCC), which is much less dependent on the amount of signal. This involves discretization of the data into a set of quantiles (quantization), a merging procedure to group the background probes, and recalculation of the Pearson correlation coefficient. This procedure reduces the influence of the background noise on the statistic, which then properly focuses more on the reproducibility of the signal. The performance of this procedure is tested in both simulated and real ChIP-chip data. For replicates with different levels of enrichment over background and coverage, we find that QCC reflects reproducibility more accurately and is more robust than the standard Pearson or Spearman correlation coefficients. The quantization and the merging procedure can also suggest a proper quantile threshold for separating signal from background for further analysis. To measure reproducibility of ChIP-chip data correctly, a correlation coefficient that is robust to the amount of signal present should be used. QCC is one such measure. The QCC statistic can also be applied in a variety of other contexts for measuring reproducibility, including analysis of array CGH data for DNA copy number and gene expression data.
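
    A bare-bones sketch of the QCC idea follows (Python with NumPy; the number of quantiles, the simple "merge the lowest ranks" rule, and the synthetic replicates are assumptions for illustration, not the exact procedure of the paper): quantize each replicate into quantile ranks, collapse the background ranks into one level, and then compute the Pearson correlation of the quantized values.

      import numpy as np

      def qcc(x, y, n_quantiles=20, background_quantiles=15):
          """Quantized correlation coefficient sketched from the abstract's description."""
          def quantize(v):
              edges = np.quantile(v, np.linspace(0, 1, n_quantiles + 1)[1:-1])
              ranks = np.searchsorted(edges, v)
              return np.where(ranks < background_quantiles, 0, ranks)   # merge background bins
          qx, qy = quantize(np.asarray(x, float)), quantize(np.asarray(y, float))
          return float(np.corrcoef(qx, qy)[0, 1])

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          noise = rng.normal(0, 1, size=(2, 900))                       # uncorrelated background probes
          signal = rng.normal(5, 1, size=100)                           # shared enrichment signal
          rep1 = np.concatenate([noise[0], signal + rng.normal(0, 0.2, 100)])
          rep2 = np.concatenate([noise[1], signal + rng.normal(0, 0.2, 100)])
          print("Pearson:", round(float(np.corrcoef(rep1, rep2)[0, 1]), 3),
                " QCC:", round(qcc(rep1, rep2), 3))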

  13. New fast DCT algorithms based on Loeffler's factorization

    NASA Astrophysics Data System (ADS)

    Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon

    2012-10-01

    This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.

  14. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.
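
    For orientation only, the sketch below shows one generic way to combine a single DWT level with block-DCT coding of the resulting bands (Python, assuming NumPy, SciPy, and PyWavelets; the wavelet, step sizes, and the coarse treatment of the detail bands are illustrative and do not reproduce the zero-padding scheme developed in the paper).

      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      def hybrid_dwt_dct_roundtrip(img, wavelet="haar", approx_step=10.0, detail_step=40.0):
          """Toy round-trip of a generic DWT+DCT hybrid: DCT-quantize the approximation band
          finely and the detail bands coarsely, then invert the DWT."""
          cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)

          def dct_quantize(band, step):
              out = np.zeros_like(band)
              for y in range(0, band.shape[0] - 7, 8):
                  for x in range(0, band.shape[1] - 7, 8):
                      c = dctn(band[y:y+8, x:x+8], norm="ortho")
                      out[y:y+8, x:x+8] = idctn(np.round(c / step) * step, norm="ortho")
              return out

          rec = pywt.idwt2((dct_quantize(cA, approx_step),
                            tuple(dct_quantize(b, detail_step) for b in (cH, cV, cD))), wavelet)
          return rec[:img.shape[0], :img.shape[1]]

      if __name__ == "__main__":
          img = np.random.default_rng(6).integers(0, 256, size=(128, 128)).astype(float)
          rec = hybrid_dwt_dct_roundtrip(img)
          mse = np.mean((rec - img) ** 2)
          print("round-trip PSNR (dB):", round(10 * np.log10(255**2 / mse), 2))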

  15. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
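
    The first-digit statistic and the parametric law are simple to reproduce in outline (Python with NumPy/SciPy; the default parameters N = 1, q = 1, s = 0 reduce the law to classical Benford's law, and no fitted values from the paper are assumed; the synthetic test image is a stand-in for real block-DCT data).

      import numpy as np
      from scipy.fft import dctn

      def first_digit_histogram(img, block=8):
          """Empirical distribution of the first digits (1-9) of block-DCT magnitudes (DC skipped)."""
          digits = []
          for y in range(0, img.shape[0] - block + 1, block):
              for x in range(0, img.shape[1] - block + 1, block):
                  c = np.abs(dctn(img[y:y + block, x:x + block], norm="ortho")).ravel()[1:]
                  digits.extend(int(str(int(v))[0]) for v in c if v >= 1.0)
          counts = np.bincount(digits, minlength=10)[1:10].astype(float)
          return counts / counts.sum()

      def generalized_benford(N=1.0, q=1.0, s=0.0):
          """Parametric law p(d) = N * log10(1 + 1/(s + d**q)); N = q = 1, s = 0 is classical Benford.
          Fitted values of N, q, s would be obtained per compression setting; none are assumed here."""
          d = np.arange(1, 10, dtype=float)
          return N * np.log10(1.0 + 1.0 / (s + d**q))

      if __name__ == "__main__":
          img = np.random.default_rng(7).normal(128, 40, size=(256, 256))
          print("empirical:", np.round(first_digit_histogram(img), 3))
          print("benford  :", np.round(generalized_benford(), 3))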

  16. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  17. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  18. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  19. Locally optimum nonlinearities for DCT watermark detection.

    PubMed

    Briassouli, Alexia; Strintzis, Michael G

    2004-12-01

    The issue of copyright protection of digital multimedia data has attracted a lot of attention during the last decade. An efficient copyright protection method that has been gaining popularity is watermarking, i.e., the embedding of a signature in a digital document that can be detected only by its rightful owner. Watermarks are usually blindly detected using correlating structures, which would be optimal in the case of Gaussian data. However, in the case of DCT-domain image watermarking, the data is more heavy-tailed and the correlator is clearly suboptimal. Nonlinear receivers have been shown to be particularly well suited for the detection of weak signals in heavy-tailed noise, as they are locally optimal. This motivates the use of the Gaussian-tailed zero-memory nonlinearity, as well as the locally optimal Cauchy nonlinearity, for the detection of watermarks in DCT-transformed images. We analyze the performance of these schemes theoretically and compare it to that of the traditionally used Gaussian correlator, but also to the recently proposed generalized Gaussian detector, which outperforms the correlator. The theoretical analysis and the actual performance of these systems are assessed through experiments, which verify the theoretical analysis and also justify the use of nonlinear structures for watermark detection. The performance of the correlator and the nonlinear detectors in the presence of quantization is also analyzed, using results from dither theory, and also verified experimentally.
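
    The locally optimal detector for a Cauchy-like coefficient model reduces to correlating the watermark with a nonlinearly transformed version of the data, as in this sketch (Python with NumPy; the scale parameter gamma, watermark strength, and synthetic heavy-tailed host are illustrative assumptions, not the paper's experimental setup): g(x) = 2x / (gamma^2 + x^2) is the negative score function of the Cauchy density.

      import numpy as np

      def cauchy_lo_statistic(coeffs, watermark, gamma=10.0):
          """Locally optimal statistic: correlate the watermark with g(x) = 2x / (gamma^2 + x^2)."""
          g = 2.0 * coeffs / (gamma**2 + coeffs**2)
          return float(np.dot(g, watermark))

      def linear_correlator(coeffs, watermark):
          return float(np.dot(coeffs, watermark))

      if __name__ == "__main__":
          rng = np.random.default_rng(8)
          n, strength = 4096, 1.5
          host = rng.standard_cauchy(n) * 10.0            # heavy-tailed "DCT-like" host data
          wm = rng.choice([-1.0, 1.0], size=n)
          marked = host + strength * wm
          for name, data in [("unmarked", host), ("marked", marked)]:
              print(name, " LO:", round(cauchy_lo_statistic(data, wm), 1),
                    " correlator:", round(linear_correlator(data, wm), 1))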

  20. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.

  1. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate if the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce the image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as a regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized-likelihood reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post-filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction using the PAPA method.

  2. Investigation of a novel common subexpression elimination method for low power and area efficient DCT architecture.

    PubMed

    Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H

    2014-01-01

    There has been wide interest in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to a reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively, along with accuracy.
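
    The CSD idea at the heart of the multiplier-less datapath is easy to illustrate in isolation (Python; the fixed-point DCT constant below is an arbitrary example, not a coefficient taken from the proposed architecture): recoding a constant into digits {-1, 0, +1} with no two adjacent non-zeros minimizes the number of shift-and-add terms, i.e. hardware adders, needed for each constant multiplication.

      import math

      def to_csd(n):
          """Canonical signed digit representation of a positive integer, LSB first.

          Digits are in {-1, 0, +1} with no two adjacent non-zeros, which minimizes the number
          of add/subtract terms in a shift-and-add constant multiplier.
          """
          digits = []
          while n != 0:
              if n & 1:
                  d = 2 - (n & 3)       # remainder 1 -> +1, remainder 3 -> -1
                  n -= d
              else:
                  d = 0
              digits.append(d)
              n >>= 1
          return digits

      def shift_add_terms(digits):
          """(shift, sign) pairs such that x * constant = sum(sign * (x << shift))."""
          return [(i, d) for i, d in enumerate(digits) if d != 0]

      if __name__ == "__main__":
          # Illustrative fixed-point constant (8 fractional bits); not a value from the paper.
          c = round(math.cos(math.pi / 16) * 256)
          print("constant:", c,
                " binary adders:", bin(c).count("1") - 1,
                " CSD adders:", len(shift_add_terms(to_csd(c))) - 1,
                " CSD terms:", shift_add_terms(to_csd(c)))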

  3. Investigation of a Novel Common Subexpression Elimination Method for Low Power and Area Efficient DCT Architecture

    PubMed Central

    Siddiqui, M. F.; Reza, A. W.; Kanesan, J.; Ramiah, H.

    2014-01-01

    There has been wide interest in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to a reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively, along with accuracy. PMID:25133249

  4. TU-F-CAMPUS-J-03: Evaluation of a New GE Device-Less Cine 4D-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Chandler, A

    2015-06-15

    Purpose: Standard cine 4D-CT (S-4DCT) is the cine CT scan of the thorax followed by image sorting with the respiratory signal recorded by the RPM. Although the feasibility of cine 4D-CT without RPM or device-less 4DCT (DL-4DCT) has been reported in a laboratory setting, the only commercial implementation of DL-4DCT was made recently by GE based on the measurements of the lung, body and air area and density. We report the initial results of this new DL-4DCT on its determination of gross tumor volume (GTV). Methods: 30 stereotactic body radiation therapy (SBRT) patients with NSCLC were included in the study. All patients received the S-4DCT for their treatment planning. Their cine CT data without the respiratory signal from RPM were submitted to the DL-4DCT. The DL-4DCT image quality was assessed in reference to S-4DCT. Using maximum intensity projection (MIP) images, the GTVs of the S-4DCT and DL-4DCT were compared on a subset of 9 patients whose tumors in the low density lung regions could be contoured using a region growing algorithm in MIM without contouring bias from the user. A lower threshold of −424 HU was used for all patients and other algorithm parameters were held constant for each patient. Results: The DL-4DCT was able to produce the 4DCT images on 29 out of the 30 SBRT cases. One case failed due to the enhanced calcification surrounding both the breast implants. The GTVs determined on the 9 patients with DL-4DCT were 4.2 ± 4.8% smaller than the GTVs with S-4DCT. However, this was statistically insignificant (p=0.15). The Dice similarity coefficients were 95.1 ± 1.8%. The image quality of DL-4DCT and S-4DCT was similar on the 29 cases. Conclusion: The first commercial DL-4DCT was promising in generating 4D-CT images without a respiratory monitoring device in this preliminary study of 30 patients.

  5. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    PubMed

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest-frequency coefficients (f_v) that account for most of the spectrum energy (Σ f_v^2). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, which is represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is the lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
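
    The frequency-analysis and regression steps can be outlined as follows (Python with NumPy/SciPy; the synthetic dVPS curves, the single-target regression, and the absence of an intercept term are simplifications for illustration, not the authors' implementation): smooth each curve, keep the seven lowest DCT coefficients as features, fit weights by least squares, and evaluate with leave-one-out cross-validation.

      import numpy as np
      from scipy.fft import dct

      def low_frequency_features(curve, n_coeffs=7, smooth_window=5):
          """Moving-average smooth a dVPS-like curve, DCT it, keep the lowest-frequency coefficients."""
          kernel = np.ones(smooth_window) / smooth_window
          smoothed = np.convolve(curve, kernel, mode="same")
          return dct(smoothed, norm="ortho")[:n_coeffs]

      if __name__ == "__main__":
          rng = np.random.default_rng(9)
          n_cases, n_slices = 22, 120
          # synthetic stand-ins: each "case" has a dVPS curve and one measured motion value
          true_w = rng.normal(size=7)
          X = np.stack([low_frequency_features(np.cumsum(rng.normal(size=n_slices)))
                        for _ in range(n_cases)])
          y = X @ true_w + rng.normal(0, 0.1, size=n_cases)          # pretend ADMT measurements
          errors = []
          for k in range(n_cases):                                   # leave-one-out cross-validation
              mask = np.arange(n_cases) != k
              w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)  # multiple linear regression
              errors.append(y[k] - X[k] @ w)
          print("LOO error mean ± std:", round(float(np.mean(errors)), 3),
                "±", round(float(np.std(errors)), 3))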

  6. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is the display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  7. Differentiation of DctA and DcuS function in the DctA/DcuS sensor complex of Escherichia coli: function of DctA as an activity switch and of DcuS as the C4-dicarboxylate sensor.

    PubMed

    Steinmetz, Philipp Aloysius; Wörner, Sebastian; Unden, Gottfried

    2014-10-01

    The C4-dicarboxylate responsiveness of the sensor kinase DcuS is only provided in concert with the C4-dicarboxylate transporters DctA or DcuB. The individual roles of DctA and DcuS in the function of the DctA/DcuS sensor complex were analysed. (i) Variant DctA(S380D) in the C4-dicarboxylate site of DctA conferred C4-dicarboxylate sensitivity to DcuS in the DctA/DcuS complex, but was deficient for transport and for growth on C4-dicarboxylates. Consequently, transport activity of DctA is not required for its function in the sensor complex. (ii) Effectors like fumarate induced expression of DctA/DcuS-dependent reporter genes (dcuB-lacZ) and served as substrates of DctA, whereas citrate served only as an inducer of dcuB-lacZ without affecting DctA function. (iii) Induction of dcuB-lacZ by fumarate required 33-fold higher concentrations than for transport by DctA (Km = 30 μM), demonstrating the existence of different fumarate sites for both processes. (iv) In titration experiments with increasing dctA expression levels, the effect of DctA on the C4-dicarboxylate sensitivity of DcuS was concentration dependent. The data uniformly show that C4-dicarboxylate sensing by DctA/DcuS resides in DcuS, and that DctA serves as an activity switch. Shifting of DcuS from the constitutive ON state to the C4-dicarboxylate-responsive state required the presence of DctA but not transport by DctA. © 2014 John Wiley & Sons Ltd.

  8. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for various transform block sizes of 4-, 8-, 16-, and 32-orders and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can precisely be estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and the distortion estimation scheme can effectively be used for HW-friendly implementation of

  9. WE-AB-BRA-06: 4DCT-Ventilation: A Novel Imaging Modality for Thoracic Surgical Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Y; Jackson, M; Schubert, L

    Purpose: The current standard-of-care imaging used to evaluate lung cancer patients for surgical resection is nuclear-medicine ventilation. Surgeons use nuclear-medicine images along with pulmonary function tests (PFT) to calculate percent predicted postoperative (%PPO) PFT values by estimating the amount of functioning lung that would be lost with surgery. 4DCT-ventilation is an emerging imaging modality developed in radiation oncology that uses 4DCT data to calculate lung ventilation maps. We perform the first retrospective study to assess the use of 4DCT-ventilation for pre-operative surgical evaluation. The purpose of this work was to compare %PPO-PFT values calculated with 4DCT-ventilation and nuclear-medicine imaging. Methods: 16 lung cancer patients retrospectively reviewed had undergone 4DCTs, nuclear-medicine imaging, and had Forced Expiratory Volume in 1 second (FEV1) acquired as part of a standard PFT. For each patient, 4DCT data sets, spatial registration, and a density-change based model were used to compute 4DCT-ventilation maps. Both 4DCT and nuclear-medicine images were used to calculate %PPO-FEV1 using %PPO-FEV1=pre-operative FEV1*(1-fraction of total ventilation of resected lung). Fraction of ventilation resected was calculated assuming lobectomy and pneumonectomy. The %PPO-FEV1 values were compared between the 4DCT-ventilation-based calculations and the nuclear-medicine-based calculations using correlation coefficients and average differences. Results: The correlation between %PPO-FEV1 values calculated with 4DCT-ventilation and nuclear-medicine were 0.81 (p<0.01) and 0.99 (p<0.01) for pneumonectomy and lobectomy respectively. The average difference between the 4DCT-ventilation based and the nuclear-medicine-based %PPO-FEV1 values were small, 4.1±8.5% and 2.9±3.0% for pneumonectomy and lobectomy respectively. Conclusion: The high correlation results provide a strong rationale for a clinical trial translating 4DCT-ventilation to the

  10. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
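
    A compact sketch of the algorithm as described above follows (Python with NumPy; the Gaussian kernel width, step size, and quantization radius epsilon are arbitrary illustrative values): if a new input lies within epsilon of an existing center, its update is merged into that center's coefficient; otherwise a new center is added.

      import numpy as np

      class QKLMS:
          """Quantized kernel least mean square (sketch following the algorithm's description)."""
          def __init__(self, eta=0.5, epsilon=0.5, kernel_width=1.0):
              self.eta, self.epsilon, self.width = eta, epsilon, kernel_width
              self.centers, self.alphas = [], []

          def _kernel(self, a, b):
              return np.exp(-np.sum((a - b) ** 2) / (2 * self.width**2))

          def predict(self, x):
              return sum(a * self._kernel(x, c) for a, c in zip(self.alphas, self.centers))

          def update(self, x, d):
              e = d - self.predict(x)
              if self.centers:
                  dists = [np.linalg.norm(x - c) for c in self.centers]
                  j = int(np.argmin(dists))
                  if dists[j] <= self.epsilon:
                      self.alphas[j] += self.eta * e       # quantization: reuse the closest center
                      return e
              self.centers.append(x)
              self.alphas.append(self.eta * e)
              return e

      if __name__ == "__main__":
          rng = np.random.default_rng(10)
          f = QKLMS()
          for x in rng.uniform(-3, 3, size=(2000, 1)):
              f.update(x, np.sin(2 * x[0]) + rng.normal(0, 0.05))    # noisy static function
          test = np.linspace(-3, 3, 7).reshape(-1, 1)
          print("network size:", len(f.centers))
          print("errors vs sin(2x):", [round(f.predict(t) - np.sin(2 * t[0]), 2) for t in test])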

  11. A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"

    NASA Astrophysics Data System (ADS)

    Das, Rig; Tuithung, Themrichon

    2013-03-01

    This paper reviews the embedding and extraction algorithm proposed by "A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar" in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding", International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image of size is divided into non-joint blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image of size and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman-encoded bit stream and Huffman table

  12. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    PubMed

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in the R-D performance due to the side information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 in the Common Test Condition.

  13. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Guang, E-mail: lig2@mskcc.org; Caraveo, Marshall; Wei, Jie

    2014-11-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase and phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS^D is normalized to the maximum diaphragmatic displacement and MAS^V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a
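
    The periodicity measure itself is straightforward to sketch (Python with NumPy; the exact normalization of the "sum of the largest five Fourier coefficients" is not spelled out in the abstract, so normalizing by total non-DC spectral power is an assumption made here to keep the index in [0, 1]).

      import numpy as np

      def breathing_periodicity_index(waveform, n_peaks=5):
          """Fraction of non-DC spectral power captured by the five largest Fourier components."""
          power = np.abs(np.fft.rfft(waveform - np.mean(waveform)))[1:] ** 2
          return float(np.sort(power)[-n_peaks:].sum() / power.sum())

      if __name__ == "__main__":
          t = np.linspace(0.0, 60.0, 1500)                     # one minute of surrogate signal at 25 Hz
          rng = np.random.default_rng(11)
          regular = np.sin(2 * np.pi * 0.25 * t)               # steady 15 breaths per minute
          irregular = np.sin(2 * np.pi * 0.25 * t + np.cumsum(rng.normal(0, 0.05, t.size)))
          print("regular   BPI:", round(breathing_periodicity_index(regular), 3))
          print("irregular BPI:", round(breathing_periodicity_index(irregular), 3))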

  14. Rapid estimation of 4DCT motion-artifact severity based on 1D breathing-surrogate periodicity

    PubMed Central

    Li, Guang; Caraveo, Marshall; Wei, Jie; Rimner, Andreas; Wu, Abraham J.; Goodman, Karyn A.; Yorke, Ellen

    2014-01-01

    Purpose: Motion artifacts are common in patient four-dimensional computed tomography (4DCT) images, leading to an ill-defined tumor volume with large variations for radiotherapy treatment and a poor foundation with low imaging fidelity for studying respiratory motion. The authors developed a method to estimate 4DCT image quality by establishing a correlation between the severity of motion artifacts in 4DCT images and the periodicity of the corresponding 1D respiratory waveform (1DRW) used for phase binning in 4DCT reconstruction. Methods: Discrete Fourier transformation (DFT) was applied to analyze 1DRW periodicity. The breathing periodicity index (BPI) was defined as the sum of the largest five Fourier coefficients, ranging from 0 to 1. Distortional motion artifacts (excluding blurring) of cine-scan 4DCT at the junctions of adjacent couch positions around the diaphragm were classified in three categories: incomplete, overlapping, and duplicate anatomies. To quantify these artifacts, discontinuity of the diaphragm at the junctions was measured in distance and averaged along six directions in three orthogonal views. Artifacts per junction (APJ) across the entire diaphragm were calculated in each breathing phase and phase-averaged APJ, defined as motion-artifact severity (MAS), was obtained for each patient. To make MAS independent of patient-specific motion amplitude, two new MAS quantities were defined: MAS^D is normalized to the maximum diaphragmatic displacement and MAS^V is normalized to the mean diaphragmatic velocity (the breathing period was obtained from DFT analysis of 1DRW). Twenty-six patients’ free-breathing 4DCT images and corresponding 1DRW data were studied. Results: Higher APJ values were found around midventilation and full inhalation while the lowest APJ values were around full exhalation. The distribution of MAS is close to Poisson distribution with a mean of 2.2 mm. The BPI among the 26 patients was calculated with a value ranging from 0.25 to

  15. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    NASA Astrophysics Data System (ADS)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan; Zhang, Yong; Fang, Zhengji; Zhao, Peide; Li, Erping

    2015-11-01

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to the same as the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  16. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.

  17. Genetics algorithm optimization of DWT-DCT based image Watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Data hiding in image content is necessary for establishing the ownership of an image. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer where the watermark is embedded can also be selected. Next, a 2D-DWT transforms the selected layer, yielding 4 subbands, of which only one is selected. A block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing the pixels in each range represents the embedded bit: +delta represents bit “1” and -delta represents bit “0”. The parameters to be optimized by the Genetic Algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that GA is able to determine the exact parameters giving optimum imperceptibility and robustness, whether or not the watermarked image is attacked. The DWT stage in DCT-based image watermarking optimized by GA has improved the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the robustness of the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality measured by PSNR is also increased by about 5 dB over the previous method.

  18. Quantized impedance dealing with the damping behavior of the one-dimensional oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jinghao; Zhang, Jing; Li, Yuan

    2015-11-15

    A quantized impedance is proposed to theoretically establish the relationship between the atomic eigenfrequency and the intrinsic frequency of the one-dimensional oscillator in this paper. The classical oscillator is modified by the idea that the electron transition is treated as a charge-discharge process of a suggested capacitor with the capacitive energy equal to the energy level difference of the jumping electron. The quantized capacitance of the impedance interacting with the jumping electron can lead the resonant frequency of the oscillator to the same as the atomic eigenfrequency. The quantized resistance reflects that the damping coefficient of the oscillator is the mean collision frequency of the transition electron. In addition, the first and third order electric susceptibilities based on the oscillator are accordingly quantized. Our simulation of the hydrogen atom emission spectrum based on the proposed method agrees well with the experimental one. Our results exhibit that the one-dimensional oscillator with the quantized impedance may become useful in the estimation of the refractive index and one- or multi-photon absorption coefficients of some nonmagnetic media composed of hydrogen-like atoms.

  19. An optimal adder-based hardware architecture for the DCT/SA-DCT

    NASA Astrophysics Data System (ADS)

    Kinane, Andrew; Muresan, Valentin; O'Connor, Noel

    2005-07-01

    The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.

  20. Individually optimized contrast-enhanced 4D-CT for radiotherapy simulation in pancreatic ductal adenocarcinoma

    PubMed Central

    Xue, Ming; Lane, Barton F.; Kang, Min Kyu; Patel, Kruti; Regine, William F.; Klahr, Paul; Wang, Jiahui; Chen, Shifeng; D’Souza, Warren; Lu, Wei

    2016-01-01

    Purpose: To develop an individually optimized contrast-enhanced (CE) 4D-computed tomography (CT) for radiotherapy simulation in pancreatic ductal adenocarcinomas (PDA). Methods: Ten PDA patients were enrolled. Each underwent three CT scans: a 4D-CT immediately following a CE 3D-CT and an individually optimized CE 4D-CT using test injection. Three physicians contoured the tumor and pancreatic tissues. Image quality scores, tumor volume, motion, tumor-to-pancreas contrast, and contrast-to-noise ratio (CNR) were compared in the three CTs. Interobserver variations were also evaluated in contouring the tumor using simultaneous truth and performance level estimation. Results: Average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, respectively; P = 0.082), and both were significantly better than that for 4D-CT (2.6, P < 0.001). Tumor-to-pancreas contrast results were comparable in CE 3D-CT and CE 4D-CT (15.5 and 16.7 Hounsfield units (HU), respectively; P = 0.21), and the latter was significantly higher than in 4D-CT (9.2 HU, P = 0.001). Image noise in CE 3D-CT (12.5 HU) was significantly lower than in CE 4D-CT (22.1 HU, P = 0.013) and 4D-CT (19.4 HU, P = 0.009). CNRs were comparable in CE 3D-CT and CE 4D-CT (1.4 and 0.8, respectively; P = 0.42), and both were significantly better than in 4D-CT (0.6, P = 0.008 and 0.014). Mean tumor volumes were significantly smaller in CE 3D-CT (29.8 cm3, P = 0.03) and CE 4D-CT (22.8 cm3, P = 0.01) than in 4D-CT (42.0 cm3). Mean tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, P = 0.17). Interobserver variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%, respectively) and were worse for 4D-CT (55.6%) than CE 3D-CT. Conclusions: CE 4D-CT demonstrated characteristics comparable to CE 3D-CT, with high potential for simultaneously delineating the tumor and quantifying tumor motion with a single scan. PMID:27782710

  1. Quantization of an electromagnetic field in two-dimensional photonic structures based on the scattering matrix formalism ( S-quantization)

    NASA Astrophysics Data System (ADS)

    Ivanov, K. A.; Nikolaev, V. V.; Gubaydullin, A. R.; Kaliteevski, M. A.

    2017-10-01

    Based on the scattering matrix formalism, we have developed a method of quantization of an electromagnetic field in two-dimensional photonic nanostructures (S-quantization in the two-dimensional case). In this method, the fields at the boundaries of the quantization box are expanded into a Fourier series and are related to each other by the scattering matrix of the system, which is the product of matrices describing the propagation of plane waves in empty regions of the quantization box and the scattering matrix of the photonic structure (or an arbitrary inhomogeneity). The quantization condition (similarly to the one-dimensional case) is formulated as follows: the eigenvalues of the scattering matrix are equal to unity, which corresponds to the fact that the set of waves that are incident on the structure (components of the expansion into the Fourier series) is equal to the set of waves that travel away from the structure (outgoing waves). The coefficients of the matrix of scattering through the inhomogeneous structure have been calculated using the following procedure: the structure is divided into parallel layers such that the permittivity in each layer varies only along the axis that is perpendicular to the layers. Using the Fourier transform, the Maxwell equations have been written in the form of a matrix that relates the Fourier components of the electric field at the boundaries of neighboring layers. The product of these matrices is the transfer matrix in the basis of the Fourier components of the electric field. Represented in a block form, it is composed of matrices that contain the reflection and transmission coefficients for the Fourier components of the field, which, in turn, constitute the scattering matrix. The developed method considerably simplifies the calculation scheme for the analysis of the behavior of the electromagnetic field in structures with a two-dimensional inhomogeneity. In addition, this method makes it possible to obviate

  2. Estimation of lung tumor position from multiple anatomical features on 4D-CT using multiple regression analysis.

    PubMed

    Ono, Tomohiro; Nakamura, Mitsuhiro; Hirose, Yoshinori; Kitsuda, Kenji; Ono, Yuka; Ishigaki, Takashi; Hiraoka, Masahiro

    2017-09-01

    To estimate the lung tumor position from multiple anatomical features on four-dimensional computed tomography (4D-CT) data sets using single regression analysis (SRA) and multiple regression analysis (MRA) approaches, and to evaluate the impact of the approach on the internal target volume (ITV) for stereotactic body radiotherapy (SBRT) of the lung. Eleven consecutive lung cancer patients (12 cases) whose three-dimensional (3D) lung tumor motion exceeded 5 mm underwent 4D-CT scanning. The 3D tumor position and anatomical features, including lung volume, diaphragm, abdominal wall, and chest wall positions, were measured on 4D-CT images. The tumor position was estimated by SRA using each anatomical feature and by MRA using all anatomical features. The difference between the actual and estimated tumor positions was defined as the root-mean-square error (RMSE). A standard partial regression coefficient for the MRA was evaluated. The 3D lung tumor position showed a high correlation with the lung volume (R = 0.92 ± 0.10). Additionally, ITVs derived from the SRA and MRA approaches were compared with the ITV derived from contouring gross tumor volumes on all 10 phases of the 4D-CT (conventional ITV). The RMSE of the SRA was within 3.7 mm in all directions, and the RMSE of the MRA was within 1.6 mm in all directions. The standard partial regression coefficient for the lung volume was the largest and had the most influence on the estimated tumor position. Compared with the conventional ITV, the average percentage decreases in ITV were 31.9% and 38.3% using the SRA and MRA approaches, respectively. The estimation accuracy of the lung tumor position was improved by the MRA approach, which provided a smaller ITV than the conventional ITV. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
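
    The MRA step itself is ordinary multiple linear regression; a minimal numpy sketch is given below. All feature values are synthetic stand-ins, and a least-squares fit with an intercept is only one straightforward way to obtain the regression coefficients and RMSE described above.

```python
import numpy as np

# Hypothetical per-phase measurements from a 4D-CT series (10 phases):
# columns = anatomical features, y = tumor position along one axis (mm).
rng = np.random.default_rng(1)
features = rng.normal(size=(10, 4))          # e.g. lung volume, diaphragm, abdominal wall, chest wall
tumor_pos = features @ np.array([2.0, 0.5, 0.3, 0.1]) + rng.normal(scale=0.2, size=10)

# Multiple regression: solve for coefficients (with intercept) by least squares.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, tumor_pos, rcond=None)

estimated = X @ coef
rmse = np.sqrt(np.mean((estimated - tumor_pos) ** 2))   # root-mean-square error, as in the study
print(f"RMSE = {rmse:.2f} mm")
```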

  3. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen for comparison. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
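
    A minimal sketch of the codebook-training idea, assuming scipy is available: plain k-means on fixed-size blocks of one subband stands in for the paper's energy-based modified K-means, and the quadtree/LFD variable-block partitioning is omitted.

```python
import numpy as np
from scipy.cluster.vq import kmeans2


def blocks(subband, bs):
    """Split a 2D subband into non-overlapping bs x bs blocks, flattened to vectors."""
    h, w = (subband.shape[0] // bs) * bs, (subband.shape[1] // bs) * bs
    s = subband[:h, :w]
    return (s.reshape(h // bs, bs, w // bs, bs)
             .swapaxes(1, 2).reshape(-1, bs * bs))


rng = np.random.default_rng(0)
subband = rng.normal(size=(64, 64))          # stand-in for a high-frequency wavelet subband

vectors = blocks(subband, bs=4)
codebook, labels = kmeans2(vectors, k=16, minit='++')   # plain k-means stands in for the
                                                        # paper's energy-based variant
reconstructed = codebook[labels]             # each block replaced by its nearest codeword
```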

  4. Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU

    NASA Astrophysics Data System (ADS)

    Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji

    2016-12-01

    Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. A hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) approach with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) representation is proposed to model nonlinear dynamic textures, and it provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. The dynamic texture is decomposed into DTUs, as they help to extract temporal self-similarity. The hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and to achieve parallelism.

  5. 2-Step scalar deadzone quantization for bitplane image coding.

    PubMed

    Auli-Llinas, Francesc

    2013-12-01

    Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
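
    For reference, the USDQ baseline that 2SDQ refines can be written in a few lines; the sketch below assumes numpy, a Laplacian toy source, and the common mid-point reconstruction offset of 0.5, none of which are taken from the paper.

```python
import numpy as np


def usdq_quantize(coeffs, step):
    """Uniform scalar deadzone quantization: the zero bin is twice as wide as the others."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)


def usdq_dequantize(indices, step, recon_offset=0.5):
    """Mid-point style reconstruction of non-zero indices."""
    return np.sign(indices) * (np.abs(indices) + recon_offset) * step * (indices != 0)


rng = np.random.default_rng(0)
wavelet_coeffs = rng.laplace(scale=10.0, size=1000)   # heavy-tailed, like real subband data

q = usdq_quantize(wavelet_coeffs, step=4.0)
rec = usdq_dequantize(q, step=4.0)
mse = np.mean((wavelet_coeffs - rec) ** 2)
# 2SDQ (per the abstract) would instead use two step sizes chosen by coefficient density,
# keeping distortion similar while emitting fewer bitplane coding passes and symbols.
```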

  6. The impact of breathing guidance and prospective gating during thoracic 4DCT imaging: an XCAT study utilizing lung cancer patient motion

    NASA Astrophysics Data System (ADS)

    Pollock, Sean; Kipritidis, John; Lee, Danny; Bernatowicz, Kinga; Keall, Paul

    2016-09-01

    Two interventions to overcome the deleterious impact irregular breathing has on thoracic-abdominal 4D computed tomography (4DCT) are (1) facilitating regular breathing using audiovisual biofeedback (AVB), and (2) prospective respiratory gating of the 4DCT scan based on the real-time respiratory motion. The purpose of this study was to compare the impact of AVB and gating on 4DCT imaging using the 4D eXtended cardiac torso (XCAT) phantom driven by patient breathing patterns. We obtained simultaneous measurements of chest and abdominal walls, thoracic diaphragm, and tumor motion from 6 lung cancer patients under two breathing conditions: (1) AVB, and (2) free breathing. The XCAT phantom was used to simulate 4DCT acquisitions in cine and respiratory gated modes. 4DCT image quality was quantified by artefact detection (NCCdiff), mean square error (MSE), and Dice similarity coefficient of lung and tumor volumes (DSClung, DSCtumor). 4DCT acquisition times and imaging dose were recorded. In cine mode, AVB improved NCCdiff, MSE, DSClung, and DSCtumor by 20% (p  =  0.008), 23% (p  <  0.001), 0.5% (p  <  0.001), and 4.0% (p  <  0.003), respectively. In respiratory gated mode, AVB improved NCCdiff, MSE, and DSClung by 29% (p  <  0.001), 34% (p  <  0.001), 0.4% (p  <  0.001), respectively. AVB increased the cine acquisitions by 15 s and reduced respiratory gated acquisitions by 31 s. AVB increased imaging dose in cine mode by 10%. This was the first study to quantify the impact of breathing guidance and respiratory gating on 4DCT imaging. With the exception of DSCtumor in respiratory gated mode, AVB significantly improved 4DCT image analysis metrics in both cine and respiratory gated modes over free breathing. The results demonstrate that AVB and respiratory-gating can be beneficial interventions to improve 4DCT for cancer radiation therapy, with the biggest gains achieved when these interventions are used

  7. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen for comparison. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544

  8. Comparison of lung tumor motion measured using a model-based 4DCT technique and a commercial protocol.

    PubMed

    O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A

    To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods. Copyright © 2017

  9. DCT Trigger in a High-Resolution Test Platform for the Detection of Very Inclined Showers in Pierre Auger Surface Detectors

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Wiedeński, Michał

    2017-06-01

    We present first results from a trigger based on the discrete cosine transform (DCT) operating in new front-end boards with a Cyclone V E field-programmable gate array (FPGA) deployed in seven test surface detectors in the Pierre Auger Test Array. The patterns of the ADC traces generated by very inclined showers (arriving at 70° to 90° from the vertical) were obtained from the Auger database and from the CORSIKA simulation package supported by the Auger OffLine event reconstruction platform that gives predicted digitized signal profiles. Simulations for many values of the initial cosmic ray angle of arrival, the shower initialization depth in the atmosphere, the type of particle, and its initial energy gave a boundary on the DCT coefficients used for the online pattern recognition in the FPGA. Preliminary results validated the approach used. We recorded several showers triggered by the DCT for 120 Msamples/s and 160 Msamples/s.
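
    A software analogue of the coefficient-bound check can be sketched as below; the trace, the number of coefficients tested, and the bounds are placeholders, not the Auger-derived values programmed into the FPGA.

```python
import numpy as np
from scipy.fft import dct


def dct_trigger(trace, lower, upper, n_coeffs=16):
    """Fire if the first n_coeffs DCT coefficients all fall inside [lower, upper].

    A software analogue of the FPGA pattern check; the bounds would come from
    simulated inclined-shower signal profiles.
    """
    c = dct(trace.astype(float), norm='ortho')[:n_coeffs]
    return bool(np.all((c >= lower) & (c <= upper)))


# Toy ADC trace: a slowly rising/falling pulse over a noisy baseline.
t = np.arange(256)
trace = 50 * np.exp(-0.5 * ((t - 100) / 20.0) ** 2) \
        + np.random.default_rng(0).normal(0, 2, 256)

lower = np.full(16, -200.0)      # placeholder bounds, not the Auger-derived ones
upper = np.full(16, 200.0)
print(dct_trigger(trace, lower, upper))
```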

  10. Management of Respiration-Induced Motion With 4-Dimensional Computed Tomography (4DCT) for Pancreas Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai, An, E-mail: atai@mcw.edu; Liang, Zhiwen; Radiation Oncology Center, Wuhan Union Hospital, Huazhong University of Science and Technology, Wuhan

    2013-08-01

    Purpose: The purposes of this study were to quantify respiration-induced organ motions for pancreatic cancer patients and to explore strategies to account for these motions. Methods and Materials: Both 3-dimensional computed tomography (3DCT) and 4-dimensional computed tomography (4DCT) scans were acquired sequentially for 15 pancreatic cancer patients, including 10 randomly selected patients and 5 patients selected from a subgroup of patients with large tumor respiratory motions. 3DCTs were fused with 2 sets of 4DCT data at the end of exhale phase (50%) and the end of inhale phase (0%). The target was delineated on the 50% and 0% phase CT sets, and the organs at risk were drawn on the 3DCT. These contours were populated to the CT sets at other respiratory phases based on deformable image registration. Internal target volumes (ITV) were generated by tracing the target contours of all phases (ITV10), 3 phases of 0%, 20% and 50% (ITV3), and 2 phases of 0% and 50% (ITV2). ITVs generated from phase images were compared using percentage of volume overlap, Dice coefficient, geometric centers, and average surface distance. Results: Volume variations of pancreas, kidneys, and liver as a function of respiratory phases were small (<5%) during respiration. For the 10 randomly selected patients, peak-to-peak amplitudes of liver, left kidney, right kidney, and the target along the superior-inferior (SI) direction were 7.9 ± 3.2 mm, 7.1 ± 3.1 mm, 5.7 ± 3.2 mm, and 5.9 ± 2.8 mm, respectively. The percentage of volume overlap and Dice coefficient were 92% ± 1% and 96% ± 1% between ITV10 and ITV2 and 96% ± 1% and 98% ± 1% between ITV10 and ITV3, respectively. The percentage of volume overlap between ITV10 and ITV3 was 93.6 ± 1.1 for patients with tumor motion >8 mm. Conclusions: Appropriate motion management strategies are proposed for radiation treatment planning of pancreatic tumors based on magnitudes of

  11. An iris recognition algorithm based on DCT and GLCM

    NASA Astrophysics Data System (ADS)

    Feng, G.; Wu, Ye-qing

    2008-04-01

    As the range of human activity expands, reliable personal identification is becoming more and more important, and many different techniques have been proposed for this practical need. Conventional identification methods such as passwords and identification cards are not always reliable, so a wide variety of biometrics has been developed to address this challenge. Among biological characteristics, the iris pattern has gained increasing attention for its stability, reliability, uniqueness, noninvasiveness, and resistance to counterfeiting. These distinct merits give the iris high reliability for personal identification, and iris recognition has become an active research topic in the past several years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the Discrete Cosine Transform (DCT). To obtain more representative iris features, features from both the spatial domain and the DCT transform domain are extracted: GLCM and DCT are both applied to the iris image to form the feature sequence. The combination of GLCM and DCT makes the iris features more distinctive, since the extracted feature vector reflects both spatial and frequency-domain characteristics. Experimental results show that the algorithm is effective and feasible for iris recognition.
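
    As a rough illustration of the two feature families, the sketch below computes a single-offset GLCM and a block of low-frequency DCT coefficients with numpy/scipy; the quantization to 8 gray levels, the (dx, dy) offset, and the 8×8 coefficient block are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.fft import dctn


def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset, with simple intensity requantization."""
    q = (img.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]      # reference pixels
    b = q[dy:, dx:]                                # neighbour pixels at offset (dx, dy)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()


def dct_features(img, k=8):
    """Low-frequency DCT coefficients of the (normalized) iris image as frequency-domain features."""
    return dctn(img.astype(float), norm='ortho')[:k, :k].ravel()


iris = np.random.default_rng(0).integers(0, 256, (64, 256))   # stand-in normalized iris strip
feature_vector = np.concatenate([glcm(iris).ravel(), dct_features(iris)])
```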

  12. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
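
    The constraint step, projecting an estimate back into the quantization partition cell, is easy to sketch for uniform scalar quantization of DCT coefficients (vector quantization cells would need a different projection); the step size and example values below are arbitrary.

```python
import numpy as np


def project_to_cell(coeffs, indices, step):
    """Clip each estimated DCT coefficient back into its quantization cell.

    For uniform scalar quantization with index q and step size 'step', the cell is
    [(q - 0.5) * step, (q + 0.5) * step]; the MRF-based estimate must stay inside it.
    """
    lower = (indices - 0.5) * step
    upper = (indices + 0.5) * step
    return np.clip(coeffs, lower, upper)


# Toy example: indices received from the compressed stream, plus a smoothed estimate.
step = 16.0
indices = np.array([0, 1, -2, 3])
estimate = np.array([12.0, 30.0, -20.0, 60.0])       # e.g. after one MRF gradient step
print(project_to_cell(estimate, indices, step))      # -> [  8.  24. -24.  56.]
```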

  13. SU-E-J-187: Individually Optimized Contrast-Enhancement 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, M; Patel, K; Regine, W

    2014-06-01

    Purpose: To study the feasibility of individually optimized contrast-enhancement (CE) 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation. To evaluate the image quality and contrast enhancement of tumor in the CE 4D-CT, compared to the clinical standard of CE 3D-CT and 4D-CT. Methods: In this IRB-approved study, each of the 7 PDA patients enrolled underwent 3 CT scans: a free-breathing 3D-CT with contrast (CE 3D-CT) followed by a 4D-CT without contrast (4D-CT) in the first study session, and a 4D-CT with individually synchronized contrast injection (CE 4D-CT) in the second study session. In CE 4D-CT, the time of full contrast injection was determined based on the time of peak enhancement for the test injection, injection rate, table speed, and longitudinal location and span of the pancreatic region. Physicians contoured both the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (end-of-exhalation for 4D-CT). The contrast between the tumor and normal pancreatic tissue was computed as the difference of the mean enhancement level of three 1 cm3 regions of interest in T and P, respectively. The Wilcoxon rank sum test was used to statistically compare the scores and contrasts. Results: In qualitative evaluations, both CE 3D-CT and CE 4D-CT scored significantly better than 4D-CT (4.0 and 3.6 vs. 2.6). There was no significant difference between CE 3D-CT and CE 4D-CT. In quantitative evaluations, the contrasts between the tumor and the normal pancreatic parenchyma were 0.6±23.4, −2.1±8.0, and −19.6±28.8 HU, in CE 3D-CT, 4D-CT, and CE 4D-CT, respectively. Although not statistically significant, CE 4D-CT achieved better contrast enhancement between the tumor and the normal pancreatic parenchyma than both CE 3D-CT and 4D-CT. Conclusion: CE 4D-CT achieved equivalent image quality and better contrast enhancement between tumor and normal pancreatic parenchyma than the clinical standard of CE 3D-CT and 4D-CT. This study was supported

  14. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
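
    A minimal sketch of the block-DCT feature extraction, assuming scipy: a triangular low-frequency mask is used here in place of the exact truncated zigzag scan, and the block size and number of retained coefficients are illustrative.

```python
import numpy as np
from scipy.fft import dctn


def block_dct_features(roi, bs=8, k=4):
    """Per-block type-II DCT features from the palm region of interest.

    Each bs x bs block is transformed and only coefficients with r + c < k
    (a low/medium-frequency triangle, akin to a truncated zigzag scan) are kept,
    then all blocks are concatenated into one feature vector.
    """
    mask = np.add.outer(np.arange(bs), np.arange(bs)) < k
    h, w = (roi.shape[0] // bs) * bs, (roi.shape[1] // bs) * bs
    feats = []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            coeffs = dctn(roi[r:r + bs, c:c + bs].astype(float), norm='ortho')
            feats.append(coeffs[mask])
    return np.concatenate(feats)


palm_roi = np.random.default_rng(0).integers(0, 256, (128, 128))   # stand-in middle-palm region
fv = block_dct_features(palm_roi)            # compact vector for matching, e.g. by nearest neighbour
```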

  15. Comparison of helical and cine acquisitions for 4D-CT imaging with multislice CT.

    PubMed

    Pan, Tinsu

    2005-02-01

    We proposed a data sufficiency condition (DSC) for four-dimensional CT (4D-CT) imaging on a multislice CT scanner, designed a pitch factor for a helical 4D-CT, and compared the acquisition time, slice sensitivity profile (SSP), effective dose, ability to cope with an irregular breathing cycle, and gating technique (retrospective or prospective) of the helical 4D-CT and the cine 4D-CT on the General Electric (GE) LightSpeed RT (4-slice), Plus (4-slice), Ultra (8-slice) and 16 (16-slice) multislice CT scanners. To satisfy the DSC, a helical or cine 4D-CT acquisition has to collect data at each location for the duration of a breathing cycle plus the duration of data acquisition for an image reconstruction. The conditions for the comparison were 20 cm coverage in the cranial-caudal direction, a 4 s breathing cycle, and half-scan reconstruction. We found that the helical 4D-CT has the advantage of a scan time that is 10% shorter than that of the cine 4D-CT, and the disadvantages of a 1.8-times broadening of the SSP and the need for an additional breathing cycle of scanning to ensure adequate sampling at the start and end locations. The cine 4D-CT has the advantages of maintaining the same SSP as the slice collimation (e.g., 8 x 2.5 mm slice collimation generates a 2.5 mm SSP in the cine 4D-CT as opposed to 4.5 mm in the helical 4D-CT) and a lower dose, by 4% on the 8- and 16-slice systems and 8% on the 4-slice system. The advantage of faster scanning in the helical 4D-CT will diminish if a repeat scan at the location of a breathing irregularity becomes necessary. The cine 4D-CT performs better than the helical 4D-CT in the repeat scan because it can scan faster and is more dose efficient.

  16. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
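
    The hard-thresholding mechanism referred to above can be sketched as a simple non-overlapping block DCT filter; the threshold factor 2.7·sigma and the toy image are assumptions for illustration, not the settings used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn


def dct_denoise(noisy, sigma, bs=8, beta=2.7):
    """Non-overlapping block DCT hard-thresholding (a simplified conventional DCT filter).

    Coefficients with magnitude below beta * sigma are set to zero; beta = 2.7 is a
    commonly quoted choice, used here as an assumption rather than the paper's setting.
    """
    out = noisy.astype(float).copy()
    h, w = (noisy.shape[0] // bs) * bs, (noisy.shape[1] // bs) * bs
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            coeffs = dctn(out[r:r + bs, c:c + bs], norm='ortho')
            coeffs[np.abs(coeffs) < beta * sigma] = 0.0
            out[r:r + bs, c:c + bs] = idctn(coeffs, norm='ortho')
    return out


rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))        # smooth toy image
noisy = clean + rng.normal(0, 10, clean.shape)           # AWGN with sigma = 10
denoised = dct_denoise(noisy, sigma=10)
```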

  17. Quantized topological magnetoelectric effect of the zero-plateau quantum anomalous Hall state

    DOE PAGES

    Wang, Jing; Lian, Biao; Qi, Xiao-Liang; ...

    2015-08-10

    The topological magnetoelectric effect in a three-dimensional topological insulator is a novel phenomenon, where an electric field induces a magnetic field in the same direction, with a universal coefficient of proportionality quantized in units of $e^2/2h$. Here we propose that the topological magnetoelectric effect can be realized in the zero-plateau quantum anomalous Hall state of magnetic topological insulators or a ferromagnet-topological insulator heterostructure. The finite-size effect is also studied numerically, where the magnetoelectric coefficient is shown to converge to a quantized value when the thickness of the topological insulator film increases. We further propose a device setup to eliminate nontopological contributions from the side surface.

  18. TH-EF-BRA-04: Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in Pancreatic Ductal Adenocarcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, W; Xue, M; Lane, B

    Purpose: To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic ductal adenocarcinomas (PDA). Methods: Ten PDA patients were enrolled. Each underwent 3 CT scans: a 4D-CT immediately following a CE 3D-CT and an individually optimized CE 4D-CT using test injection. Three physicians contoured the tumor and pancreatic tissues. We compared image quality scores, tumor volume, motion, tumor-to-pancreas contrast, and contrast-to-noise ratio (CNR) in the 3 CTs. We also evaluated interobserver variations in contouring the tumor using simultaneous truth and performance level estimation (STAPLE). Results: Average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, respectively; P=0.47), and both were significantly better than that for 4D-CT (2.6, P<0.001). Tumor-to-pancreas contrast results were comparable in CE 3D-CT and CE 4D-CT (15.5 and 16.7 HU, respectively; P=0.71), and the latter was significantly higher than in 4D-CT (9.2 HU, P=0.03). Image noise in CE 3D-CT (12.5 HU) was significantly lower than in CE 4D-CT (22.1 HU, P<0.001) and 4D-CT (19.4 HU, P=0.005). CNRs were comparable in CE 3D-CT and CE 4D-CT (1.4 and 0.8, respectively; P=0.23), and the former was significantly better than in 4D-CT (0.6, P = 0.04). Mean tumor volumes were smaller in CE 3D-CT (29.8 cm3) and CE 4D-CT (22.8 cm3) than in 4D-CT (42.0 cm3), although these differences were not statistically significant. Mean tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, P=0.23). Interobserver variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%, respectively) and were worse for 4D-CT (55.6%) than CE 3D-CT. Conclusion: CE 4D-CT demonstrated characteristics comparable to CE 3D-CT, with high potential for simultaneously delineating the tumor and quantifying tumor motion with a single scan. Supported in part by Philips Healthcare.

  19. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    PubMed Central

    Zhong, Hualiang; Jin, Jian-yue; Ajlouni, Munther; Movsas, Benjamin; Chetty, Indrin J.

    2011-01-01

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the start to

  20. Four-Dimensional Dose Reconstruction for Scanned Proton Therapy Using Liver 4DCT-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernatowicz, Kinga, E-mail: kinga.bernatowicz@psi.ch; Proton Therapy Center, Paul Scherrer Institute, PSI Villigen; Peroni, Marta

    Purpose: Four-dimensional computed tomography-magnetic resonance imaging (4DCT-MRI) is an image-processing technique for simulating many 4DCT data sets from a static reference CT and motions extracted from 4DMRI studies performed using either volunteers or patients. In this work, different motion extraction approaches were tested using 6 liver cases, and a detailed comparison between 4DCT-MRI and 4DCT was performed. Methods and Materials: 4DCT-MRI has been generated using 2 approaches. The first approach used motion extracted from 4DMRI as being “most similar” to that of 4DCT from the same patient (subject-specific), and the second approach used the most similar motion obtained from a motion library derived from 4DMRI liver studies of 13 healthy volunteers (population-based). The resulting 4DCT-MRI and 4DCTs were compared using scanned proton 4D dose calculations (4DDC). Results: Dosimetric analysis showed that 93% ± 8% of points inside the clinical target volume (CTV) agreed between 4DCT and subject-specific 4DCT-MRI (gamma analysis: 3%/3 mm). The population-based approach however showed lower dosimetric agreement with only 79% ± 14% points in the CTV reaching the 3%/3 mm criteria. Conclusions: 4DCT-MRI extends the capabilities of motion modeling for dose calculations by accounting for realistic and variable motion patterns, which can be directly employed in clinical research studies. We have found that the subject-specific liver modeling appears more accurate than the population-based approach. The former is particularly interesting for clinical applications, such as improved target delineation and 4D dose reconstruction for patient-specific QA to allow for inter- and/or intra-fractional plan corrections.

  1. Development of CCSDS DCT to Support Spacecraft Dynamic Events

    NASA Technical Reports Server (NTRS)

    Sidhwa, Anahita F

    2011-01-01

    This report discusses the development of the Consultative Committee for Space Data Systems (CCSDS) Design Control Table (DCT) to support spacecraft dynamic events. The CCSDS DCT is a versatile link calculation tool for analyzing different kinds of radio frequency links. It started out as an Excel-based program and is now evolving into a Mathematica-based link analysis tool. The Mathematica platform offers a rich set of advanced analysis capabilities and can be easily extended to a web-based architecture. Last year, the CCSDS DCTs for the uplink, downlink, two-way, and ranging models were developed, as well as the corresponding input and output interfaces. Another significant accomplishment is the integration of the NAIF SPICE library into the Mathematica computation platform.

  2. Measurement of regional compliance using 4DCT images for assessment of radiation treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Jin Jianyue; Ajlouni, Munther

    2011-03-15

    Purpose: Radiation-induced damage, such as inflammation and fibrosis, can compromise ventilation capability of local functional units (alveoli) of the lung. Ventilation function as measured with ventilation images, however, is often complicated by the underlying mechanical variations. The purpose of this study is to present a 4DCT-based method to measure the regional ventilation capability, namely, regional compliance, for the evaluation of radiation-induced lung damage. Methods: Six 4DCT images were investigated in this study: One previously used in the generation of a POPI model and the other five acquired at Henry Ford Health System. A tetrahedral geometrical model was created and scaled to encompass each of the 4DCT image domains. Image registrations were performed on each of the 4DCT images using a multiresolution Demons algorithm. The images at the end of exhalation were selected as a reference. Images at other exhalation phases were registered to the reference phase. For the POPI-modeled patient, each of these registration instances was validated using 40 landmarks. The displacement vector fields (DVFs) were used first to calculate the volumetric variation of each tetrahedron, which represents the change in the air volume. The calculated results were interpolated to generate 3D ventilation images. With the computed DVF, a finite element method (FEM) framework was developed to compute the stress images of the lung tissue. The regional compliance was then defined as the ratio of the ventilation and stress values and was calculated for each phase. Based on iterative FEM simulations, the potential range of the mechanical parameters for the lung was determined by comparing the model-computed average stress to the clinical reference value of airway pressure. The effect of the parameter variations on the computed stress distributions was estimated using Pearson correlation coefficients. Results: For the POPI-modeled patient, five exhalation phases from the

  3. SU-E-J-154: Image Quality Assessment of Contrast-Enhanced 4D-CT for Pancreatic Adenocarcinoma in Radiotherapy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, W; Xue, M; Patel, K

    2015-06-15

    Purpose: This study presents quantitative and qualitative assessment of the image qualities in contrast-enhanced (CE) 3D-CT, 4D-CT and CE 4D-CT to identify feasibility for replacing the clinical standard simulation with a single CE 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation. Methods: Ten PDA patients were enrolled and underwent three CT scans: a clinical standard pair of CE 3D-CT immediately followed by a 4D-CT, and a CE 4D-CT one week later. Physicians qualitatively evaluated the general image quality and regional vessel definitions and gave a score from 1 to 5. Next, physicians delineated the contours of the tumor (T) and the normal pancreatic parenchyma (P) on the three CTs (CE 3D-CT, 50% phase for 4D-CT and CE 4D-CT), then high density areas were automatically removed by thresholding at 500 HU and morphological operations. The pancreatic tumor contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and conspicuity (C, absolute difference of mean enhancement levels in P and T) were computed to quantitatively assess image quality. The Wilcoxon rank sum test was used to compare these quantities. Results: In qualitative evaluations, CE 3D-CT and CE 4D-CT scored equivalently (4.4±0.4 and 4.3±0.4) and both were significantly better than 4D-CT (3.1±0.6). In quantitative evaluations, the C values were higher in CE 4D-CT (28±19 HU, p=0.19 and 0.17) than the clinical standard pair of CE 3D-CT and 4D-CT (17±12 and 16±17 HU, p=0.65). In CE 3D-CT and CE 4D-CT, mean CNR (1.8±1.4 and 1.8±1.7, p=0.94) and mean SNR (5.8±2.6 and 5.5±3.2, p=0.71) both were higher than 4D-CT (CNR: 1.1±1.3, p<0.3; SNR: 3.3±2.1, p<0.1). The absolute enhancement levels for T and P were higher in CE 4D-CT (87, 82 HU) than in CE 3D-CT (60, 56) and 4D-CT (53, 70). Conclusions: The individually optimized CE 4D-CT is feasible and achieved comparable image qualities to the clinical standard simulation. This study was supported in part by Philips Healthcare.

  4. Optimal Quantization Scheme for Data-Efficient Target Tracking via UWSNs Using Quantized Measurements.

    PubMed

    Zhang, Senlin; Chen, Huayan; Liu, Meiqin; Zhang, Qunfei

    2017-11-07

    Target tracking is one of the broad applications of underwater wireless sensor networks (UWSNs). However, as a result of the temporal and spatial variability of acoustic channels, underwater acoustic communications suffer from an extremely limited bandwidth. In order to reduce network congestion, it is important to shorten the length of the data transmitted from local sensors to the fusion center by quantization. Although quantization can reduce bandwidth cost, it also degrades tracking performance because of the information lost in quantization. To solve this problem, this paper proposes an optimal quantization-based target tracking scheme. It improves the tracking performance of low-bit quantized measurements by minimizing the additional covariance caused by quantization. The simulation demonstrates that our scheme performs much better than the conventional uniform quantization-based target tracking scheme and that increasing the data length affects our scheme only slightly. Its tracking performance improves by only 4.4% from 2-bit to 3-bit quantization, which means our scheme depends only weakly on the number of data bits. Moreover, our scheme also depends only weakly on the number of participating sensors, and it can work well in sparse sensor networks. In a 6 × 6 × 6 sensor network, compared with a 4 × 4 × 4 sensor network, the number of participating sensors increases by 334.92%, while the tracking accuracy using 1-bit quantized measurements improves by only 50.77%. Overall, our optimal quantization-based target tracking scheme achieves data efficiency, which fits the requirements of low-bandwidth UWSNs.
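
    The transmitted quantity is just a low-bit index; a minimal uniform-quantizer sketch is shown below as a baseline. The measurement range, bit depth, and midpoint reconstruction are illustrative, and the optimal threshold selection described in the abstract is only indicated in a comment.

```python
import numpy as np


def quantize_measurement(z, z_min, z_max, n_bits):
    """Uniformly quantize a scalar measurement into an n_bits-long index (what a node transmits)."""
    levels = 2 ** n_bits
    step = (z_max - z_min) / levels
    idx = np.clip(np.floor((z - z_min) / step), 0, levels - 1).astype(int)
    return idx, step


def dequantize(idx, z_min, step):
    """Fusion-center reconstruction at the cell midpoint."""
    return z_min + (idx + 0.5) * step


# Toy usage: a range measurement of 123.4 m quantized to 3 bits over [0, 500] m.
idx, step = quantize_measurement(np.array(123.4), 0.0, 500.0, n_bits=3)
z_hat = dequantize(idx, 0.0, step)
# The additional error covariance introduced by quantization is roughly step**2 / 12;
# the optimal scheme in the abstract chooses quantization thresholds to minimize this
# added covariance instead of spacing them uniformly.
```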

  5. BRST quantization of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armendariz-Picon, Cristian; Şengör, Gizem

    2016-11-08

    BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.

  6. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve significant improvement in computation, but the computation performance is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods into semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is applied to the intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared with the zero-block decision algorithm.
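
    The early zero-block idea can be illustrated with a simple counter over 4x4 transform blocks; the sketch below substitutes a floating-point DCT and plain deadzone rounding for H.264's integer transform and quantizer, so it is an approximation for illustration only.

```python
import numpy as np
from scipy.fft import dctn


def count_zero_blocks(residual, qstep, bs=4):
    """Count 4x4 transform blocks whose coefficients all quantize to zero.

    The floating-point DCT stands in for H.264's 4x4 integer transform, and plain
    deadzone rounding stands in for its quantization; both are simplifying assumptions.
    """
    zeros = 0
    h, w = (residual.shape[0] // bs) * bs, (residual.shape[1] // bs) * bs
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            coeffs = dctn(residual[r:r + bs, c:c + bs].astype(float), norm='ortho')
            if np.all(np.trunc(coeffs / qstep) == 0):
                zeros += 1
    return zeros


# Macroblock-sized (16x16) toy residual: small values quantize to all-zero blocks.
rng = np.random.default_rng(0)
residual = rng.normal(0, 2, (16, 16))
print(count_zero_blocks(residual, qstep=20.0))   # most or all of the 16 blocks will be zero-blocks
```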

  7. JND measurements of the speech formants parameters and its implication in the LPC pole quantization

    NASA Astrophysics Data System (ADS)

    Orgad, Yaakov

    1988-08-01

    The inherent sensitivity of auditory perception is explicitly used with the objective of designing an efficient speech encoder. Speech can be modelled by a filter representing the vocal tract shape that is driven by an excitation signal representing glottal air flow. This work concentrates on the filter encoding problem, assuming that excitation signal encoding is optimal. Linear predictive coding (LPC) techniques were used to model a short speech segment by an all-pole filter; each pole was directly related to the speech formants. Measurements were made of the auditory just noticeable difference (JND) corresponding to the natural speech formants, with the LPC filter poles as the best candidates to represent the speech spectral envelope. The JND is the maximum precision required in speech quantization; it was defined on the basis of the shift of one pole parameter of a single frame of a speech segment necessary to induce subjective perception of the distortion, with 0.75 probability. The average JND in LPC filter poles in natural speech was found to increase with increasing pole bandwidth and, to a lesser extent, frequency. The JND measurements showed a large spread of the residuals around the average values, indicating that inter-formant coupling and, perhaps, other, not yet fully understood, factors were not taken into account at this stage of the research. A future treatment should consider these factors. The average JNDs obtained in this work were used to design pole quantization tables for speech coding and provided a better bit rate than the standard reflection-coefficient quantizer; a 30-bits-per-frame pole quantizer yielded a speech quality similar to that obtained with a standard 41-bits-per-frame reflection-coefficient quantizer. Owing to the complexity of the numerical root extraction system, the practical implementation of the pole quantization approach remains to be demonstrated.

  8. A Quantized Metric As an Alternative to Dark Matter

    NASA Astrophysics Data System (ADS)

    Maker, Joel

    2010-03-01

    The cosmological spherical-symmetry background metric coefficient (g44 ≡) g00 = 1 − 2GM/c²r should be inserted into a Dirac equation σ^μ(g_μμ γ^μ ∂ψ/∂x^μ) − φψ = 0 (1, Maker) to make it generally covariant. The spin of this cosmological Dirac object is nearly unobservable due to inertial frame dragging and has rotational L(L+1) δε and oscillatory ε interactions with external objects at distances r >> 10^10 LY away. The inside and outside frequencies φ match at the boundary, allowing the outside metric eigenvalues to propagate inside. To include the correct 3 lepton masses in this Dirac equation we must use the ansatz g00 = e^{i(2ε+δε)} with ε = 0.06, δε = 0.00058. For local metric effects our ansatz is g00 = e^{iδε}. Here the metric coefficient g00 levels off to the quantized value e^{iδε} in the galaxy halo: g00 = 1 − 2GM/rc² → Re(e^{iδε}) = cos(δε) ≈ 1 − (δε)²/2, so (δε)²/2 = 2GM/rc². For this circular motion v²/r = GM/r² = c²(δε)²/4r → v² = c²(δε)²/4 ≈ (87 km/sec)² ≈ (100 km/sec)². So the metric acts to quantize v. Note also that there is rotational energy quantization for the δε rotational states that goes as (L(L+1))·(1/2)mv² → √(L(L+1)) v. Thus differences in v are proportional to L, L being an integer, so δv = kL and v = 1k, 2k, 3k, 4k, ..., Nk (with k the above ≈100 km/sec); dark matter is then not required to give these high halo velocities. Recent nearby-galaxy Doppler halo velocity data strongly support this velocity quantization result.

  9. How quantizable matter gravitates: A practitioner's guide

    NASA Astrophysics Data System (ADS)

    Schuller, Frederic P.; Witte, Christof

    2014-05-01

    We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter theory, before any of the two theories can be meaningfully compared to experimental data.

  10. Stochastic quantization of topological field theory: Generalized Langevin equation with memory kernel

    NASA Astrophysics Data System (ADS)

    Menezes, G.; Svaiter, N. F.

    2006-07-01

    We use the method of stochastic quantization in a topological field theory defined in an Euclidean space, assuming a Langevin equation with a memory kernel. We show that our procedure for the Abelian Chern-Simons theory converges regardless of the nature of the Chern-Simons coefficient.

  11. Vector quantization

    NASA Technical Reports Server (NTRS)

    Gray, Robert M.

    1989-01-01

    During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.

  12. New prospective 4D-CT for mitigating the effects of irregular respiratory motion

    NASA Astrophysics Data System (ADS)

    Pan, Tinsu; Martin, Rachael M.; Luo, Dershan

    2017-08-01

    Artifact caused by irregular respiration is a major source of error in 4D-CT imaging. We propose a new prospective 4D-CT to mitigate this source of error without new hardware, software or off-line data-processing on the GE CT scanner. We utilize the cine CT scan in the design of the new prospective 4D-CT. The cine CT scan at each position can be stopped by the operator when an irregular respiration occurs, and resumed when the respiration becomes regular. This process can be repeated at one or multiple scan positions. After the scan, a retrospective reconstruction is initiated on the CT console to reconstruct only the images corresponding to the regular respiratory cycles. The end result is a 4D-CT free of irregular respiration. To prove feasibility, we conducted a phantom and six patient studies. The artifacts associated with the irregular respiratory cycles could be removed from both the phantom and patient studies. A new prospective 4D-CT scanning and processing technique to mitigate the impact of irregular respiration in 4D-CT has been demonstrated. This technique can save radiation dose because the repeat scans are only at the scan positions where an irregular respiration occurs. Current practice is to repeat the scans at all positions. There is no cost to apply this technique because it is applicable on the GE CT scanner without new hardware, software or off-line data-processing.

  13. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  14. Quantized Majorana conductance

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A.; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D. S.; de Moor, Michiel W. A.; Car, Diana; Op Het Veld, Roy L. M.; van Veldhoven, Petrus J.; Koelling, Sebastian; Verheijen, Marcel A.; Pendharkar, Mihir; Pennachio, Daniel J.; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J.; Bakkers, Erik P. A. M.; Sarma, S. Das; Kouwenhoven, Leo P.

    2018-04-01

    Majorana zero-modes—a type of localized quasiparticle—hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e²/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e²/h, with a recent observation of a peak height close to 2e²/h. Here we report a quantized conductance plateau at 2e²/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  15. Quantized Majorana conductance.

    PubMed

    Zhang, Hao; Liu, Chun-Xiao; Gazibegovic, Sasa; Xu, Di; Logan, John A; Wang, Guanzhong; van Loo, Nick; Bommer, Jouri D S; de Moor, Michiel W A; Car, Diana; Op Het Veld, Roy L M; van Veldhoven, Petrus J; Koelling, Sebastian; Verheijen, Marcel A; Pendharkar, Mihir; Pennachio, Daniel J; Shojaei, Borzoyeh; Lee, Joon Sue; Palmstrøm, Chris J; Bakkers, Erik P A M; Sarma, S Das; Kouwenhoven, Leo P

    2018-04-05

    Majorana zero-modes (a type of localized quasiparticle) hold great promise for topological quantum computing. Tunnelling spectroscopy in electrical transport is the primary tool for identifying the presence of Majorana zero-modes, for instance as a zero-bias peak in differential conductance. The height of the Majorana zero-bias peak is predicted to be quantized at the universal conductance value of 2e²/h at zero temperature (where e is the charge of an electron and h is the Planck constant), as a direct consequence of the famous Majorana symmetry in which a particle is its own antiparticle. The Majorana symmetry protects the quantization against disorder, interactions and variations in the tunnel coupling. Previous experiments, however, have mostly shown zero-bias peaks much smaller than 2e²/h, with a recent observation of a peak height close to 2e²/h. Here we report a quantized conductance plateau at 2e²/h in the zero-bias conductance measured in indium antimonide semiconductor nanowires covered with an aluminium superconducting shell. The height of our zero-bias peak remains constant despite changing parameters such as the magnetic field and tunnel coupling, indicating that it is a quantized conductance plateau. We distinguish this quantized Majorana peak from possible non-Majorana origins by investigating its robustness to electric and magnetic fields as well as its temperature dependence. The observation of a quantized conductance plateau strongly supports the existence of Majorana zero-modes in the system, consequently paving the way for future braiding experiments that could lead to topological quantum computing.

  16. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Near infrared face images, being light-independent, can avoid or limit these drawbacks, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, fusion of near infrared and visible face recognition has become an important direction in unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The approach is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially for small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% achieved by the method based on statistical feature fusion.
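
    The abstract does not give implementation details, but the two feature families it names are standard. Below is a minimal Python sketch, assuming block-wise DCT with a handful of low-frequency coefficients and grid-partitioned uniform LBP histograms; the block size, number of retained coefficients, LBP radius and grid are illustrative choices, not the paper's, and SciPy plus scikit-image are assumed to be available.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from skimage.feature import local_binary_pattern

    def dct_low_freq_features(img, block=8, keep=6):
        """Keep the first `keep` low-frequency DCT coefficients of each block (illustrative)."""
        h, w = img.shape
        h, w = h - h % block, w - w % block
        # low-frequency order: sort block positions by diagonal index
        idx = sorted(((r, c) for r in range(block) for c in range(block)),
                     key=lambda rc: (rc[0] + rc[1], rc[0]))
        feats = []
        for r0 in range(0, h, block):
            for c0 in range(0, w, block):
                coeffs = dctn(img[r0:r0 + block, c0:c0 + block].astype(float), norm='ortho')
                feats.extend(coeffs[r, c] for r, c in idx[:keep])
        return np.asarray(feats)

    def lbp_histogram_features(img, grid=(4, 4), P=8, R=1):
        """Concatenate uniform-LBP histograms computed on a grid of sub-regions."""
        codes = local_binary_pattern(img, P, R, method='uniform')
        n_bins = P + 2
        h, w = codes.shape
        hists = []
        for gr in range(grid[0]):
            for gc in range(grid[1]):
                patch = codes[gr * h // grid[0]:(gr + 1) * h // grid[0],
                              gc * w // grid[1]:(gc + 1) * w // grid[1]]
                hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
                hists.append(hist)
        return np.concatenate(hists)

    # Random stand-ins for aligned NIR and visible face crops
    rng = np.random.default_rng(0)
    nir = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    vis = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    nir_feat = np.concatenate([dct_low_freq_features(nir), lbp_histogram_features(nir)])
    vis_feat = lbp_histogram_features(vis)   # each feature set would feed its own classifier
    ```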

  17. Effects of quantum noise in 4D-CT on deformable image registration and derived ventilation data

    NASA Astrophysics Data System (ADS)

    Latifi, Kujtim; Huang, Tzung-Chi; Feygelman, Vladimir; Budzevich, Mikalai M.; Moros, Eduardo G.; Dilling, Thomas J.; Stevens, Craig W.; van Elmpt, Wouter; Dekker, Andre; Zhang, Geoffrey G.

    2013-11-01

    Quantum noise is common in CT images and is a persistent problem in accurate ventilation imaging using 4D-CT and deformable image registration (DIR). This study focuses on the effects of noise in 4D-CT on DIR and thereby derived ventilation data. A total of six sets of 4D-CT data with landmarks delineated in different phases, called point-validated pixel-based breathing thorax models (POPI), were used in this study. The DIR algorithms, including diffeomorphic morphons (DM), diffeomorphic demons (DD), optical flow and B-spline, were used to register the inspiration phase to the expiration phase. The DIR deformation matrices (DIRDM) were used to map the landmarks. Target registration errors (TRE) were calculated as the distance errors between the delineated and the mapped landmarks. Noise of Gaussian distribution with different standard deviations (SD), from 0 to 200 Hounsfield Units (HU) in amplitude, was added to the POPI models to simulate different levels of quantum noise. Ventilation data were calculated using the ΔV algorithm, which calculates the volume change geometrically based on the DIRDM. The ventilation images with different added noise levels were compared using the Dice similarity coefficient (DSC). The root mean square (RMS) values of the landmark TRE over the six POPI models for the four DIR algorithms were stable when the noise level was low (SD <150 HU) and increased with added noise when the noise level was higher. The most accurate DIR was DD with a mean RMS of 1.5 ± 0.5 mm with no added noise and 1.8 ± 0.5 mm with noise (SD = 200 HU). The DSC values between the ventilation images with and without added noise decreased with the noise level, even when the noise level was relatively low. The DIR algorithm most robust with respect to noise was DM, with mean DSC = 0.89 ± 0.01 and 0.66 ± 0.02 for the top 50% ventilation volumes, as compared between 0 added noise and SD = 30 and 200 HU, respectively. Although the landmark TRE were stable with low noise, the

  18. First results from the spectral DCT trigger implemented in the Cyclone V Front-End Board used for a detection of very inclined showers in the Pierre Auger surface detector Engineering Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szadkowski, Zbigniew

    2015-07-01

    The paper presents the first results from the trigger based on the Discrete Cosine Transform (DCT) operating in the new Front-End Boards with Cyclone V FPGA deployed in 8 test surface detectors in the Pierre Auger Engineering Array. The patterns of the ADC traces generated by very inclined showers were obtained from the Auger database and from the CORSIKA simulation package, supported by the Auger Offline reconstruction platform, which gives predicted digitized signal profiles. Simulations for many variants of the initial shower angle, initialization depth in the atmosphere, particle type and initial energy gave bounds on the DCT coefficients, used subsequently for on-line pattern recognition in the FPGA. Preliminary results have validated the approach: we registered several showers triggered by the DCT at 120 MSps and 160 MSps. (authors)

  19. Deformation of second and third quantization

    NASA Astrophysics Data System (ADS)

    Faizal, Mir

    2015-03-01

    In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse.

  20. Model-based VQ for image data archival, retrieval and distribution

    NASA Technical Reports Server (NTRS)

    Manohar, Mareboyana; Tilton, James C.

    1995-01-01

    An ideal image compression technique for image data archival, retrieval and distribution would be one with the asymmetrical computational requirements of Vector Quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks which have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using a mean-removed error model and Human Visual System (HVS) models. The error model assumed is a Laplacian distribution whose parameter, lambda, is computed from a sample of the input image. Laplacian-distributed random numbers with parameter lambda are generated from a uniform random number generator and grouped into vectors. These vectors are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception, and the inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is lambda, which is included in the coded file so that the codebook generation process can be repeated for decoding.
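
    A rough sketch of the codebook-generation idea described above, in Python with SciPy. The perceptual weight matrix here is a placeholder rather than the paper's HVS-derived matrix, and the Laplacian samples are drawn directly from NumPy rather than via a uniform generator; lambda would be estimated from the input image.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def mvq_codebook(lam, n_vectors=256, block=8, seed=0):
        """Sketch of model-based VQ codebook generation: Laplacian residual vectors
        conditioned by a DCT-domain perceptual weighting (the weights below are a
        placeholder, not the HVS matrix used in the paper)."""
        rng = np.random.default_rng(seed)
        # Laplacian-distributed residuals with image-dependent scale parameter lam
        vectors = rng.laplace(loc=0.0, scale=lam, size=(n_vectors, block, block))
        # placeholder perceptual weights: attenuate high spatial frequencies
        u = np.arange(block)
        weights = 1.0 / (1.0 + 0.5 * (u[:, None] + u[None, :]))
        codebook = np.empty_like(vectors)
        for i, v in enumerate(vectors):
            coeffs = dctn(v, norm='ortho') * weights   # filter the DCT coefficients
            codebook[i] = idctn(coeffs, norm='ortho')  # back to the pixel domain
        return codebook.reshape(n_vectors, block * block)

    cb = mvq_codebook(lam=4.0)   # the same lam, stored in the coded file, regenerates the codebook at the decoder
    ```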

  1. Quantization and fractional quantization of currents in periodically driven stochastic systems. I. Average currents

    NASA Astrophysics Data System (ADS)

    Chernyak, Vladimir Y.; Klein, John R.; Sinitsyn, Nikolai A.

    2012-04-01

    This article studies Markovian stochastic motion of a particle on a graph with a finite number of nodes and periodically time-dependent transition rates that satisfy the detailed balance condition at any time. We show that under general conditions, the currents in the system on average become quantized or fractionally quantized for adiabatic driving at sufficiently low temperature. We develop the quantitative theory of this quantization and interpret it in terms of topological invariants. By implementing the celebrated Kirchhoff theorem we derive a general and explicit formula for the average generated current that serves as an efficient tool for treating current quantization effects.

  2. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
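
    As an illustration of the MSE criterion used to compare integer approximations, the sketch below measures the error of a generic scale-and-round integer DCT matrix against the exact 8-point DCT-II. It does not reproduce the specific integer sets evaluated in the paper; the scale factor is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.fft import dct

    N = 8
    # Exact orthonormal DCT-II matrix: transform the identity column by column
    C = dct(np.eye(N), norm='ortho', axis=0)

    # A generic integer approximation: scale and round the exact matrix
    # (not one of the integer sets from the paper)
    scale = 64
    C_int = np.round(C * scale)

    # Mean squared error between the exact transform and the rescaled integer one
    x = np.random.default_rng(0).standard_normal((N, 1000))
    mse = np.mean((C @ x - (C_int / scale) @ x) ** 2)
    print(f"approximation MSE per coefficient: {mse:.3e}")
    ```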

  3. IGRINS on the DCT

    NASA Astrophysics Data System (ADS)

    Prato, Lisa A.

    2017-01-01

    Through an agreement with the University of Texas at Austin and the Korea Astronomy and Space Science Institute, the Immersion Grating Infrared Spectrograph (IGRINS) saw first light on the Lowell Observatory 4.3 m Discovery Channel Telescope (DCT) on September 8, 2016. IGRINS, originally commissioned at the McDonald Observatory 2.7 m telescope, provides a spectral resolution of 45,000 and a simultaneous spectral grasp of 1.45 to 2.45 microns, recording all of the H and K bands with no gaps in wavelength coverage on two H2RG detectors in a single exposure. The instrument design minimizes optical surfaces to optimize throughput, and has no moving parts, which is key for stability. IGRINS on the DCT attains a signal-to-noise ratio of 100 per resolution element in one hour of integration time on a K=12 magnitude source, currently making it the most sensitive high-resolution spectrograph in the world at H and K. Science programs in the fourth quarter, 2016, include such diverse topics as abundance measurements in M dwarfs and population II stars, studies of ices and atmospheres in outer solar system bodies, measurement of fundamental properties of pre-main sequence stars, calibrating young star evolution, defining the substellar boundary at the youngest ages, outflow characteristics in Wolf-Rayet stars, finding the first generation of exoplanets, gas dynamics in planetary nebulae, and structure of the ISM in molecular clouds. In this talk I will report on initial results from selected programs.

  4. An adaptive vector quantization scheme

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1990-01-01

    Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
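
    The "addition and subtraction only" property is easiest to see in the nearest-codeword search: with a sum-of-absolute-differences metric, encoding needs no multiplications. The sketch below illustrates that search only; the adaptive codebook update of the algorithm itself is not shown, and all sizes are illustrative.

    ```python
    import numpy as np

    def encode_sad(vectors, codebook):
        """Nearest-codeword search using the sum of absolute differences (SAD),
        which needs only additions and subtractions -- the property the abstract
        highlights for simple hardware."""
        indices = np.empty(len(vectors), dtype=int)
        for i, v in enumerate(vectors):
            sad = np.abs(codebook - v).sum(axis=1)   # |c - v| summed per codeword
            indices[i] = int(np.argmin(sad))
        return indices

    rng = np.random.default_rng(1)
    codebook = rng.integers(0, 256, size=(64, 16))   # 64 codewords of length 16
    data = rng.integers(0, 256, size=(100, 16))      # source vectors to encode
    idx = encode_sad(data, codebook)
    decoded = codebook[idx]                          # decoding is a simple table lookup
    ```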

  5. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

    The paper presents an invariant quantization procedure of classical mechanics on the phase space over flat configuration space. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of presented formalism onto non-flat case and related ambiguities of the quantization process are discussed.

  6. SU-E-J-158: Audiovisual Biofeedback Reduces Image Artefacts in 4DCT: A Digital Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, S; Kipritidis, J; Lee, D

    2015-06-15

    Purpose: Irregular breathing motion has a deleterious impact on 4DCT image quality. The breathing guidance system audiovisual biofeedback (AVB) is designed to improve breathing regularity; however, its impact on 4DCT image quality has yet to be quantified. The purpose of this study was to quantify the impact of AVB on thoracic 4DCT image quality by utilizing the digital eXtended Cardiac Torso (XCAT) phantom driven by lung tumor motion patterns. Methods: 2D tumor motion was obtained from 4 lung cancer patients under two breathing conditions: (i) without breathing guidance (free breathing), and (ii) with guidance (AVB). There were two breathing sessions, yielding 8 tumor motion traces. This tumor motion was synchronized with the XCAT phantom to simulate 4DCT acquisitions under two acquisition modes: (1) cine mode, and (2) prospective respiratory-gated mode. Motion regularity was quantified by the root mean square error (RMSE) of displacement. The number of artefacts was visually assessed for each 4DCT and summed for each breathing condition. Inter-session anatomic reproducibility was quantified by the mean absolute difference (MAD) between the Session 1 4DCT and Session 2 4DCT. Results: AVB improved tumor motion regularity by 30%. In cine mode, the number of artefacts was reduced from 61 in free breathing to 40 with AVB, in addition to AVB reducing the MAD by 34%. In gated mode, the number of artefacts was reduced from 63 in free breathing to 51 with AVB, in addition to AVB reducing the MAD by 23%. Conclusion: This was the first study to compare the impact of breathing guidance on 4DCT image quality against free breathing, with AVB reducing the number of artefacts present in 4DCT images in addition to improving inter-session anatomic reproducibility. Results thus far suggest that breathing guidance interventions could have implications for improving radiotherapy treatment planning and interfraction reproducibility.

  7. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, L; Department of Industrial Engineering, University of Houston, Houston, TX; Yu, J

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with conventional non-robust plans. The resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases, while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that the 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.

  8. Preoperative evaluation of renal anatomy and renal masses with helical CT, 3D-CT and 3D-CT angiography.

    PubMed

    Toprak, Uğur; Erdoğan, Aysun; Gülbay, Mutlu; Karademir, Mehmet Alp; Paşaoğlu, Eşref; Akar, Okkeş Emrah

    2005-03-01

    The aim of this prospective study was to determine the efficacy of three-dimensional computed tomography (3D-CT) and three-dimensional computed tomographic angiography (3D-CTA), reconstructed from the axial images of multiphasic helical CT, in the preoperative evaluation of renal masses and demonstration of renal anatomy. Twenty patients who were suspected of having renal masses upon initial physical examination and ultrasonographic evaluation were examined with multiphasic helical CT. Two authors performed the CT evaluations. Axial images were first examined and then used to reconstruct 3D-CT and 3D-CTA images. Number, location and size of the renal masses and other findings were noted. Renal vascularization and the relationships of the renal masses with neighboring renal structures were further investigated with 3D-CT and 3D-CTA images. Out of 20 patients, 13 had histopathologically proven renal cell carcinoma. The diagnoses of the remaining seven patients were xanthogranulomatous pyelonephritis, abscess, simple cyst, infected cyst, angiomyolipoma, oncocytoma and arteriovenous fistula. In the renal cell carcinoma group, 3 patients had stage I, 7 patients had stage II, and 3 patients had stage III disease. Sizes of the renal cell carcinoma masses ranged from 23 mm to 60 mm (mean, 36 mm). Vascular invasion was shown in 2 renal cell carcinoma patients. Collecting system invasion was identified in 11 of 13 renal cell carcinoma patients. These radiologic findings were confirmed with surgical specimens. Three-dimensional CT and 3D-CTA are non-invasive, effective imaging techniques for the preoperative evaluation of renal masses.

  9. Three-wave scattering in magnetized plasmas: From cold fluid to quantized Lagrangian.

    PubMed

    Shi, Yuan; Qin, Hong; Fisch, Nathaniel J

    2017-08-01

    Large amplitude waves in magnetized plasmas, generated either by external pumps or internal instabilities, can scatter via three-wave interactions. While three-wave scattering is well known in collimated geometry, what happens when waves propagate at angles with one another in magnetized plasmas remains largely unknown, mainly due to the analytical difficulty of this problem. In this paper, we overcome this analytical difficulty and find a convenient formula for three-wave coupling coefficient in cold, uniform, magnetized, and collisionless plasmas in the most general geometry. This is achieved by systematically solving the fluid-Maxwell model to second order using a multiscale perturbative expansion. The general formula for the coupling coefficient becomes transparent when we reformulate it as the scattering matrix element of a quantized Lagrangian. Using the quantized Lagrangian, it is possible to bypass the perturbative solution and directly obtain the nonlinear coupling coefficient from the linear response of the plasma. To illustrate how to evaluate the cold coupling coefficient, we give a set of examples where the participating waves are either quasitransverse or quasilongitudinal. In these examples, we determine the angular dependence of three-wave scattering, and demonstrate that backscattering is not necessarily the strongest scattering channel in magnetized plasmas, in contrast to what happens in unmagnetized plasmas. Our approach gives a more complete picture, beyond the simple collimated geometry, of how injected waves can decay in magnetic confinement devices, as well as how lasers can be scattered in magnetized plasma targets.

  10. Crystallization and preliminary X-ray diffraction analysis of two extracytoplasmic solute receptors of the DctP family from Bordetella pertussis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucktooa, Prakash; Huvent, Isabelle; IFR 142, Institut Pasteur de Lille, 1 Rue du Professeur Calmette, BP 245, 59021 Lille CEDEX

    2006-10-01

    Sample preparation, crystallization and preliminary X-ray analysis are reported for two B. pertussis extracytoplasmic solute receptors. DctP6 and DctP7 are two Bordetella pertussis proteins which belong to the extracytoplasmic solute receptors (ESR) superfamily. ESRs are involved in the transport of substrates from the periplasm to the cytosol of Gram-negative bacteria. DctP6 and DctP7 have been crystallized and diffraction data were collected using a synchrotron-radiation source. DctP6 crystallized in space group P4₁2₁2, with unit-cell parameters a = 108.39, b = 108.39, c = 63.09 Å, while selenomethionyl-derivatized DctP7 crystallized in space group P2₁2₁2₁, with unit-cell parameters a = 64.87, b = 149.83, c = 170.65 Å. The three-dimensional structure of DctP7 will be determined by single-wavelength anomalous diffraction, while the DctP6 structure will be solved by molecular-replacement methods.

  11. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper presents a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is an important technique for hiding information (audio, video, color images, gray images) and has been commonly applied to digital objects with the developing technology of the last few years. One of the common methods for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
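
    The abstract does not specify the embedding rule, so the sketch below shows a generic block-DCT watermark: one bit per 8x8 block is hidden by perturbing a mid-frequency coefficient, and extraction is non-blind (it compares against the original). The coefficient position and embedding strength are assumptions, not the parameters of the described system.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_dct_watermark(img, bits, alpha=8.0, pos=(3, 2)):
        """Hide one bit per 8x8 block by nudging one mid-frequency DCT coefficient."""
        out = img.astype(float).copy()
        h, w = (s - s % 8 for s in img.shape)
        blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)]
        for (r, c), bit in zip(blocks, bits):
            block = dctn(out[r:r + 8, c:c + 8], norm='ortho')
            block[pos] += alpha if bit else -alpha
            out[r:r + 8, c:c + 8] = idctn(block, norm='ortho')
        return np.clip(out, 0, 255)

    def extract_dct_watermark(marked, original, n_bits, pos=(3, 2)):
        """Non-blind extraction: compare marked and original coefficients."""
        h, w = (s - s % 8 for s in original.shape)
        blocks = [(r, c) for r in range(0, h, 8) for c in range(0, w, 8)][:n_bits]
        bits = []
        for r, c in blocks:
            d = (dctn(marked[r:r + 8, c:c + 8], norm='ortho')[pos]
                 - dctn(original[r:r + 8, c:c + 8], norm='ortho')[pos])
            bits.append(1 if d > 0 else 0)
        return bits

    face = np.random.default_rng(0).integers(32, 224, (64, 64)).astype(float)  # stand-in image
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed_dct_watermark(face, payload)
    assert extract_dct_watermark(marked, face, len(payload)) == payload
    ```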

  12. Study of the IMRT interplay effect using a 4DCT Monte Carlo dose calculation.

    PubMed

    Jensen, Michael D; Abdellatif, Ady; Chen, Jeff; Wong, Eugene

    2012-04-21

    Respiratory motion may lead to dose errors when treating thoracic and abdominal tumours with radiotherapy. The interplay between complex multileaf collimator patterns and patient respiratory motion could result in unintuitive dose changes. We have developed a treatment reconstruction simulation computer code that accounts for interplay effects by combining multileaf collimator controller log files, respiratory trace log files, 4DCT images and a Monte Carlo dose calculator. Two three-dimensional (3D) IMRT step-and-shoot plans, a concave target and an integrated boost, were delivered to a 1D rigid motion phantom. Three sets of experiments were performed with 100%, 50% and 25% duty cycle gating. The log files were collected, and five simulation types were performed on each data set: continuous isocentre shift, discrete isocentre shift, 4DCT, 4DCT delivery average and 4DCT plan average. Analysis was performed using 3D gamma analysis with passing criteria of 2%, 2 mm. The simulation framework was able to demonstrate that a single fraction of the integrated boost plan was more sensitive to interplay effects than the concave target. Gating was shown to reduce the interplay effects. We have developed a 4DCT Monte Carlo simulation method that accounts for IMRT interplay effects with respiratory motion by utilizing delivery log files.

  13. Quantized discrete space oscillators

    NASA Technical Reports Server (NTRS)

    Uzes, C. A.; Kapuscik, Edward

    1993-01-01

    A quasi-canonical sequence of finite dimensional quantizations was found which has canonical quantization as its limit. In order to demonstrate its practical utility and its numerical convergence, this formalism is applied to the eigenvalue and 'eigenfunction' problem of several harmonic and anharmonic oscillators.

  14. Multi-phase simultaneous segmentation of tumor in lung 4D-CT data with context information.

    PubMed

    Shen, Zhengwen; Wang, Huafeng; Xi, Weiwen; Deng, Xiaogang; Chen, Jin; Zhang, Yu

    2017-01-01

    Lung 4D computed tomography (4D-CT) plays an important role in high-precision radiotherapy because it characterizes respiratory motion, which is crucial for accurate target definition. However, the manual segmentation of a lung tumor is a heavy workload for doctors because of the large number of lung 4D-CT data slices. Meanwhile, tumor segmentation is still a notoriously challenging problem in computer-aided diagnosis. In this paper, we propose a new method based on an improved graph cut algorithm with context information constraint to find a convenient and robust approach to lung 4D-CT tumor segmentation. We combine all phases of the lung 4D-CT into a global graph, and construct a global energy function accordingly. The sub-graph is first constructed for each phase. A context cost term is enforced to achieve segmentation results in every phase by adding a context constraint between neighboring phases. A global energy function is finally constructed by combining all cost terms. The optimization is achieved by solving a max-flow/min-cut problem, which leads to simultaneous and robust segmentation of the tumor in all the lung 4D-CT phases. The effectiveness of our approach is validated through experiments on 10 different lung 4D-CT cases. The comparison with the graph cut without context constraint, the level set method and the graph cut with star shape prior demonstrates that the proposed method obtains more accurate and robust segmentation results.

  15. The role of 3DCT for the evaluation of chop injuries in clinical forensic medicine.

    PubMed

    Wittschieber, Daniel; Beck, Laura; Vieth, Volker; Hahnemann, Maria L

    2016-09-01

    As hatchet blows to the human head frequently cause fatal injuries, the forensic examination of survivors with cranial chop injuries is a rare phenomenon in forensic casework. Besides evaluation of clinical records, photographs, and medico-legal physical examination, the analysis and 3-dimensional reconstruction of pre-treatment computed tomography data (3DCT) must be considered an important and indispensable tool for the assessment of those cases because the characteristics of chopping trauma often appear masked or changed by clinical treatment. In the present article, the role of 3DCT for the evaluation of chop wounds in clinical forensic medicine is demonstrated by an illustrative case report of a young man who was attacked with a hatchet. 3DCT provides additional possibilities for supplementing missing information, such as number and direction of blows as well as weapon identification. Furthermore, 3DCT facilitates demonstration in court and understanding of medical lay people. We conclude that 3DCT is of particular value for the evaluation of survivors of life-threatening head and face injury. An increasing significance of this technique may be expected. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

    Model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The conditions for an optimum trade-off between the quality of the reconstructed images, the hologram data-reduction coefficient, and the number of iterations in the hologram-reconstruction process have been estimated through computer modelling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2¹²⁸), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.

  17. Quantum Computing and Second Quantization

    DOE PAGES

    Makaruk, Hanna Ewa

    2017-02-10

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization, and briefly discuss some of the most important formulations of second quantization.

  18. Quantum Computing and Second Quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makaruk, Hanna Ewa

    Quantum computers are by their nature many-particle quantum systems. Both the many-particle arrangement and being quantum are necessary for the existence of the entangled states, which are responsible for the parallelism of the quantum computers. Second quantization is a very important approximate method of describing such systems. This lecture will present the general idea of second quantization, and briefly discuss some of the most important formulations of second quantization.

  19. Crystal structures of two Bordetella pertussis periplasmic receptors contribute to defining a novel pyroglutamic acid binding DctP subfamily.

    PubMed

    Rucktooa, Prakash; Antoine, Rudy; Herrou, Julien; Huvent, Isabelle; Locht, Camille; Jacob-Dubuisson, Françoise; Villeret, Vincent; Bompard, Coralie

    2007-06-29

    Gram-negative bacteria have developed several different transport systems for solute uptake. One of these, the tripartite ATP independent periplasmic transport system (TRAP-T), makes use of an extracytoplasmic solute receptor (ESR) which captures specific solutes with high affinity and transfers them to their partner permease complex located in the bacterial inner membrane. We hereby report the structures of DctP6 and DctP7, two such ESRs from Bordetella pertussis. These two proteins display a high degree of sequence and structural similarity and possess the "Venus flytrap" fold characteristic of ESRs, comprising two globular alpha/beta domains hinged together to form a ligand binding cleft. DctP6 and DctP7 both show a closed conformation due to the presence of one pyroglutamic acid molecule bound by highly conserved residues in their respective ligand binding sites. BLAST analyses have revealed that the DctP6 and DctP7 residues involved in ligand binding are strictly present in a number of predicted TRAP-T ESRs from other bacteria. In most cases, the genes encoding these TRAP-T systems are located in the vicinity of a gene coding for a pyroglutamic acid metabolising enzyme. Both the high degree of conservation of these ligand binding residues and the genomic context of these TRAP-T-coding operons in a number of bacterial species, suggest that DctP6 and DctP7 constitute the prototypes of a novel TRAP-T DctP subfamily involved in pyroglutamic acid transport.

  20. BFV approach to geometric quantization

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1994-12-01

    A gauge-invariant approach to geometric quantization is developed. It yields a complete quantum description for dynamical systems with non-trivial geometry and topology of the phase space. The method is a global version of the gauge-invariant approach to quantization of second-class constraints developed by Batalin, Fradkin and Fradkina (BFF). Physical quantum states and quantum observables are respectively described by covariantly constant sections of the Fock bundle and the bundle of hermitian operators over the phase space with a flat connection defined by the nilpotent BFV-BRST operator. Perturbative calculation of the first non-trivial quantum correction to the Poisson brackets leads to the Chevalley cocycle known in deformation quantization. Consistency conditions lead to a topological quantization condition with metaplectic anomaly.

  1. Three-wave scattering in magnetized plasmas: From cold fluid to quantized Lagrangian

    DOE PAGES

    Shi, Yuan; Qin, Hong; Fisch, Nathaniel J.

    2017-08-14

    Large amplitude waves in magnetized plasmas, generated either by external pumps or internal instabilities, can scatter via three-wave interactions. While three-wave scattering is well known in collimated geometry, what happens when waves propagate at angles with one another in magnetized plasmas remains largely unknown, mainly due to the analytical difficulty of this problem. In this study, we overcome this analytical difficulty and find a convenient formula for three-wave coupling coefficient in cold, uniform, magnetized, and collisionless plasmas in the most general geometry. This is achieved by systematically solving the fluid-Maxwell model to second order using a multiscale perturbative expansion. The general formula for the coupling coefficient becomes transparent when we reformulate it as the scattering matrix element of a quantized Lagrangian. Using the quantized Lagrangian, it is possible to bypass the perturbative solution and directly obtain the nonlinear coupling coefficient from the linear response of the plasma. To illustrate how to evaluate the cold coupling coefficient, we give a set of examples where the participating waves are either quasitransverse or quasilongitudinal. In these examples, we determine the angular dependence of three-wave scattering, and demonstrate that backscattering is not necessarily the strongest scattering channel in magnetized plasmas, in contrast to what happens in unmagnetized plasmas. Finally, our approach gives a more complete picture, beyond the simple collimated geometry, of how injected waves can decay in magnetic confinement devices, as well as how lasers can be scattered in magnetized plasma targets.

  2. Three-wave scattering in magnetized plasmas: From cold fluid to quantized Lagrangian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yuan; Qin, Hong; Fisch, Nathaniel J.

    Large amplitude waves in magnetized plasmas, generated either by external pumps or internal instabilities, can scatter via three-wave interactions. While three-wave scattering is well known in collimated geometry, what happens when waves propagate at angles with one another in magnetized plasmas remains largely unknown, mainly due to the analytical difficulty of this problem. In this study, we overcome this analytical difficulty and find a convenient formula for three-wave coupling coefficient in cold, uniform, magnetized, and collisionless plasmas in the most general geometry. This is achieved by systematically solving the fluid-Maxwell model to second order using a multiscale perturbative expansion. The general formula for the coupling coefficient becomes transparent when we reformulate it as the scattering matrix element of a quantized Lagrangian. Using the quantized Lagrangian, it is possible to bypass the perturbative solution and directly obtain the nonlinear coupling coefficient from the linear response of the plasma. To illustrate how to evaluate the cold coupling coefficient, we give a set of examples where the participating waves are either quasitransverse or quasilongitudinal. In these examples, we determine the angular dependence of three-wave scattering, and demonstrate that backscattering is not necessarily the strongest scattering channel in magnetized plasmas, in contrast to what happens in unmagnetized plasmas. Finally, our approach gives a more complete picture, beyond the simple collimated geometry, of how injected waves can decay in magnetic confinement devices, as well as how lasers can be scattered in magnetized plasma targets.

  3. SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.

    PubMed

    Li, Dengwang; Yin, Yong

    2012-06-01

    In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on a multi-resolution demons strategy. 4DCT images of lung cancer patients were collected from a General Electric Discovery ST CT scanner at our cancer hospital. All of the images were sorted into groups and reconstructed according to their phases, with each respiratory cycle divided into 10 phases at a time interval of 10%. Firstly, the improved demons algorithm uses the gradients of both the reference and floating images as deformation forces and redistributes the forces according to the proportion of the two. Furthermore, an intermediate variable is introduced into the cost function to decrease the noise in the registration process. At the same time, a Gaussian multi-resolution strategy and the BFGS optimization method are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase-images, comparing the difference between the floating and reference images before and after registration at landmarks identified by an experienced clinician. For each 4D-CT data set, the images at exhalation were chosen as the reference, and all other phase images were registered to them. The method shows good accuracy, demonstrated by a higher similarity measure for registration of 4D-CT, and it can register large deformations precisely. Finally, we obtain the tumor target from the deformation fields produced by the proposed method, which is more accurate than the internal margin (IM) expanded from the Gross Tumor Volume (GTV). Furthermore, we achieve tumor and normal tissue tracking and dose accumulation using 4DCT data. An efficient deformable registration algorithm using a multi-resolution demons approach was proposed for 4DCT. © 2012 American Association of Physicists in Medicine.
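
    For orientation, here is a minimal single-resolution 2D demons iteration in Python (Thirion-style forces with Gaussian regularisation). The paper's algorithm additionally uses the gradients of both images, an intermediate variable in the cost function, a Gaussian multi-resolution pyramid and BFGS optimization, none of which are reproduced here; step size, smoothing and iteration count are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_2d(fixed, moving, iters=50, sigma=2.0, step=1.0):
        """Single-resolution demons sketch: iteratively update a displacement
        field so that moving(x + u) approaches fixed(x)."""
        uy = np.zeros_like(fixed)
        ux = np.zeros_like(fixed)
        gy, gx = np.gradient(fixed)                         # fixed-image gradients only
        yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        for _ in range(iters):
            warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
            diff = warped - fixed
            denom = gx ** 2 + gy ** 2 + diff ** 2
            denom[denom == 0] = 1.0
            uy -= step * diff * gy / denom                  # Thirion demons force
            ux -= step * diff * gx / denom
            uy = gaussian_filter(uy, sigma)                 # regularise the field
            ux = gaussian_filter(ux, sigma)
        return uy, ux

    # Toy example: a shifted square registered back onto the reference square
    fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
    moving = np.zeros((64, 64)); moving[24:44, 22:42] = 1.0
    uy, ux = demons_2d(gaussian_filter(fixed, 1), gaussian_filter(moving, 1))
    ```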

  4. Deformation quantization of fermi fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galaviz, I.; Garcia-Compean, H.; Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN, P.O. Box 14-740, 07000 Mexico, D.F.

    2008-04-15

    Deformation quantization for any Grassmann scalar free field is described via the Weyl-Wigner-Moyal formalism. The Stratonovich-Weyl quantizer, the Moyal *-product and the Wigner functional are obtained by extending the formalism proposed recently in [I. Galaviz, H. Garcia-Compean, M. Przanowski, F.J. Turrubiates, Weyl-Wigner-Moyal Formalism for Fermi Classical Systems, arXiv:hep-th/0612245] to the fermionic systems of infinite number of degrees of freedom. In particular, this formalism is applied to quantize the Dirac free field. It is observed that the use of suitable oscillator variables facilitates considerably the procedure. The Stratonovich-Weyl quantizer, the Moyal *-product, the Wigner functional, the normal ordering operator, and finally, the Dirac propagator have been found with the use of these variables.

  5. Speckle Imaging at Gemini and the DCT

    NASA Astrophysics Data System (ADS)

    Horch, E. P.; Löbb, J.; Howell, S. B.; van Altena, W. F.; Henry, T. J.; van Belle, G. T.

    2018-01-01

    A program of speckle observations at Lowell Observatory's Discovery Channel Telescope (DCT) and the Gemini North and South Telescopes will be described. It has featured the Differential Speckle Survey Instrument (DSSI), built at Southern Connecticut State University in 2008. DSSI is a dual-port system that records speckle images in two colors simultaneously and produces diffraction limited images to V ≈ 16.5 mag at Gemini and V ≈ 14.5 mag at the DCT. Of the several science projects that are being pursued at these telescopes, three will be highlighted here. The first is high-resolution follow-up observations for Kepler and K2 exoplanet missions, the second is a study of metal-poor spectroscopic binaries in an attempt to resolve these systems and determine their visual orbits en route to making mass determinations, and the third is a systematic survey of nearby late-type dwarfs, where the multiplicity fraction will be directly measured and compared to that of G dwarfs. The current status of these projects is discussed and some representative results are given.

  6. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value—defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient

  7. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    PubMed

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
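
    A toy version of the feature-extraction step is sketched below, assuming spatiotemporal blocks, SciPy's 3D DCT and a few simple summary statistics of the AC coefficients. The paper's actual NVS feature set and the trained SVR regressor are not reproduced; block size and statistics are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn
    from scipy.stats import kurtosis

    def spatiotemporal_dct_features(video, block=(4, 8, 8)):
        """Take the 3D-DCT of spatiotemporal blocks and summarise the AC
        coefficients with simple statistics (illustrative, not the paper's features)."""
        T, H, W = (d - d % b for d, b in zip(video.shape, block))
        ac = []
        for t in range(0, T, block[0]):
            for y in range(0, H, block[1]):
                for x in range(0, W, block[2]):
                    cube = video[t:t + block[0], y:y + block[1], x:x + block[2]]
                    coeffs = dctn(cube, norm='ortho').ravel()
                    ac.append(coeffs[1:])            # drop the DC term
        ac = np.concatenate(ac)
        return np.array([ac.std(), kurtosis(ac), np.mean(np.abs(ac))])

    video = np.random.rand(16, 64, 64)               # stand-in for a decoded clip
    features = spatiotemporal_dct_features(video)    # such features would feed a trained SVR
    ```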

  8. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
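
    The contrast between hard decoding and the bin-consistency constraint that any soft decoder must respect can be written in a few lines. The sketch below uses a toy flat quantization table and a stand-in prior estimate, and omits the Laplacian, sparsity and LERaG priors themselves.

    ```python
    import numpy as np
    from scipy.fft import idctn

    def hard_decode(q_indices, q_table):
        """Hard decoding: place each coefficient at the centre of its quantization bin."""
        return idctn(q_indices * q_table, norm='ortho')

    def project_to_bins(coeff_estimate, q_indices, q_table):
        """Consistency constraint used in soft decoding: clip a refined coefficient
        estimate back into the bin indicated by the transmitted JPEG indices."""
        lower = (q_indices - 0.5) * q_table
        upper = (q_indices + 0.5) * q_table
        return np.clip(coeff_estimate, np.minimum(lower, upper), np.maximum(lower, upper))

    q_table = np.full((8, 8), 16.0)                                    # toy quantization table
    q_indices = np.random.default_rng(0).integers(-3, 4, (8, 8)).astype(float)
    block = hard_decode(q_indices, q_table)                            # baseline reconstruction
    refined = q_indices * q_table + np.random.normal(0, 4, (8, 8))     # stand-in prior-based estimate
    consistent = project_to_bins(refined, q_indices, q_table)          # stays faithful to the bitstream
    ```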

  9. MPEG-1 low-cost encoder solution

    NASA Astrophysics Data System (ADS)

    Grueger, Klaus; Schirrmeister, Frank; Filor, Lutz; von Reventlow, Christian; Schneider, Ulrich; Mueller, Gerriet; Sefzik, Nicolai; Fiedrich, Sven

    1995-02-01

    A solution for real-time compression of digital YCRCB video data to an MPEG-1 video data stream has been developed. As an additional option, motion JPEG and video telephone streams (H.261) can be generated. For MPEG-1, up to two bidirectionally predicted images are supported. The required computational power for motion estimation and DCT/IDCT, the memory size and the memory bandwidth have been the main challenges. The design uses fast-page-mode memory accesses and requires only a single 80 ns EDO-DRAM with 256 X 16 organization for video encoding. This can be achieved only by using adequate access and coding strategies. The architecture consists of an input processing and filter unit, a memory interface, a motion estimation unit, a motion compensation unit, a DCT unit, a quantization control, a VLC unit and a bus interface. To share the available memory bandwidth among the processing tasks, a fixed schedule for memory accesses is applied that can be interrupted for asynchronous events. The motion estimation unit implements a highly sophisticated hierarchical search strategy based on block matching. The DCT unit uses a separated fast-DCT flowgraph realized by a switchable hardware unit for both DCT and IDCT operation. By appropriate multiplexing, only one multiplier is required for DCT, quantization, inverse quantization, and IDCT. The VLC unit generates the video stream up to the video sequence layer and is directly coupled with an intelligent bus interface. Thus, the assembly of video, audio and system data can easily be performed by the host computer. Having a relatively low complexity and only small requirements for DRAM circuits, the developed solution can be applied to low-cost encoding products for consumer electronics.
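
    Functionally, the shared DCT/quantization unit implements the round trip sketched below for an intra-coded block. The quantization rule shown is a simplified uniform one rather than the exact MPEG-1 intra rule, and the quantization matrix and scale are stand-ins.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def intra_block_roundtrip(block, q_matrix, q_scale=8):
        """DCT -> quantize -> inverse quantize -> IDCT for one 8x8 intra block
        (simplified uniform quantization, not the exact MPEG-1 rule)."""
        coeffs = dctn(block - 128.0, norm='ortho')            # level shift, then 2D DCT
        levels = np.round(coeffs * 16.0 / (q_matrix * q_scale))   # forward quantization
        recon_coeffs = levels * q_matrix * q_scale / 16.0         # inverse quantization
        return idctn(recon_coeffs, norm='ortho') + 128.0          # reconstructed block

    q_matrix = np.full((8, 8), 16.0)                          # flat stand-in quantization matrix
    block = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(float)
    recon = intra_block_roundtrip(block, q_matrix)
    print("max reconstruction error:", np.abs(recon - block).max())
    ```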

  10. Poster — Thur Eve — 16: 4DCT simulation with synchronized contrast injection of liver SBRT patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karotki, A.; Korol, R.; Milot, L.

    2014-08-15

    Stereotactic body radiation therapy (SBRT) has recently emerged as a valid option for treating liver metastases. SBRT delivers a highly conformal dose over a small number of fractions. As such it is particularly sensitive to the accuracy of target volume delineation by the radiation oncologist. However, contouring liver metastases remains challenging for the following reasons. First, the liver usually undergoes significant motion due to respiration. Second, liver metastases are often nearly indistinguishable from the surrounding tissue when using computed tomography (CT) for imaging, making it difficult to identify and delineate them. Both problems can be overcome by using four-dimensional CT (4DCT) synchronized with intravenous contrast injection. We describe a novel CT simulation process which involves two 4DCT scans. The first scan captures the tumor and immediately surrounding tissue, which in turn reduces the 4DCT scan time so that it can be optimally timed with intravenous contrast injection. The second 4DCT scan covers a larger volume and is used as the primary CT dataset for dose calculation, as well as patient setup verification on the treatment unit. The combination of two 4DCT scans, short and long, allows visualization of the liver metastases over all phases of the breathing cycle while simultaneously acquiring a long enough 4DCT dataset suitable for planning and patient setup verification.

  11. Planning 4-Dimensional Computed Tomography (4DCT) Cannot Adequately Represent Daily Intrafractional Motion of Abdominal Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Jiajia; Santanam, Lakshmi; Noel, Camille

    2013-03-15

    Purpose: To evaluate whether planning 4-dimensional computed tomography (4DCT) can adequately represent daily motion of abdominal tumors in regularly fractionated and stereotactic body radiation therapy (SBRT) patients. Methods and Materials: Intrafractional tumor motion of 10 patients with abdominal tumors (4 pancreas-fractionated and 6 liver-stereotactic patients) with implanted fiducials was measured based on daily orthogonal fluoroscopic movies over 38 treatment fractions. The needed internal margin for at least 90% of tumor coverage was calculated based on the 95th and fifth percentiles of daily 3-dimensional tumor motion. The planning internal margin was generated by fusing 4DCT motion from all phase bins. The disagreement between the needed and planning internal margins was analyzed fraction by fraction in 3 motion axes (superior-inferior [SI], anterior-posterior [AP], and left-right [LR]). The 4DCT margin was considered an overestimation/underestimation of daily motion when the disagreement exceeded at least 3 mm in the SI axis and/or 1.2 mm in the AP and LR axes (4DCT image resolution). The underlying reasons for this disagreement were evaluated based on interfractional and intrafractional breathing variation. Results: The 4DCT overestimated daily 3-dimensional motion in 39% of the fractions in 7 of 10 patients and underestimated it in 53% of the fractions in 8 of 10 patients. Median underestimation was 3.9 mm, 3.0 mm, and 1.7 mm in the SI axis, AP axis, and LR axis, respectively. The 4DCT was found to capture irregular deep breaths in 3 of 10 patients, with 4DCT motion larger than the mean daily amplitude by 18 to 21 mm. The breathing pattern varied from breath to breath and day to day. The intrafractional variation of amplitude was significantly larger than the interfractional variation (2.7 mm vs 1.3 mm) in the primary motion axis (ie, SI axis). The SBRT patients showed significantly larger intrafractional amplitude variation than fractionated patients (3.0

  12. Wavelet versus DCT-based spread spectrum watermarking of image databases

    NASA Astrophysics Data System (ADS)

    Mitrea, Mihai P.; Zaharia, Titus B.; Preteux, Francoise J.; Vlad, Adriana

    2004-05-01

    This paper addresses the issue of oblivious robust watermarking within the framework of colour still image database protection. We present an original method which complies with all the requirements nowadays imposed on watermarking applications: robustness (e.g. low-pass filtering, print & scan, StirMark), transparency (both quality and fidelity), low probability of false alarm, obliviousness and multiple bit recovery. The mark is generated from a 64 bit message (be it a logo, a serial number, etc.) by means of a Spread Spectrum technique and is embedded into the DWT (Discrete Wavelet Transform) domain, into certain low-frequency coefficients selected according to the hierarchy of their absolute values. The best results were provided by the (9,7) bi-orthogonal transform. The experiments were carried out on 1200 image sequences, each consisting of 32 images. Note that these sequences represented several types of images: natural, synthetic, medical, etc., and each time we obtained the same good results. These results are compared with those we already obtained for the DCT domain, the differences being pointed out and discussed.
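
    A sketch of the embedding side is given below, assuming PyWavelets' 'bior4.4' filters as a stand-in for the (9,7) bi-orthogonal transform, a key-seeded ±1 chip sequence and additive embedding into the largest-magnitude approximation coefficients. Strength, decomposition level and coefficient count are illustrative choices, and the detector is omitted.

    ```python
    import numpy as np
    import pywt

    def embed_spread_spectrum(img, message_bits, key=42, alpha=2.0, n_coeffs=1024):
        """Spread-spectrum embedding in the DWT domain: spread the bits with a
        key-seeded pseudo-random sequence and add them to the largest-magnitude
        approximation coefficients (illustrative parameters)."""
        rng = np.random.default_rng(key)
        coeffs = pywt.wavedec2(img.astype(float), 'bior4.4', level=3)
        cA = coeffs[0]
        flat = cA.ravel()
        order = np.argsort(np.abs(flat))[::-1][:n_coeffs]    # largest coefficients first
        chips_per_bit = n_coeffs // len(message_bits)
        pn = rng.choice([-1.0, 1.0], size=n_coeffs)          # pseudo-random chip sequence
        symbols = np.repeat([1.0 if b else -1.0 for b in message_bits], chips_per_bit)
        flat[order[:len(symbols)]] += alpha * pn[:len(symbols)] * symbols
        coeffs[0] = flat.reshape(cA.shape)
        return pywt.waverec2(coeffs, 'bior4.4')

    img = np.random.rand(256, 256) * 255                     # stand-in for a colour channel
    watermarked = embed_spread_spectrum(img, message_bits=[1, 0, 1, 1] * 16)   # 64-bit message
    ```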

  13. KCNJ10 determines the expression of the apical Na-Cl cotransporter (NCC) in the early distal convoluted tubule (DCT1).

    PubMed

    Zhang, Chengbiao; Wang, Lijun; Zhang, Junhui; Su, Xiao-Tong; Lin, Dao-Hong; Scholl, Ute I; Giebisch, Gerhard; Lifton, Richard P; Wang, Wen-Hui

    2014-08-12

    The renal phenotype induced by loss-of-function mutations of inwardly rectifying potassium channel (Kir), Kcnj10 (Kir4.1), includes salt wasting, hypomagnesemia, metabolic alkalosis and hypokalemia. However, the mechanism by which Kir.4.1 mutations cause the tubulopathy is not completely understood. Here we demonstrate that Kcnj10 is a main contributor to the basolateral K conductance in the early distal convoluted tubule (DCT1) and determines the expression of the apical Na-Cl cotransporter (NCC) in the DCT. Immunostaining demonstrated Kcnj10 and Kcnj16 were expressed in the basolateral membrane of DCT, and patch-clamp studies detected a 40-pS K channel in the basolateral membrane of the DCT1 of p8/p10 wild-type Kcnj10(+/+) mice (WT). This 40-pS K channel is absent in homozygous Kcnj10(-/-) (knockout) mice. The disruption of Kcnj10 almost completely eliminated the basolateral K conductance and decreased the negativity of the cell membrane potential in DCT1. Moreover, the lack of Kcnj10 decreased the basolateral Cl conductance, inhibited the expression of Ste20-related proline-alanine-rich kinase and diminished the apical NCC expression in DCT. We conclude that Kcnj10 plays a dominant role in determining the basolateral K conductance and membrane potential of DCT1 and that the basolateral K channel activity in the DCT determines the apical NCC expression possibly through a Ste20-related proline-alanine-rich kinase-dependent mechanism.

  14. Noncommutative gerbes and deformation quantization

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Baković, Igor; Jurčo, Branislav; Schupp, Peter

    2010-11-01

    We define noncommutative gerbes using the language of star products. Quantized twisted Poisson structures are discussed as an explicit realization in the sense of deformation quantization. Our motivation is the noncommutative description of D-branes in the presence of topologically non-trivial background fields.

  15. Full Spectrum Conversion Using Traveling Pulse Wave Quantization

    DTIC Science & Technology

    2017-03-01

    Full Spectrum Conversion Using Traveling Pulse Wave Quantization Michael S. Kappes Mikko E. Waltari IQ-Analog Corporation San Diego, California...temporal-domain quantization technique called Traveling Pulse Wave Quantization (TPWQ). Full spectrum conversion is defined as the complete...pulse width measurements that are continuously generated hence the name “traveling” pulse wave quantization. Our TPWQ-based ADC is composed of a

  16. Quantized Algebra I Texts

    NASA Astrophysics Data System (ADS)

    DeBuvitz, William

    2014-03-01

    I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a popular Algebra I textbook, and I was surprised and dismayed by how it treated simultaneous equations and quadratic equations. The coefficients are always simple integers in examples and exercises, even when they are related to physics. This is probably a good idea when these topics are first presented to the students. It makes it easy to solve simultaneous equations by the method of elimination of a variable. And it makes it easy to solve some quadratic equations by factoring. The textbook also discusses the method of substitution for linear equations and the use of the quadratic formula, but only with simple integers.

  17. 3D delivered dose assessment using a 4DCT-based motion model

    PubMed Central

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.; Dhou, Salam; Berbeco, Ross I.; Seco, Joao; Mishra, Pankaj; Lewis, John H.

    2015-01-01

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images
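
    Step (2) above amounts to solving a small optimization problem at each timepoint. The sketch below illustrates that step under simplifying assumptions (it is not the authors' implementation): the DVF is a mean field plus a linear combination of basis DVFs, and the weights are chosen to minimize the mismatch between a toy forward projection of the warped reference volume and the measured projection image. Here warp and project are hypothetical stand-ins for the real deformation and cone-beam projection operators.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.ndimage import map_coordinates

      def warp(volume, dvf):
          """Deform `volume` by the displacement field `dvf` (shape (3, nx, ny, nz), in voxels)."""
          grid = np.indices(volume.shape).astype(float)
          return map_coordinates(volume, grid + dvf, order=1, mode='nearest')

      def project(volume):
          """Toy forward projection: sum along one axis (a real system uses cone-beam geometry)."""
          return volume.sum(axis=0)

      def fit_weights(ref_volume, mean_dvf, basis_dvfs, measured_projection, w0=None):
          """Find weights w so that project(warp(ref, mean_dvf + sum_k w_k * basis_k)) matches the projection."""
          w0 = np.zeros(len(basis_dvfs)) if w0 is None else w0

          def cost(w):
              dvf = mean_dvf + np.tensordot(w, basis_dvfs, axes=1)  # (3, nx, ny, nz)
              residual = project(warp(ref_volume, dvf)) - measured_projection
              return float(np.sum(residual ** 2))

          return minimize(cost, w0, method='Powell').x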

  18. 3D delivered dose assessment using a 4DCT-based motion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Weixing; Hurwitz, Martina H.; Williams, Christopher L.

    Purpose: The purpose of this work is to develop a clinically feasible method of calculating actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT). Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. This approach can be specified in three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at this timepoint matches the projection image. (3) 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach was first validated using two modified digital extended cardio-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested using one set of patient data, which demonstrated the application of our method in a clinical scenario. Results: For the first XCAT phantom that has a mostly regular breathing pattern, the errors in 95% volume dose (D95) are 0.11% and 0.83%, respectively for 3D fluoroscopic images

  19. Coherent state quantization of quaternions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muraleetharan, B., E-mail: bbmuraleetharan@jfn.ac.lk; Thirulogasanthar, K., E-mail: santhar@gmail.com

    Parallel to the quantization of the complex plane, using the canonical coherent states of a right quaternionic Hilbert space, the quaternion field of quaternionic quantum mechanics is quantized. Associated upper symbols, lower symbols, and related quantities are analyzed. Quaternionic versions of the harmonic oscillator and the Weyl-Heisenberg algebra are also obtained.

  20. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
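
    As background for the schemes evaluated here, a minimal dithered QIM embed/decode pair is sketched below (the step size, per-bit dither, and coefficient selection are illustrative placeholders, not the configurations studied in the paper): each selected wavelet coefficient is quantized onto one of two offset lattices according to the bit value, and the detector chooses the lattice whose quantizer lands closer.

      import numpy as np

      def qim_embed(coeffs, bits, delta=8.0):
          """Embed one bit per coefficient by quantizing onto one of two dithered lattices."""
          c = np.asarray(coeffs, dtype=float)
          d = np.where(np.asarray(bits) == 1, delta / 4.0, -delta / 4.0)  # per-bit dither
          return delta * np.round((c - d) / delta) + d

      def qim_decode(received, delta=8.0):
          """Decode by choosing the lattice (bit value) whose quantized value is closer."""
          r = np.asarray(received, dtype=float)
          err0 = np.abs(r - (delta * np.round((r + delta / 4.0) / delta) - delta / 4.0))
          err1 = np.abs(r - (delta * np.round((r - delta / 4.0) / delta) + delta / 4.0))
          return (err1 < err0).astype(int)

      bits = np.array([1, 0, 1, 1, 0])
      host = np.array([13.2, -5.7, 40.1, 0.3, 22.8])
      print(qim_decode(qim_embed(host, bits)))  # recovers [1 0 1 1 0] in the absence of attacks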

  1. BFV quantization on hermitian symmetric spaces

    NASA Astrophysics Data System (ADS)

    Fradkin, E. S.; Linetsky, V. Ya.

    1995-02-01

    The gauge-invariant BFV approach to geometric quantization is applied to the case of hermitian symmetric spaces G/H. In particular, gauge-invariant quantization on the Lobachevski plane and the sphere is carried out. Due to the presence of symmetry, the master equations for the first-class constraints, quantum observables and physical quantum states are exactly solvable. The BFV-BRST operator defines a flat G-connection in the Fock bundle over G/H. Physical quantum states are covariantly constant sections with respect to this connection and are shown to coincide with the generalized coherent states for the group G. Vacuum expectation values of the quantum observables commuting with the quantum first-class constraints reduce to the covariant symbols of Berezin. The gauge-invariant approach to quantization on symplectic manifolds synthesizes the geometric, deformation and Berezin quantization approaches.

  2. 4D-CT Lung registration using anatomy-based multi-level multi-resolution optical flow analysis and thin-plate splines.

    PubMed

    Min, Yugang; Neylon, John; Shah, Amish; Meeks, Sanford; Lee, Percy; Kupelian, Patrick; Santhanam, Anand P

    2014-09-01

    The accuracy of 4D-CT registration is limited by inconsistent Hounsfield unit (HU) values in the 4D-CT data from one respiratory phase to another and by lower image contrast for lung substructures. This paper presents an optical flow and thin-plate spline (TPS)-based 4D-CT registration method to account for these limitations. The use of unified HU values on multiple anatomy levels (e.g., the lung contour, blood vessels, and parenchyma) accounts for registration errors caused by inconsistent landmark HU values. While 3D multi-resolution optical flow analysis registers each anatomical level, TPS is employed to propagate the results from one anatomical level to another, ultimately leading to the 4D-CT registration. The 4D-CT registration was validated using target registration error (TRE) and inverse consistency error (ICE) metrics, and a statistical image comparison using a Gamma criterion of 1% intensity difference within a 2 mm³ window. Validation results showed that the proposed method was able to register CT lung datasets with TRE and ICE values <3 mm. In addition, the average number of voxels that failed the Gamma criterion was <3%, which supports the clinical applicability of the proposed registration mechanism. The proposed 4D-CT registration computes the volumetric lung deformations within clinically viable accuracy.

  3. Probing topology by "heating": Quantized circular dichroism in ultracold atoms.

    PubMed

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G; Zoller, Peter; Goldman, Nathan

    2017-08-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system's chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This "differential integrated rate" is directly related to the strength of the driving field through the quantized coefficient η₀ = ν/ℏ², where h = 2πℏ is Planck's constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter.

  4. Controlling charge quantization with quantum fluctuations.

    PubMed

    Jezouin, S; Iftikhar, Z; Anthore, A; Parmentier, F D; Gennser, U; Cavanna, A; Ouerghi, A; Levkivskyi, I P; Idrisov, E; Sukhorukov, E V; Glazman, L I; Pierre, F

    2016-08-04

    In 1909, Millikan showed that the charge of electrically isolated systems is quantized in units of the elementary electron charge e. Today, the persistence of charge quantization in small, weakly connected conductors allows for circuits in which single electrons are manipulated, with applications in, for example, metrology, detectors and thermometry. However, as the connection strength is increased, the discreteness of charge is progressively reduced by quantum fluctuations. Here we report the full quantum control and characterization of charge quantization. By using semiconductor-based tunable elemental conduction channels to connect a micrometre-scale metallic island to a circuit, we explore the complete evolution of charge quantization while scanning the entire range of connection strengths, from a very weak (tunnel) to a perfect (ballistic) contact. We observe, when approaching the ballistic limit, that charge quantization is destroyed by quantum fluctuations, and scales as the square root of the residual probability for an electron to be reflected across the quantum channel; this scaling also applies beyond the different regimes of connection strength currently accessible to theory. At increased temperatures, the thermal fluctuations result in an exponential suppression of charge quantization and in a universal square-root scaling, valid for all connection strengths, in agreement with expectations. Besides being pertinent for the improvement of single-electron circuits and their applications, and for the metal-semiconductor hybrids relevant to topological quantum computing, knowledge of the quantum laws of electricity will be essential for the quantum engineering of future nanoelectronic devices.

  5. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  6. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.

  7. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kida, S; University of Tokyo Hospital, Bunkyo, Tokyo; Bal, M

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly-functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4DCT ventilation imaging have been focused on comparison with other imaging modalities including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics have been evaluated and tested by a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation image-based plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1 ± 9.15 Gy, which was comparable to 25.2 ± 8.60 Gy for the SPECT plans (p = 0.89). For other critical organs and PTV, nonsignificant differences were found as well. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation

  8. A recursive technique for adaptive vector quantization

    NASA Technical Reports Server (NTRS)

    Lindsay, Robert A.

    1989-01-01

    Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. The VQ performs well when compressing all types of imagery including Video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches for designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that simultaneously designs codebooks as the data is being encoded or quantized. This is done by computing the centroid as a recursive moving average where the centroids move after every vector is encoded. When computing the centroid of a fixed set of vectors the resultant centroid is identical to the previous centroid calculation. This method of centroid calculation can be easily combined with VQ encoding techniques. The defined quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer is changing definition or states after every encoded vector, the decoder must now receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
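
    The recursive centroid update described above can be sketched in a few lines (assumed details: Euclidean distortion and per-codeword hit counts; as the abstract notes, the codebook changes would be multiplexed to the decoder as side information).

      import numpy as np

      class AdaptiveVQ:
          def __init__(self, codebook):
              self.codebook = np.array(codebook, dtype=float)  # (K, dim)
              self.counts = np.ones(len(self.codebook))        # one "virtual" vector per codeword

          def encode(self, x):
              x = np.asarray(x, dtype=float)
              idx = int(np.argmin(np.sum((self.codebook - x) ** 2, axis=1)))
              # recursive moving average applied to the winning codeword: c <- c + (x - c) / n
              self.counts[idx] += 1
              self.codebook[idx] += (x - self.codebook[idx]) / self.counts[idx]
              return idx  # the corresponding codebook update is sent to the decoder as side information

      # usage with a toy 2-D source and a 4-codeword starting codebook
      vq = AdaptiveVQ(codebook=np.zeros((4, 2)))
      indices = [vq.encode(v) for v in np.random.default_rng(1).normal(size=(100, 2))]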

  9. SU-C-BRA-06: Developing Clinical and Quantitative Guidelines for a 4DCT-Ventilation Functional Avoidance Clinical Trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Y; Waxweiler, T; Diot, Q

    Purpose: 4DCT-ventilation is an exciting new imaging modality that uses 4DCTs to calculate lung ventilation. Because 4DCTs are acquired as part of routine care, calculating 4DCT-ventilation allows for lung function evaluation without additional cost or inconvenience to the patient. Development of a clinical trial is underway at our institution to use 4DCT-ventilation for thoracic functional avoidance, with the idea that preferential sparing of functional lung regions can decrease pulmonary toxicity. The purpose of our work was to develop the practical aspects of a 4DCT-ventilation functional avoidance clinical trial, including (1) assessing patient eligibility, (2) developing trial inclusion criteria, and (3) developing treatment planning and dose-function evaluation strategies. Methods: 96 stage III lung cancer patients from 2 institutions were retrospectively reviewed. 4DCT-ventilation maps were calculated using the patients' 4DCTs, deformable image registrations, and a density-change-based algorithm. To assess patient eligibility and develop trial inclusion criteria, we used an observer-based binary end point noting the presence or absence of a ventilation defect and developed an algorithm based on the percent ventilation in each lung third. Functional avoidance planning integrating 4DCT-ventilation was performed using rapid-arc and compared to the patient's clinically used plan. Results: Investigator-determined clinical ventilation defects were present in 69% of patients. Our regional/lung-thirds ventilation algorithm identified that 59% of patients have lung functional profiles suitable for functional avoidance. Compared to the clinical plan, functional avoidance planning was able to reduce the mean dose to functional lung by 2 Gy while delivering comparable target coverage and cord/heart doses. Conclusions: 4DCT-ventilation functional avoidance clinical trials have great potential to reduce toxicity, and our data suggest that 59% of lung cancer patients

  10. 4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling.

    PubMed

    Yang, Deshan; Lu, Wei; Low, Daniel A; Deasy, Joseph O; Hope, Andrew J; El Naqa, Issam

    2008-10-01

    Four-dimensional computed tomography (4D-CT) imaging technology has been developed for radiation therapy to provide tumor and organ images at the different breathing phases. In this work, a procedure is proposed for estimating and modeling the respiratory motion field from acquired 4D-CT imaging data and predicting tissue motion at the different breathing phases. The 4D-CT image data consist of series of multislice CT volume segments acquired in ciné mode. A modified optical flow deformable image registration algorithm is used to compute the image motion from the CT segments to a common full volume 3D-CT reference. This reference volume is reconstructed using the acquired 4D-CT data at the end-of-exhalation phase. The segments are optimally aligned to the reference volume according to a proposed a priori alignment procedure. The registration is applied using a multigrid approach and a feature-preserving image downsampling maxfilter to achieve better computational speed and higher registration accuracy. The registration accuracy is about 1.1 +/- 0.8 mm for the lung region according to our verification using manually selected landmarks and artificially deformed CT volumes. The estimated motion fields are fitted to two 5D (spatial 3D+tidal volume+airflow rate) motion models: forward model and inverse model. The forward model predicts tissue movements and the inverse model predicts CT density changes as a function of tidal volume and airflow rate. A leave-one-out procedure is used to validate these motion models. The estimated modeling prediction errors are about 0.3 mm for the forward model and 0.4 mm for the inverse model.
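
    Per voxel, the forward 5D model fit described above is a linear regression of displacement on tidal volume and airflow rate, x(t) = x0 + alpha*v(t) + beta*f(t). The sketch below shows that fit by least squares under assumed array shapes (it is not the authors' code):

      import numpy as np

      def fit_5d_forward_model(displacements, tidal_volume, airflow):
          """
          displacements: (n_phases, n_voxels, 3) motion vectors from the registration
          tidal_volume, airflow: (n_phases,) respiratory surrogate signals
          returns x0, alpha, beta, each of shape (n_voxels, 3)
          """
          n_phases = displacements.shape[0]
          A = np.column_stack([np.ones(n_phases), tidal_volume, airflow])  # (n_phases, 3)
          Y = displacements.reshape(n_phases, -1)                          # (n_phases, 3 * n_voxels)
          coef, *_ = np.linalg.lstsq(A, Y, rcond=None)                     # (3, 3 * n_voxels)
          x0, alpha, beta = (c.reshape(-1, 3) for c in coef)
          return x0, alpha, beta

      def predict_displacement(x0, alpha, beta, v, f):
          """Predict tissue displacement at tidal volume v and airflow rate f."""
          return x0 + alpha * v + beta * f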

  11. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from a stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged into a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman approach. The inverse problem of the calculus of variations, the problem of quantization ambiguity and quantum mechanics in the presence of friction are analyzed in detail.

  12. Dynamic Contour Tonometry (DCT) over a thin daily disposable hydrogel contact lens.

    PubMed

    Nosch, Daniela Sonja; Duddek, Armin P; Herrmann, Didier; Stuhrmann, Oliver M

    2010-10-01

    Dynamic Contour Tonometry (DCT) has been shown to measure the intraocular pressure (IOP) independent of corneal physical properties such as thickness, curvature and rigidity. The aim of this study was to find out if DCT remains accurate when it is applied to regularly shaped corneas while a thin, daily hydrogel contact lens (CL) is worn. This was a prospective, randomised study and included 46 patients (46 right eyes): 26 females and 20 males. The age varied from 22 to 66 years (mean: 43.0+/-12.70 years). IOP and ocular pulse amplitude (OPA) measurements were taken with and without a daily disposable hydrogel CL (-0.50 D, Filcon IV) in situ (using the DCT), with a randomised order of measurements. The average value for the IOP measurements without CL was 16.51+/-3.20 mmHg, and with CL in situ it was 16.10+/-3.10 mmHg. The mean difference was 0.41 mmHg and not found to be statistically significant (p=0.074). The average value for the OPA measurement without CL was 2.20+/-0.79 mmHg. With CL in situ it was 2.08+/-0.81 mmHg. This gave a mean difference of 0.11 mmHg and was statistically significant (p=0.025). The Bland-Altman plot showed a maximum difference in IOP of +2.44 and -2.00 mmHg (CI 0.95). Regarding OPA, the maximum difference was +0.81 and -0.60 mmHg (CI 0.95). The presence of a thin hydrogel CL did not affect the accuracy of IOP measurements using the DCT. The ocular pulse amplitude was measured on average 5.45% lower with a CL in situ. Copyright (c) 2010 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  13. Probing topology by “heating”: Quantized circular dichroism in ultracold atoms

    PubMed Central

    Tran, Duc Thanh; Dauphin, Alexandre; Grushin, Adolfo G.; Zoller, Peter; Goldman, Nathan

    2017-01-01

    We reveal an intriguing manifestation of topology, which appears in the depletion rate of topological states of matter in response to an external drive. This phenomenon is presented by analyzing the response of a generic two-dimensional (2D) Chern insulator subjected to a circular time-periodic perturbation. Because of the system’s chiral nature, the depletion rate is shown to depend on the orientation of the circular shake; taking the difference between the rates obtained from two opposite orientations of the drive, and integrating over a proper drive-frequency range, provides a direct measure of the topological Chern number (ν) of the populated band: This “differential integrated rate” is directly related to the strength of the driving field through the quantized coefficient η₀ = ν/ℏ², where h = 2πℏ is Planck’s constant. Contrary to the integer quantum Hall effect, this quantized response is found to be nonlinear with respect to the strength of the driving field, and it explicitly involves interband transitions. We investigate the possibility of probing this phenomenon in ultracold gases and highlight the crucial role played by edge states in this effect. We extend our results to 3D lattices, establishing a link between depletion rates and the nonlinear photogalvanic effect predicted for Weyl semimetals. The quantized circular dichroism revealed in this work designates depletion rate measurements as a universal probe for topological order in quantum matter. PMID:28835930

  14. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

    The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.

  15. Natural inflation from polymer quantization

    NASA Astrophysics Data System (ADS)

    Ali, Masooma; Seahra, Sanjeev S.

    2017-11-01

    We study the polymer quantization of a homogeneous massive scalar field in the early Universe using a prescription inequivalent to those previously appearing in the literature. Specifically, we assume a Hilbert space for which the scalar field momentum is well defined but its amplitude is not. This is closer in spirit to the quantization scheme of loop quantum gravity, in which no unique configuration operator exists. We show that in the semiclassical approximation, the main effect of this polymer quantization scheme is to compactify the phase space of chaotic inflation in the field amplitude direction. This gives rise to an effective scalar potential closely resembling that of hybrid natural inflation. Unlike polymer schemes in which the scalar field amplitude is well defined, the semiclassical dynamics involves a past cosmological singularity; i.e., this approach does not mitigate the big bang.

  16. Pseudo-Kähler Quantization on Flag Manifolds

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    A unified approach to geometric, symbol and deformation quantizations on a generalized flag manifold endowed with an invariant pseudo-Kähler structure is proposed. In particular cases we arrive at Berezin's quantization via covariant and contravariant symbols.

  17. Quantization of Electromagnetic Fields in Cavities

    NASA Technical Reports Server (NTRS)

    Kakazu, Kiyotaka; Oshiro, Kazunori

    1996-01-01

    A quantization procedure for the electromagnetic field in a rectangular cavity with perfect conductor walls is presented, where a decomposition formula of the field plays an essential role. All vector mode functions are obtained by using the decomposition. After expanding the field in terms of the vector mode functions, we get the quantized electromagnetic Hamiltonian.

  18. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.

  19. Tribology of the lubricant quantized sliding state.

    PubMed

    Castelli, Ivano Eligio; Capozza, Rosario; Vanossi, Andrea; Santoro, Giuseppe E; Manini, Nicola; Tosatti, Erio

    2009-11-07

    In the framework of Langevin dynamics, we demonstrate clear evidence of the peculiar quantized sliding state, previously found in a simple one-dimensional boundary lubricated model [A. Vanossi et al., Phys. Rev. Lett. 97, 056101 (2006)], for a substantially less idealized two-dimensional description of a confined multilayer solid lubricant under shear. This dynamical state, marked by a nontrivial "quantized" ratio of the averaged lubricant center-of-mass velocity to the externally imposed sliding speed, is recovered, and shown to be robust against the effects of thermal fluctuations, quenched disorder in the confining substrates, and over a wide range of loading forces. The lubricant softness, setting the width of the propagating solitonic structures, is found to play a major role in promoting in-registry commensurate regions beneficial to this quantized sliding. By evaluating the force instantaneously exerted on the top plate, we find that this quantized sliding represents a dynamical "pinned" state, characterized by significantly low values of the kinetic friction. While the quantized sliding occurs due to solitons being driven gently, the transition to ordinary unpinned sliding regimes can involve lubricant melting due to large shear-induced Joule heating, for example at large speed.

  20. On-chip integratable all-optical quantizer using strong cross-phase modulation in a silicon-organic hybrid slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Sang, Xinzhu; Wang, Kuiru; Wu, Qiang; Yan, Binbin; Li, Feng; Zhou, Xian; Zhong, Kangping; Zhou, Guiyao; Yu, Chongxiu; Farrell, Gerald; Lu, Chao; Yaw Tam, Hwa; Wai, P. K. A.

    2016-01-01

    High-performance all-optical quantizers based on silicon waveguides are believed to have significant applications in photonic integratable optical communication links, optical interconnection networks, and real-time signal processing systems. In this paper, we propose an integratable all-optical quantizer for on-chip and low power consumption all-optical analog-to-digital converters. The quantization is realized by the strong cross-phase modulation and interference in a silicon-organic hybrid (SOH) slot waveguide based Mach-Zehnder interferometer. By carefully designing the dimension of the SOH waveguide, large nonlinear coefficients up to 16,000 and 18,069 W−1/m for the pump and probe signals can be obtained respectively, along with a low pulse walk-off parameter of 66.7 fs/mm, and all-normal dispersion in the wavelength regime considered. Simulation results show that the phase shift of the probe signal can reach 8π at a low pump pulse peak power of 206 mW and propagation length of 5 mm such that a 4-bit all-optical quantizer can be realized. The corresponding signal-to-noise ratio is 23.42 dB and the effective number of bits is 3.89. PMID:26777054

  1. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that have been originally stored in the JPEG format as cover-images for spatial-domain steganography.
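
    The core compatibility test can be illustrated with a short sketch (a simplified rendering of the idea, not the authors' detector; the tolerance is an assumed parameter, and a practical test must also account for rounding and clipping in the decoder): a block that genuinely came from JPEG decompression with quantization matrix Q should, after the forward DCT and division by Q, yield values close to integers, whereas spatial-domain modifications such as LSB flips break this property.

      import numpy as np

      def dct2_8x8(block):
          """Orthonormal 2D DCT-II of an 8x8 block."""
          n = 8
          k = np.arange(n)
          C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
          C[0, :] /= np.sqrt(2.0)
          return C @ block @ C.T

      def looks_jpeg_compatible(block, Q, tol=0.25):
          """block: 8x8 pixel values (0..255); Q: 8x8 JPEG quantization matrix; tol is assumed."""
          coeffs = dct2_8x8(block.astype(float) - 128.0) / Q
          return bool(np.all(np.abs(coeffs - np.round(coeffs)) < tol))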

  2. Interactions of C4 Subtype Metabolic Activities and Transport in Maize Are Revealed through the Characterization of DCT2 Mutants

    DOE PAGES

    Weissmann, Sarit; Ma, Fangfang; Furuyama, Koki; ...

    2016-01-26

    C4 photosynthesis in grasses requires the coordinated movement of metabolites through two specialized leaf cell types, mesophyll (M) and bundle sheath (BS), to concentrate CO2 around Rubisco. Despite the importance of transporters in this process, few have been identified or rigorously characterized. In maize (Zea mays), DCT2 has been proposed to function as a plastid-localized malate transporter and is preferentially expressed in BS cells. Here, we characterized the role of DCT2 in maize leaves using Activator-tagged mutant alleles. Our results indicate that DCT2 enables the transport of malate into the BS chloroplast. Isotopic labeling experiments show that the loss of DCT2 results in markedly different metabolic network operation and dramatically reduced biomass production. In the absence of a functioning malate shuttle, dct2 lines survive through the enhanced use of the phosphoenolpyruvate carboxykinase carbon shuttle pathway that in wild-type maize accounts for ~25% of the photosynthetic activity. The results emphasize the importance of malate transport during C4 photosynthesis, define the role of a primary malate transporter in BS cells, and support a model for carbon exchange between BS and M cells in maize.

  3. A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.

    PubMed

    Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-04-07

    Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol. The overlapping protocol provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process was continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume was produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
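
    The selection step at the heart of the sorting algorithm is a normalized cross correlation over the shared slice; the 'daisy chaining' then repeats this selection from couch position to couch position. A minimal sketch (data structures are assumed):

      import numpy as np

      def ncc(a, b):
          """Normalized cross correlation of two equally sized 2D slices."""
          a = a - a.mean()
          b = b - b.mean()
          return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def select_next(current_overlap_slice, candidate_overlap_slices):
          """candidate_overlap_slices: one 2D array per cine image at the adjacent couch
          position, all taken at the shared (overlapping) slice location."""
          scores = [ncc(current_overlap_slice, c) for c in candidate_overlap_slices]
          return int(np.argmax(scores))  # index of the best-matching cine image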

  4. A method to map errors in the deformable registration of 4DCT images

    PubMed Central

    Vaman, Constantin; Staub, David; Williamson, Jeffrey; Murphy, Martin J.

    2010-01-01

    Purpose: To present a new approach to the problem of estimating errors in deformable image registration (DIR) applied to sequential phases of a 4DCT data set. Methods: A set of displacement vector fields (DVFs) are made by registering a sequence of 4DCT phases. The DVFs are assumed to display anatomical movement, with the addition of errors due to the imaging and registration processes. The positions of physical landmarks in each CT phase are measured as ground truth for the physical movement in the DVF. Principal component analysis of the DVFs and the landmarks is used to identify and separate the eigenmodes of physical movement from the error eigenmodes. By subtracting the physical modes from the principal components of the DVFs, the registration errors are exposed and reconstructed as DIR error maps. The method is demonstrated via a simple numerical model of 4DCT DVFs that combines breathing movement with simulated maps of spatially correlated DIR errors. Results: The principal components of the simulated DVFs were observed to share the basic properties of principal components for actual 4DCT data. The simulated error maps were accurately recovered by the estimation method. Conclusions: Deformable image registration errors can have complex spatial distributions. Consequently, point-by-point landmark validation can give unrepresentative results that do not accurately reflect the registration uncertainties away from the landmarks. The authors are developing a method for mapping the complete spatial distribution of DIR errors using only a small number of ground truth validation landmarks. PMID:21158288
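
    In outline, the estimation method is a principal component analysis of the DVF stack followed by removal of the components identified as physical motion; what remains is reconstructed as the DIR error map. The sketch below is a minimal, hypothetical rendering of that outline (not the authors' code): deciding which components are "physical" would in practice be done by comparing the component scores with the landmark trajectories.

      import numpy as np

      def dir_error_maps(dvfs, physical_modes):
          """
          dvfs: (n_phases, n_features) flattened displacement vector fields
          physical_modes: indices of principal components judged to represent true motion
          returns: (n_phases, n_features) estimated registration-error fields
          """
          X = dvfs - dvfs.mean(axis=0)
          U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
          keep = np.ones(len(S), dtype=bool)
          keep[list(physical_modes)] = False                # drop the physical-motion modes
          return (U[:, keep] * S[keep]) @ Vt[keep]          # residual components = error estimate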

  5. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design the codebook. During the encoding process, the correlation of the addresses is considered and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ based on a probability transition matrix to select the best subcodebook to encode the image is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing

  6. Quantization of simple parametrized systems

    NASA Astrophysics Data System (ADS)

    Ruffini, G.

    2005-11-01

    I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle is affected only by the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied, and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations of amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.

  7. SU-D-17A-07: Development and Evaluation of a Prototype Ultrasonography Respiratory Monitoring System for 4DCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, P; Cheng, S; Chao, C

    Purpose: Respiratory motion artifacts are commonly seen in abdominal and thoracic CT images. A Real-time Position Management (RPM) system is integrated with the CT simulator, using the abdominal surface as a surrogate for tracking the patient respiratory motion. The respiratory-correlated four-dimensional computed tomography (4DCT) is then reconstructed by GE Advantage software. However, there are still artifacts due to inaccurate respiratory motion detection and sorting methods. We developed an Ultrasonography Respiration Monitoring (URM) system which can directly monitor diaphragm motion to detect respiratory cycles. We also developed a new 4DCT sorting and motion estimation method to reduce the respiratory motion artifacts. The new 4DCT system was compared with the RPM and the GE 4DCT system. Methods: Imaging from a GE CT scanner was simultaneously correlated with both the RPM and URM to detect respiratory motion. A radiation detector, Blackcat GM-10, recorded the X-ray on/off status and was synchronized with the URM. The diaphragm images were acquired with an Ultrasonix RP system. The respiratory wave was derived from the diaphragm images and synchronized with the CT scanner. A more precise peak-and-valley detection tool was developed and compared with the RPM. The motion is estimated for the slices which are not in the predefined respiratory phases by using block matching and an optical flow method. The CT slices were then sorted into different phases and reconstructed, and compared with the images reconstructed by GE Advantage software using the respiratory wave produced by the RPM system. Results: The 4DCT images were reconstructed for eight patients. The discontinuity at the diaphragm level due to inaccurate identification of phases by the RPM was significantly improved by the URM system. Conclusion: Our URM 4DCT system was evaluated and compared with the RPM and GE 4DCT system. The new system is user friendly and able to reduce motion artifacts. It also has the potential to monitor organ motion

  8. Gravitational surface Hamiltonian and entropy quantization

    NASA Astrophysics Data System (ADS)

    Bakshi, Ashish; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-02-01

    The surface Hamiltonian corresponding to the surface part of a gravitational action has an xp structure, where p is the conjugate momentum of x. Moreover, it leads to TS on the horizon of a black hole, where T and S are the temperature and entropy of the horizon. Imposing the hermiticity condition, we quantize this Hamiltonian. This leads to an equidistant spectrum of its eigenvalues. Using this, we show that the entropy of the horizon is quantized. This analysis holds for any order of Lanczos-Lovelock gravity. For general relativity, the area spectrum is consistent with Bekenstein's observation. This provides a more robust confirmation of this earlier result, as the calculation is based on the direct quantization of the Hamiltonian in the sense of usual quantum mechanics.

  9. I-SceI-mediated double-strand break does not increase the frequency of homologous recombination at the Dct locus in mouse embryonic stem cells.

    PubMed

    Fenina, Myriam; Simon-Chazottes, Dominique; Vandormael-Pournin, Sandrine; Soueid, Jihane; Langa, Francina; Cohen-Tannoudji, Michel; Bernard, Bruno A; Panthier, Jean-Jacques

    2012-01-01

    Targeted induction of double-strand breaks (DSBs) at natural endogenous loci was shown to increase the rate of gene replacement by homologous recombination in mouse embryonic stem cells. The gene encoding dopachrome tautomerase (Dct) is specifically expressed in melanocytes and their precursors. To construct a genetic tool allowing the replacement of Dct gene by any gene of interest, we generated an embryonic stem cell line carrying the recognition site for the yeast I-SceI meganuclease embedded in the Dct genomic segment. The embryonic stem cell line was electroporated with an I-SceI expression plasmid, and a template for the DSB-repair process that carried sequence homologies to the Dct target. The I-SceI meganuclease was indeed able to introduce a DSB at the Dct locus in live embryonic stem cells. However, the level of gene targeting was not improved by the DSB induction, indicating a limited capacity of I-SceI to mediate homologous recombination at the Dct locus. These data suggest that homologous recombination by meganuclease-induced DSB may be locus dependent in mammalian cells.

  10. Polymer-Fourier quantization of the scalar field revisited

    NASA Astrophysics Data System (ADS)

    Garcia-Chung, Angel; Vergara, J. David

    2016-10-01

    The polymer quantization of the Fourier modes of the real scalar field is studied within the algebraic scheme. We replace the positive linear functional of the standard Poincaré invariant quantization by a singular one. This singular positive linear functional is constructed by mimicking the singular limit of the complex structure of the Poincaré invariant Fock quantization. The resulting symmetry group of such a polymer quantization is SDiff(ℝ⁴), the subgroup of Diff(ℝ⁴) formed by spatial volume-preserving diffeomorphisms. In consequence, this yields an entirely different irreducible representation of the canonical commutation relations, not unitarily equivalent to the standard Fock representation. We also compare the Poincaré invariant Fock vacuum with the polymer Fourier vacuum.

  11. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
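
    Frequency-sensitive competitive learning can be sketched in a few lines (a generic version for illustration, not the paper's exact rule; the learning rate and the fairness function are assumptions): the distortion of each codeword is scaled by how often it has already won, so under-used codewords eventually win and the codebook spreads over the training data.

      import numpy as np

      def fscl_train(data, n_codewords=16, lr=0.05, seed=0):
          """data: (n_samples, dim) training vectors; returns a (n_codewords, dim) codebook."""
          rng = np.random.default_rng(seed)
          codebook = data[rng.choice(len(data), n_codewords, replace=False)].astype(float)
          wins = np.ones(n_codewords)
          for x in data:
              d2 = np.sum((codebook - x) ** 2, axis=1)
              winner = int(np.argmin(wins * d2))       # frequency-sensitive distortion measure
              wins[winner] += 1
              codebook[winner] += lr * (x - codebook[winner])
          return codebook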

  12. New architecture for dynamic frame-skipping transcoder.

    PubMed

    Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi

    2002-01-01

    Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, the skipped frame must be decompressed completely, since it might act as a reference frame for the reconstruction of nonskipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence can have smooth motion as well as better transcoded pictures. Experimental results show that, as compared to the conventional transcoder, the new architecture for the frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.

  13. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

    It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability is treated in which only one quantizer is in operation.

  14. Topologies on quantum topoi induced by quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji

    2013-07-15

    In the present paper, we consider effects of quantization in a topos approach of quantum theory. A quantum system is assumed to be coded in a quantum topos, by which we mean the topos of presheaves on the context category of commutative subalgebras of a von Neumann algebra of bounded operators on a Hilbert space. A classical system is modeled by a Lie algebra of classical observables. It is shown that a quantization map from the classical observables to self-adjoint operators on the Hilbert space naturally induces geometric morphisms from presheaf topoi related to the classical system to the quantum topos. By means of the geometric morphisms, we give Lawvere-Tierney topologies on the quantum topos (and their equivalent Grothendieck topologies on the context category). We show that, among them, there exists a canonical one which we call a quantization topology. We furthermore give an explicit expression of a sheafification functor associated with the quantization topology.

  15. Topological quantization in units of the fine structure constant.

    PubMed

    Maciejko, Joseph; Qi, Xiao-Liang; Drew, H Dennis; Zhang, Shou-Cheng

    2010-10-15

    Fundamental topological phenomena in condensed matter physics are associated with a quantized electromagnetic response in units of fundamental constants. Recently, it has been predicted theoretically that the time-reversal invariant topological insulator in three dimensions exhibits a topological magnetoelectric effect quantized in units of the fine structure constant α=e²/ℏc. In this Letter, we propose an optical experiment to directly measure this topological quantization phenomenon, independent of material details. Our proposal also provides a way to measure the half-quantized Hall conductances on the two surfaces of the topological insulator independently of each other.

  16. A face and palmprint recognition approach based on discriminant DCT feature extraction.

    PubMed

    Jing, Xiao-Yuan; Zhang, David

    2004-12-01

    In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts the linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach in feature extraction. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
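
    A simplified sketch of the general pipeline (2-D DCT of each image, a crude separability score to pick discriminative coefficients, then standard LDA) is given below. The separability score and band selection here are stand-ins for the paper's two-dimensional separability judgment and improved Fisherface method; the data and sizes are toy assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dct_band_features(img, band=8):
    """Keep an upper-left band of 2-D DCT coefficients as features (illustrative)."""
    c = dctn(img, norm='ortho')
    return c[:band, :band].ravel()

def fisher_score(X, y):
    """Crude between/within-class variance ratio per feature, as a stand-in
    for the paper's two-dimensional separability judgment."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    sb = sum((X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    sw = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return sb / sw

# toy data: 20 "faces" of size 32x32 in 2 classes
rng = np.random.default_rng(0)
imgs = rng.random((20, 32, 32)) + np.repeat([0.0, 0.3], 10)[:, None, None]
y = np.repeat([0, 1], 10)
X = np.array([dct_band_features(im) for im in imgs])
keep = np.argsort(fisher_score(X, y))[-16:]        # most separable coefficients
clf = LinearDiscriminantAnalysis().fit(X[:, keep], y)
print("training accuracy:", clf.score(X[:, keep], y))
```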

  17. Quantization improves stabilization of dynamical systems with delayed feedback

    NASA Astrophysics Data System (ADS)

    Stepan, Gabor; Milton, John G.; Insperger, Tamas

    2017-11-01

    We show that an unstable scalar dynamical system with time-delayed feedback can be stabilized by quantizing the feedback. The discrete time model corresponds to a previously unrecognized case of the microchaotic map in which the fixed point is both locally and globally repelling. In the continuous-time model, stabilization by quantization is possible when the fixed point in the absence of feedback is an unstable node, and in the presence of feedback, it is an unstable focus (spiral). The results are illustrated with numerical simulation of the unstable Hayes equation. The solutions of the quantized Hayes equation take the form of oscillations in which the amplitude is a function of the size of the quantization step. If the quantization step is sufficiently small, the amplitude of the oscillations can be small enough to practically approximate the dynamics around a stable fixed point.
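
    The continuous-time model can be explored with a few lines of Python: an Euler integration of x'(t) = a*x(t) + b*Q(x(t - tau)), where Q rounds the delayed feedback to multiples of the quantization step h. The parameters below are illustrative only (chosen so that the unquantized closed loop is a weakly unstable focus); they are not the values used in the paper, and the printed amplitude simply lets one explore how the oscillation size varies with h.

```python
import numpy as np

def simulate_quantized_hayes(a=1.0, b=-2.0, tau=0.7, h=0.05,
                             dt=1e-3, t_end=50.0, x0=0.2):
    """Euler simulation of x'(t) = a*x(t) + b*Q(x(t - tau)) with quantized
    delayed feedback Q(x) = h*round(x/h). Illustrative parameters only."""
    n_delay = int(round(tau / dt))
    n = int(round(t_end / dt))
    x = np.full(n_delay + n + 1, x0)              # first n_delay+1 samples = history
    for k in range(n_delay, n_delay + n):
        q_fb = h * np.round(x[k - n_delay] / h)   # quantized delayed state
        x[k + 1] = x[k] + dt * (a * x[k] + b * q_fb)
    return x[n_delay:]

x = simulate_quantized_hayes()
print("range of x over the last 5 time units:", np.ptp(x[-5000:]))
```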

  18. SU-F-J-116: Clinical Experience-Based Verification and Improvement of a 4DCT Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogg, P; West, M; Aland, T

    2016-06-15

    Purpose: To demonstrate the role of continuous improvement fulfilled by the Medical Physicist in clinical 4DCT and CBCT scanning. Methods: Lung (SABR and Standard) patients’ 4D respiratory motion and image data were reviewed over a 3, 6 and 12 month period following commissioning testing. By identifying trends of clinically relevant parameters and respiratory motions, variables were tested with a programmable motion phantom and assessed. Patient traces were imported to a motion phantom and 4DCT and CBCT imaging were performed. Cos6 surrogate and sup-inf motion were also programmed into the phantom to simulate the long exhale of patients for image contrast tests. Results: Patient surrogate motion amplitudes were 9.9 ± 5.2 mm (3–35) at 18 ± 6 bpm (6–30). Expiration/Inspiration time ratios of 1.4 ± 0.5 s (0.6–2.9) showed image contrast effects evident in the AveCT and 3DCBCT images. Small differences were found for patients with multiple 4DCT data sets. Patient motion assessments were simulated and verified with the phantom within 2 mm. Initial image reviews to check for reconstructed artefacts and data loss identified a small number of patients with irregularities in the automatic placement of inspiration and expiration points. Conclusion: The Physicist’s involvement in the continuous improvements of a clinically commissioned technique, processes and workflows continues beyond the commissioning stage of a project. Our experience with our clinical 4DCT program shows that Physics presence is required at the clinical 4DCT scan to assist with technical aspects of the scan and also for clinical image quality assessment prior to voluming. The results of this work enabled the sharing of information from the Medical Physics group with the Radiation Oncologists and Radiation Therapists. This results in an improved awareness of clinical patient respiration variables and how they may affect 4D simulation images and may also affect the treatment verification

  19. Dimensional quantization effects in the thermodynamics of conductive filaments

    NASA Astrophysics Data System (ADS)

    Niraula, D.; Grice, C. R.; Karpov, V. G.

    2018-06-01

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  20. Dimensional quantization effects in the thermodynamics of conductive filaments.

    PubMed

    Niraula, D; Grice, C R; Karpov, V G

    2018-06-29

    We consider the physical effects of dimensional quantization in conductive filaments that underlie operations of some modern electronic devices. We show that, as a result of quantization, a sufficiently thin filament acquires a positive charge. Several applications of this finding include the host material polarization, the stability of filament constrictions, the equilibrium filament radius, polarity in device switching, and quantization of conductance.

  1. Proton radiography and fluoroscopy of lung tumors: A Monte Carlo study using patient-specific 4DCT phantoms

    PubMed Central

    Han, Bin; Xu, X. George; Chen, George T. Y.

    2011-01-01

    Purpose: Monte Carlo methods are used to simulate and optimize a time-resolved proton range telescope (TRRT) in localization of intrafractional and interfractional motions of lung tumors and in quantification of proton range variations. Methods: The Monte Carlo N-Particle eXtended (MCNPX) code with a particle tracking feature was employed to evaluate the TRRT performance, especially in visualizing and quantifying proton range variations during respiration. Protons of 230 MeV were tracked one by one as they passed through position detectors, the patient 4DCT phantom, and finally the scintillator detectors that measured residual ranges. The energy response of the scintillator telescope was investigated. Mass density and elemental composition of tissues were defined for the 4DCT data. Results: Proton water equivalent length (WEL) was deduced by a reconstruction algorithm that incorporates linear proton tracks and lateral spatial discrimination to improve the image quality. 4DCT data for three patients were used to visualize and measure tumor motion and WEL variations. The tumor trajectories extracted from the WEL map were found to be within ∼1 mm agreement with direct 4DCT measurement. Quantitative WEL variation studies showed that the proton radiograph is a good representation of WEL changes from the entrance to the distal edge of the target. Conclusions: MCNPX simulation results showed that TRRT can accurately track the motion of the tumor and detect the WEL variations. Image quality was optimized by choosing the proton energy, testing parameters of the image reconstruction algorithm, and comparing to the ground-truth 4DCT. Future studies will demonstrate the feasibility of using time-resolved proton radiography as an imaging tool for proton treatments of lung tumors. PMID:21626923

  2. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    PubMed

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
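
    The dependence on the number of gray levels can be reproduced with a minimal example: quantize the same synthetic ADC-like image to different numbers of levels and compute one Haralick feature (contrast) from a gray-level co-occurrence matrix. The equal-width quantizer, the single pixel offset, and the synthetic data below are simplifications of the study's full workflow.

```python
import numpy as np

def quantize_image(img, n_levels):
    """Uniform (equal-width) quantization of an image to n_levels gray levels;
    the study also compares other quantization methods."""
    lo, hi = img.min(), img.max()
    q = np.floor((img - lo) / (hi - lo + 1e-12) * n_levels).astype(int)
    return np.clip(q, 0, n_levels - 1)

def glcm_contrast(q, n_levels, dx=1, dy=0):
    """Haralick 'contrast' from a gray-level co-occurrence matrix at offset (dy, dx)."""
    glcm = np.zeros((n_levels, n_levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return np.sum(glcm * (i - j) ** 2)

# toy "ADC map": the same texture analysed with different numbers of gray levels
rng = np.random.default_rng(1)
adc = rng.normal(1.0e-3, 1.0e-4, size=(64, 64))   # arbitrary ADC-like values
for levels in (8, 16, 32, 64):
    print(levels, "levels -> contrast",
          round(glcm_contrast(quantize_image(adc, levels), levels), 3))
```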

  3. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.

  4. A two layer chaotic encryption scheme of secure image transmission for DCT precoded OFDM-VLC transmission

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao

    2018-03-01

    In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. In the proposed scheme, the transmitted image is first encrypted by a chaos scrambling sequence, generated from a hybrid 4-D hyper-chaotic and Arnold map, in the upper layer. After that, the encrypted image is converted into a digital QAM modulation signal, which is re-encrypted by a chaos scrambling sequence based on the Arnold map in the physical layer to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and to reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The experimental results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC. Furthermore, DCT precoding can reduce the PAPR and improve the BER performance of OFDM-based VLC.
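
    The Arnold-map scrambling step can be sketched as a position permutation of an N x N image, as below; the 4-D hyper-chaotic upper layer, QAM mapping, DCT precoding, and OFDM stages of the actual system are omitted, and the iteration count and image size are arbitrary.

```python
import numpy as np

def arnold_scramble(img, iterations=5):
    """Pixel-position scrambling with the Arnold cat map on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied repeatedly."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=5):
    """Inverse map: (x, y) -> ((2x - y) mod N, (-x + y) mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        ox, oy = (2 * x - y) % n, (-x + y) % n
        restored = np.empty_like(out)
        restored[ox, oy] = out[x, y]
        out = restored
    return out

img = np.arange(64 * 64).reshape(64, 64)
assert np.array_equal(arnold_unscramble(arnold_scramble(img)), img)
```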

  5. Thermal field theory and generalized light front quantization

    NASA Astrophysics Data System (ADS)

    Weldon, H. Arthur

    2003-04-01

    The dependence of thermal field theory on the surface of quantization and on the velocity of the heat bath is investigated by working in general coordinates that are arbitrary linear combinations of the Minkowski coordinates. In the general coordinates the metric tensor ḡ_{μν} is nondiagonal. The Kubo-Martin-Schwinger condition requires periodicity in thermal correlation functions when the temporal variable changes by an amount −i/(T ḡ⁰⁰). Light-front quantization fails since ḡ⁰⁰ = 0; however, various related quantizations are possible.

  6. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing, on average, the logarithm of the quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified version of the quantized posterior distribution, and we argue that the algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. Finally, we demonstrate through extensive experiments a clear advantage in estimation performance as compared with typical designs and with novel design techniques previously published.

  7. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used for 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values and decoded AC-coefficients are combined in one matrix, followed by an inverse two-level DCT with a two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
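
    A toy Python sketch of the flavor of steps (1)-(2) is shown below: a two-level Haar DWT, a DCT applied to the resulting low-frequency (DC) matrix, and a crude split of its coefficients into nonzero and zero groups. The Minimize-Matrix-Size, arithmetic-coding, and FMS decompression stages are not reproduced, and the threshold used for the split is arbitrary.

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(x):
    """One level of an orthonormal 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row-wise low-pass
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row-wise high-pass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.random((128, 128))
ll1, high1 = haar_dwt2(img)          # first DWT level
ll2, high2 = haar_dwt2(ll1)          # second DWT level
dc_matrix = dctn(ll2, norm='ortho')  # DCT of the low-frequency (DC) matrix
mask = np.abs(dc_matrix) > 1.0       # crude nonzero-array / zero-array split
print("DC-matrix shape:", dc_matrix.shape,
      "| nonzero coefficients kept:", int(mask.sum()))
```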

  8. Relational symplectic groupoid quantization for constant poisson structures

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alberto S.; Moshayedi, Nima; Wernli, Konstantin

    2017-09-01

    As a detailed application of the BV-BFV formalism for the quantization of field theories on manifolds with boundary, this note describes a quantization of the relational symplectic groupoid for a constant Poisson structure. The presence of mixed boundary conditions and the globalization of results are also addressed. In particular, the paper includes an extension to space-times with boundary of some formal geometry considerations in the BV-BFV formalism, and specifically introduces into the BV-BFV framework a "differential" version of the classical and quantum master equations. The quantization constructed in this paper induces Kontsevich's deformation quantization on the underlying Poisson manifold, i.e., the Moyal product, which is known in full detail. This allows focussing on the BV-BFV technology and testing it. For the inexperienced reader, this is also a practical and reasonably simple way to learn it.

  9. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
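
    A generic block-transform model of this kind can be written in a few lines: divide the image into 8 x 8 blocks, apply a DCT, quantize with a uniform step, reconstruct, and measure the distortion. The uniform quantizer below is illustrative rather than the JPEG quantization-matrix scheme, and the test image is synthetic.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_quantize_distortion(img, q_step=8.0, block=8):
    """Quantize 8x8 DCT blocks with a uniform step and return the mean squared
    reconstruction error (a generic block-transform model, not JPEG itself)."""
    h, w = (s - s % block for s in img.shape)
    img = img[:h, :w].astype(float)
    rec = np.empty_like(img)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block], norm='ortho')
            cq = np.round(c / q_step) * q_step            # uniform quantization
            rec[i:i+block, j:j+block] = idctn(cq, norm='ortho')
    return np.mean((img - rec) ** 2)

img = np.random.default_rng(0).random((64, 64)) * 255
for q in (2, 8, 32):
    print("step", q, "-> MSE", round(block_quantize_distortion(img, q), 2))
```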

  10. Instant-Form and Light-Front Quantization of Field Theories

    NASA Astrophysics Data System (ADS)

    Kulshreshtha, Usha; Kulshreshtha, Daya Shankar; Vary, James

    2018-05-01

    In this work we consider the instant-form and light-front quantization of some field theories. As an example, we consider a class of gauged non-linear sigma models with different regularizations. In particular, we present the path integral quantization of the gauged non-linear sigma model in the Faddeevian regularization. We also make a comparison of the possible differences between the instant-form and light-front quantization at appropriate places.

  11. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  12. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  13. Quantization of collagen organization in the stroma with a new order coefficient

    PubMed Central

    Germann, James A.; Martinez-Enriquez, Eduardo; Marcos, Susana

    2017-01-01

    Many optical and biomechanical properties of the cornea, specifically the transparency of the stroma and its stiffness, can be traced to the degree of order and direction of the constituent collagen fibers. To measure the degree of order inside the cornea, a new metric, the order coefficient, was introduced to quantify the organization of the collagen fibers from images of the stroma produced with a custom-developed second harmonic generation microscope. The order coefficient method gave a quantitative assessment of the differences in stromal collagen arrangement across the cornea depths and between untreated stroma and cross-linked stroma. PMID:29359095

  14. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which adapts the filter to the imaging object through an adaptive, iterative process rather than by manual tuning. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iterations but also in the mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the properties of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and its results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and produces results comparable to those of the manually optimized DCT2 algorithm without requiring perfect or full information about the imaging object.
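
    A much-simplified sketch of DCT-domain gap filling is shown below: the sinogram is alternately low-pass filtered in the DCT domain and constrained to agree with the measured bins. The paper's key contribution, adapting the preserved low-frequency mask to the object at every iteration, is replaced here by a fixed rectangular mask, and the toy sinogram is synthetic.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fill_gaps(sino, gap_mask, keep_frac=0.15, n_iter=50):
    """Alternate between low-passing the sinogram in the DCT domain and
    re-imposing the measured samples (a fixed mask stands in for the paper's
    adaptively redefined low-frequency region)."""
    lp = np.zeros_like(sino)
    r0, r1 = int(sino.shape[0] * keep_frac), int(sino.shape[1] * keep_frac)
    lp[:r0, :r1] = 1.0                          # preserved low-frequency region
    est = sino.copy()
    est[gap_mask] = sino[~gap_mask].mean()      # initial guess inside the gaps
    for _ in range(n_iter):
        smooth = idctn(dctn(est, norm='ortho') * lp, norm='ortho')
        est = np.where(gap_mask, smooth, sino)  # keep measured bins unchanged
    return est

# toy sinogram with a few dead detector columns
angles = np.linspace(0, np.pi, 180)
sino = np.sin(angles)[None, :] * np.ones((64, 1))
gaps = np.zeros_like(sino, dtype=bool)
gaps[:, 60:66] = True
filled = fill_gaps(np.where(gaps, 0.0, sino), gaps)
print("max error inside the gaps:", np.abs(filled - sino)[gaps].max())
```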

  15. Modeling and analysis of energy quantization effects on single electron inverter performance

    NASA Astrophysics Data System (ADS)

    Dan, Surya Shankar; Mahapatra, Santanu

    2009-08-01

    In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT:CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

  16. Direct comparison of fractional and integer quantized Hall resistance

    NASA Astrophysics Data System (ADS)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

    We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10⁻⁸ (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e²) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.
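
    For reference, with ideal quantization of both plateaus in units of h/e², the ratio tested in the experiment is exactly unity:

```latex
R\!\left[\tfrac{1}{3}\right] = \frac{3h}{e^{2}}, \qquad
R[2] = \frac{h}{2e^{2}}, \qquad
\frac{R\!\left[\tfrac{1}{3}\right]}{6\,R[2]}
  = \frac{3h/e^{2}}{6 \cdot h/(2e^{2})} = 1 .
```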

  17. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for searching locally within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted-file structure for box indexing. The inverted file, which stores each bit-vector together with the ID of the box in which the SIFT feature is located, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
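
    The indexing idea can be sketched as follows: binarize each descriptor (here by thresholding at the per-dimension median, a stand-in for the paper's bit mapping), bucket the codes by a short prefix in an inverted file, and answer queries with a Hamming-distance test inside the matching bucket. All sizes and thresholds below are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def binarize(descriptors):
    """Binary quantization sketch: threshold each SIFT dimension at its median
    so a 128-D descriptor becomes a 128-bit vector."""
    thresh = np.median(descriptors, axis=0)
    return (descriptors > thresh).astype(np.uint8)

def build_inverted_file(bit_vectors, box_ids, n_prefix_bits=16):
    """Index each bit-vector by its leading bits; store (full code, box id)."""
    index = defaultdict(list)
    for bits, box in zip(bit_vectors, box_ids):
        index[tuple(bits[:n_prefix_bits])].append((bits, box))
    return index

def query(index, q_bits, n_prefix_bits=16, max_hamming=8):
    """Return candidate box IDs whose codes lie within a Hamming radius."""
    hits = []
    for bits, box in index.get(tuple(q_bits[:n_prefix_bits]), []):
        if np.count_nonzero(bits != q_bits) <= max_hamming:
            hits.append(box)
    return hits

rng = np.random.default_rng(0)
desc = rng.random((1000, 128))                 # stand-in for SIFT descriptors
codes = binarize(desc)
index = build_inverted_file(codes, box_ids=rng.integers(0, 50, 1000))
print("candidate boxes:", query(index, codes[0])[:5])
```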

  18. Prediction-guided quantization for video tone mapping

    NASA Astrophysics Data System (ADS)

    Le Dauphin, Agnès.; Boitard, Ronan; Thoreau, Dominique; Olivier, Yannick; Francois, Edouard; LeLéannec, Fabrice

    2014-09-01

    Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end user, this tone-mapped content is usually compressed for broadcasting or storage purposes. Any TMO includes a quantization step to convert floating-point values to integer ones. In this work, we propose to adapt this quantization, in the loop of an encoder, to reduce the entropy of the tone-mapped video content. Our technique provides an appropriate quantization for each intra- and inter-prediction mode evaluated in the loop of a block-based encoder. The mode that minimizes a rate-distortion criterion uses its associated quantization to provide integer values for the rest of the encoding process. The method has been implemented in HEVC and was tested in two different scenarios: the compression of tone-mapped LDR video content (using the HM10.0) and the compression of perceptually encoded HDR content (HM14.0). Results show an average bit-rate reduction at the same PSNR, over all the sequences and TMOs considered, of 20.3% and 27.3% for tone-mapped content and 2.4% and 2.7% for HDR content.

  19. Quantization of Simple Parametrized Systems

    NASA Astrophysics Data System (ADS)

    Ruffini, Giulio

    1995-01-01

    I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism I first analyze for each case the construction of the classical reduced phase space. There are two separate features of these systems that may make this construction difficult: (a) Because of the boundary conditions used, the actions are not gauge invariant at the boundaries. (b) The constraints may have a disconnected solution space. The relativistic particle and minisuperspace have such complicated constraints, while the non-relativistic particle displays only the first feature. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with the first feature above. Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. In the cases of the relativistic particle and minisuperspace I consider first the quantization of one branch of the constraint at a time and then discuss the backgrounds in which it is possible to quantize both branches simultaneously. I motivate and define the inner product, and obtain, for example, the Klein-Gordon inner product for the relativistic case. Then I show how to construct phase space path integral representations for amplitudes in these approaches--the Batalin-Fradkin-Vilkovisky (BFV) and the Faddeev path integrals--from which one can then derive the path integrals in coordinate space--the Faddeev-Popov path integral and the geometric path integral. In particular I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals. I also examine the class of paths that contribute in the path integrals and how they affect space

  20. Berezin-Toeplitz quantization and naturally defined star products for Kähler manifolds

    NASA Astrophysics Data System (ADS)

    Schlichenmaier, Martin

    2018-04-01

    For compact quantizable Kähler manifolds the Berezin-Toeplitz quantization schemes, both operator and deformation quantization (star product) are reviewed. The treatment includes Berezin's covariant symbols and the Berezin transform. The general compact quantizable case was done by Bordemann-Meinrenken-Schlichenmaier, Schlichenmaier, and Karabegov-Schlichenmaier. For star products on Kähler manifolds, separation of variables, or equivalently star product of (anti-) Wick type, is a crucial property. As canonically defined star products the Berezin-Toeplitz, Berezin, and the geometric quantization are treated. It turns out that all three are equivalent, but different.

  1. IRON UPTAKE AND NRAMP-2/DMTI/DCT IN HUMAN BRONCHIAL EPITHELIAL CELLS

    EPA Science Inventory

    The capacity of natural resistance-associated macrophage protein-2 [Nramp2; also called divalent metal transporter-1 (DMT1) and divalent cation transporter-1 (DCT1)] to transport iron and its ubiquitous expression make it a likely candidate for transferrin-independent uptake of i...

  2. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. X.; Van Reeth, E.; Poh, C. L., E-mail: clpoh@ntu.edu.sg

    2015-08-15

    Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors’ proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors’ proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  3. A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion.

    PubMed

    Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L

    2015-08-01

    Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.

  4. Geometric validation of self-gating k-space-sorted 4D-MRI vs 4D-CT using a respiratory motion phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Yong, E-mail: yong.yue@cshs.org; Yang, Wensha; McKenzie, Elizabeth

    Purpose: MRI is increasingly being used for radiotherapy planning, simulation, and in-treatment-room motion monitoring. To provide more detailed temporal and spatial MR data for these tasks, we have recently developed a novel self-gated (SG) MRI technique with the advantages of k-space phase sorting, high isotropic spatial resolution, and high temporal resolution. The current work describes the validation of this 4D-MRI technique using a MRI- and CT-compatible respiratory motion phantom and comparison to 4D-CT. Methods: The 4D-MRI sequence is based on a spoiled gradient echo-based 3D projection reconstruction sequence with self-gating for 4D-MRI at 3 T. Respiratory phase is resolved by using SG k-space lines as the motion surrogate. 4D-MRI images are reconstructed into ten temporal bins with spatial resolution 1.56 × 1.56 × 1.56 mm³. A MRI-CT compatible phantom was designed to validate the performance of the 4D-MRI sequence and 4D-CT imaging. A spherical target (diameter 23 mm, volume 6.37 ml) filled with high-concentration gadolinium (Gd) gel is embedded into a plastic box (35 × 40 × 63 mm³) and stabilized with low-concentration Gd gel. The phantom, driven by an air pump, is able to produce human-type breathing patterns between 4 and 30 respiratory cycles/min. 4D-CT of the phantom has been acquired in cine mode, and reconstructed into ten phases with slice thickness 1.25 mm. The 4D image sets were imported into treatment planning software for target contouring. The geometrical accuracy of the 4D MRI and CT images has been quantified using target volume, flattening, and eccentricity. The target motion was measured by tracking the centroids of the spheres in each individual phase. Motion ground-truth was obtained from input signals and real-time video recordings. Results: The dynamic phantom has been operated in four respiratory rate (RR) settings, 6, 10, 15, and 20/min, and was scanned with 4D-MRI and 4D-CT. 4D-CT images have target

  5. Hierarchically clustered adaptive quantization CMAC and its learning convergence.

    PubMed

    Teddy, S D; Lai, E M K; Quek, C

    2007-11-01

    The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized. Therefore, the efficient utilization of the CMAC memory is a crucial issue. One approach is to quantize the input space nonuniformly. For existing nonuniformly quantized CMAC systems, there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space to identify significant input segments and subsequently allocate more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is subsequently benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications, namely, automated control of car maneuver and modeling of the human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output to achieve better or comparable performance with smaller memory usage.

  6. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 db cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 db cutoff of 2000 Hz.
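
    The effect of the compression/expansion amplifiers can be illustrated by companding around a uniform quantizer. The sketch below uses mu-law companding as a stand-in for the analog compressor and expander described in the abstract and compares the resulting SNR with plain uniform quantization for a quiet, consonant-like signal; the signal itself and the mu value are assumptions, not data from the thesis.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize_uniform(x, n_levels):
    """Uniform quantizer on [-1, 1] with n_levels output levels."""
    step = 2.0 / n_levels
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def snr_db(clean, noisy):
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - noisy) ** 2))

rng = np.random.default_rng(0)
speech = 0.05 * rng.standard_normal(5000)   # 1 s at 5000 samples/s, a quiet wideband signal
for levels in (8, 64):
    plain = quantize_uniform(speech, levels)
    comp = mu_law_expand(quantize_uniform(mu_law_compress(speech), levels))
    print(f"{levels} levels: uniform {snr_db(speech, plain):.1f} dB, "
          f"companded {snr_db(speech, comp):.1f} dB")
```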

  7. Quantization and Superselection Sectors I:. Transformation Group C*-ALGEBRAS

    NASA Astrophysics Data System (ADS)

    Landsman, N. P.

    Quantization is defined as the act of assigning an appropriate C*-algebra { A} to a given configuration space Q, along with a prescription mapping self-adjoint elements of { A} into physically interpretable observables. This procedure is adopted to solve the problem of quantizing a particle moving on a homogeneous locally compact configuration space Q=G/H. Here { A} is chosen to be the transformation group C*-algebra corresponding to the canonical action of G on Q. The structure of these algebras and their representations are examined in some detail. Inequivalent quantizations are identified with inequivalent irreducible representations of the C*-algebra corresponding to the system, hence with its superselection sectors. Introducing the concept of a pre-Hamiltonian, we construct a large class of G-invariant time-evolutions on these algebras, and find the Hamiltonians implementing these time-evolutions in each irreducible representation of { A}. “Topological” terms in the Hamiltonian (or the corresponding action) turn out to be representation-dependent, and are automatically induced by the quantization procedure. Known “topological” charge quantization or periodicity conditions are then identically satisfied as a consequence of the representation theory of { A}.

  8. Quantized Rabi oscillations and circular dichroism in quantum Hall systems

    NASA Astrophysics Data System (ADS)

    Tran, D. T.; Cooper, N. R.; Goldman, N.

    2018-06-01

    The dissipative response of a quantum system upon periodic driving can be exploited as a probe of its topological properties. Here we explore the implications of such phenomena in two-dimensional gases subjected to a uniform magnetic field. It is shown that a filled Landau level exhibits a quantized circular dichroism, which can be traced back to its underlying nontrivial topology. Based on selection rules, we find that this quantized effect can be suitably described in terms of Rabi oscillations, whose frequencies satisfy simple quantization laws. We discuss how quantized dissipative responses can be probed locally, both in the bulk and at the boundaries of the system. This work suggests alternative forms of topological probes based on circular dichroism.

  9. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field-programmable gate array (FPGA) device and synthesized with the Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.

  10. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation, which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.

  11. Can one ADM quantize relativistic bosonic strings and membranes?

    NASA Astrophysics Data System (ADS)

    Moncrief, Vincent

    2006-04-01

    The standard methods for quantizing relativistic strings diverge significantly from the Dirac-Wheeler-DeWitt program for quantization of generally covariant systems, and one wonders whether the latter could be successfully implemented as an alternative to the former. As a first step in this direction, we consider the possibility of quantizing strings (and also relativistic membranes) via a partially gauge-fixed ADM (Arnowitt, Deser and Misner) formulation of the reduced field equations for these systems. By exploiting some (Euclidean signature) Hamilton-Jacobi techniques that Mike Ryan and I had developed previously for the quantization of Bianchi IX cosmological models, I show how to construct Diff(S¹)-invariant (or Diff(Σ)-invariant in the case of membranes) ground state wave functionals for the cases of co-dimension one strings and membranes embedded in Minkowski spacetime. I also show that the reduced Hamiltonian density operators for these systems weakly commute when applied to physical (i.e., Diff(S¹)- or Diff(Σ)-invariant) states. While many open questions remain, these preliminary results seem to encourage further research along the same lines.

  12. Vacuum polarization of the quantized massive fields in Friedman-Robertson-Walker spacetime

    NASA Astrophysics Data System (ADS)

    Matyjasek, Jerzy; Sadurski, Paweł; Telecka, Małgorzata

    2014-04-01

    The stress-energy tensor of the quantized massive fields in a spatially open, flat, and closed Friedman-Robertson-Walker universe is constructed using the adiabatic regularization (for the scalar field) and the Schwinger-DeWitt approach (for the scalar, spinor, and vector fields). It is shown that the stress-energy tensor calculated in the sixth adiabatic order coincides with the result obtained from the regularized effective action, constructed from the heat kernel coefficient a3. The behavior of the tensor is examined in the power-law cosmological models, and the semiclassical Einstein field equations are solved exactly in a few physically interesting cases, such as the generalized Starobinsky models.

  13. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results of the DCT-domain SR synthesis approach.

  14. DCT-based cyber defense techniques

    NASA Astrophysics Data System (ADS)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.

  15. Differential calculus on quantized simple Lie groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1991-07-01

    Differential calculi, generalizations of Woronowicz's four-dimensional calculus on SU_q(2), are introduced for quantized classical simple Lie groups in a constructive way. For this purpose, the approach of Faddeev and his collaborators to quantum groups was used. An equivalence of Woronowicz's enveloping algebra, generated by the dual space to the left-invariant differential forms, and the corresponding quantized universal enveloping algebra is obtained for our differential calculi. Real forms for q ∈ ℝ are also discussed.

  16. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by a built-in camera on the Android device, and then face detection is implemented using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low-frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
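
    The DCT-based half of the feature extraction can be sketched as follows: take the low-frequency DCT coefficients of each 8 x 8 block of the face crop, vector-quantize each block against a codebook, and histogram the codeword labels. The codebook here is random and hypothetical (a real system would train it), and the binary VQ variant and the Improved LBP fusion step are omitted.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_lowfreq(img, block=8, n_coeffs=9):
    """Collect a few low-frequency DCT coefficients per 8x8 block (a fixed 3x3
    upper-left patch stands in for a true zig-zag ordering)."""
    h, w = (s - s % block for s in img.shape)
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block].astype(float), norm='ortho')
            feats.append(c[:3, :3].ravel()[:n_coeffs])
    return np.array(feats)

def vq_histogram(features, codebook):
    """Quantize each block feature to its nearest codeword and histogram the labels."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
face = rng.random((64, 64)) * 255                    # stand-in for a detected face crop
codebook = rng.random((32, 9)) * 50                  # hypothetical trained VQ codebook
feature = vq_histogram(block_dct_lowfreq(face), codebook)
print("histogram feature length:", feature.size)
```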

  17. Numerical simulation of transmission coefficient using c-number Langevin equation

    NASA Astrophysics Data System (ADS)

    Barik, Debashis; Bag, Bidhan Chandra; Ray, Deb Shankar

    2003-12-01

    We numerically implement the reactive flux formalism on the basis of a recently proposed c-number Langevin equation [Barik et al., J. Chem. Phys. 119, 680 (2003); Banerjee et al., Phys. Rev. E 65, 021109 (2002)] to calculate the transmission coefficient. The Kramers turnover, the T² enhancement of the rate at low temperatures, and other related features of the temporal behavior of the transmission coefficient over a range of temperature down to absolute zero, noise correlation, and friction are examined for a double-well potential and compared with other known results. This simple method is based on canonical quantization and the Wigner quasiclassical phase space function and takes care of quantum effects due to the system order by order.

  18. Third Quantization and Quantum Universes

    NASA Astrophysics Data System (ADS)

    Kim, Sang Pyo

    2014-01-01

    We study the third quantization of the Friedmann-Robertson-Walker cosmology with N-minimal massless fields. The third-quantized Hamiltonian for the Wheeler-DeWitt equation in the minisuperspace consists of an infinite number of intrinsic time-dependent, decoupled oscillators. The Hamiltonian has a pair of invariant operators for each universe with conserved momenta of the fields that play the role of the annihilation and creation operators and that construct various quantum states for the universe. The closed universe exhibits an interesting feature of transitions from stable states to tachyonic states depending on the conserved momenta of the fields. In the classically forbidden unstable regime, the quantum states have googolplex growing position and conjugate momentum dispersions, which defy any measurements of the position of the universe.

  19. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, the quantization is responsible for quality losses. On the other hand, it allows the reduction of bit rates to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for intra coding.
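
    The rate-distortion selection described above can be emulated in a few lines: for each candidate QP, quantize the block's DCT coefficients with a rounding offset, and pick the QP that minimizes J = D + lambda*R. The QP-to-step mapping and the nonzero-count rate proxy below are rough approximations of H.264 behavior, not the encoder's actual tables or entropy coder, and the Lagrange multiplier is an arbitrary illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def qstep(qp):
    """Approximate H.264-style relation: the step size doubles every 6 QP."""
    return 0.625 * 2 ** (qp / 6.0)

def rd_cost(residual, qp, lam, offset=1.0 / 3):
    """Distortion + lambda * (crude rate proxy) for one 8x8 residual block.
    offset < 0.5 gives a dead-zone quantizer; the rate term just counts
    nonzero levels instead of real entropy-coded bits."""
    step = qstep(qp)
    c = dctn(residual, norm='ortho')
    levels = np.sign(c) * np.floor(np.abs(c) / step + offset)
    rec = idctn(levels * step, norm='ortho')
    dist = np.sum((residual - rec) ** 2)
    rate = np.count_nonzero(levels)
    return dist + lam * rate

rng = np.random.default_rng(0)
block = rng.standard_normal((8, 8)) * 12
lam = 40.0                                            # illustrative Lagrange multiplier
best_qp = min(range(20, 41), key=lambda qp: rd_cost(block, qp, lam))
print("QP chosen by the RD criterion:", best_qp)
```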

  20. Group theoretical quantization of isotropic loop cosmology

    NASA Astrophysics Data System (ADS)

    Livine, Etera R.; Martín-Benito, Mercedes

    2012-06-01

    We achieve a group-theoretical quantization of the flat Friedmann-Robertson-Walker model coupled to a massless scalar field adopting the improved dynamics of loop quantum cosmology. Deparametrizing the system using the scalar field as internal time, we first identify a complete set of phase space observables whose Poisson algebra is isomorphic to the su(1,1) Lie algebra. It is generated by the volume observable and the Hamiltonian. These observables describe faithfully the regularized phase space underlying the loop quantization: they account for the polymerization of the variable conjugate to the volume and for the existence of a kinematical nonvanishing minimum volume. Since the Hamiltonian is an element of the su(1,1) Lie algebra, the dynamics is now implemented as SU(1, 1) transformations. At the quantum level, the system is quantized as a timelike irreducible representation of the group SU(1, 1). These representations are labeled by a half-integer spin, which gives the minimal volume. They provide superselection sectors without quantization anomalies, and no factor ordering ambiguity arises when representing the Hamiltonian. We then explicitly construct SU(1, 1) coherent states to study the quantum evolution. They not only provide semiclassical states but also truly dynamical coherent states. Their use further clarifies the nature of the bounce that resolves the big bang singularity.

  1. Generalized noise terms for the quantized fluctuational electrodynamics

    NASA Astrophysics Data System (ADS)

    Partanen, Mikko; Häyrynen, Teppo; Tulkki, Jukka; Oksanen, Jani

    2017-03-01

    The quantization of optical fields in vacuum has been known for decades, but extending the field quantization to lossy and dispersive media in nonequilibrium conditions has proven to be complicated due to the position-dependent electric and magnetic responses of the media. In fact, consistent position-dependent quantum models for the photon number in resonant structures have only been formulated very recently and only for dielectric media. Here we present a general position-dependent quantized fluctuational electrodynamics (QFED) formalism that extends the consistent field quantization to describe the photon number also in the presence of magnetic field-matter interactions. It is shown that the magnetic fluctuations provide an additional degree of freedom in media where the magnetic coupling to the field is prominent. Therefore, the field quantization requires an additional independent noise operator that commutes with the conventional bosonic noise operator describing the polarization current fluctuations in dielectric media. In addition to allowing the detailed description of field fluctuations, our methods provide practical tools for modeling optical energy transfer and the formation of thermal balance in general dielectric and magnetic nanodevices. We use QFED to investigate the magnetic properties of microcavity systems to demonstrate an example geometry in which it is possible to probe fields arising from the electric and magnetic source terms. We show that, as a consequence of the magnetic Purcell effect, tuning the position of an emitter layer placed inside a vacuum cavity can make the emissivity of a magnetic emitter exceed that of a corresponding electric emitter.

  2. Detection of small surface defects using DCT based enhancement approach in machine vision systems

    NASA Astrophysics Data System (ADS)

    He, Fuqiang; Wang, Wen; Chen, Zichen

    2005-12-01

    Utilizing a DCT-based enhancement approach, an improved small-defect detection algorithm for real-time leather surface inspection was developed. A two-stage decomposition procedure was proposed to extract an odd-odd frequency matrix after the digital image has been transformed to the DCT domain. Then, a reverse cumulative sum algorithm was proposed to detect the transition points of the gentle curves plotted from the odd-odd frequency matrix. The best radius of the cutting sector was computed from the transition points, and the high-pass filtering operation was applied. The filtered image was then inverse-transformed back to the spatial domain. Finally, the restored image was segmented by an entropy method and defect features were calculated. Experimental results show that the proposed method achieves a small-defect detection rate of 94%.
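    A minimal sketch of DCT-domain high-pass enhancement in this spirit is given below (Python with SciPy). It zeroes a low-frequency sector of the 2-D DCT and inverse-transforms; the cutting-sector radius is passed in as a plain parameter rather than being estimated from transition points as in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_highpass_enhance(image, radius):
    """Suppress low-frequency background/texture content by zeroing a
    sector of DCT coefficients near the origin, then inverse-transforming.
    'radius' plays the role of the cutting-sector radius; here it is a
    user-supplied value rather than one derived from transition points."""
    coeffs = dctn(image, norm='ortho')
    u, v = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing='ij')
    coeffs[np.hypot(u, v) < radius] = 0.0   # cut the low-frequency sector
    return idctn(coeffs, norm='ortho')

# Example: restored = dct_highpass_enhance(gray_image, radius=30), followed
# by a threshold (e.g. an entropy-based one) to segment candidate defects.
```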

  3. TH-E-17A-04: Geometric Validation of K-Space Self-Gated 4D-MRI Vs. 4D-CT Using A Respiratory Motion Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Y; Fan, Z; Yang, W

    Purpose: 4D-CT is often limited by motion artifacts, low temporal resolution, and poor phase-based target definition. We recently developed a novel k-space self-gated 4D-MRI technique with high spatial and temporal resolution. The goal here is to geometrically validate 4D-MRI using an MRI-CT compatible respiratory motion phantom and compare it to 4D-CT. Methods: 4D-MRI was acquired using 3T spoiled gradient echo-based 3D projection sequences. Respiratory phases were resolved using self-gated k-space lines as the motion surrogate. Images were reconstructed into 10 temporal bins with 1.56×1.56×1.56mm3. An MRI-CT compatible phantom was designed with a 23mm diameter ball target filled with high-concentration gadolinium (Gd) gel embedded in a 35×40×63mm3 plastic box stabilized with low-concentration Gd gel. The whole phantom was driven by an air pump. Human respiratory motion was mimicked using the controller from a commercial dynamic phantom (RSD). Four breathing settings (rates/depths: 10s/20mm, 6s/15mm, 4s/10mm, 3s/7mm) were scanned with 4D-MRI and 4D-CT (slice thickness 1.25mm). Motion ground-truth was obtained from input signals and real-time video recordings. Reconstructed images were imported into Eclipse (Varian) for target contouring. Volumes and target positions were compared with ground-truth. An initial human study was performed on a liver patient. Results: 4D-MRI and 4D-CT scans for the different breathing cycles were reconstructed with 10 phases. Target volume in each phase was measured for both 4D-CT and 4D-MRI. Volume percentage difference for the 6.37ml target ranged from 6.67±5.33 to 11.63±5.57 for 4D-CT and from 1.47±0.52 to 2.12±1.60 for 4D-MRI. The Mann-Whitney U-test shows the 4D-MRI is significantly superior to 4D-CT (p=0.021) for phase-based target definition. Centroid motion error ranges were 1.35–1.25mm (4D-CT), and 0.31–0.12mm (4D-MRI). Conclusion: The k-space self-gated 4D-MRI we recently developed can accurately determine

  4. Uniform quantized electron gas

    NASA Astrophysics Data System (ADS)

    Høye, Johan S.; Lomba, Enrique

    2016-10-01

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T = 0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). Then, to improve it, we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins cannot be at the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies.

  5. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
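    The sketch below is a generic distance-threshold data-reduction scheme standing in for the paper's DQS (whose exact shrinkage-threshold rule is not reproduced here); it only illustrates how a large training set can be shrunk to a representative subset before a Nyström feature approximation.

```python
import numpy as np

def threshold_quantize(X, threshold):
    """Reduce a large data set to a small representative subset: a sample is
    kept only if it lies farther than 'threshold' from every sample kept so
    far. This is a generic distance-threshold scheme used for illustration;
    the paper's DQS adapts its outputs to the local input data density."""
    X = np.asarray(X, dtype=float)
    reps = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - r) for r in reps) > threshold:
            reps.append(x)
    return np.asarray(reps)

# The reduced subset can then serve as landmark points for a Nystroem kernel
# feature approximation (e.g. sklearn.kernel_approximation.Nystroem) feeding
# a least squares SVM solved in the primal.
```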

  6. Superfield quantization

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Bering, K.; Damgaard, P. H.

    1998-03-01

    We present a superfield formulation of the quantization program for theories with first-class constraints. An exact operator formulation is given, and we show how to set up a phase-space path integral entirely in terms of superfields. BRST transformations and canonical transformations enter on equal footing, and they allow us to establish a superspace analog of the BFV theorem. We also present a formal derivation of the Lagrangian superfield analogue of the field-antifield formalism by an integration over half of the phase-space variables.

  7. 4D Sommerfeld quantization of the complex extended charge

    NASA Astrophysics Data System (ADS)

    Bulyzhenkov, Igor E.

    2017-12-01

    Gravitational fields and accelerations cannot change quantized magnetic flux in closed line contours due to the flat 3D section of curved 4D space-time-matter. The relativistic Bohr-Sommerfeld quantization of the imaginary charge reveals an electric analog of the Compton length, which quantitatively introduces the fine structure constant and the Planck length.

  8. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    NASA Astrophysics Data System (ADS)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very
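    The enhancement-layer construction described above can be sketched as follows: form the residual between the original frame and the decoded base layer, transform it, and split the coefficient magnitudes into bit-planes that an embedded coder would send most-significant first. This is only a schematic of the FGS idea, not the standardized coder (which works block-wise with entropy coding).

```python
import numpy as np
from scipy.fft import dctn

def fgs_residual_bitplanes(original, decoded_base, num_planes=4):
    """Form an FGS-style enhancement signal as original minus decoded base
    layer, transform it, and split the DCT coefficient magnitudes into
    bit-planes (most significant first). A sketch only."""
    residual = original.astype(np.int32) - decoded_base.astype(np.int32)
    coeffs = np.rint(dctn(residual, norm='ortho')).astype(np.int32)
    magnitude, sign = np.abs(coeffs), np.sign(coeffs)
    top = max(int(magnitude.max()).bit_length(), 1)
    planes = [(magnitude >> (top - 1 - p)) & 1 for p in range(min(num_planes, top))]
    return planes, sign   # transmit planes until the bit budget runs out
```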

  9. Time-Symmetric Quantization in Spacetimes with Event Horizons

    NASA Astrophysics Data System (ADS)

    Kobakhidze, Archil; Rodd, Nicholas

    2013-08-01

    The standard quantization formalism in spacetimes with event horizons implies a non-unitary evolution of quantum states, as initial pure states may evolve into thermal states. This phenomenon is behind the famous black hole information loss paradox, which provoked long-standing debates on the compatibility of quantum mechanics and gravity. In this paper we demonstrate that within an alternative time-symmetric quantization formalism thermal radiation is absent and states evolve unitarily in spacetimes with event horizons. We also discuss the theoretical consistency of the proposed formalism. We explicitly demonstrate that the theory preserves the microcausality condition and suggest a "reinterpretation postulate" to resolve other apparent pathologies associated with negative energy states. Accordingly, since a consistent alternative exists, we argue that choosing to use time-asymmetric quantization is a necessary condition for the black hole information loss paradox.

  10. Generalized radiation-field quantization method and the Petermann excess-noise factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Y.-J.; Siegman, A.E.; E.L. Ginzton Laboratory, Stanford University, Stanford, California 94305

    2003-10-01

    We propose a generalized radiation-field quantization formalism, where quantization does not have to be referenced to a set of power-orthogonal eigenmodes as conventionally required. This formalism can be used to directly quantize the true system eigenmodes, which can be non-power-orthogonal due to the open nature of the system or the gain/loss medium involved in the system. We apply this generalized field quantization to the laser linewidth problem, in particular, lasers with non-power-orthogonal oscillation modes, and derive the excess-noise factor in a fully quantum-mechanical framework. We also show that, despite the excess-noise factor for oscillating modes, the total spatially averaged decay rate for the laser atoms remains unchanged.

  11. Quantized Algebra I Texts

    ERIC Educational Resources Information Center

    DeBuvitz, William

    2014-01-01

    I am a volunteer reader at the Princeton unit of "Learning Ally" (formerly "Recording for the Blind & Dyslexic") and I recently discovered that high school students are introduced to the concept of quantization well before they take chemistry and physics. For the past few months I have been reading onto computer files a…

  12. Vector quantizer based on brightness maps for image compression with the polynomial transform

    NASA Astrophysics Data System (ADS)

    Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.

    2002-11-01

    We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria corresponding to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and on its response to light stimuli. Energy coding in the brightness domain, determination of local structure, code-book training and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into four sections. The first one briefly highlights the importance of having newer and better compression algorithms. This section also explains the most relevant characteristics of the HVS, and the advantages and disadvantages related to the behavior of our vision in response to ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor actually constructed in section 5. The third section concentrates on the most important data gathered on brightness models. The construction of these so-called brightness maps (a quantification of human perception of the reflectance of visible objects), in a bi-dimensional model, is addressed there. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous works [1], the Hermite transform has been shown to be a useful and practical means of efficiently coding the energy within an image block and of deciding which kind of quantization (scalar or vector) is to be used upon it. It will also be

  13. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  14. Design, development, and testing of the DCT Cassegrain instrument support assembly

    NASA Astrophysics Data System (ADS)

    Bida, Thomas A.; Dunham, Edward W.; Nye, Ralph A.; Chylek, Tomas; Oliver, Richard C.

    2012-09-01

    The 4.3m Discovery Channel Telescope delivers an f/6.1 unvignetted 0.5° field to its RC focal plane. In order to support guiding, wavefront sensing, and instrument installations, a Cassegrain instrument support assembly has been developed which includes a facility guider and wavefront sensor package (GWAVES) and multiple interfaces for instrumentation. A 2-element, all-spherical, fused-silica corrector compensates for field curvature and astigmatism over the 0.5° FOV, while reducing ghost pupil reflections to minimal levels. Dual roving GWAVES camera probes pick off stars in the outer annulus of the corrected field, providing simultaneous guiding and wavefront sensing for telescope operations. The instrument cube supports 5 co-mounted instruments with rapid feed selection via deployable fold mirrors. The corrected beam passes through a dual filter wheel before imaging with the 6K x 6K single CCD of the Large Monolithic Imager (LMI). We describe key development strategies for the DCT Cassegrain instrument assembly and GWAVES, including construction of a prime focus test assembly with wavefront sensor utilized in fall 2011 to begin characterization of the DCT primary mirror support. We also report on 2012 on-sky test results of wavefront sensing, guiding, and imaging with the integrated Cassegrain cube.

  15. Information Hiding In Digital Video Using DCT, DWT and CvT

    NASA Astrophysics Data System (ADS)

    Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb

    2018-05-01

    The type of video used in this proposed secret-information hiding technique is .AVI; the proposed data-hiding technique embeds secret information into video frames using the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB); the secret information is embedded in the red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extraction, the robustness of the proposed hiding technique is measured by computing the degradation of the extracted secret information, comparing it with the original secret information via the Normalized cross Correlation (NC). The experiments show that the error ratio of the proposed technique is 8% and the accuracy ratio is 92% when the Curvelet Transform (CvT) is used, whereas with the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) the error rates are 11% and 14% and the accuracy ratios 89% and 86%, respectively. The experiments also show that Poisson noise gives better results than other types of noise, while speckle noise gives the worst results. The proposed technique was implemented in the MATLAB R2016a programming language.
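    Since the abstract does not specify the embedding rule, the sketch below shows one common DCT-domain choice: a parity (quantization-index-modulation) rule applied to a mid-frequency coefficient of each 8x8 block of the red channel. The coefficient position and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bits_red_channel(frame_rgb, bits, step=16.0, pos=(3, 4)):
    """Hide one bit per 8x8 block of the red channel by rounding a
    mid-frequency DCT coefficient to an even or odd multiple of 'step'
    (a simple parity/QIM rule chosen here only for illustration)."""
    red = frame_rgb[..., 0].astype(np.float64)
    out = red.copy()
    h, w = red.shape
    k = 0
    for by in range(0, h - 7, 8):
        for bx in range(0, w - 7, 8):
            if k >= len(bits):
                break
            block = dctn(red[by:by+8, bx:bx+8], norm='ortho')
            q = np.round(block[pos] / step)
            if int(q) % 2 != bits[k]:        # force parity to match the bit
                q += 1
            block[pos] = q * step
            out[by:by+8, bx:bx+8] = idctn(block, norm='ortho')
            k += 1
    stego = frame_rgb.copy()
    stego[..., 0] = np.clip(np.rint(out), 0, 255).astype(frame_rgb.dtype)
    return stego
```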

  16. Introduction to quantized Lie groups and algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjin, T.

    1992-10-10

    In this paper, the authors give a self-contained introduction to the theory of quantum groups according to Drinfeld, highlighting the formal aspects as well as the applications to the Yang-Baxter equation and representation theory. Introductions to Hopf algebras, Poisson structures and deformation quantization are also provided. After defining Poisson Lie groups the authors study their relation to Lie bialgebras and the classical Yang-Baxter equation. Then the authors explain in detail the concept of quantization for them. As an example the quantization of sl(2) is explicitly carried out. Next, the authors show how quantum groups are related to the Yang-Baxter equation and how they can be used to solve it. Using the quantum double construction, the authors explicitly construct the universal R matrix for the quantum sl(2) algebra. In the last section, the authors deduce all finite-dimensional irreducible representations for q a root of unity. The authors also give their tensor product decomposition (fusion rules), which is relevant to conformal field theory.

  17. Conductance Quantization in Resistive Random Access Memory

    NASA Astrophysics Data System (ADS)

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-10-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performances, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interface, and multiple RS mechanisms including the formation/rupture of nanoscale to atomic-sized conductive filament (CF) incorporated in RS layer. Conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism in mesoscopic dimension. In this review paper, the operating principles of RRAM are introduced first, followed by the summarization of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material system. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.
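    The conductance steps discussed here are naturally expressed in units of the conductance quantum G0 = 2e^2/h; the small helper below converts a measured current-voltage point into that unit.

```python
from scipy.constants import e, h

G0 = 2 * e**2 / h   # conductance quantum, ~7.748e-5 S

def conductance_in_quanta(current_A, voltage_V):
    """Express a measured conductance in units of G0 = 2e^2/h, the step size
    expected when an atomic-sized filament supports an integer number of
    fully transmitting, spin-degenerate channels."""
    return (current_A / voltage_V) / G0
```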

  18. Conductance Quantization in Resistive Random Access Memory.

    PubMed

    Li, Yang; Long, Shibing; Liu, Yang; Hu, Chen; Teng, Jiao; Liu, Qi; Lv, Hangbing; Suñé, Jordi; Liu, Ming

    2015-12-01

    The intrinsic scaling-down ability, simple metal-insulator-metal (MIM) sandwich structure, excellent performances, and complementary metal-oxide-semiconductor (CMOS) technology-compatible fabrication processes make resistive random access memory (RRAM) one of the most promising candidates for the next-generation memory. The RRAM device also exhibits rich electrical, thermal, magnetic, and optical effects, in close correlation with the abundant resistive switching (RS) materials, metal-oxide interface, and multiple RS mechanisms including the formation/rupture of nanoscale to atomic-sized conductive filament (CF) incorporated in RS layer. Conductance quantization effect has been observed in the atomic-sized CF in RRAM, which provides a good opportunity to deeply investigate the RS mechanism in mesoscopic dimension. In this review paper, the operating principles of RRAM are introduced first, followed by the summarization of the basic conductance quantization phenomenon in RRAM and the related RS mechanisms, device structures, and material system. Then, we discuss the theory and modeling of quantum transport in RRAM. Finally, we present the opportunities and challenges in quantized RRAM devices and our views on the future prospects.

  19. Quantization Of Temperature

    NASA Astrophysics Data System (ADS)

    O'Brien, Paul

    2017-01-01

    Max Planck did not quantize temperature. I will show that the Planck temperature violates the Planck scale. Planck stated that the Planck scale was Nature's scale and independent of human constructs, also stating that even aliens would derive the same values. He made a huge mistake, because temperature is based on the Kelvin scale, which is man-made just like the meter and kilogram. He did not discover Nature's scale for the quantization of temperature. His formula is flawed, and his value is incorrect. Planck's calculation is T_P = c^2 M_P / k_B. The general form of this equation is T = E / k_B. Why is this wrong? The temperature for a fixed amount of energy depends upon the volume it occupies. Using the correct formula involves specifying the radius of the volume in the form of (RE). This leads to an inequality and a limit that is equivalent to the Bekenstein bound, but using temperature instead of entropy. Rewriting this equation as a limit defines both the maximum temperature and Boltzmann's constant. This saturates any space-time boundary with maximum temperature and information density, and also gives the minimum radius and entropy. The general form of the equation then becomes a limit in BH thermodynamics: T ≤ (RE)/(λ k_B).

  20. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
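    For reference, the plain LBG (generalized Lloyd) codebook training that underlies such a coder can be sketched as below; the paper's hybrid LBG/lattice design and its adaptive handling of edges and contouring are not reproduced.

```python
import numpy as np

def lbg_codebook(training_vectors, codebook_size, iters=20, seed=0):
    """Train a vector-quantizer codebook with the generalized Lloyd (LBG)
    iteration: assign each training vector to its nearest codeword, then
    replace each codeword by the centroid of its cell. A plain sketch of
    the standard algorithm only."""
    vectors = np.asarray(training_vectors, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # distances of every training vector to every codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for j in range(codebook_size):
            members = vectors[nearest == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook
```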

  1. Simultaneous Conduction and Valence Band Quantization in Ultrashallow High-Density Doping Profiles in Semiconductors

    NASA Astrophysics Data System (ADS)

    Mazzola, F.; Wells, J. W.; Pakpour-Tabrizi, A. C.; Jackman, R. B.; Thiagarajan, B.; Hofmann, Ph.; Miwa, J. A.

    2018-01-01

    We demonstrate simultaneous quantization of conduction band (CB) and valence band (VB) states in silicon using ultrashallow, high-density, phosphorus doping profiles (so-called Si:P δ layers). We show that, in addition to the well-known quantization of CB states within the dopant plane, the confinement of VB-derived states between the subsurface P dopant layer and the Si surface gives rise to a simultaneous quantization of VB states in this narrow region. We also show that the VB quantization can be explained using a simple particle-in-a-box model, and that the number and energy separation of the quantized VB states depend on the depth of the P dopant layer beneath the Si surface. Since the quantized CB states do not show a strong dependence on the dopant depth (but rather on the dopant density), it is straightforward to exhibit control over the properties of the quantized CB and VB states independently of each other by choosing the dopant density and depth accordingly, thus offering new possibilities for engineering quantum matter.
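    The particle-in-a-box estimate invoked above is easy to reproduce: E_n = n^2 π^2 ħ^2 / (2 m L^2), with the well width L set by the dopant depth. The sketch below uses an illustrative heavy-hole effective mass of 0.49 m_e, which is an assumption for demonstration, not a value taken from the paper.

```python
import numpy as np
from scipy.constants import hbar, m_e, e

def particle_in_a_box_levels(width_nm, n_levels=4, eff_mass=0.49):
    """Energies E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV, for a carrier
    confined between the P dopant plane and the Si surface. The effective
    mass (0.49 m_e) is an illustrative assumption."""
    L = width_nm * 1e-9
    m = eff_mass * m_e
    n = np.arange(1, n_levels + 1)
    return (n**2 * np.pi**2 * hbar**2) / (2 * m * L**2) / e

# A shallower dopant layer (smaller L) gives fewer, more widely spaced
# quantized valence-band states, in line with the reported depth dependence.
print(particle_in_a_box_levels(width_nm=2.0))
```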

  2. Dosimetric impact of tumor bed delineation variability based on 4DCT scan for external-beam partial breast irradiation.

    PubMed

    Guo, Bing; Li, Jianbin; Wang, Wei; Li, Fengxiang; Guo, Yanluan; Li, Yankang; Liu, Tonghai

    2015-01-01

    This study sought to evaluate the dosimetric impact of tumor bed delineation variability (based on clips, seroma or both clips and seroma) during external-beam partial breast irradiation (EB-PBI) planned utilizing four-dimensional computed tomography (4DCT) scans. 4DCT scans of 20 patients with a seroma clarity score (SCS) 3~5 and ≥5 surgical clips were included in this study. The combined volume of the tumor bed formed using clips, seroma, or both clips and seroma on the 10 phases of 4DCT was defined as the internal gross target volume (termed IGTVC, IGTVS and IGTVC+S, respectively). A 1.5-cm margin was added to define the planning target volume (termed PTVC, PTVS and PTVC+S, respectively). Three treatment plans were established using the 4DCT images (termed EB-PBIC, EB-PBIS, EB-PBIC+S, respectively). The results showed that the volume of IGTVC+S was significantly larger than that of IGTVC and IGTVS. Similarly, the volume of PTVC+S was markedly larger than that of PTVC and PTVS. However, the PTV coverage for EB-PBIC+S was similar to that of EB-PBIC and EB-PBIS, and there were no significant differences in the homogeneity index or conformity index between the three treatment plans (P=0.878, 0.086). The EB-PBIS plan resulted in the lowest ipsilateral normal breast and ipsilateral lung doses compared with the EB-PBIC and EB-PBIC+S plans. To conclude, the volume variability delineated based on clips, seroma or both clips and seroma resulted in dosimetric variability for organs at risk, but did not show a marked influence on the dosimetric distribution.

  3. Dosimetric impact of tumor bed delineation variability based on 4DCT scan for external-beam partial breast irradiation

    PubMed Central

    Guo, Bing; Li, Jianbin; Wang, Wei; Li, Fengxiang; Guo, Yanluan; Li, Yankang; Liu, Tonghai

    2015-01-01

    This study sought to evaluate the dosimetric impact of tumor bed delineation variability (based on clips, seroma or both clips and seroma) during external-beam partial breast irradiation (EB-PBI) planned utilizing four-dimensional computed tomography (4DCT) scans. 4DCT scans of 20 patients with a seroma clarity score (SCS) 3~5 and ≥5 surgical clips were included in this study. The combined volume of the tumor bed formed using clips, seroma, or both clips and seroma on the 10 phases of 4DCT was defined as the internal gross target volume (termed IGTVC, IGTVS and IGTVC+S, respectively). A 1.5-cm margin was added to define the planning target volume (termed PTVC, PTVS and PTVC+S, respectively). Three treatment plans were established using the 4DCT images (termed EB-PBIC, EB-PBIS, EB-PBIC+S, respectively). The results showed that the volume of IGTVC+S was significantly larger than that of IGTVC and IGTVS. Similarly, the volume of PTVC+S was markedly larger than that of PTVC and PTVS. However, the PTV coverage for EB-PBIC+S was similar to that of EB-PBIC and EB-PBIS, and there were no significant differences in the homogeneity index or conformity index between the three treatment plans (P=0.878, 0.086). The EB-PBIS plan resulted in the lowest ipsilateral normal breast and ipsilateral lung doses compared with the EB-PBIC and EB-PBIC+S plans. To conclude, the volume variability delineated based on clips, seroma or both clips and seroma resulted in dosimetric variability for organs at risk, but did not show a marked influence on the dosimetric distribution. PMID:26885108

  4. Constraints on operator ordering from third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkuwa, Yoshiaki; Faizal, Mir, E-mail: f2mir@uwaterloo.ca; Ezawa, Yasuo

    2016-02-15

    In this paper, we analyse the Wheeler–DeWitt equation in the third quantized formalism. We will demonstrate that for certain operator ordering, the early stages of the universe are dominated by quantum fluctuations, and the universe becomes classical at later stages during the cosmic expansion. This is physically expected, if the universe is formed from quantum fluctuations in the third quantized formalism. So, we will argue that this physical requirement can be used to constrain the form of the operator ordering chosen. We will explicitly demonstrate this to be the case for two different cosmological models.

  5. Comparison of breathing gated CT images generated using a 5DCT technique and a commercial clinical protocol in a porcine model

    PubMed Central

    O’Connell, Dylan P.; Thomas, David H.; Dou, Tai H.; Lamb, James M.; Feingold, Franklin; Low, Daniel A.; Fuld, Matthew K.; Sieren, Jered P.; Sloan, Chelsea M.; Shirk, Melissa A.; Hoffman, Eric A.; Hofmann, Christian

    2015-01-01

    Purpose: To demonstrate that a “5DCT” technique which utilizes fast helical acquisition yields the same respiratory-gated images as a commercial technique for regular, mechanically produced breathing cycles. Methods: Respiratory-gated images of an anesthetized, mechanically ventilated pig were generated using a Siemens low-pitch helical protocol and 5DCT for a range of breathing rates and amplitudes and with standard and low dose imaging protocols. 5DCT reconstructions were independently evaluated by measuring the distances between tissue positions predicted by a 5D motion model and those measured using deformable registration, as well by reconstructing the originally acquired scans. Discrepancies between the 5DCT and commercial reconstructions were measured using landmark correspondences. Results: The mean distance between model predicted tissue positions and deformably registered tissue positions over the nine datasets was 0.65 ± 0.28 mm. Reconstructions of the original scans were on average accurate to 0.78 ± 0.57 mm. Mean landmark displacement between the commercial and 5DCT images was 1.76 ± 1.25 mm while the maximum lung tissue motion over the breathing cycle had a mean value of 27.2 ± 4.6 mm. An image composed of the average of 30 deformably registered images acquired with a low dose protocol had 6 HU image noise (single standard deviation) in the heart versus 31 HU for the commercial images. Conclusions: An end to end evaluation of the 5DCT technique was conducted through landmark based comparison to breathing gated images acquired with a commercial protocol under highly regular ventilation. The techniques were found to agree to within 2 mm for most respiratory phases and most points in the lung. PMID:26133604

  6. Quantization and Quantum-Like Phenomena: A Number Amplitude Approach

    NASA Astrophysics Data System (ADS)

    Robinson, T. R.; Haven, E.

    2015-12-01

    Historically, quantization has meant turning the dynamical variables of classical mechanics that are represented by numbers into their corresponding operators. Thus the relationships between classical variables determine the relationships between the corresponding quantum mechanical operators. Here, we take a radically different approach to this conventional quantization procedure. Our approach does not rely on any relations based on classical Hamiltonian or Lagrangian mechanics nor on any canonical quantization relations, nor even on any preconceptions of particle trajectories in space and time. Instead we examine the symmetry properties of certain Hermitian operators with respect to phase changes. This introduces harmonic operators that can be identified with a variety of cyclic systems, from clocks to quantum fields. These operators are shown to have the characteristics of creation and annihilation operators that constitute the primitive fields of quantum field theory. Such an approach not only allows us to recover the Hamiltonian equations of classical mechanics and the Schrödinger wave equation from the fundamental quantization relations, but also, by freeing the quantum formalism from any physical connotation, makes it more directly applicable to non-physical, so-called quantum-like systems. Over the past decade or so, there has been a rapid growth of interest in such applications. These include the use of the Schrödinger equation in finance, second quantization and the number operator in social interactions, population dynamics and financial trading, and quantum probability models in cognitive processes and decision-making. In this paper we try to look beyond physical analogies to provide a foundational underpinning of such applications.

  7. WE-AB-202-09: Feasibility and Quantitative Analysis of 4DCT-Based High Precision Lung Elastography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasse, K; Neylon, J; Low, D

    2016-06-15

    Purpose: The purpose of this project is to derive high precision elastography measurements from 4DCT lung scans to facilitate the implementation of elastography in a radiotherapy context. Methods: 4DCT scans of the lungs were acquired, and breathing stages were subsequently registered to each other using an optical flow DIR algorithm. The displacement of each voxel gleaned from the registration was taken to be the ground-truth deformation. These vectors, along with the 4DCT source datasets, were used to generate a GPU-based biomechanical simulation that acted as a forward model to solve the inverse elasticity problem. The lung surface displacements were applied as boundary constraints for the model-guided lung tissue elastography, while the inner voxels were allowed to deform according to the linear elastic forces within the model. A biomechanically-based anisotropic convergence magnification technique was applied to the inner voxels in order to amplify the subtleties of the interior deformation. Solving the inverse elasticity problem was accomplished by modifying the tissue elasticity and iteratively deforming the biomechanical model. Convergence occurred when each voxel was within 0.5 mm of the ground-truth deformation and 1 kPa of the ground-truth elasticity distribution. To analyze the feasibility of the model-guided approach, we present the results for regions of low ventilation, specifically, the apex. Results: The maximum apical boundary expansion was observed to be between 2 and 6 mm. Simulating this expansion within an apical lung model, it was observed that 100% of voxels converged within 0.5 mm of ground-truth deformation, while 91.8% converged within 1 kPa of the ground-truth elasticity distribution. A mean elasticity error of 0.6 kPa illustrates the high precision of our technique. Conclusion: By utilizing 4DCT lung data coupled with a biomechanical model, high precision lung elastography can be accurately performed, even in low ventilation

  8. NOTE: A feasibility study of markerless fluoroscopic gating for lung cancer radiotherapy using 4DCT templates

    NASA Astrophysics Data System (ADS)

    Li, Ruijiang; Lewis, John H.; Cerviño, Laura I.; Jiang, Steve B.

    2009-10-01

    A major difficulty in conformal lung cancer radiotherapy is respiratory organ motion, which may cause clinically significant targeting errors. Respiratory-gated radiotherapy allows for more precise delivery of the prescribed radiation dose to the tumor, while minimizing normal tissue complications. Gating based on external surrogates is limited by its lack of accuracy, while gating based on implanted fiducial markers is limited primarily by the risk of pneumothorax due to marker implantation. Techniques for fluoroscopic gating without implanted fiducial markers (markerless gating) have been developed. These techniques usually require a training fluoroscopic image dataset with marked tumor positions in the images, which limits their clinical implementation. To remove this requirement, this study presents a markerless fluoroscopic gating algorithm based on 4DCT templates. To generate gating signals, we explored the application of three similarity measures or scores between fluoroscopic images and the reference 4DCT template: un-normalized cross-correlation (CC), normalized cross-correlation (NCC) and normalized mutual information (NMI), as well as average intensity (AI) of the region of interest (ROI) in the fluoroscopic images. Performance was evaluated using fluoroscopic and 4DCT data from three lung cancer patients. On average, gating based on CC achieved the highest treatment accuracy for a given efficiency, with a high target coverage (average between 91.9% and 98.6%) for a wide range of nominal duty cycles (20-50%). AI worked well for two of the three patients, but failed for the third due to interference from the heart. Gating based on NCC and NMI usually failed below a 50% nominal duty cycle. Based on this preliminary study with three patients, we found that the proposed CC-based gating algorithm can generate accurate and robust gating signals when using the 4DCT reference template. However, this observation is based on results obtained from a very limited
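    A bare-bones version of the template-matching gating signal is sketched below using normalized cross-correlation; thresholding the score stream gives the beam-on signal, with the threshold chosen to hit the desired duty cycle. (The study itself found plain, un-normalized cross-correlation to be the more robust score.)

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation between a fluoroscopic ROI and the
    reference 4DCT template (2-D arrays of equal shape)."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def gating_signal(frames, template, threshold):
    """Beam-on whenever the similarity score exceeds a threshold chosen to
    give the desired duty cycle."""
    return [ncc(f, template) >= threshold for f in frames]
```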

  9. Quantized magnetoresistance in atomic-size contacts.

    PubMed

    Sokolov, Andrei; Zhang, Chunjuan; Tsymbal, Evgeny Y; Redepenning, Jody; Doudin, Bernard

    2007-03-01

    When the dimensions of a metallic conductor are reduced so that they become comparable to the de Broglie wavelengths of the conduction electrons, the absence of scattering results in ballistic electron transport and the conductance becomes quantized. In ferromagnetic metals, the spin angular momentum of the electrons results in spin-dependent conductance quantization and various unusual magnetoresistive phenomena. Theorists have predicted a related phenomenon known as ballistic anisotropic magnetoresistance (BAMR). Here we report the first experimental evidence for BAMR by observing a stepwise variation in the ballistic conductance of cobalt nanocontacts as the direction of an applied magnetic field is varied. Our results show that BAMR can be positive and negative, and exhibits symmetric and asymmetric angular dependences, consistent with theoretical predictions.

  10. Response of two-band systems to a single-mode quantized field

    NASA Astrophysics Data System (ADS)

    Shi, Z. C.; Shen, H. Z.; Wang, W.; Yi, X. X.

    2016-03-01

    The response of topological insulators (TIs) to an external weakly classical field can be expressed in terms of Kubo formula, which predicts quantized Hall conductivity of the quantum Hall family. The response of TIs to a single-mode quantized field, however, remains unexplored. In this work, we take the quantum nature of the external field into account and define a Hall conductance to characterize the linear response of a two-band system to the quantized field. The theory is then applied to topological insulators. Comparisons with the traditional Hall conductance are presented and discussed.

  11. Thermal distributions of first, second and third quantization

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-05-01

    We treat first quantized string theory as two-dimensional gravity plus matter. This allows us to compute the two-dimensional density of one-string states by the method of Darwin and Fowler. One can then use second quantized methods to form a grand microcanonical ensemble in which one can compute the density of multistring states of arbitrary momentum and mass. It is argued that modelling an elementary particle as a (d−1)-dimensional object whose internal degrees of freedom are described by a massless d-dimensional gas yields a density of internal states given by σ_d(m) ∼ m^{-a} exp((bm)^{2(d-1)/d}). This indicates that these objects cannot be in thermal equilibrium at any temperature unless d ≤ 2; that is, for a string or a particle. Finally, we discuss the application of the above ideas to four-dimensional gravity and introduce an ensemble of multiuniverse states parameterized by second quantized canonical momenta and particle number.

  12. WE-AB-303-11: Verification of a Deformable 4DCT Motion Model for Lung Tumor Tracking Using Different Driving Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woelfelschneider, J; Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE; Seregni, M

    2015-06-15

    Purpose: Tumor tracking is an advanced technique to treat intra-fractionally moving tumors. The aim of this study is to validate a surrogate-driven model based on four-dimensional computed tomography (4DCT) that is able to predict CT volumes corresponding to arbitrary respiratory states. Further, a comparison of three different driving surrogates is performed. Methods: This study is based on multiple 4DCTs of two patients treated for bronchial carcinoma and metastasis. Analyses for 18 additional patients are currently ongoing. The motion model was estimated from the planning 4DCT through deformable image registration. To predict a certain phase of a follow-up 4DCT, the model accounts for inter-fractional variations (baseline correction) and intra-fractional respiratory parameters (amplitude and phase) derived from surrogates. In this evaluation, three different approaches were used to extract the motion surrogate: for each 4DCT phase, the 3D thoraco-abdominal surface motion, the body volume and the anterior-posterior motion of a virtual single external marker defined on the sternum were investigated. The estimated volumes resulting from the model were compared to the ground-truth clinical 4DCTs using absolute HU differences in the lung volume and landmarks localized using the Scale Invariant Feature Transform (SIFT). Results: The results show absolute HU differences between estimated and ground-truth images with median values limited to 55 HU and inter-quartile ranges (IQR) lower than 100 HU. Median 3D distances between about 1500 matching landmarks are below 2 mm for the 3D surface motion and body volume methods. The single-marker surrogate results in increased median distances of up to 0.6 mm. Analyses for the extended database incl. 20 patients are currently in progress. Conclusion: The results depend mainly on the image quality of the initial 4DCTs and the deformable image registration. All investigated surrogates can be used to estimate follow-up 4DCT

  13. Unique Fock quantization of scalar cosmological perturbations

    NASA Astrophysics Data System (ADS)

    Fernández-Méndez, Mikel; Mena Marugán, Guillermo A.; Olmedo, Javier; Velhinho, José M.

    2012-05-01

    We investigate the ambiguities in the Fock quantization of the scalar perturbations of a Friedmann-Lemaître-Robertson-Walker model with a massive scalar field as matter content. We consider the case of compact spatial sections (thus avoiding infrared divergences), with the topology of a three-sphere. After expanding the perturbations in series of eigenfunctions of the Laplace-Beltrami operator, the Hamiltonian of the system is written up to quadratic order in them. We fix the gauge of the local degrees of freedom in two different ways, reaching in both cases the same qualitative results. A canonical transformation, which includes the scaling of the matter-field perturbations by the scale factor of the geometry, is performed in order to arrive at a convenient formulation of the system. We then study the quantization of these perturbations in the classical background determined by the homogeneous variables. Based on previous work, we introduce a Fock representation for the perturbations in which: (a) the complex structure is invariant under the isometries of the spatial sections and (b) the field dynamics is implemented as a unitary operator. These two properties select not only a unique unitary equivalence class of representations, but also a preferred field description, picking up a canonical pair of field variables among all those that can be obtained by means of a time-dependent scaling of the matter field (completed into a linear canonical transformation). Finally, we present an equivalent quantization constructed in terms of gauge-invariant quantities. We prove that this quantization can be attained by a mode-by-mode time-dependent linear canonical transformation which admits a unitary implementation, so that it is also uniquely determined.

  14. An Algebraic Approach to the Quantization of Constrained Systems: Finite Dimensional Examples.

    NASA Astrophysics Data System (ADS)

    Tate, Ranjeet Shekhar

    1992-01-01

    General relativity has two features in particular, which make it difficult to apply to it existing schemes for the quantization of constrained systems. First, there is no background structure in the theory, which could be used, e.g., to regularize constraint operators, to identify a "time" or to define an inner product on physical states. Second, in the Ashtekar formulation of general relativity, which is a promising avenue to quantum gravity, the natural variables for quantization are not canonical; and, classically, there are algebraic identities between them. Existing schemes are usually not concerned with such identities. Thus, from the point of view of canonical quantum gravity, it has become imperative to find a framework for quantization which provides a general prescription to find the physical inner product, and is flexible enough to accommodate non -canonical variables. In this dissertation I present an algebraic formulation of the Dirac approach to the quantization of constrained systems. The Dirac quantization program is augmented by a general principle to find the inner product on physical states. Essentially, the Hermiticity conditions on physical operators determine this inner product. I also clarify the role in quantum theory of possible algebraic identities between the elementary variables. I use this approach to quantize various finite dimensional systems. Some of these models test the new aspects of the algebraic framework. Others bear qualitative similarities to general relativity, and may give some insight into the pitfalls lurking in quantum gravity. The previous quantizations of one such model had many surprising features. When this model is quantized using the algebraic program, there is no longer any unexpected behaviour. I also construct the complete quantum theory for a previously unsolved relativistic cosmology. All these models indicate that the algebraic formulation provides powerful new tools for quantization. In (spatially compact

  15. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
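    The multistage residual quantization mentioned above can be sketched in a few lines: each stage quantizes the residual left by the previous stage, and the reproduction is the sum of the selected codewords. The complexity- and entropy-constrained design and the entropy coding across subbands and stages that the paper relies on are omitted.

```python
import numpy as np

def multistage_quantize(x, stage_codebooks):
    """Multistage residual quantization of a 1-D array of subband samples.
    'stage_codebooks' is a list of 1-D codeword arrays, one per stage; the
    reproduction is the running sum of the selected codewords."""
    indices, approx = [], np.zeros_like(x, dtype=float)
    residual = x.astype(float)
    for cb in stage_codebooks:
        idx = np.abs(residual[:, None] - cb[None, :]).argmin(axis=1)
        indices.append(idx)
        approx += cb[idx]
        residual = x - approx
    return indices, approx
```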

  16. Magnetic quantization in monolayer bismuthene

    NASA Astrophysics Data System (ADS)

    Chen, Szu-Chao; Chiu, Chih-Wei; Lin, Hui-Chi; Lin, Ming-Fa

    The magnetic quantization in monolayer bismuthene is investigated by the generalized tight-binding model. The quite large Hamiltonian matrix is built from the tight-binding functions of the various sublattices, atomic orbitals and spin states. Due to the strong spin-orbit coupling and sp3 bonding, monolayer bismuthene has diverse low-lying energy bands, such as the parabolic, linear and oscillating energy bands. The main features of the band structure are further reflected in the rich magnetic quantization. Under a uniform perpendicular magnetic field (Bz), three groups of Landau levels (LLs) with distinct features are revealed near the Fermi level. Their Bz-dependent energy spectra display linear, square-root and non-monotonous dependences, respectively. These LLs are dominated by combinations of the 6pz orbital and the (6px,6py) orbitals as a result of the strong sp3 bonding. Specifically, the LL anti-crossings only occur between LLs originating from the oscillating energy band.

  17. Weighted Bergman Kernels and Quantization

    NASA Astrophysics Data System (ADS)

    Engliš, Miroslav

    Let Ω be a bounded pseudoconvex domain in C^N, let φ, ψ be two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and let z∈Ω be a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k→∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion for x,y near z, where φ(x,y) is an almost-analytic extension of φ(x)=φ(x,x) and similarly for ψ. If in addition Ω is of finite type, φ,ψ behave reasonably at the boundary, and -log φ, -log ψ are strictly plurisubharmonic on Ω, we obtain also an analogous asymptotic expansion for the Berezin transform and give applications to the Berezin quantization. Finally, for Ω smoothly bounded and strictly pseudoconvex and φ a smooth strictly plurisubharmonic defining function for Ω, we also obtain results on the Berezin-Toeplitz quantization.

  18. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  19. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.
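    The qualitative trade-off can be illustrated with a deliberately crude toy model (not the paper's derivation): for a fixed bit volume, fewer bits per sample buys more independent looks, and the combined noise-to-signal ratio is taken as the sum of a speckle term falling as 1/N_looks and a quantization term falling roughly as 2^(-2b). The numbers below are illustrative only.

```python
import numpy as np

def toy_image_snr(total_bits, bits_per_sample):
    """Toy model of the bit-budget trade-off: speckle noise power ~ 1/N_looks,
    quantization noise power ~ 2^(-2b), with N_looks = total_bits // b.
    Used only to illustrate that very fine per-sample quantization is not
    optimal under a fixed bit volume."""
    n_looks = total_bits // bits_per_sample
    speckle_nsr = 1.0 / n_looks
    quant_nsr = 2.0 ** (-2 * bits_per_sample)
    return 10 * np.log10(1.0 / (speckle_nsr + quant_nsr))

for b in (1, 2, 4, 8):
    print(b, "bits/sample ->", round(toy_image_snr(total_bits=64, bits_per_sample=b), 2), "dB")
```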

  20. Splitting Times of Doubly Quantized Vortices in Dilute Bose-Einstein Condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huhtamaeki, J. A. M.; Pietilae, V.; Virtanen, S. M. M.

    2006-09-15

    Recently, the splitting of a topologically created doubly quantized vortex into two singly quantized vortices was experimentally investigated in dilute atomic cigar-shaped Bose-Einstein condensates [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)]. In particular, the dependency of the splitting time on the peak particle density was studied. We present results of theoretical simulations which closely mimic the experimental setup. We show that the combination of gravitational sag and time dependency of the trapping potential alone suffices to split the doubly quantized vortex in time scales which are in good agreement with the experiments.

  1. Contour propagation for lung tumor delineation in 4D-CT using tensor-product surface of uniform and non-uniform closed cubic B-splines

    NASA Astrophysics Data System (ADS)

    Jin, Renchao; Liu, Yongchuan; Chen, Mi; Zhang, Sheng; Song, Enmin

    2018-01-01

    A robust contour propagation method is proposed to help physicians delineate lung tumors on all phase images of four-dimensional computed tomography (4D-CT) by only manually delineating the contours on a reference phase. The proposed method models the trajectory surface swept by a contour in a respiratory cycle as a tensor-product surface of two closed cubic B-spline curves: a non-uniform B-spline curve which models the contour and a uniform B-spline curve which models the trajectory of a point on the contour. The surface is treated as a deformable entity, and is optimized from an initial surface by moving its control vertices such that the sum of the intensity similarities between the sampling points on the manually delineated contour and their corresponding ones on different phases is maximized. The initial surface is constructed by fitting the manually delineated contour on the reference phase with a closed B-spline curve. In this way, the proposed method can focus the registration on the contour instead of the entire image to prevent the deformation of the contour from being smoothed by its surrounding tissues, and greatly reduce the time consumption while keeping the accuracy of the contour propagation as well as the temporal consistency of the estimated respiratory motions across all phases in 4D-CT. Eighteen 4D-CT cases with 235 gross tumor volume (GTV) contours on the maximal inhale phase and 209 GTV contours on the maximal exhale phase are manually delineated slice by slice. The maximal inhale phase is used as the reference phase, which provides the initial contours. On the maximal exhale phase, the Jaccard similarity coefficient between the propagated GTV and the manually delineated GTV is 0.881 +/- 0.026, and the Hausdorff distance is 3.07 +/- 1.08 mm. The time for propagating the GTV to all phases is 5.55 +/- 6.21 min. The results are better than those of the fast adaptive stochastic gradient descent B-spline method, the 3D  +  t B
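    As a small illustration of the building block used above, the sketch below evaluates a closed (periodic) uniform cubic B-spline curve from its control vertices; the non-uniform knot vector used for the contour direction in the paper would require the full de Boor recursion instead.

```python
import numpy as np

def closed_cubic_bspline(control_points, samples_per_segment=20):
    """Evaluate a closed (periodic) uniform cubic B-spline curve from its
    control vertices using the standard segment-wise basis
    [ (1-u)^3, 3u^3-6u^2+4, -3u^3+3u^2+3u+1, u^3 ] / 6."""
    P = np.asarray(control_points, dtype=float)
    n = len(P)
    pts = []
    for i in range(n):
        idx = [(i - 1) % n, i % n, (i + 1) % n, (i + 2) % n]
        for u in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            basis = np.array([(1 - u)**3,
                              3*u**3 - 6*u**2 + 4,
                              -3*u**3 + 3*u**2 + 3*u + 1,
                              u**3]) / 6.0
            pts.append(basis @ P[idx])
    return np.array(pts)
```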

  2. Acute Insulin Stimulation Induces Phosphorylation of the Na-Cl Cotransporter in Cultured Distal mpkDCT Cells and Mouse Kidney

    PubMed Central

    Sohara, Eisei; Rai, Tatemitsu; Yang, Sung-Sen; Ohta, Akihito; Naito, Shotaro; Chiga, Motoko; Nomura, Naohiro; Lin, Shih-Hua; Vandewalle, Alain; Ohta, Eriko; Sasaki, Sei; Uchida, Shinichi

    2011-01-01

    The NaCl cotransporter (NCC) is essential for sodium reabsorption at the distal convoluted tubules (DCT), and its phosphorylation increases its transport activity and apical membrane localization. Although insulin has been reported to increase sodium reabsorption in the kidney, the linkage between insulin and NCC phosphorylation has not yet been investigated. This study examined whether insulin regulates NCC phosphorylation. In cultured mpkDCT cells, insulin increased phosphorylation of STE20/SPS1-related proline-alanine-rich kinase (SPAK) and NCC in a dose-dependent manner. This insulin-induced phosphorylation of NCC was suppressed in WNK4 and SPAK knockdown cells. In addition, Ly294002, a PI3K inhibitor, decreased the insulin effect on SPAK and NCC phosphorylation, indicating that insulin induces phosphorylation of SPAK and NCC through PI3K and WNK4 in mpkDCT cells. Moreover, acute insulin administration to mice increased phosphorylation of oxidative stress-responsive kinase-1 (OSR1), SPAK and NCC in the kidney. Time-course experiments in mpkDCT cells and mice suggested that SPAK is upstream of NCC in this insulin-induced NCC phosphorylation mechanism, which was confirmed by the lack of insulin-induced NCC phosphorylation in SPAK knockout mice. Moreover, insulin administration to WNK4 hypomorphic mice did not increase phosphorylation of OSR1, SPAK and NCC in the kidney, suggesting that WNK4 is also involved in the insulin-induced OSR1, SPAK and NCC phosphorylation mechanism in vivo. The present results demonstrated that insulin is a potent regulator of NCC phosphorylation in the kidney, and that WNK4 and SPAK are involved in this mechanism of NCC phosphorylation by insulin. PMID:21909387

  3. Is the evaluation of the anterior inferior iliac spine (AIIS) in the AP pelvis possible? Analysis of conventional X-rays and 3D-CT reconstructions.

    PubMed

    Krueger, David R; Windler, Markus; Geßlein, Markus; Schuetz, Michael; Perka, Carsten; Schroeder, Joerg H

    2017-07-01

    A hypertrophic AIIS has been identified as a cause for extraarticular hip impingement and is classified according to Hetsroni using 3D-CT reconstructions. The role of the conventional AP pelvis X-ray, which is the first standard imaging step for the evaluation of hip pain, has not been investigated yet. AP pelvis X-rays and 3D-CT reconstructions of patients were evaluated regarding their morphology of the AIIS. The conventional X-rays were categorized into three groups according to the projection of the AIIS: above (A) or below (B) the acetabular sourcil or even exceeding the anterior acetabular rim (C). They were compared to the morphologic types in the 3D-CT reconstruction (Hetsroni type I-III). Ninety patients with an equal distribution of type A, B or C projection in the AP pelvis were evaluated and compared to the morphology in the 3D-CT reconstruction. The projection of the AIIS below the acetabular sourcil (B + C) showed only moderate sensitivity (0.76) and specificity (0.64) for a hypertrophic AIIS (Hetsroni type II + III), but if the AIIS exceeds the anterior rim, all cases showed a hypertrophic AIIS in the 3D-CT reconstructions (Hetsroni type II + III). Distinct differentiation of the AIIS morphology in the AP pelvis is not possible, but the projection of the AIIS below the anterior acetabular rim represented a hypertrophic AIIS in all cases and should, therefore, be critically investigated for a relevant AIIS impingement.

  4. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    ERIC Educational Resources Information Center

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers the educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of parts of speech, used in the text; formal…

  5. Dynamic State Estimation of Power Systems With Quantization Effects: A Recursive Filter Approach.

    PubMed

    Hu, Liang; Wang, Zidong; Liu, Xiaohui

    2016-08-01

    In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantizations described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with such kind of introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
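
    The logarithmic quantizer mentioned above maps a value onto the nearest level of a geometric ladder of levels ±ρ^i·u0. Below is a minimal sketch of that mapping, with illustrative parameter values rather than ones taken from the paper.

```python
import math

def log_quantize(v: float, rho: float = 0.8, u0: float = 1.0) -> float:
    """Map v to the nearest level of the ladder +/- rho**i * u0 (0 maps to 0)."""
    if v == 0.0:
        return 0.0
    sign = 1.0 if v > 0 else -1.0
    i = round(math.log(abs(v) / u0, rho))  # nearest ladder index on a log scale
    return sign * (rho ** i) * u0

print(log_quantize(0.37), log_quantize(-2.5))   # -> 0.4096..., -2.4414...
```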

  6. On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems

    NASA Astrophysics Data System (ADS)

    Shvedov, O. Yu.

    2002-11-01

    The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to maximal (minimal) value of number of ghosts and antighosts in the Schrodinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should be generally modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is generalized then to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.

  7. Application of heterogeneous pulse coupled neural network in image quantization

    NASA Astrophysics Data System (ADS)

    Huang, Yi; Ma, Yide; Li, Shouliang; Zhan, Kun

    2016-11-01

    On the basis of the different strengths of synaptic connections between actual neurons, this paper proposes a heterogeneous pulse coupled neural network (HPCNN) algorithm to perform quantization on images. HPCNNs are developed from traditional pulse coupled neural network (PCNN) models but have different parameters for different image regions. This allows pixels of different gray levels to be classified broadly into two categories: background regions and object regions. Moreover, an HPCNN also accords with human visual characteristics. The parameters of the HPCNN model are calculated automatically according to these categories, and the quantized results are closer to optimal and better suited to human observation. At the same time, experimental results on natural images from a standard image library show the validity and efficiency of the proposed quantization method.

  8. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This effectively implies a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: • We derive gauge theory Feynman integrands from scalar field theory with 3-valent vertices. • We clarify the role of graph homology and cycle homology. • We use parametric renormalization and the new corolla polynomial.

  9. Augmenting Phase Space Quantization to Introduce Additional Physical Effects

    NASA Astrophysics Data System (ADS)

    Robbins, Matthew P. G.

    Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.

  10. Electrical and thermal conductance quantization in nanostructures

    NASA Astrophysics Data System (ADS)

    Nawrocki, Waldemar

    2008-10-01

    In this paper, problems of electron transport in mesoscopic structures and nanostructures are considered. The electrical conductance of nanowires was measured in a simple experimental system. Investigations were performed in air at room temperature by measuring the conductance between two vibrating metal wires with a standard oscilloscope. Conductance quantization in units of G0 = 2e²/h = (12.9 kΩ)⁻¹ up to five quanta of conductance has been observed for nanowires formed in many metals. The explanation of this universal phenomenon is the formation of a nanometer-sized wire (nanowire) between macroscopic metallic contacts, which, according to the theory proposed by Landauer, induces the quantization of conductance. Thermal problems in nanowires are also discussed in the paper.
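
    As a quick numerical check of the quoted conductance quantum, the following snippet evaluates G0 = 2e²/h from the exact SI-defined values of e and h:

```python
# Exact SI-defined values of the elementary charge and the Planck constant.
e = 1.602176634e-19   # C
h = 6.62607015e-34    # J s

G0 = 2 * e**2 / h
print(f"G0   = {G0:.4e} S")               # ~7.748e-05 siemens
print(f"1/G0 = {1 / G0 / 1e3:.2f} kOhm")  # ~12.91 kOhm, matching (12.9 kOhm)^-1
```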

  11. Landau quantization effects on hole-acoustic instability in semiconductor plasmas

    NASA Astrophysics Data System (ADS)

    Sumera, P.; Rasheed, A.; Jamil, M.; Siddique, M.; Areeb, F.

    2017-12-01

    The growth rate of hole acoustic waves (HAWs) excited in a magnetized semiconductor quantum plasma pumped by an electron beam has been investigated. The instability of the waves contains quantum effects including the exchange and correlation potential, the Bohm potential, the Fermi-degenerate pressure, and the magnetic quantization of the semiconductor plasma species. The effects of various plasma parameters on the growth rate of the HAWs, including the relative concentration of plasma particles, the beam electron temperature, the beam speed, the plasma temperature (temperature of electrons/holes), and the Landau electron orbital magnetic quantization parameter η, have been discussed. The numerical study of our model of acoustic waves has been applied, as an example, to a GaAs semiconductor exposed to an electron beam in a magnetic field environment. An increase in either the concentration of the semiconductor electrons or the speed of the beam electrons, in the presence of magnetic quantization of the fermion orbital motion, remarkably enhances the growth rate of the HAWs. Although the growth rate of the waves decreases as the thermal temperature of the plasma species rises, at a given temperature the instability is higher owing to the contribution of the magnetic quantization of the fermions.

  12. Immirzi parameter without Immirzi ambiguity: Conformal loop quantization of scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    Veraguth, Olivier J.; Wang, Charles H.-T.

    2017-10-01

    Conformal loop quantum gravity provides an approach to loop quantization through an underlying conformal structure, i.e., a conformally equivalent class of metrics. The property that general relativity itself has no conformal invariance is reinstated with a constrained scalar field setting the physical scale. Conformally equivalent metrics have recently been shown to be amenable to loop quantization including matter coupling. It has been suggested that conformal geometry may provide an extended symmetry to allow a reformulated Immirzi parameter necessary for loop quantization to behave like an arbitrary group parameter that requires no further fixing as its present standard form does. Here, we find that this can be naturally realized via conformal frame transformations in scalar-tensor gravity. Such a theory generally incorporates a dynamical scalar gravitational field and reduces to general relativity when the scalar field becomes a pure gauge. In particular, we introduce a conformal Einstein frame in which loop quantization is implemented. We then discuss how different Immirzi parameters under this description may be related by conformal frame transformations and yet share the same quantization, having, for example, the same area gaps, modulated by the scalar gravitational field.

  13. Second quantization in bit-string physics

    NASA Technical Reports Server (NTRS)

    Noyes, H. Pierre

    1993-01-01

    Using a new fundamental theory based on bit-strings, a finite and discrete version of the solutions of the free one particle Dirac equation as segmented trajectories with steps of length h/mc along the forward and backward light cones executed at velocity +/- c are derived. Interpreting the statistical fluctuations which cause the bends in these segmented trajectories as emission and absorption of radiation, these solutions are analogous to a fermion propagator in a second quantized theory. This allows us to interpret the mass parameter in the step length as the physical mass of the free particle. The radiation in interaction with it has the usual harmonic oscillator structure of a second quantized theory. How these free particle masses can be generated gravitationally using the combinatorial hierarchy sequence (3,10,137,2(sup 127) + 136), and some of the predictive consequences are sketched.

  14. Perspectives of Light-Front Quantized Field Theory: Some New Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, Prem P.

    1999-08-13

    A review of some basic topics in the light-front (LF) quantization of relativistic field theory is made. It is argued that the LF quantization is equally appropriate as the conventional one and that they lead, assuming the microcausality principle, to the same physical content. This is confirmed in the studies on the LF of the spontaneous symmetry breaking (SSB), of the degenerate vacua in Schwinger model (SM) and Chiral SM (CSM), of the chiral boson theory, and of the QCD in covariant gauges among others. The discussion on the LF is more economical and more transparent than that found in the conventional equal-time quantized theory. The removal of the constraints on the LF phase space by following the Dirac method, in fact, results in a substantially reduced number of independent dynamical variables. Consequently, the descriptions of the physical Hilbert space and the vacuum structure, for example, become more tractable. In the context of the Dyson-Wick perturbation theory the relevant propagators in the front form theory are causal. The Wick rotation can then be performed to employ the Euclidean space integrals in momentum space. The lack of manifest covariance becomes tractable, and still more so if we employ, as discussed in the text, the Fourier transform of the fermionic field based on a special construction of the LF spinor. The fact that the hyperplanes x^± = 0 constitute characteristic surfaces of the hyperbolic partial differential equation is found irrelevant in the quantized theory; it seems sufficient to quantize the theory on one of the characteristic hyperplanes.

  15. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.

  16. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is presented in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression and is fully adaptable to any data source and does not require a priori knowledge of the source statistics. Therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
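
    A generic one-pass online vector quantizer in the spirit of the description above can be sketched as follows; this is an illustration, not the LAVQ algorithm itself, and the distance threshold and learning rate are hypothetical choices.

```python
import numpy as np

def online_vq(vectors, max_dist=2.0, lr=0.1):
    """One-pass VQ: adapt the nearest codeword, or add a new one for outliers."""
    codebook, indices = [], []
    for x in vectors:
        if codebook:
            d = [np.linalg.norm(x - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                codebook[j] = codebook[j] + lr * (x - codebook[j])  # move winner
                indices.append(j)
                continue
        codebook.append(np.asarray(x, dtype=float))  # start a new codeword
        indices.append(len(codebook) - 1)
    return codebook, indices

data = np.random.default_rng(0).normal(size=(100, 4))
cb, idx = online_vq(data)
print(len(cb), "codewords for", len(data), "vectors")
```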

  18. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
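
    The core rate-distortion selection behind such an optimized quantization table can be sketched as a per-frequency minimization of distortion + λ·rate over candidate quantization values. The arrays below stand in for the gathered DCT statistics and are not real data; the dynamic programming over target rates described above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
q_values = np.arange(1, 256)      # candidate quantization values
n_freqs = 64                      # 8x8 DCT frequencies

# Stand-in per-frequency distortion/rate curves (monotone in q, as coarser
# quantization lowers the rate and raises the distortion).
distortion = rng.uniform(0.5, 2.0, (n_freqs, 1)) * q_values**2 / 12.0
rate = np.maximum(rng.uniform(2.0, 8.0, (n_freqs, 1)) - np.log2(q_values), 0.0)

def optimal_table(lmbda: float) -> np.ndarray:
    """Pick, per frequency, the q minimizing distortion + lambda * rate."""
    cost = distortion + lmbda * rate
    return q_values[np.argmin(cost, axis=1)]

print(optimal_table(0.5).reshape(8, 8))
```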

  19. Third quantization

    NASA Astrophysics Data System (ADS)

    Seligman, Thomas H.; Prosen, Tomaž

    2010-12-01

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. We here concentrated on the bosonic case where the unboundedness of the operators requires the definitions of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  20. Third quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seligman, Thomas H.; Centro Internacional de Ciencias, Cuernavaca, Morelos; Prosen, Tomaz

    2010-12-23

    The basic ideas of second quantization and Fock space are extended to density operator states, used in treatments of open many-body systems. This can be done for fermions and bosons. While the former only requires the use of a non-orthogonal basis, the latter requires the introduction of a dual set of spaces. In both cases an operator algebra closely resembling the canonical one is developed and used to define the dual sets of bases. We here concentrated on the bosonic case where the unboundedness of the operators requires the definitions of dual spaces to support the pair of bases. Some applications, mainly to non-equilibrium steady states, will be mentioned.

  1. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
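
    A minimal sketch of the threshold-to-matrix step, assuming a per-frequency visibility-threshold matrix T is already available from such a formula: with a uniform quantizer the worst-case coefficient error is half the step size, so placing that error at threshold gives a step of 2T. The threshold values below are dummy data, not the output of the actual model.

```python
import numpy as np

# Hypothetical per-frequency visibility thresholds (dummy values, 8x8 DCT block).
T = np.random.default_rng(6).uniform(0.5, 8.0, (8, 8))

# Uniform quantizer: worst-case error is Q/2, so error-at-threshold gives Q = 2T.
Q = np.clip(np.round(2 * T), 1, 255).astype(int)   # JPEG-style integer table
print(Q)
```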

  2. Segmentation of magnetic resonance images using fuzzy algorithms for learning vector quantization.

    PubMed

    Karayiannis, N B; Pai, P I

    1999-02-01

    This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
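
    A generic fuzzy, winner-take-most prototype update in the spirit of the algorithms evaluated above can be sketched as follows; the inverse-distance membership used here is an illustrative choice, not FALVQ's membership function.

```python
import numpy as np

def fuzzy_vq_step(prototypes: np.ndarray, x: np.ndarray, lr: float = 0.05):
    """Move every prototype toward x, weighted by an inverse-distance membership."""
    d2 = np.sum((prototypes - x) ** 2, axis=1) + 1e-12
    u = (1.0 / d2) / np.sum(1.0 / d2)          # memberships sum to 1
    return prototypes + lr * u[:, None] * (x - prototypes)

rng = np.random.default_rng(5)
protos = rng.normal(size=(4, 3))               # 4 prototypes, 3 features each
for sample in rng.normal(size=(200, 3)):
    protos = fuzzy_vq_step(protos, sample)
print(protos)
```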

  3. Light-cone quantization of two dimensional field theory in the path integral approach

    NASA Astrophysics Data System (ADS)

    Cortés, J. L.; Gamboa, J.

    1999-05-01

    A quantization condition due to the boundary conditions and the compactification of the light-cone space-time coordinate x^- is identified at the level of the classical equations for the right-handed fermionic field in two dimensions. A detailed analysis of the implications of the implementation of this quantization condition at the quantum level is presented. In the case of the Thirring model one has selection rules on the excitations as a function of the coupling, and in the case of the Schwinger model a double integer structure of the vacuum is derived in the light-cone frame. Two different quantized chiral Schwinger models are found, one of them without a θ-vacuum structure. A generalization of the quantization condition to theories with several fermionic fields and to higher dimensions is presented.

  4. Evaluation of 4D-CT lung registration.

    PubMed

    Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W

    2009-01-01

    Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.
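
    Landmark-based accuracy assessment of this kind reduces to applying the estimated deformation to the landmarks of one phase and measuring the residual distance to the corresponding landmarks in the other phase. Below is a minimal sketch with dummy landmarks and a placeholder deformation, not any of the registration schemes compared above.

```python
import numpy as np

def tre_mm(fixed_lms, moving_lms, deform):
    """Target registration error: distance between mapped and reference landmarks."""
    mapped = np.array([deform(p) for p in fixed_lms])
    return np.linalg.norm(mapped - moving_lms, axis=1)

# Dummy data: 100 landmarks (mm), an identity "registration" as placeholder.
rng = np.random.default_rng(4)
fixed = rng.uniform(0, 300, (100, 3))
moving = fixed + rng.normal(0, 1.5, (100, 3))
errors = tre_mm(fixed, moving, deform=lambda p: p)
print(f"mean TRE = {errors.mean():.2f} mm, max TRE = {errors.max():.2f} mm")
```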

  5. WE-AB-202-04: Statistical Evaluation of Lung Function Using 4DCT Ventilation Imaging: Proton Therapy VS IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q; Zhang, M; Chen, T

    Purpose: Variation in the function of different lung regions has been ignored so far in conventional lung cancer treatment planning, which may lead to a higher risk of radiation-induced lung disease. 4DCT-based lung ventilation imaging provides a novel yet convenient approach to lung functional imaging, as 4DCT is acquired routinely for lung cancer treatment. Our work aims to evaluate the impact of accounting for spatial heterogeneity in lung function, using 4DCT-based lung ventilation imaging, on proton and IMRT plans. Methods: Six patients with advanced-stage lung cancer and various tumor locations were retrospectively evaluated for the study. Proton and IMRT plans were designed following identical planning objectives and constraints for each patient. Ventilation images were calculated from patients’ 4DCT using deformable image registration implemented in the Velocity AI software, based on Jacobian metrics. The lung was delineated into two function-level regions based on ventilation (low and high functional areas). The high functional region was defined as lung ventilation greater than 30%. Dose distributions and statistics in the different lung function areas were calculated for the patients. Results: Variation in the dosimetric statistics of the different function lung regions was observed between the proton and IMRT plans. In all proton plans, high-function lung regions received a lower maximum dose (100.2%–108.9%) compared with the IMRT plans (106.4%–119.7%). Interestingly, three out of six proton plans gave a mean dose to the high-function lung region up to 2.2% higher than IMRT. A lower mean dose (lower by up to 14.1%) and maximum dose (lower by up to 9%) were observed in low-function lung for the proton plans. Conclusion: A systematic approach was developed to generate functional lung ventilation images and use them to evaluate plans. This method holds great promise for functional analysis of the lung during planning. We are currently studying more subjects to evaluate this tool.

  6. Fedosov Deformation Quantization as a BRST Theory

    NASA Astrophysics Data System (ADS)

    Grigoriev, M. A.; Lyakhovich, S. L.

    The relationship is established between the Fedosov deformation quantization of a general symplectic manifold and the BFV-BRST quantization of constrained dynamical systems. The original symplectic manifold M is presented as a second class constrained surface in the fibre bundle T*ρM which is a certain modification of a usual cotangent bundle equipped with a natural symplectic structure. The second class system is converted into the first class one by continuation of the constraints into the extended manifold, being a direct sum of T*ρM and the tangent bundle TM. This extended manifold is equipped with a nontrivial Poisson bracket which naturally involves two basic ingredients of Fedosov geometry: the symplectic structure and the symplectic connection. The constructed first class constrained theory, being equivalent to the original symplectic manifold, is quantized through the BFV-BRST procedure. The existence theorem is proven for the quantum BRST charge and the quantum BRST invariant observables. The adjoint action of the quantum BRST charge is identified with the Abelian Fedosov connection while any observable, being proven to be a unique BRST invariant continuation for the values defined in the original symplectic manifold, is identified with the Fedosov flat section of the Weyl bundle. The Fedosov fibrewise star multiplication is thus recognized as a conventional product of the quantum BRST invariant observables.

  7. From Weyl to Born-Jordan quantization: The Schrödinger representation revisited

    NASA Astrophysics Data System (ADS)

    de Gosson, Maurice A.

    2016-03-01

    The ordering problem has been one of the long standing and much discussed questions in quantum mechanics from its very beginning. Nowadays, there is more or less a consensus among physicists that the right prescription is Weyl's rule, which is closely related to the Moyal-Wigner phase space formalism. We propose in this report an alternative approach by replacing Weyl quantization with the less well-known Born-Jordan quantization. This choice is actually natural if we want the Heisenberg and Schrödinger pictures of quantum mechanics to be mathematically equivalent. It turns out that, in addition, Born-Jordan quantization can be recovered from Feynman's path integral approach provided that one used short-time propagators arising from correct formulas for the short-time action, as observed by Makri and Miller. These observations lead to a slightly different quantum mechanics, exhibiting some unexpected features, and this without affecting the main existing theory; for instance quantizations of physical Hamiltonian functions are the same as in the Weyl correspondence. The differences are in fact of a more subtle nature; for instance, the quantum observables will not correspond in a one-to-one fashion to classical ones, and the dequantization of a Born-Jordan quantum operator is less straightforward than that of the corresponding Weyl operator. The use of Born-Jordan quantization moreover solves the "angular momentum dilemma", which already puzzled L. Pauling. Born-Jordan quantization has been known for some time (but not fully exploited) by mathematicians working in time-frequency analysis and signal analysis, but ignored by physicists. One of the aims of this report is to collect and synthesize these sporadic discussions, while analyzing the conceptual differences with Weyl quantization, which is also reviewed in detail. Another striking feature is that the Born-Jordan formalism leads to a redefinition of phase space quantum mechanics, where the usual Wigner

  8. Quantization of Space-like States in Lorentz-Violating Theories

    NASA Astrophysics Data System (ADS)

    Colladay, Don

    2018-01-01

    Lorentz violation frequently induces modified dispersion relations that can yield space-like states that impede the standard quantization procedures. In certain cases, an extended Hamiltonian formalism can be used to define observer-covariant normalization factors for field expansions and phase space integrals. These factors extend the theory to include non-concordant frames in which there are negative-energy states. This formalism provides a rigorous way to quantize certain theories containing space-like states and allows for the consistent computation of Cherenkov radiation rates in arbitrary frames and avoids singular expressions.

  9. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
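
    A minimal multi-level error-diffusion sketch (Floyd-Steinberg weights, nearest-level thresholding) illustrates the kind of procedure described above; the threshold placement used here is one of the parameters such a technique can vary, not the paper's specific choice.

```python
import numpy as np

def error_diffuse(img: np.ndarray, levels: int = 4) -> np.ndarray:
    """Quantize a [0, 1] image to `levels` gray levels with error diffusion."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            # Nearest quantization level, clamped to the valid range.
            new = min(max(round(old / step), 0), levels - 1) * step
            out[y, x] = new
            err = old - new
            if x + 1 < w:                 out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x - 1 >= 0:  out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                 out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:   out[y + 1, x + 1] += err * 1 / 16
    return out

print(error_diffuse(np.linspace(0, 1, 64).reshape(8, 8), levels=3))
```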

  10. Combining Vector Quantization and Histogram Equalization.

    ERIC Educational Resources Information Center

    Cosman, Pamela C.; And Others

    1992-01-01

    Discussion of contrast enhancement techniques focuses on the use of histogram equalization with a data compression technique, i.e., tree-structured vector quantization. The enhancement technique of intensity windowing is described, and the use of enhancement techniques for medical images is explained, including adaptive histogram equalization.…

  11. Determination of internal target volume for radiation treatment planning of esophageal cancer by using 4-dimensional computed tomography (4DCT).

    PubMed

    Chen, Xiaojian; Lu, Haijun; Tai, An; Johnstone, Candice; Gore, Elizabeth; Li, X Allen

    2014-09-01

    To determine an efficient strategy for the generation of the internal target volume (ITV) for radiation treatment planning for esophageal cancer using 4-dimensional computed tomography (4DCT). 4DCT sets acquired for 20 patients with esophageal carcinoma were analyzed. Each of the 4DCT sets was binned into 10 respiratory phases. For each patient, the gross tumor volume (GTV) was delineated on the 4DCT set at each phase. Various strategies to derive ITV were explored, including the volume from the maximum intensity projection (MIP; ITV_MIP), unions of the GTVs from selected multiple phases ITV2 (0% and 50% phases), ITV3 (ITV2 plus 80%), and ITV4 (ITV3 plus 60%), as well as the volumes expanded from ITV2 and ITV3 with a uniform margin. These ITVs were compared to ITV10 (the union of the GTVs for all 10 phases) and the differences were measured with the overlap ratio (OR) and relative volume ratio (RVR) relative to ITV10 (ITVx/ITV10). For all patients studied, the average GTV from a single phase was 84.9% of ITV10. The average ORs were 91.2%, 91.3%, 94.5%, and 96.4% for ITV_MIP, ITV2, ITV3, and ITV4, respectively. Low ORs were associated with irregular breathing patterns. ITV3s plus 1 mm uniform margins (ITV3+1) led to an average OR of 98.1% and an average RVR of 106.4%. The ITV generated directly from MIP underestimates the range of the respiration motion for esophageal cancer. The ITV generated from 3 phases (ITV3) may be used for regular breathers, whereas the ITV generated from 4 phases (ITV4) or ITV3 plus a 1-mm uniform margin may be applied for irregular breathers. Copyright © 2014 Elsevier Inc. All rights reserved.
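
    The ITV construction and the two comparison metrics can be expressed directly on binary phase masks, as in the sketch below; the masks are dummy data, and OR/RVR follow the definitions given in the abstract (overlap with, and volume relative to, ITV10).

```python
import numpy as np

rng = np.random.default_rng(2)
phases = [rng.random((32, 32, 32)) > 0.7 for _ in range(10)]  # dummy GTV masks

itv10 = np.logical_or.reduce(phases)     # union of the GTVs over all 10 phases
itv2 = phases[0] | phases[5]             # e.g. the 0% and 50% phases only

# OR and RVR relative to ITV10; for margin-expanded ITVs the two can differ.
overlap_ratio = (itv2 & itv10).sum() / itv10.sum()
relative_volume_ratio = itv2.sum() / itv10.sum()
print(f"OR = {overlap_ratio:.3f}, RVR = {relative_volume_ratio:.3f}")
```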

  12. Phase-Quantized Block Noncoherent Communication

    DTIC Science & Technology

    2013-07-01

    Jaspreet Singh and Upamanyu...in a carrier asynchronous system. Specifically, we consider transmission over the block noncoherent additive white Gaussian noise channel, and...block noncoherent channel. Several results, based on the symmetry inherent in the channel model, are provided to characterize this transition density

  13. Fine structure constant and quantized optical transparency of plasmonic nanoarrays.

    PubMed

    Kravets, V G; Schedin, F; Grigorenko, A N

    2012-01-24

    Optics is renowned for displaying quantum phenomena. Indeed, studies of emission and absorption lines, the photoelectric effect and blackbody radiation helped to build the foundations of quantum mechanics. Nevertheless, it came as a surprise that the visible transparency of suspended graphene is determined solely by the fine structure constant, as this kind of universality had been previously reserved only for quantized resistance and flux quanta in superconductors. Here we describe a plasmonic system in which relative optical transparency is determined solely by the fine structure constant. The system consists of a regular array of gold nanoparticles fabricated on a thin metallic sublayer. We show that its relative transparency can be quantized in the near-infrared, which we attribute to the quantized contact resistance between the nanoparticles and the metallic sublayer. Our results open new possibilities in the exploration of universal dynamic conductance in plasmonic nanooptics.

  14. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  15. Use of Maximum Intensity Projections (MIPs) for target outlining in 4DCT radiotherapy planning.

    PubMed

    Muirhead, Rebecca; McNee, Stuart G; Featherstone, Carrie; Moore, Karen; Muscat, Sarah

    2008-12-01

    Four-dimensional computed tomography (4DCT) is currently being introduced to radiotherapy centers worldwide for use in radical radiotherapy planning for non-small cell lung cancer (NSCLC). A significant drawback is the time required to delineate 10 individual CT scans for each patient. Every department will hence ask whether the single Maximum Intensity Projection (MIP) scan can be used as an alternative. Although the problems regarding the use of the MIP in node-positive disease have been discussed in the literature, a comprehensive study assessing its use has not been published. We compared an internal target volume (ITV) created using the MIP to an ITV created from the composite volume of 10 clinical target volumes (CTVs) delineated on the 10 phases of the 4DCT. 4DCT data were collected from 14 patients with NSCLC. In each patient, the ITV was delineated on the MIP image (ITV_MIP) and a composite ITV was created from the 10 CTVs delineated on each of the 10 scans in the dataset. The structures were compared by assessment of the volumes of overlap and exclusion. A median of 19.0% (range, 5.5-35.4%) of the volume of ITV_10phase was not enclosed by the ITV_MIP, demonstrating that use of the MIP could result in under-treatment of disease. In contrast, only a very small amount of the ITV_MIP was not enclosed by the ITV_10phase (median of 2.3%, range, 0.4-9.8%), indicating that the ITV_10phase covers almost all of the tumor tissue identified on the MIP. Although there were only two Stage I patients, both demonstrated very similar ITV_10phase and ITV_MIP volumes. These findings suggest that Stage I NSCLC tumors could be outlined on the MIP alone. In Stage II and III tumors the ITV_10phase would be more reliable. To prevent under-treatment of disease, the MIP image can only be used for delineation in Stage I tumors.

  16. Robust fault tolerant control based on sliding mode method for uncertain linear systems with quantization.

    PubMed

    Hao, Li-Ying; Yang, Guang-Hong

    2013-09-01

    This paper is concerned with the robust fault-tolerant compensation control problem for uncertain linear systems subject to both state and input signal quantization. By incorporating a novel matrix full-rank factorization technique into the sliding surface design, the total failure of certain actuators can be handled under a special actuator redundancy assumption. In order to compensate for quantization errors, an adjustment range of quantization sensitivity for a dynamic uniform quantizer is given through flexible choices of the design parameters. Compared with existing results, the derived inequality condition leads to stronger fault tolerance and a much wider scope of applicability. With a static adjustment policy of quantization sensitivity, an adaptive sliding mode controller is then designed to maintain the sliding mode, where the gain of the nonlinear unit vector term is updated automatically to compensate for the effects of actuator faults, quantization errors, exogenous disturbances and parameter uncertainties, without the need for a fault detection and isolation (FDI) mechanism. Finally, the effectiveness of the proposed design method is illustrated via a structural-acoustic model of a rocket fairing. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
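
    The dynamic uniform quantizer with adjustable sensitivity referred to above can be sketched as a rounding operation whose step and saturation range scale with a sensitivity parameter μ; the range value used here is illustrative, not one derived from the paper's conditions.

```python
import numpy as np

def uniform_quantize(v, mu: float, levels: float = 10.0):
    """Round to a grid of step mu; the saturation range scales with mu."""
    v = np.clip(v, -levels * mu, levels * mu)
    return mu * np.round(v / mu)

x = np.array([-3.2, -0.04, 0.26, 7.9])
print(uniform_quantize(x, mu=0.5))   # [-3.  -0.   0.5  5. ]
```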

  17. On the quantization of the massless Bateman system

    NASA Astrophysics Data System (ADS)

    Takahashi, K.

    2018-03-01

    The so-called Bateman system for the damped harmonic oscillator is reduced to a genuine dual dissipation system (DDS) by setting the mass to zero. We explore herein the condition under which the canonical quantization of the DDS is consistently performed. The roles of the observable and auxiliary coordinates are discriminated. The results show that the complete and orthogonal Fock space of states can be constructed on the stable vacuum if an anti-Hermite representation of the canonical Hamiltonian is adopted. The amplitude of the one-particle wavefunction is consistent with the classical solution. The fields can be quantized as bosonic or fermionic. For bosonic systems, the quantum fluctuation of the field is directly associated with the dissipation rate.

  18. Fill-in binary loop pulse-torque quantizer

    NASA Technical Reports Server (NTRS)

    Lory, C. B.

    1975-01-01

    The fill-in binary (FIB) loop provides constant heating of the torque generator, an advantage of binary current switching. At the same time, it avoids the mode-related dead zone and data delay of binary quantization, an advantage of ternary quantization.

  19. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  20. Supporting Dynamic Quantization for High-Dimensional Data Analytics.

    PubMed

    Guzun, Gheorghi; Canahuate, Guadalupe

    2017-05-01

    Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive exploration of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336.
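
    Equi-depth quantization places bin boundaries at quantiles so that each bin holds roughly the same number of values; the sketch below shows this generic binning, not the query-dependent (QED) variant proposed in the paper.

```python
import numpy as np

def equi_depth_quantize(values: np.ndarray, n_bins: int = 8):
    """Bin boundaries at quantiles, so each bin holds roughly the same count."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    codes = np.searchsorted(edges, values, side="right") - 1
    return np.clip(codes, 0, n_bins - 1), edges

data = np.random.default_rng(3).lognormal(size=10_000)
codes, edges = equi_depth_quantize(data)
print(np.bincount(codes))   # roughly equal counts per bin
```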

  1. Symplectic Quantization of a Reducible Theory

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize the Abelian antisymmetric tensor gauge field. It is related to a reducible theory in the sense that all of its constraints are not independent. A procedure like ghost-of-ghost of the BFV method has to be used, but in terms of Lagrange multipliers.

  2. SU-E-CAMPUS-T-02: Can Pre-Treatment 4DCT-Based Motion Margins Estimates Be Trusted for Proton Radiotherapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seco, J; Koybasi, O; Mishra, P

    2014-06-15

    Purpose: Radiotherapy motion margins are generated using pre-treatment 4DCT data. The purpose of this study is to assess if pre-treatment 4DCT is sufficient in proton therapy to provide accurate estimate of motion margins. A dosimetric assessment is performed comparing pre-treatment margins with daily-customized margins. Methods: Gold fiducial markers implanted in lung tumors of patients were used to track the tumor. A spherical tumor of diameter 20 mm is inserted into a realistic digital respiratory phantom, where the tumor motion is based on real patient lung tumor trajectories recorded over multiple days. Using “Day 1” patient data, 100 ITVs were generated with 1 s interval between consecutive scan start times. Each ITV was made up by the union of 10 tumor positions obtained from 6 s scan time. Two ITV volumes were chosen for treatment planning: ITVmean-σ and ITVmean+σ. The delivered dose was computed on i) 10 phases forming the planning ITV (“10-phase” - simulating dose calculation based on 4DCT) and ii) 50 phantoms produced from 100 s of data from any other day with tumor positions sampled every 2 s (“dynamic” - simulating the dose that would actually be delivered). Results: For similar breathing patterns between “Day 1” and any other “Day N(>1)”, the 95% volume coverage (D95) for “dynamic” case was 8.13% lower than the “10-phase” case for ITVmean+σ. For breathing patterns that were very different between “Day 1” and any other “Day N(>1)”, this difference was as high as 24.5% for ITVmean-σ. Conclusion: Proton treatment planning based on pre-treatment 4DCT can lead to under-dosage of the tumor and over-dosage of the surrounding tissues, because of inadequate estimate of the range of motion of the tumor. This is due to the shift of the Bragg peak compared to photon therapy in which the tumor is surrounded by an electron bath.

  3. Landau quantization of Dirac fermions in graphene and its multilayers

    NASA Astrophysics Data System (ADS)

    Yin, Long-Jing; Bai, Ke-Ke; Wang, Wen-Xiao; Li, Si-Yu; Zhang, Yu; He, Lin

    2017-08-01

    When electrons are confined in a two-dimensional (2D) system, typical quantum-mechanical phenomena such as Landau quantization can be detected. Graphene systems, including the single atomic layer and few-layer stacked crystals, are ideal 2D materials for studying a variety of quantum-mechanical problems. In this article, we review the experimental progress in the unusual Landau quantized behaviors of Dirac fermions in monolayer and multilayer graphene by using scanning tunneling microscopy (STM) and scanning tunneling spectroscopy (STS). Through STS measurement of the strong magnetic fields, distinct Landau-level spectra and rich level-splitting phenomena are observed in different graphene layers. These unique properties provide an effective method for identifying the number of layers, as well as the stacking orders, and investigating the fundamentally physical phenomena of graphene. Moreover, in the presence of a strain and charged defects, the Landau quantization of graphene can be significantly modified, leading to unusual spectroscopic and electronic properties.

  4. More on quantum groups from the quantization point of view

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav

    1994-12-01

    Star products on the classical double group of a simple Lie group and on corresponding symplectic groupoids are given so that the quantum double and the “quantized tangent bundle” are obtained in the deformation description. “Complex” quantum groups and bicovariant quantum Lie algebras are discussed from this point of view. Further, we discuss the quantization of the Poisson structure on the symmetric algebra S(g) leading to the quantized enveloping algebra U_h(g) as an example of biquantization in the sense of Turaev. The description of U_h(g) in terms of the generators of the bicovariant differential calculus on F(G_q) is very convenient for this purpose. Finally, we interpret in the deformation framework some well-known properties of compact quantum groups as simple consequences of the corresponding properties of classical compact Lie groups. An analogue of the classical Kirillov universal character formula is given for the unitary irreducible representations in the compact case.

  5. Mass quantization of the Schwarzschild black hole

    NASA Astrophysics Data System (ADS)

    Vaz, Cenalo; Witten, Louis

    1999-07-01

    We examine the Wheeler-DeWitt equation for a static, eternal Schwarzschild black hole in Kuchař-Brown variables and obtain its energy eigenstates. Consistent solutions vanish in the exterior of the Kruskal manifold and are nonvanishing only in the interior. The system is reminiscent of a particle in a box. States of definite parity avoid the singular geometry by vanishing at the origin. These definite parity states admit a discrete energy spectrum, depending on one quantum number which determines the Arnowitt-Deser-Misner mass of the black hole according to a relation conjectured long ago by Bekenstein, M ~ √n Mp. If attention is restricted only to these quantized energy states, a black hole is described not only by its mass but also by its parity. States of indefinite parity do not admit a quantized mass spectrum.

  6. Quantization of the nonlinear sigma model revisited

    NASA Astrophysics Data System (ADS)

    Nguyen, Timothy

    2016-08-01

    We revisit the subject of perturbatively quantizing the nonlinear sigma model in two dimensions from a rigorous, mathematical point of view. Our main contribution is to make precise the cohomological problem of eliminating potential anomalies that may arise when trying to preserve symmetries under quantization. The symmetries we consider are twofold: (i) diffeomorphism covariance for a general target manifold; (ii) a transitive group of isometries when the target manifold is a homogeneous space. We show that there are no anomalies in case (i) and that (ii) is also anomaly-free under additional assumptions on the target homogeneous space, in agreement with the work of Friedan. We carry out some explicit computations for the O(N)-model. Finally, we show how a suitable notion of the renormalization group establishes the Ricci flow as the one loop renormalization group flow of the nonlinear sigma model.

  7. On a canonical quantization of 3D Anti de Sitter pure gravity

    NASA Astrophysics Data System (ADS)

    Kim, Jihun; Porrati, Massimo

    2015-10-01

    We perform a canonical quantization of pure gravity on AdS3, using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,R) × SL(2,R). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,R) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed, and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly curved AdS3.

  8. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. Through analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
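
    To make the reconstruction loop above concrete, the sketch below shows a generic projected Landweber iteration with soft-thresholding in a sparsifying domain. This is a minimal illustration under assumed settings, not the paper's QBCS-EPL algorithm: the measurement matrix, fixed threshold, and iteration count are placeholder choices, and the entropy-aware threshold model would replace the constant threshold used here.

    ```python
    import numpy as np

    def soft_threshold(coeffs, thr):
        # Shrinkage operator used as the "projection" step.
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

    def projected_landweber(y, A, thr=0.05, iters=200):
        """Toy projected Landweber loop: a gradient step on ||y - Ax||^2
        followed by soft-thresholding (an entropy-aware rule would adapt thr)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # stable step size
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * A.T @ (y - A @ x)       # Landweber (gradient) update
            x = soft_threshold(x, thr)             # denoising projection
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 100)) / np.sqrt(40)   # toy random measurements
        x_true = np.zeros(100)
        x_true[[3, 17, 58]] = [1.0, -0.7, 0.5]
        y = A @ x_true
        print(np.round(projected_landweber(y, A)[[3, 17, 58]], 2))
    ```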

  9. Reformulation of the covering and quantizer problems as ground states of interacting particles.

    PubMed

    Torquato, S

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d-dimensional Euclidean space R(d) interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in R(d) that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the "void" nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their "dual" solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper bounds

  10. Reformulation of the covering and quantizer problems as ground states of interacting particles

    NASA Astrophysics Data System (ADS)

    Torquato, S.

    2010-11-01

    It is known that the sphere-packing problem and the number-variance problem (closely related to an optimization problem in number theory) can be posed as energy minimizations associated with an infinite number of point particles in d -dimensional Euclidean space Rd interacting via certain repulsive pair potentials. We reformulate the covering and quantizer problems as the determination of the ground states of interacting particles in Rd that generally involve single-body, two-body, three-body, and higher-body interactions. This is done by linking the covering and quantizer problems to certain optimization problems involving the “void” nearest-neighbor functions that arise in the theory of random media and statistical mechanics. These reformulations, which again exemplify the deep interplay between geometry and physics, allow one now to employ theoretical and numerical optimization techniques to analyze and solve these energy minimization problems. The covering and quantizer problems have relevance in numerous applications, including wireless communication network layouts, the search of high-dimensional data parameter spaces, stereotactic radiation therapy, data compression, digital communications, meshing of space for numerical analysis, and coding and cryptography, among other examples. In the first three space dimensions, the best known solutions of the sphere-packing and number-variance problems (or their “dual” solutions) are directly related to those of the covering and quantizer problems, but such relationships may or may not exist for d≥4 , depending on the peculiarities of the dimensions involved. Our reformulation sheds light on the reasons for these similarities and differences. We also show that disordered saturated sphere packings provide relatively thin (economical) coverings and may yield thinner coverings than the best known lattice coverings in sufficiently large dimensions. In the case of the quantizer problem, we derive improved upper

  11. Deformation quantizations with separation of variables on a Kähler manifold

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    1996-10-01

    We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.

  12. Combinatorial quantization of the Hamiltonian Chern-Simons theory II

    NASA Astrophysics Data System (ADS)

    Alekseev, Anton Yu.; Grosse, Harald; Schomerus, Volker

    1996-01-01

    This paper further develops the combinatorial approach to quantization of the Hamiltonian Chern-Simons theory advertised in [1]. Using the theory of quantum Wilson lines, we show how the Verlinde algebra appears within the context of quantum group gauge theory. This allows us to discuss flatness of quantum connections so that we can give a mathematically rigorous definition of the algebra of observables A_CS of the Chern-Simons model. It is a *-algebra of “functions on the quantum moduli space of flat connections” and comes equipped with a positive functional ω (“integration”). We prove that this data does not depend on the particular choices which have been made in the construction. Following ideas of Fock and Rosly [2], the algebra A_CS provides a deformation quantization of the algebra of functions on the moduli space along the natural Poisson bracket induced by the Chern-Simons action. We evaluate a volume of the quantized moduli space and prove that it coincides with the Verlinde number. This answer is also interpreted as a partition function of the lattice Yang-Mills theory corresponding to a quantum gauge group.

  13. Scalets, wavelets and (complex) turning point quantization

    NASA Astrophysics Data System (ADS)

    Handy, C. R.; Brooks, H. A.

    2001-05-01

    Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z²x² + gx⁴, the quartic potential, V(x) = x⁴, and the very interesting and significant non-Hermitian potential V(x) = -(ix)³, recently studied by Bender and Boettcher.

  14. Dielectric response properties of parabolically-confined nanostructures in a quantizing magnetic field

    NASA Astrophysics Data System (ADS)

    Sabeeh, Kashif

    This thesis presents theoretical studies of dielectric response properties of parabolically-confined nanostructures in a magnetic field. We have determined the retarded Schrödinger Green's function for an electron in such a parabolically confined system in the presence of a time-dependent electric field and an ambient magnetic field. Following an operator equation of motion approach developed by Schwinger, we calculate the result in closed form in terms of elementary functions in direct-time representation. From the retarded Schrödinger Green's function we construct the closed-form thermodynamic Green's function for a parabolically confined quantum-dot in a magnetic field to determine its plasmon spectrum. Due to confinement and Landau quantization this system is fully quantized, with an infinite number of collective modes. The RPA integral equation for the inverse dielectric function is solved using Fredholm theory in the nondegenerate and quantum limit to determine the frequencies with which the plasmons participate in response to excitation by an external potential. We exhibit results for the variation of plasmon frequency as a function of magnetic field strength and of confinement frequency. A calculation of the van der Waals interaction energy between two harmonically confined quantum dots is discussed in terms of the dipole-dipole correlation function. The results are presented as a function of confinement strength and distance between the dots. We also rederive a result of Fertig & Halperin [32] for the tunneling-scattering of an electron through a saddle potential, which is also known as a quantum point contact (QPC), in the presence of a magnetic field. Using the retarded Green's function we confirm the result for the transmission coefficient and analyze it.

  15. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on Robust Feature Extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
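
    As a rough illustration of the embedding step named above, the sketch below implements plain distortion-compensated dither modulation for a single coefficient together with its minimum-distance detector. The step size, compensation factor, and dither value are illustrative assumptions, not the parameters of the paper's method, and the feature-region selection is omitted.

    ```python
    import numpy as np

    def dcdm_embed(coeff, bit, delta=8.0, alpha=0.7, key_dither=0.0):
        """Embed one bit into a DCT coefficient with distortion-compensated
        dither modulation (DC-DM/QIM); delta, alpha, key_dither are toy values."""
        d = key_dither + bit * delta / 2.0             # bit-dependent dither
        q = delta * np.round((coeff + d) / delta) - d  # dithered quantizer
        return coeff + alpha * (q - coeff)             # compensate part of the error

    def dcdm_detect(coeff, delta=8.0, key_dither=0.0):
        """Minimum-distance decoding of the embedded bit."""
        errs = []
        for bit in (0, 1):
            d = key_dither + bit * delta / 2.0
            q = delta * np.round((coeff + d) / delta) - d
            errs.append(abs(coeff - q))
        return int(np.argmin(errs))

    if __name__ == "__main__":
        c = 37.3                          # a toy low-frequency DCT coefficient
        for b in (0, 1):
            assert dcdm_detect(dcdm_embed(c, b)) == b
        print("noise-free embedding/detection round-trip OK")
    ```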

  16. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.

  17. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
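
    The entropy-constrained selection rule underlying such designs can be sketched as minimizing a Lagrangian J = D + λR per input vector. The toy code below illustrates that rule for a single-stage codebook only, with arbitrary sizes and an arbitrary λ; the paper's algorithm applies this idea jointly across the RVQ stages and iterates codebook and probability updates.

    ```python
    import numpy as np

    def ec_encode(x, codebook, probs, lam):
        """Pick the codeword index minimizing J = distortion + lam * rate,
        where rate of index i is approximated by -log2(probs[i])."""
        dist = np.sum((codebook - x) ** 2, axis=1)   # squared error per codeword
        rate = -np.log2(probs)                       # ideal codeword lengths (bits)
        return int(np.argmin(dist + lam * rate))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cb = rng.normal(size=(8, 2))                 # toy 8-codeword, 2-D codebook
        p = np.full(8, 1 / 8)                        # start from uniform probabilities
        x = np.array([0.3, -0.1])
        print("selected index:", ec_encode(x, cb, p, lam=0.5))
    ```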

  18. Determination of Internal Target Volume for Radiation Treatment Planning of Esophageal Cancer by Using 4-Dimensional Computed Tomography (4DCT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiaojian; Lu, Haijun; Radiation Oncology Center, Affiliated Hospital of Medical College, Qingdao University, Qingdao

    2014-09-01

    Purpose: To determine an efficient strategy for the generation of the internal target volume (ITV) for radiation treatment planning for esophageal cancer using 4-dimensional computed tomography (4DCT). Methods and Materials: 4DCT sets acquired for 20 patients with esophageal carcinoma were analyzed. Each of the 4DCT sets was binned into 10 respiratory phases. For each patient, the gross tumor volume (GTV) was delineated on the 4DCT set at each phase. Various strategies to derive ITV were explored, including the volume from the maximum intensity projection (MIP; ITV_MIP), unions of the GTVs from selected multiple phases ITV2 (0% and 50% phases), ITV3 (ITV2 plus 80%), and ITV4 (ITV3 plus 60%), as well as the volumes expanded from ITV2 and ITV3 with a uniform margin. These ITVs were compared to ITV10 (the union of the GTVs for all 10 phases) and the differences were measured with the overlap ratio (OR) and relative volume ratio (RVR) relative to ITV10 (ITVx/ITV10). Results: For all patients studied, the average GTV from a single phase was 84.9% of ITV10. The average ORs were 91.2%, 91.3%, 94.5%, and 96.4% for ITV_MIP, ITV2, ITV3, and ITV4, respectively. Low ORs were associated with irregular breathing patterns. ITV3s plus 1 mm uniform margins (ITV3+1) led to an average OR of 98.1% and an average RVR of 106.4%. Conclusions: The ITV generated directly from MIP underestimates the range of the respiration motion for esophageal cancer. The ITV generated from 3 phases (ITV3) may be used for regular breathers, whereas the ITV generated from 4 phases (ITV4) or ITV3 plus a 1-mm uniform margin may be applied for irregular breathers.
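
    For readers unfamiliar with the metrics, the snippet below shows one plausible way to form an ITV as the union of per-phase GTV masks and to compute OR and RVR against ITV10. The abstract states RVR = ITVx/ITV10 but does not spell out the OR formula, so the overlap definition used here is an assumption, and the random masks are only a stand-in for delineated volumes.

    ```python
    import numpy as np

    def itv_union(gtv_masks):
        # Union of per-phase boolean GTV masks -> ITV mask.
        out = np.zeros_like(gtv_masks[0], dtype=bool)
        for m in gtv_masks:
            out |= m
        return out

    def overlap_ratio(itv_x, itv_ref):
        # Assumed definition: fraction of the reference ITV covered by ITVx.
        return (itv_x & itv_ref).sum() / itv_ref.sum()

    def relative_volume_ratio(itv_x, itv_ref):
        # RVR = volume(ITVx) / volume(ITV10), as stated in the abstract.
        return itv_x.sum() / itv_ref.sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        phases = [rng.random((20, 20, 20)) > 0.7 for _ in range(10)]  # toy GTV masks
        itv10 = itv_union(phases)
        itv2 = itv_union([phases[0], phases[5]])                      # 0% and 50% phases
        print(round(overlap_ratio(itv2, itv10), 3),
              round(relative_volume_ratio(itv2, itv10), 3))
    ```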

  19. Dynamic volume vs respiratory correlated 4DCT for motion assessment in radiation therapy simulation.

    PubMed

    Coolens, Catherine; Bracken, John; Driscoll, Brandon; Hope, Andrew; Jaffray, David

    2012-05-01

    Conventional (i.e., respiratory-correlated) 4DCT exploits the repetitive nature of breathing to provide an estimate of motion; however, it has limitations due to binning artifacts and irregular breathing in actual patient breathing patterns. The aim of this work was to evaluate the accuracy and image quality of a dynamic volume, CT approach (4D(vol)) using a 320-slice CT scanner to minimize these limitations, wherein entire image volumes are acquired dynamically without couch movement. This will be compared to the conventional respiratory-correlated 4DCT approach (RCCT). 4D(vol) CT was performed and characterized on an in-house, programmable respiratory motion phantom containing multiple geometric and morphological "tumor" objects over a range of regular and irregular patient breathing traces obtained from 3D fluoroscopy and compared to RCCT. The accuracy of volumetric capture and breathing displacement were evaluated and compared with the ground truth values and with the results reported using RCCT. A motion model was investigated to validate the number of motion samples needed to obtain accurate motion probability density functions (PDF). The impact of 4D image quality on this accuracy was then investigated. Dose measurements using volumetric and conventional scan techniques were also performed and compared. Both conventional and dynamic volume 4DCT methods were capable of estimating the programmed displacement of sinusoidal motion, but patient breathing is known to not be regular, and obvious differences were seen for realistic, irregular motion. The mean RCCT amplitude error averaged at 4 mm (max. 7.8 mm) whereas the 4D(vol) CT error stayed below 0.5 mm. Similarly, the average absolute volume error was lower with 4D(vol) CT. Under irregular breathing, the 4D(vol) CT method provides a close description of the motion PDF (cross-correlation 0.99) and is able to track each object, whereas the RCCT method results in a significantly different PDF from the

  20. TBA-like integral equations from quantized mirror curves

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi; Zakany, Szabolcs

    2016-03-01

    Quantizing the mirror curve of certain toric Calabi-Yau (CY) three-folds leads to a family of trace class operators. The resolvent function of these operators is known to encode topological data of the CY. In this paper, we show that in certain cases, this resolvent function satisfies a system of non-linear integral equations whose structure is very similar to the Thermodynamic Bethe Ansatz (TBA) systems. This can be used to compute spectral traces, both exactly and as a semiclassical expansion. As a main example, we consider the system related to the quantized mirror curve of local P2. According to a recent proposal, the traces of this operator are determined by the refined BPS indices of the underlying CY. We use our non-linear integral equations to test that proposal.

  1. On two mathematical problems of canonical quantization. IV

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1992-11-01

    A method for solving the problem of reconstructing a measure beginning with its logarithmic derivative is presented. The method completes that of solving the stochastic differential equation via Dirichlet forms proposed by S. Albeverio and M. Röckner. As a result one obtains the mathematical apparatus for the stochastic quantization. The apparatus is applied to prove the existence of the Feynman-Kac measure of the sine-Gordon and λφ²ⁿ/(1 + K²φ²ⁿ)-models. A synthesis of both mathematical problems of canonical quantization is obtained in the form of a second-order martingale problem for vacuum noise. It is shown that in stochastic mechanics the martingale problem is an analog of Newton's second law and enables us to find Nelson's stochastic trajectories without determining the wave functions.

  2. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translate to a large storage cost. However, there exists an asymmetry in upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation mainly during download of requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover original fine quantization bin indices, with either deterministic guarantee (lossless mode) or statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
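
    A stripped-down view of the re-quantization direction, and of the ambiguity that any reverse mapping must resolve, is sketched below for a single scalar DCT coefficient. The step sizes are arbitrary assumptions; the actual system described above operates on JPEG code blocks and resolves the ambiguity with learned dictionaries and graph priors rather than this scalar toy.

    ```python
    import numpy as np

    def requantize(fine_indices, q_fine, q_coarse):
        """Re-encode fine quantization bin indices with a coarser step
        (the storage-saving direction)."""
        values = fine_indices * q_fine                 # dequantize with the fine step
        return np.round(values / q_coarse).astype(int)

    def candidate_fine_bins(coarse_index, q_fine, q_coarse):
        """All fine bin indices consistent with one coarse bin -- the set a
        prior-based reverse mapping must choose from."""
        lo = (coarse_index - 0.5) * q_coarse
        hi = (coarse_index + 0.5) * q_coarse
        return list(range(int(np.ceil(lo / q_fine)), int(np.floor(hi / q_fine)) + 1))

    if __name__ == "__main__":
        fine = np.array([-7, -1, 0, 3, 12])            # toy DCT bin indices
        coarse = requantize(fine, q_fine=4, q_coarse=12)
        print(coarse, candidate_fine_bins(int(coarse[-1]), 4, 12))
    ```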

  3. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  4. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector, using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features into a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
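
    A minimal sketch of the kind of block-DCT "dominant frequency" feature extraction with uniform quantization described above is given below. The block size, the retained low-frequency coefficient subset, and the quantization step are illustrative assumptions; the DWT branch and the statistical measures of the actual method are omitted.

    ```python
    import numpy as np

    def dct2_matrix(n):
        """Orthonormal DCT-II basis matrix (n x n)."""
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0, :] = np.sqrt(1.0 / n)
        return m

    def block_dct_features(img, block=8, keep=6, step=4.0):
        """Toy feature vector: 2-D DCT of each block, keep a few low-frequency
        coefficients, then uniform-quantize them (parameters are illustrative)."""
        d = dct2_matrix(block)
        feats = []
        h, w = img.shape
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                coeffs = d @ img[r:r + block, c:c + block] @ d.T
                low = coeffs[:2, :3].ravel()[:keep]    # crude low-frequency subset
                feats.append(np.round(low / step))     # uniform quantization
        return np.concatenate(feats)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        face = rng.random((32, 32)) * 255.0            # stand-in for a face image
        print(block_dct_features(face).shape)
    ```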

  5. On Fock-space representations of quantized enveloping algebras related to noncommutative differential geometry

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Schlieker, M.

    1995-07-01

    In this paper, Fock-space representations (contragredient Verma modules) of the quantized enveloping algebras that are natural from the geometrical point of view are constructed explicitly. In order to do so, one starts from the Gauss decomposition of the quantum group and introduces the differential operators on the corresponding q-deformed flag manifold (regarded as a left comodule for the quantum group) by a projection to it of the right action of the quantized enveloping algebra on the quantum group. Finally, the representatives of the elements of the quantized enveloping algebra corresponding to the left-invariant vector fields on the quantum group are expressed as first-order differential operators on the q-deformed flag manifold.

  6. A novel fast helical 4D-CT acquisition technique to generate low-noise sorting artifact-free images at user-selected breathing phases.

    PubMed

    Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel

    2014-05-01

    To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Quantized phase coding and connected region labeling for absolute phase retrieval.

    PubMed

    Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian

    2016-12-12

    This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase-coding and connected region labeling. A specific code sequence is embedded into quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned with 3-digit-codes combining the current period and its neighbors. Wrapped phase, more than 36 periods, can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.

  8. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories between a codebook generated by the LBG algorithm and a training set. This associative network is named the EAM-codebook and represents a new codebook which is used in the next stage. The EAM-codebook establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of EAM using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. With respect to the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and a low demand on resources (system memory); image compression and quality results are presented.
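
    For orientation, the snippet below sketches a plain LBG/k-means-style codebook training loop and the nearest-codeword index assignment that the associative-memory recall stage is meant to replace. The codebook size, vector dimension, and initialization are arbitrary illustrative choices, not those of the paper.

    ```python
    import numpy as np

    def lbg_codebook(train, k=8, iters=20, seed=0):
        """Minimal LBG/k-means-style codebook training; returns the codebook
        and the last nearest-codeword assignment of the training vectors."""
        rng = np.random.default_rng(seed)
        codebook = train[rng.choice(len(train), k, replace=False)]
        idx = np.zeros(len(train), dtype=int)
        for _ in range(iters):
            # nearest-codeword assignment (the step an EAM recall would replace)
            idx = np.argmin(((train[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
            # centroid update, leaving empty cells unchanged
            for j in range(k):
                if np.any(idx == j):
                    codebook[j] = train[idx == j].mean(axis=0)
        return codebook, idx

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        blocks = rng.random((200, 16))       # toy 4x4 image blocks as vectors
        cb, labels = lbg_codebook(blocks)
        print(cb.shape, labels[:10])
    ```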

  9. q-Derivatives, quantization methods and q-algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Twarock, Reidun

    1998-12-15

    Using the example of Borel quantization on S¹, we discuss the relation between quantization methods and q-algebras. In particular, it is shown that a q-deformation of the Witt algebra with generators labeled by Z is realized by q-difference operators. This leads to a discrete quantum mechanics. Because of Z, the discretization is equidistant. As an approach to a non-equidistant discretization of quantum mechanics one can change the Witt algebra using not the number field Z as labels but a quadratic extension of Z characterized by an irrational number τ. This extension is denoted as the quasicrystal Lie algebra, because of its relation to one-dimensional quasicrystals. The q-deformation of this quasicrystal Lie algebra is discussed. It is pointed out that quasicrystal Lie algebras can be considered also as a 'deformed' Witt algebra with a 'deformation' of the labeling number field. Their application to the theory is discussed.

  10. Quantized circular photogalvanic effect in Weyl semimetals

    NASA Astrophysics Data System (ADS)

    de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.

    The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. We find that in a class of Weyl semimetals (e.g. SrSi2) and three-dimensional Rashba materials (e.g. doped Te) without inversion and mirror symmetries, the CPGE trace is effectively quantized in terms of the combination of fundamental constants e³/(h²cε₀) with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points near the Fermi surface, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.

  11. Quantized circular photogalvanic effect in Weyl semimetals

    NASA Astrophysics Data System (ADS)

    de Juan, Fernando; Grushin, Adolfo G.; Morimoto, Takahiro; Moore, Joel E.

    2017-07-01

    The circular photogalvanic effect (CPGE) is the part of a photocurrent that switches depending on the sense of circular polarization of the incident light. It has been consistently observed in systems without inversion symmetry and depends on non-universal material details. Here we find that in a class of Weyl semimetals (for example, SrSi2) and three-dimensional Rashba materials (for example, doped Te) without inversion and mirror symmetries, the injection contribution to the CPGE trace is effectively quantized in terms of the fundamental constants e, h, c and ε₀ with no material-dependent parameters. This is so because the CPGE directly measures the topological charge of Weyl points, and non-quantized corrections from disorder and additional bands can be small over a significant range of incident frequencies. Moreover, the magnitude of the CPGE induced by a Weyl node is relatively large, which enables the direct detection of the monopole charge with current techniques.

  12. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  13. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.

  14. An analogue of Weyl’s law for quantized irreducible generalized flag manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matassa, Marco, E-mail: marco.matassa@gmail.com, E-mail: mmatassa@math.uio.no

    2015-09-15

    We prove an analogue of Weyl’s law for quantized irreducible generalized flag manifolds. This is formulated in terms of a zeta function which, similarly to the classical setting, satisfies the following two properties: as a functional on the quantized algebra it is proportional to the Haar state and its first singularity coincides with the classical dimension. The relevant formulas are given for the more general case of compact quantum groups.

  15. A Heisenberg Algebra Bundle of a Vector Field in Three-Space and its Weyl Quantization

    NASA Astrophysics Data System (ADS)

    Binz, Ernst; Pods, Sonja

    2006-01-01

    In these notes we associate a natural Heisenberg group bundle Ha with a singularity free smooth vector field X = (id,a) on a submanifold M in a Euclidean three-space. This bundle yields naturally an infinite dimensional Heisenberg group HX∞. A representation of the C*-group algebra of HX∞ is a quantization. It causes a natural Weyl-deformation quantization of X. The influence of the topological structure of M on this quantization is encoded in the Chern class of a canonical complex line bundle inside Ha.

  16. New vertices and canonical quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Sergei

    2010-07-15

    We present two results on the recently proposed new spin foam models. First, we show how a (slightly modified) restriction on representations in the Engle-Pereira-Rovelli-Livine model leads to the appearance of the Ashtekar-Barbero connection, thus bringing this model even closer to loop quantum gravity. Second, we however argue that the quantization procedure used to derive the new models is inconsistent since it relies on the symplectic structure of the unconstrained BF theory.

  17. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed a few years back for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for the CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  18. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed a few years back for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for the CT image reconstruction. In this paper the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  19. Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment

    NASA Astrophysics Data System (ADS)

    Zeigler, Bernard P.; Lee, J. S.

    1998-08-01

    In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
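
    The core idea of generating state updates only at quantum-level crossings can be illustrated in a few lines. The sketch below is a generic scalar quantizer of this kind applied to a sampled trajectory, not the DEVS/HLA implementation described above, and the quantum size is an arbitrary illustrative choice.

    ```python
    import math

    def quantized_updates(samples, quantum=1.0):
        """Emit (index, quantized_value) pairs only when the signal crosses a
        quantum level, instead of transmitting every sample."""
        events = []
        last_level = None
        for i, x in enumerate(samples):
            level = int(x // quantum)          # which quantum band the value is in
            if level != last_level:
                events.append((i, level * quantum))
                last_level = level
        return events

    if __name__ == "__main__":
        trace = [3.0 * math.sin(t / 10.0) for t in range(100)]
        updates = quantized_updates(trace, quantum=1.0)
        print(len(trace), "samples ->", len(updates), "transmitted updates")
    ```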

  20. Polymer quantization of the Einstein-Rosen wormhole throat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2010-01-15

    We present a polymer quantization of spherically symmetric Einstein gravity in which the polymerized variable is the area of the Einstein-Rosen wormhole throat. In the classical polymer theory, the singularity is replaced by a bounce at a radius that depends on the polymerization scale. In the polymer quantum theory, we show numerically that the area spectrum is evenly spaced and in agreement with a Bohr-Sommerfeld semiclassical estimate, and this spectrum is not qualitatively sensitive to issues of factor ordering or boundary conditions except in the lowest few eigenvalues. In the limit of small polymerization scale we recover, within the numerical accuracy, the area spectrum obtained from a Schrödinger quantization of the wormhole throat dynamics. The prospects of recovering from the polymer throat theory a full quantum-corrected spacetime are discussed.

  1. Distance learning in discriminative vector quantization.

    PubMed

    Schneider, Petra; Biehl, Michael; Hammer, Barbara

    2009-10-01

    Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance corresponding to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned based on the given classification task such that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, a derivation of LVQ-like learning based on a (heuristic) cost function. In all cases, matrix adaptation allows a significant improvement of the classification accuracy. Interestingly, however, the principled behavior of the models with respect to prototype locations and extracted matrix dimensions shows several characteristic differences depending on the data sets.
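
    As a baseline for the variants discussed above, the sketch below implements plain LVQ1 with a fixed Euclidean metric (the relevance and matrix extensions learn the metric instead of fixing it). The learning rate, number of epochs, prototype initialization, and data are illustrative assumptions.

    ```python
    import numpy as np

    def lvq1_train(x, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
        """Plain LVQ1: attract the winning prototype if its label matches the
        sample, repel it otherwise."""
        rng = np.random.default_rng(seed)
        w = prototypes.astype(float).copy()
        for _ in range(epochs):
            for i in rng.permutation(len(x)):
                d = ((w - x[i]) ** 2).sum(axis=1)        # Euclidean distances
                j = int(np.argmin(d))                    # winning prototype
                sign = 1.0 if proto_labels[j] == y[i] else -1.0
                w[j] += sign * lr * (x[i] - w[j])
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        x = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        protos = np.array([[-0.5, -0.5], [0.5, 0.5]])
        print(lvq1_train(x, y, protos, np.array([0, 1])))
    ```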

  2. Effect of mid-scan breathing changes on quality of 4DCT using a commercial phase-based sorting algorithm.

    PubMed

    Noel, Camille E; Parikh, Parag J

    2011-05-01

    Though it is known that irregular breathing can introduce artifacts in commercial 4DCT, this has not been systematically explored. The purpose of this study is to investigate the effect of variations in basic parameters of the breathing wave on 4DCT imaging quality. A four-dimensional motion platform holding an acrylic sphere was scanned while moving in a trajectory modeled from a lung cancer patient. A bellows device was used as a respiratory surrogate, and the images were sorted by a commercial phase-based sorting algorithm. Motion during the first half of the scan was produced at a baseline trajectory with a consistent frequency and amplitude of 15 breaths per minute and 1 cm, peak to peak. The two parameters were then varied mid-scan to new frequency and amplitude values, with frequencies ranging from 7.5 to 22 bpm and amplitudes ranging from 0.5 to 1.5 cm. Image sets representing four respiratory phases were contoured. Each set was analyzed to compare centroid displacement, density homogeneity, and volumetric and geometric distortions of the imaged sphere. Undercoverage of the target ITV and overcoverage of healthy tissue was also evaluated. Changes in amplitude of 25% or more, with or without changes in frequency, consistently caused measurable distortions in shape, position, and density of the imaged sphere. Frequency changes over 50% showed a similar trend. This study suggests that basic breathing statistics can be used to quickly assess the quality of a 4DCT scan prior to image reconstruction. Such information can help give indication of the proper course of action when irregular breathing patterns are observed during CT scanning.

  3. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we've shown how a well-known data compression algorithm called Entropy-constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.

  4. Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina

    2016-09-01

    The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings as a result of this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a
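
    The mutual-information bookkeeping described above can be reproduced with a Monte Carlo plug-in estimate. The sketch below quantizes correlated Gaussian samples at an assumed -3 dB SNR into a sign bit and a magnitude bit at an assumed threshold; its outputs are illustrative only and are not meant to reproduce the paper's 0.1044/0.035 bit figures.

    ```python
    import numpy as np

    def mutual_information(a, b):
        """Plug-in estimate of I(A;B) in bits for two binary arrays."""
        joint = np.zeros((2, 2))
        np.add.at(joint, (a, b), 1)
        joint /= len(a)
        pa, pb = joint.sum(axis=1), joint.sum(axis=0)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])).sum())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, snr = 200_000, 10 ** (-3 / 10)              # SNR of -3 dB
        xa = rng.normal(size=n)                        # Alice's Gaussian samples
        xb = np.sqrt(snr) * xa + rng.normal(size=n)    # Bob's noisy observations
        t = 0.6                                        # assumed magnitude threshold
        sign_a, sign_b = (xa > 0).astype(int), (xb > 0).astype(int)
        mag_a = (np.abs(xa) > t).astype(int)
        mag_b = (np.abs(xb) > t * np.sqrt(snr + 1)).astype(int)
        print("sign-bit MI      ~", round(mutual_information(sign_a, sign_b), 4), "bits")
        print("magnitude-bit MI ~", round(mutual_information(mag_a, mag_b), 4), "bits")
    ```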

  5. Microstructural evolution during sintering of copper particles studied by laboratory diffraction contrast tomography (LabDCT).

    PubMed

    McDonald, S A; Holzner, C; Lauridsen, E M; Reischig, P; Merkle, A P; Withers, P J

    2017-07-12

    Pressureless sintering of loose or compacted granular bodies at elevated temperature occurs by a combination of particle rearrangement, rotation, local deformation and diffusion, and grain growth. Understanding of how each of these processes contributes to the densification of a powder body is still immature. Here we report a fundamental study coupling the crystallographic imaging capability of laboratory diffraction contrast tomography (LabDCT) with conventional computed tomography (CT) in a time-lapse study. We are able to follow and differentiate these processes non-destructively and in three-dimensions during the sintering of a simple copper powder sample at 1050 °C. LabDCT quantifies particle rotation (to <0.05° accuracy) and grain growth while absorption CT simultaneously records the diffusion and deformation-related morphological changes of the sintering particles. We find that the rate of particle rotation is lowest for the more highly coordinated particles and decreases during sintering. Consequently, rotations are greater for surface breaking particles than for more highly coordinated interior ones. Both rolling (cooperative) and sliding particle rotations are observed. By tracking individual grains the grain growth/shrinkage kinetics during sintering are quantified grain by grain for the first time. Rapid, abnormal grain growth is observed for one grain while others either grow or are consumed more gradually.

  6. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significant link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  7. A VLSI implementation of DCT using pass transistor technology

    NASA Technical Reports Server (NTRS)

    Kamath, S.; Lynn, Douglas; Whitaker, Sterling

    1992-01-01

    A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent is achieved after several iterations of simulation and resizing.

  8. Instruments at the Lowell Observatory Discovery Channel Telescope (DCT)

    NASA Astrophysics Data System (ADS)

    Jacoby, George H.; Bida, Thomas A.; Fischer, Debra; Horch, Elliott; Kutyrev, Alexander; Mace, Gregory N.; Massey, Philip; Roe, Henry G.; Prato, Lisa A.

    2017-01-01

    The Lowell Observatory Discovery Channel Telescope (DCT) has been in full science operation for 2 years (2015 and 2016). Five instruments have been commissioned during that period, and two additional instruments are planned for 2017. These include:
    + Large Monolithic Imager (LMI) - a CCD imager (12.6 arcmin FoV)
    + DeVeny - a general purpose optical spectrograph (2 arcmin slit length, 10 grating choices)
    + NIHTS - a low resolution (R=160) YJHK spectrograph (1.3 arcmin slit)
    + DSSI - a two-channel optical speckle imager (5 arcsec FoV)
    + IGRINS - a high resolution (45,000) HK spectrograph, on loan from the University of Texas.
    In the upcoming year, instruments will be delivered from the University of Maryland (RIMAS - a YJHK imager/spectrograph) and from Yale University (EXPRES - a very high resolution stabilized optical echelle for PRV). Each of these instruments will be described, along with their primary science goals.

  9. Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models

    NASA Astrophysics Data System (ADS)

    Pandey, Sachin; Pal, Sridip; Banerjee, Narayan

    2018-06-01

    The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, in so far as Brans-Dicke theory is concerned, survives quantization of cosmological models arising as solutions to the Brans-Dicke theory. We work with the Wheeler-DeWitt quantization scheme and take up quite a few anisotropic cosmological models as examples. We effectively show that the transformation from the Jordan to the Einstein frame is a canonical one and hence the two frames furnish equivalent descriptions of the same physical scenario.

  10. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, where two kinds of integral sliding surfaces are constructed. One is a continuous-state-dependent surface, used for sliding mode stability analysis, and the other is a quantization-state-dependent surface, which is used for ISM controller design. A scheme that combines the adaptive ISM controller with a quantization parameter adjustment strategy is then proposed. By utilizing the H∞ control analytical technique, it is shown that, once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  11. Table look-up estimation of signal and noise parameters from quantized observables

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1986-01-01

    A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.

  12. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.

  13. Quantization of Poisson Manifolds from the Integrability of the Modular Function

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.

    2014-10-01

    We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to a family of Poisson structures seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU(n+1). We show that a bihamiltonian system on such a space defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the properties needed for applying Renault's theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.

  14. BFV-BRST quantization of two-dimensional supergravity

    NASA Astrophysics Data System (ADS)

    Fujiwara, T.; Igarashi, Y.; Kuriki, R.; Tabei, T.

    1996-01-01

    Two-dimensional supergravity theory is quantized as an anomalous gauge theory. In the Batalin-Fradkin (BF) formalism, the anomaly-canceling super-Liouville fields are introduced to identify the original second-class constrained system with a gauge-fixed version of a first-class system. The BFV-BRST quantization applies to formulate the theory in the most general class of gauges. A local effective action constructed in the configuration space contains two super-Liouville actions; one is a noncovariant but local functional written only in terms of two-dimensional supergravity fields, and the other contains the super-Liouville fields canceling the super-Weyl anomaly. Auxiliary fields for the Liouville and the gravity supermultiplets are introduced to make the BRST algebra close off-shell. Inclusion of them turns out to be essentially important especially in the super-light-cone gauge fixing, where the supercurvature equations (∂³₋g₊₊ = ∂²₋χ₊₊ = 0) are obtained as a result of BRST invariance of the theory. Our approach reveals the origin of the OSp(1,2) current algebra symmetry in a transparent manner.

  15. Clinical Validation of 4-Dimensional Computed Tomography Ventilation With Pulmonary Function Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brennan, Douglas; Schubert, Leah; Diot, Quentin

    Purpose: A new form of functional imaging has been proposed in the form of 4-dimensional computed tomography (4DCT) ventilation. Because 4DCTs are acquired as part of routine care for lung cancer patients, calculating ventilation maps from 4DCTs provides spatial lung function information without added dosimetric or monetary cost to the patient. Before 4DCT-ventilation is implemented, it needs to be clinically validated. Pulmonary function tests (PFTs) provide a clinically established way of evaluating lung function. The purpose of our work was to perform a clinical validation by comparing 4DCT-ventilation metrics with PFT data. Methods and Materials: Ninety-eight lung cancer patients with pretreatment 4DCT and PFT data were included in the study. Pulmonary function test metrics used to diagnose obstructive lung disease were recorded: forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity. Four-dimensional CT data sets and spatial registration were used to compute 4DCT-ventilation images using a density change–based and a Jacobian-based model. The ventilation maps were reduced to single metrics intended to reflect the degree of ventilation obstruction. Specifically, we computed the coefficient of variation (SD/mean) and the ventilation V20 (volume of lung with ≤20% ventilation), and correlated the ventilation metrics with PFT data. Regression analysis was used to determine whether 4DCT-ventilation data could predict normal versus abnormal lung function using PFT thresholds. Results: Correlation coefficients comparing 4DCT-ventilation with PFT data ranged from 0.63 to 0.72, with the best agreement between FEV1 and the coefficient of variation. Four-dimensional CT ventilation metrics were able to significantly delineate between clinically normal versus abnormal PFT results. Conclusions: Validation of 4DCT ventilation with clinically relevant metrics is essential. We demonstrate good global agreement between PFTs and 4DCT-ventilation, indicating
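
    A small sketch of the two summary metrics named above, assuming a precomputed ventilation map already restricted to lung voxels; the synthetic values and the reading of V20 as the lung fraction at or below 20% of mean ventilation are assumptions, since definitions vary.

      import numpy as np

      def ventilation_metrics(vent):
          vent = np.asarray(vent, dtype=float)
          cov = vent.std() / vent.mean()               # coefficient of variation (SD/mean)
          v20 = np.mean(vent <= 0.2 * vent.mean())     # fraction of lung volume with low ventilation
          return cov, v20

      rng = np.random.default_rng(2)
      lung_voxels = rng.gamma(shape=2.0, scale=0.5, size=500_000)   # synthetic ventilation map
      cov, v20 = ventilation_metrics(lung_voxels)
      print(f"coefficient of variation {cov:.2f}, V20 {100 * v20:.1f}% of lung volume")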

  16. The canonical quantization of chaotic maps on the torus

    NASA Astrophysics Data System (ADS)

    Rubin, Ron Shai

    In this thesis, a quantization method for classical maps on the torus is presented. The quantum algebra of observables is defined as the quantization of measurable functions on the torus with generators exp(2πix) and exp(2πip). The Hilbert space we use remains the infinite-dimensional L²(ℝ, dx). The dynamics is given by a unitary quantum propagator such that as ℏ → 0, the classical dynamics is returned. We construct such a quantization for the Kronecker map, the cat map, the baker's map, the kick map, and the Harper map. For the cat map, we find for the propagator on the plane the same integral kernel conjectured in (HB) using semiclassical methods. We also define a quantum 'integral over phase space' as a trace over the quantum algebra. Using this definition, we proceed to define quantum ergodicity and mixing for maps on the torus. We prove that the quantum cat map and Kronecker map are both ergodic, but only the cat map is mixing, true to its classical origins. For Planck's constant satisfying the integrality condition h = 1/N, with N ∈ ℤ₊, we construct an explicit isomorphism between L²(ℝ, dx) and the Hilbert space of sections of an N-dimensional vector bundle over a θ-torus T² of boundary conditions. The basis functions are distributions in L²(ℝ, dx), given by an infinite comb of Dirac δ-functions. In Bargmann space these distributions take on the form of Jacobi ϑ-functions. Transformations from the position to the momentum representation can be implemented via a finite N-dimensional discrete Fourier transform. With the θ-torus, we provide a connection between the finite-dimensional quantum maps given in the physics literature and the canonical quantization presented here and found in the language of pseudo-differential operators elsewhere in mathematics circles. Specifically, at a fixed point of the dynamics on the θ-torus, we return a finite-dimensional matrix propagator. We present this connection explicitly for several

  17. TU-G-BRA-03: Predicting Radiation Therapy Induced Ventilation Changes Using 4DCT Jacobian Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, T; Du, K; Bayouth, J

    2015-06-15

    Purpose: Longitudinal changes in lung ventilation following radiation therapy can be mapped using four-dimensional computed tomography (4DCT) and image registration. This study aimed to predict ventilation changes caused by radiation therapy (RT) as a function of pre-RT ventilation and delivered dose. Methods: 4DCT images were acquired before and 3 months after radiation therapy for 13 subjects. Jacobian ventilation maps were calculated from the 4DCT images, warped to a common coordinate system, and a Jacobian ratio map was computed voxel-by-voxel as the ratio of post-RT to pre-RT Jacobian calculations. A leave-one-out method was used to build a response model for each subject: post-RT to pre-RT Jacobian ratio data and dose distributions of 12 subjects were applied to the subject’s pre-RT Jacobian map to predict the post-RT Jacobian. The predicted Jacobian map was compared to the actual post-RT Jacobian map to evaluate efficacy. Within this cohort, 8 subjects had repeat pre-RT scans that were compared as a reference for no ventilation change. Maps were compared using gamma pass rate criteria of 2 mm distance-to-agreement and 6% ventilation difference. Gamma pass rates were compared using paired t-tests to determine significant differences. Further analysis masked non-radiation-induced changes by excluding voxels below specified dose thresholds. Results: Visual inspection demonstrates that the predicted post-RT ventilation map is similar to the actual map in magnitude and distribution. Quantitatively, the percentages of voxels in agreement when excluding voxels receiving below specified doses are: 74%/20Gy, 73%/10Gy, 73%/5Gy, and 71%/0Gy. By comparison, repeat scans produced 73% of voxels within the 6%/2mm criteria. The agreement of the actual post-RT maps with the predicted maps was significantly better than agreement with pre-RT maps (p<0.02). Conclusion: This work validates that significant changes to ventilation post-RT can be predicted. The differences between
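
    The voxelwise comparison described above can be sketched roughly as follows: a Jacobian ratio map (post-RT divided by pre-RT) and a pass rate that accepts a voxel if a value within 6% is found inside a 2 mm neighborhood. This is a crude stand-in for a full gamma analysis; the voxel spacing, wrap-around neighborhood search, and synthetic maps are assumptions.

      import numpy as np

      def jacobian_ratio(post_jac, pre_jac, eps=1e-6):
          return post_jac / np.maximum(pre_jac, eps)

      def pass_rate(evaluated, reference, voxel_mm=1.0, dta_mm=2.0, tol=0.06):
          # A voxel passes if some reference voxel within dta_mm agrees within tol.
          radius = max(1, int(round(dta_mm / voxel_mm)))
          passed = np.zeros(evaluated.shape, dtype=bool)
          for dx in range(-radius, radius + 1):
              for dy in range(-radius, radius + 1):
                  for dz in range(-radius, radius + 1):
                      shifted = np.roll(reference, (dx, dy, dz), axis=(0, 1, 2))
                      diff = np.abs(evaluated - shifted) / np.maximum(np.abs(shifted), 1e-6)
                      passed |= diff <= tol
          return passed.mean()

      rng = np.random.default_rng(3)
      pre = 1.0 + 0.05 * rng.standard_normal((48, 48, 48))           # synthetic Jacobian maps
      post_actual = pre * (1.0 + 0.03 * rng.standard_normal(pre.shape))
      post_predicted = pre * (1.0 + 0.03 * rng.standard_normal(pre.shape))
      actual_ratio = jacobian_ratio(post_actual, pre)
      predicted_ratio = jacobian_ratio(post_predicted, pre)
      print(f"pass rate: {100 * pass_rate(predicted_ratio, actual_ratio):.1f}%")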

  18. Quantization ambiguities and bounds on geometric scalars in anisotropic loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Singh, Parampreet; Wilson-Ewing, Edward

    2014-02-01

    We study quantization ambiguities in loop quantum cosmology that arise for space-times with non-zero spatial curvature and anisotropies. Motivated by lessons from different possible loop quantizations of the closed Friedmann-Lemaître-Robertson-Walker cosmology, we find that using open holonomies of the extrinsic curvature, which due to gauge-fixing can be treated as a connection, leads to the same quantum geometry effects that are found in spatially flat cosmologies. More specifically, in contrast to the quantization based on open holonomies of the Ashtekar-Barbero connection, the expansion and shear scalars in the effective theories of the Bianchi type II and Bianchi type IX models have upper bounds, and these are in exact agreement with the bounds found in the effective theories of the Friedmann-Lemaître-Robertson-Walker and Bianchi type I models in loop quantum cosmology. We also comment on some ambiguities present in the definition of inverse triad operators and their role.

  19. [Validation of an improved Demons deformable registration algorithm and its application in re-contouring in 4D-CT].

    PubMed

    Zhen, Xin; Zhou, Ling-hong; Lu, Wen-ting; Zhang, Shu-xu; Zhou, Lu

    2010-12-01

    To validate the efficiency and accuracy of an improved Demons deformable registration algorithm and evaluate its application in contour recontouring in 4D-CT. To increase the additional Demons force and reallocate the bilateral forces so as to accelerate convergence, we propose a novel energy function as the similarity measure and utilize a BFGS method for optimization, which avoids having to specify the number of iterations. Mathematically transformed deformable CT images and a home-made deformable phantom were used to validate the accuracy of the improved algorithm, and its effectiveness for contour recontouring was tested. The improved algorithm showed relatively high registration accuracy and speed when compared with the classic Demons algorithm and an optical-flow-based method. Visual inspection showed that the positions and shapes of the deformed contours agreed well with the physician-drawn contours. Deformable registration is a key technique in 4D-CT, and this improved Demons algorithm for contour recontouring can significantly reduce the workload of the physicians. The registration accuracy of this method proves to be sufficient for clinical needs.
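
    For orientation, here is a minimal 2-D sketch of the classic Demons update (Thirion's force with Gaussian regularization of the displacement field); the improved algorithm described above adds a bilateral force term, a novel energy function, and BFGS optimization, none of which are reproduced here.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates, shift

      def demons_register(fixed, moving, n_iter=100, sigma=1.5):
          fixed = fixed.astype(float)
          moving = moving.astype(float)
          gy, gx = np.gradient(fixed)                    # gradient of the static image
          ys, xs = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
          uy = np.zeros_like(fixed)
          ux = np.zeros_like(fixed)
          for _ in range(n_iter):
              warped = map_coordinates(moving, [ys + uy, xs + ux], order=1, mode="nearest")
              diff = warped - fixed
              denom = gx ** 2 + gy ** 2 + diff ** 2
              denom[denom == 0] = 1.0
              # Thirion's demons force, followed by Gaussian smoothing of the field.
              ux = gaussian_filter(ux - diff * gx / denom, sigma)
              uy = gaussian_filter(uy - diff * gy / denom, sigma)
          return uy, ux

      # Toy usage: approximately recover a known 2-pixel shift of a smooth image.
      rng = np.random.default_rng(4)
      fixed = gaussian_filter(rng.standard_normal((64, 64)), 3.0)
      moving = shift(fixed, (2.0, 0.0), order=1, mode="nearest")
      uy, ux = demons_register(fixed, moving)
      print(f"mean recovered row displacement: {uy.mean():.2f} (true shift 2.0)")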

  20. Novel 3D-CT evaluation of carotid stent volume: greater chronological expansion of stents in patients with vulnerable plaques.

    PubMed

    Itami, Hisakazu; Tokunaga, Koji; Okuma, Yu; Hishikawa, Tomohito; Sugiu, Kenji; Ida, Kentaro; Date, Isao

    2013-09-01

    Although self-expanding carotid stents may dilate gradually, the degrees of residual stenosis have been quantified by the NASCET criteria, which are too simple to reflect the configuration of the stented artery. We measured the volumes of the stent lumens chronologically by 3D-CT in patients after carotid artery stenting (CAS), and analyzed the correlations between the volume change and medical factors. Fourteen patients with carotid artery stenosis were treated using self-expanding, open-cell stents. All patients underwent preoperative plaque MRI (magnetization-prepared rapid acquisition gradient-echo, MPRAGE) and chronological 3D-CT examinations of their stents immediately after their placement and 1 day, 1 week, and 1 month after the procedure. The volume of the stent lumen was measured using a 3D workstation. The correlations between stent volume and various factors, including the presence of underlying diseases, plaque characteristics, and the results of the CAS procedure, were analyzed. Stent volume gradually increased in each case and had increased by 1.04-1.55 (mean, 1.25)-fold at 1 postoperative month. The presence of underlying medical diseases, plaque length, the degree of residual stenosis immediately after CAS, and plaque calcification did not have an impact on the change in stent volume. On the other hand, the stent volume increase was significantly larger in the patients with vulnerable plaques that demonstrated high MPRAGE signal intensity (P < 0.05). A 3D-CT examination is useful for precisely measuring stent volume. Self-expanding stents in carotid arteries containing vulnerable plaques expand significantly more than those without such plaques during the follow-up period.

  1. Exact quantization of Einstein-Rosen waves coupled to massless scalar matter.

    PubMed

    Barbero G, J Fernando; Garay, Iñaki; Villaseñor, Eduardo J S

    2005-07-29

    We show in this Letter that gravity coupled to a massless scalar field with full cylindrical symmetry can be exactly quantized by an extension of the techniques used in the quantization of Einstein-Rosen waves. This system provides a useful test bed to discuss a number of issues in quantum general relativity, such as the emergence of the classical metric, microcausality, and large quantum gravity effects. It may also provide an appropriate framework to study gravitational critical phenomena from a quantum point of view, issues related to black hole evaporation, and the consistent definition of test fields and particles in quantum gravity.

  2. Covariant scalar representation of iosp(d,2/2) and quantization of the scalar relativistic particle

    NASA Astrophysics Data System (ADS)

    Jarvis, P. D.; Tsohantjis, I.

    1996-03-01

    A covariant scalar representation of iosp(d,2/2) is constructed and analysed in comparison with existing BFV-BRST methods for the quantization of the scalar relativistic particle. It is found that, with appropriately defined wavefunctions, this iosp(d,2/2) produced representation can be identified with the state space arising from the canonical BFV-BRST quantization of the modular-invariant, unoriented scalar particle (or antiparticle) with admissible gauge-fixing conditions. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.

  3. Fractional quantization of the magnetic flux in cylindrical unconventional superconductors.

    PubMed

    Loder, F; Kampf, A P; Kopp, T

    2013-07-26

    The magnetic flux threading a conventional superconducting ring is typically quantized in units of Φ0=hc/2e. The factor of 2 in the denominator of Φ0 originates from the existence of two different types of pairing states with minima of the free energy at even and odd multiples of Φ0. Here we show that spatially modulated pairing states exist with energy minima at fractional flux values, in particular, at multiples of Φ0/2. In such states, condensates with different center-of-mass momenta of the Cooper pairs coexist. The proposed mechanism for fractional flux quantization is discussed in the context of cuprate superconductors, where hc/4e flux periodicities were observed.

  4. Cascade Error Projection with Low Bit Weight Quantization for High Order Correlation Data

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Daud, Taher

    1998-01-01

    In this paper, we reinvestigate the chaotic time series prediction problem using a neural network approach. The nature of this problem is such that the data sequences are never repeated, but rather lie in a chaotic region. However, these data sequences are correlated between past, present, and future data in high order. We use the Cascade Error Projection (CEP) learning algorithm to capture the high-order correlation between past and present data to predict future data under limited weight quantization constraints. This helps to predict future information that provides better estimation in time for an intelligent control system. In our earlier work, it was shown that CEP can sufficiently learn the 5-8 bit parity problem with 4 or more bits, and the color segmentation problem with 7 or more bits, of weight quantization. In this paper, we demonstrate that chaotic time series can be learned and generalized well with as low as 4-bit weight quantization using round-off and truncation techniques. The results show that generalization suffers less as more weight-quantization bits are available, and that error surfaces with the round-off technique are more symmetric around zero than error surfaces with the truncation technique. This study suggests that CEP is an implementable learning technique for hardware consideration.
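
    A hedged sketch of the two weight-quantization schemes compared above, round-off versus truncation to a small number of bits; the fixed-point range and the uniformly distributed stand-in weights are assumptions, and no network is trained here.

      import numpy as np

      def quantize_weights(w, n_bits=4, w_max=1.0, mode="round"):
          # Signed fixed-point grid with 2**(n_bits - 1) steps per unit range;
          # "round" keeps the nearest level, "truncate" drops the fraction
          # (and is therefore biased toward zero).
          step = w_max / (2 ** (n_bits - 1))
          scaled = np.clip(w, -w_max, w_max) / step
          levels = np.round(scaled) if mode == "round" else np.trunc(scaled)
          return levels * step

      rng = np.random.default_rng(5)
      w = rng.uniform(-1, 1, size=10_000)                # stand-in for learned weights
      for mode in ("round", "truncate"):
          err = quantize_weights(w, n_bits=4, mode=mode) - w
          print(f"{mode:8s}: mean error {err.mean():+.4f}, "
                f"RMS error {np.sqrt(np.mean(err ** 2)):.4f}")

    The round-off error comes out roughly symmetric around zero while the truncation error is biased, which is consistent with the error-surface observation quoted above.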

  5. TU-G-BRA-04: Changes in Regional Lung Function Measured by 4D-CT Ventilation Imaging for Thoracic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, Y; Kadoya, N; Kabus, S

    Purpose: To test the hypothesis that 4D-CT ventilation imaging can show the known effects of radiotherapy on lung function: (1) radiation-induced ventilation reductions, and (2) ventilation increases caused by tumor regression. Methods: Repeat 4D-CT scans (pre-, mid- and/or post-treatment) were acquired prospectively for 11 thoracic cancer patients in an IRB-approved clinical trial. A ventilation image for each time point was created using deformable image registration and the Hounsfield unit (HU)-based or Jacobian-based metric. The 11 patients were divided into two subgroups based on tumor volume reduction using a threshold of 5 cm³. To quantify radiation-induced ventilation reduction, six patients who showed a small tumor volume reduction (<5 cm³) were analyzed for dose-response relationships. To investigate ventilation increase caused by tumor regression, two of the other five patients were analyzed to compare ventilation changes in the lung lobes affected and unaffected by the tumor. The remaining three patients were excluded because there were no unaffected lobes. Results: Dose-dependent reductions of HU-based ventilation were observed in a majority of the patient-specific dose-response curves and in the population-based dose-response curve, whereas no clear relationship was seen for Jacobian-based ventilation. The post-treatment population-based dose-response curve of HU-based ventilation demonstrated average ventilation reductions of 20.9±7.0% at 35–40 Gy (equivalent dose in 2-Gy fractions, EQD2), and 40.6±22.9% at 75–80 Gy EQD2. Remarkable ventilation increases in the affected lobes were observed for the two patients who showed an average tumor volume reduction of 37.1 cm³ and re-opening airways. The mid-treatment increase in HU-based ventilation of patient 3 was 100.4% in the affected lobes, which was considerably greater than 7.8% in the unaffected lobes. Conclusion: This study has demonstrated that 4D-CT ventilation

  6. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
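
    The following sketch illustrates the wavelet/scalar-quantization idea only (it is not the FBI WSQ codec): a 2-D CDF 9/7 ("bior4.4") decomposition with PyWavelets, uniform scalar quantization of each subband with a level-dependent step, and reconstruction. The step-size schedule is an assumption.

      import numpy as np
      import pywt

      WAVELET, LEVELS, BASE_STEP = "bior4.4", 4, 4.0

      def wsq_like_quantize(image):
          coeffs = pywt.wavedec2(image.astype(float), WAVELET, level=LEVELS)
          out = [np.round(coeffs[0] / BASE_STEP)]              # approximation band
          for depth, bands in enumerate(coeffs[1:], start=1):
              step = BASE_STEP * 2.0 ** (depth - 1)            # finer detail, coarser step
              out.append(tuple(np.round(b / step) for b in bands))
          return out

      def wsq_like_reconstruct(quantized):
          coeffs = [quantized[0] * BASE_STEP]
          for depth, bands in enumerate(quantized[1:], start=1):
              step = BASE_STEP * 2.0 ** (depth - 1)
              coeffs.append(tuple(b * step for b in bands))
          return pywt.waverec2(coeffs, WAVELET)

      rng = np.random.default_rng(6)
      img = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in image
      rec = wsq_like_reconstruct(wsq_like_quantize(img))[:128, :128]
      mse = np.mean((img - rec) ** 2)
      print(f"PSNR {10 * np.log10(255.0 ** 2 / mse):.1f} dB")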

  7. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case.

  8. Quantization of geometric phase with integer and fractional topological characterization in a quantum Ising chain with long-range interaction.

    PubMed

    Sarkar, Sujit

    2018-04-12

    An attempt is made to study and understand the behavior of quantization of the geometric phase of a quantum Ising chain with long-range interaction. We show the existence of integer and fractional topological characterizations for this model Hamiltonian, with different quantization conditions and different quantized values of the geometric phase. The quantum critical lines behave differently from the perspective of topological characterization. The results of duality and its relation to the topological quantization are presented here. The symmetry study for this model Hamiltonian is also presented. Our results indicate that the Zak phase is not the proper physical parameter to describe the topological characterization of systems with long-range interaction. We also present quite a few exact solutions with physical explanations. Finally, we present the relation between duality, symmetry, and topological characterization. Our work provides a new perspective on topological quantization.

  9. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization.

    PubMed

    Zhang, Lu; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Popov, Sergei; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia

    2018-04-01

    We propose a spectrally efficient digitized radio-over-fiber (D-RoF) system that groups highly correlated neighboring samples of the analog signals into multidimensional vectors, where the k-means clustering algorithm is adopted for adaptive quantization. A 30 Gbit/s D-RoF system is experimentally demonstrated to validate the proposed scheme, reporting a carrier aggregation of up to 40 100-MHz orthogonal frequency division multiplexing (OFDM) channels with a quadrature amplitude modulation (QAM) order of 4 and an aggregation of 10 100-MHz OFDM channels with a QAM order of 16384. Equivalent common public radio interface rates from 37 to 150 Gbit/s are supported. Besides, an error vector magnitude (EVM) of 8% is achieved with 4 quantization bits, and the EVM can be further reduced to 1% by increasing the number of quantization bits to 7. Compared with conventional pulse-code-modulation-based D-RoF systems, the proposed D-RoF system improves the signal-to-noise ratio by up to ∼9 dB and greatly reduces the EVM, given the same number of quantization bits.
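
    A minimal sketch of the sample-grouping idea under simplifying assumptions: a synthetic Gaussian waveform stands in for the OFDM signal, and the vector dimension, bit budget, and SciPy k-means training are illustrative choices rather than the authors' setup.

      import numpy as np
      from scipy.cluster.vq import kmeans2, vq

      def kmeans_vector_quantize(signal, dim=2, bits_per_sample=3):
          n_codewords = 2 ** (bits_per_sample * dim)     # same bit budget per sample
          vecs = signal[: len(signal) // dim * dim].reshape(-1, dim)
          codebook, _ = kmeans2(vecs, n_codewords, minit="++")
          labels, _ = vq(vecs, codebook)
          return codebook[labels].reshape(-1)

      rng = np.random.default_rng(7)
      waveform = rng.standard_normal(40_000)             # stand-in for the sampled signal
      rec = kmeans_vector_quantize(waveform)
      noise = waveform[: len(rec)] - rec
      snr_db = 10 * np.log10(np.mean(waveform[: len(rec)] ** 2) / np.mean(noise ** 2))
      print(f"multidimensional (k-means) quantization SNR: {snr_db:.1f} dB")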

  10. A consistent covariant quantization of the Brink-Schwarz superparticle

    NASA Astrophysics Data System (ADS)

    Eisenberg, Yeshayahu

    1992-02-01

    We perform the covariant quantization of the ten-dimensional Brink-Schwarz superparticle by reducing it to a system whose constraints are all first class, covariant and have only two levels of reducibility. Research supported by the Rothschild Fellowship.

  11. Novel properties of the q-analogue quantized radiation field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1993-01-01

    The 'classical limit' of the q-analog quantized radiation field is studied paralleling conventional quantum optics analyses. The q-generalizations of the phase operator of Susskind and Glogower and that of Pegg and Barnett are constructed. Both generalizations and their associated number-phase uncertainty relations are manifestly q-independent in the |n⟩_q number basis. However, in the q-coherent state |z⟩_q basis, the variance of the generic electric field, (ΔE)², is found to be increased by a factor λ(z), where λ(z) > 1 if q ≠ 1. At large amplitudes, the amplitude itself would be quantized if the available resolution of unity for the q-analog coherent states is accepted in the formulation. These consequences are remarkable versus the conventional q = 1 limit.

  12. Can The Periods of Some Extra-Solar Planetary Systems be Quantized?

    NASA Astrophysics Data System (ADS)

    El Fady Morcos, Abd

    A simple formula was derived before by Morcos (2013) to relate the quantum numbers of planetary systems and their periods. This formula applies well to the solar system planets and to some extra-solar planets of stars with approximately the same mass as the Sun. It has been used to estimate the periods of some extra-solar planets of known quantum numbers; the quantum numbers used were calculated previously by other authors. A comparison between the observed periods and those estimated from the given formula has been made, and the differences between the observed and calculated periods for the extra-solar systems have been calculated and tabulated. The error is found to be in the range of 10%. The same formula has also been used to find the quantum numbers of some exo-planets with known periods. Keywords: Quantization; Periods; Extra-Planetary; Extra-Solar Planet. REFERENCES: [1] Agnese, A. G. and Festa, R., "Discretization on the Cosmic Scale Inspired from the Old Quantum Mechanics," 1998, http://arxiv.org/abs/astro-ph/9807186. [2] Agnese, A. G. and Festa, R., "Discretizing ups-Andromedae Planetary System," 1999, http://arxiv.org/abs/astro-ph/9910534. [3] Barnothy, J. M., "The Stability of the Solar System and of Small Stellar Systems," Proceedings of IAU Symposium 62, Warsaw, 5-8 September 1973, pp. 23-31. [4] Morcos, A. B., "Confrontation between Quantized Periods of Some Extra-Solar Planetary Systems and Observations," International Journal of Astronomy and Astrophysics, Vol. 3, 2013, pp. 28-32. [5] Nottale, L., "Fractal Space-Time and Microphysics, Towards a Theory of Scale Relativity," World Scientific, London, 1994. [6] Nottale, L., "Scale-Relativity and Quantization of Extra-Solar Planetary Systems," Astronomy & Astrophysics, Vol. 315, 1996, pp. L9-L12. [7] Nottale, L., Schumacher, G. and Gay, J., "Scale-Relativity and Quantization of the Solar Systems," Astronomy & Astrophysics Letters, Vol. 322, 1997, pp. 1018-10 [8

  13. Theory of the Quantized Hall Conductance in Periodic Systems: a Topological Analysis.

    NASA Astrophysics Data System (ADS)

    Czerwinski, Michael Joseph

    The integral quantization of the Hall conductance in two-dimensional periodic systems is investigated from a topological point of view. Attention is focused on the contributions from the electronic sub-bands which arise from perturbed Landau levels. After reviewing the theoretical work leading to the identification of the Hall conductance as a topological quantum number, both a determination and interpretation of these quantized values for the sub-band conductances is made. It is shown that the Hall conductance of each sub-band can be regarded as the sum of two terms which will be referred to as classical and nonclassical. Although each of these contributions individually leads to a fractional conductance, the sum of these two contributions does indeed yield an integer. These integral conductances are found to be given by the solution of a simple Diophantine equation which depends on the periodic perturbation. A connection between the quantized value of the Hall conductance and the covering of real space by the zeroes of the sub-band wavefunctions allows for a determination of these conductances under more general potentials. A method is described for obtaining the conductance values from only those states bordering the Brillouin zone, and not the states in its interior. This method is demonstrated to give Hall conductances in agreement with those obtained from the Diophantine equation for the sinusoidal potential case explored earlier. Generalizing a simple gauge invariance argument from real space to k-space, a k-space 'vector potential' is introduced. This allows for an explicit identification of the Hall conductance with the phase winding number of the sub-band wavefunction around the Brillouin zone. The previously described division of the Hall conductance into classical and nonclassical contributions is in this way made more rigorous; based on periodicity considerations alone, these terms are identified as the winding numbers associated with (i) the basis

  14. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    PubMed

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
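
    A simplified 1-D sketch of the effect studied above: a cosine fringe pattern with a known phase modulation is quantized to a given camera bit depth and demodulated by Fourier fringe analysis, and the RMS phase error is reported for several bit depths. The fringe parameters and sideband window are assumptions.

      import numpy as np

      def fourier_fringe_phase(intensity, band):
          spectrum = np.fft.fft(intensity)
          filtered = np.zeros_like(spectrum)
          filtered[band[0]:band[1]] = spectrum[band[0]:band[1]]   # keep the +f carrier sideband
          return np.unwrap(np.angle(np.fft.ifft(filtered)))

      n = 4096
      x = np.arange(n)
      carrier = 2 * np.pi * 64 * x / n                    # 64 fringes across the field
      true_phase = 1.5 * np.sin(2 * np.pi * x / n)
      fringes = 0.5 + 0.4 * np.cos(carrier + true_phase)  # ideal analog fringe signal

      for bits in (4, 6, 8, 10):
          q = np.round(fringes * (2 ** bits - 1)) / (2 ** bits - 1)   # quantized camera output
          demod = fourier_fringe_phase(q, (32, 96)) - carrier
          err = (demod - demod.mean()) - (true_phase - true_phase.mean())
          print(f"{bits:2d}-bit camera: RMS phase error {np.sqrt(np.mean(err ** 2)):.4f} rad")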

  15. Effect of temperature degeneracy and Landau quantization on drift solitary waves and double layers

    NASA Astrophysics Data System (ADS)

    Shan, Shaukat Ali; Haque, Q.

    2018-01-01

    The linear and nonlinear drift ion acoustic waves have been investigated in an inhomogeneous, magnetized, dense degenerate, and quantized magnetic field plasma. The propagation of linear drift ion acoustic waves, along with nonlinear structures such as double layers and solitary waves, has been found to depend strongly on the drift speed, the magnetic field quantization parameter β, and the temperature degeneracy. The graphical illustrations show that the frequency of linear waves and the amplitude of the solitary waves increase with the increase in temperature degeneracy and the Landau quantization effect, while the amplitude of the double layers decreases with the increase in η and T. The relevance of the present study is pointed out for the plasma environments of fast-ignition inertial confinement fusion, white dwarf stars, and short-pulsed petawatt laser technology.

  16. A case of laparoscopic high anterior resection of rectosigmoid colon cancer associated with a horseshoe kidney using preoperative 3D-CT angiography.

    PubMed

    Kubo, Naoki; Furusawa, Norihiko; Imai, Shinichiro; Terada, Masaru

    2018-06-27

    Horseshoe kidney is a congenital malformation in which the bilateral kidneys are fused. It is frequently complicated by other congenital malformations and is often accompanied by anomalies of the ureteropelvic and vascular systems, which must be evaluated to avoid iatrogenic injury. We report a case of laparoscopic high anterior resection of rectosigmoid colon cancer associated with a horseshoe kidney using preoperative 3D-CT angiography. A 52-year-old Japanese man with lower abdominal pain underwent lower endoscopy, revealing a type 2 lesion in the rectosigmoid colon. He was diagnosed with rectosigmoid colon cancer with multiple lung metastases and a horseshoe kidney on computed tomography (CT) scan. Three-dimensional (3D)-CT angiography showed an aberrant renal artery to the isthmus arising from the aorta 3 cm below the inferior mesenteric artery (IMA) branch. Laparoscopic anterior rectal resection was performed. During the operation, the inferior mesenteric artery, left ureter, left gonadal vessels, and hypogastric nerve plexus could be seen passing over the horseshoe kidney isthmus and were preserved. The left branch of the aberrant renal artery, which was close to the IMA, was also detected and preserved. To prevent intraoperative misidentification, 3D-CT angiography should be performed preoperatively to ascertain the precise positional relationships between the aberrant renal arteries and the kidney. We must always consider anomalous locations of the renal vessels, ureter, gonadal vessels, and lumbar splanchnic nerves to avoid laparoscopic iatrogenic injury in patients with a horseshoe kidney.

  17. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.

  18. WE-AB-202-03: Quantifying Ventilation Change Due to Radiation Therapy Using 4DCT Jacobian Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, T; Du, K; Bayouth, J

    Purpose: Four-dimensional computed tomography (4DCT) and image registration can be used to determine regional lung ventilation changes after radiation therapy (RT). This study aimed to determine if lung ventilation change following radiation therapy was affected by the pre-RT ventilation of the lung. Methods: 13 subjects had three 4DCT scans: two repeat scans acquired before RT and one three months after RT. Regional ventilation was computed using Jacobian determinant calculations on the registered 4DCT images. The post-RT ventilation map was divided by the pre-RT ventilation map to get a voxel-by-voxel Jacobian ratio map depicting ventilation change over the course of RT. Jacobian ratio change was compared over the range of delivered doses. The first pre-RT ventilation image was divided by the second to establish a control for Jacobian ratio change without radiation delivered. The functional change between scans was assessed using histograms of the Jacobian ratios. Results: There were significantly (p < 0.05) more voxels that had a large decrease in Jacobian ratio in the post-RT divided by pre-RT map (15.6%) than the control (13.2%). There were also significantly (p < 0.01) more voxels that had a large increase in Jacobian ratio (16.2%) when compared to control (13.3%). Lung regions with low function (<10% expansion by Jacobian) showed a slight linear reduction in expansion (0.2%/10 Gy delivered), while high function regions (>10% expansion) showed a greater response (1.2% reduction/10 Gy). Contiguous high function regions > 1 liter occurred in 11 of 13 subjects. Conclusion: There is a significant change in regional ventilation following a course of radiation therapy. The change in Jacobian following RT is dependent both on the delivered dose and the initial ventilation of the lung tissue: high functioning lung has greater ventilation loss for equivalent radiation doses. Substantial regions of high function lung tissue are prevalent. Research support from

  19. Electronic quantization in dielectric nanolaminates

    NASA Astrophysics Data System (ADS)

    Willemsen, T.; Geerke, P.; Jupé, M.; Gallais, L.; Ristau, D.

    2016-12-01

    The scientific background in the field of laser-induced damage processes in optical coatings has been significantly extended during the last decades. Especially for the ultra-short pulse regime, a clear correlation between the electronic material parameters and the laser damage threshold could be demonstrated. In the present study, the quantization in nanolaminates is investigated to gain a deeper insight into the behavior of the blue shift of the bandgap in specific coating materials, as well as to find approximations for the effective mass of the electrons. The theoretical predictions are correlated with the measurements.

  20. Generalized Ehrenfest Relations, Deformation Quantization, and the Geometry of Inter-model Reduction

    NASA Astrophysics Data System (ADS)

    Rosaler, Joshua

    2018-03-01

    This study attempts to spell out more explicitly than has been done previously the connection between two types of formal correspondence that arise in the study of quantum-classical relations: on the one hand, deformation quantization and the associated continuity between quantum and classical algebras of observables in the limit ℏ → 0, and, on the other, a certain generalization of Ehrenfest's Theorem and the result that expectation values of position and momentum evolve approximately classically for narrow wave packet states. While deformation quantization establishes a direct continuity between the abstract algebras of quantum and classical observables, the latter result makes ineliminable reference to the quantum and classical state spaces on which these structures act—specifically, via restriction to narrow wave packet states. Here, we describe a certain geometrical re-formulation and extension of the result that expectation values evolve approximately classically for narrow wave packet states, which relies essentially on the postulates of deformation quantization, but describes a relationship between the actions of quantum and classical algebras and groups over their respective state spaces that is non-trivially distinct from deformation quantization. The goals of the discussion are partly pedagogical in that it aims to provide a clear, explicit synthesis of known results; however, the particular synthesis offered aspires to some novelty in its emphasis on a certain general type of mathematical and physical relationship between the state spaces of different models that represent the same physical system, and in the explicitness with which it details the above-mentioned connection between quantum and classical models.

  1. Polymer quantization, stability and higher-order time derivative terms

    NASA Astrophysics Data System (ADS)

    Cumsille, Patricio; Reyes, Carlos M.; Ossandon, Sebastian; Reyes, Camilo

    2016-03-01

    The possibility that fundamental discreteness implicit in a quantum gravity theory may act as a natural regulator for ultraviolet singularities arising in quantum field theory has been intensively studied. Here, in line with the same expectations, we investigate whether a nonstandard representation, called the polymer representation, can smooth away the large amount of negative energy that afflicts the Hamiltonians of higher-order time derivative theories, rendering the theory unstable when interactions come into play. We focus on the fourth-order Pais-Uhlenbeck model, which can be reexpressed as the sum of two decoupled harmonic oscillators, one producing positive energy and the other negative energy. As expected, the Schrödinger quantization of such a model leads to the stability problem or to negative norm states called ghosts. Within the framework of polymer quantization, we show the existence of new regions where the Hamiltonian is well defined and bounded from below.

  2. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    PubMed

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2018-06-01

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  3. Direct Images, Fields of Hilbert Spaces, and Geometric Quantization

    NASA Astrophysics Data System (ADS)

    Lempert, László; Szőke, Róbert

    2014-04-01

    Geometric quantization often produces not one Hilbert space to represent the quantum states of a classical system but a whole family H s of Hilbert spaces, and the question arises if the spaces H s are canonically isomorphic. Axelrod et al. (J. Diff. Geo. 33:787-902, 1991) and Hitchin (Commun. Math. Phys. 131:347-380, 1990) suggest viewing H s as fibers of a Hilbert bundle H, introduce a connection on H, and use parallel transport to identify different fibers. Here we explore to what extent this can be done. First we introduce the notion of smooth and analytic fields of Hilbert spaces, and prove that if an analytic field over a simply connected base is flat, then it corresponds to a Hermitian Hilbert bundle with a flat connection and path independent parallel transport. Second we address a general direct image problem in complex geometry: pushing forward a Hermitian holomorphic vector bundle along a non-proper map. We give criteria for the direct image to be a smooth field of Hilbert spaces. Third we consider quantizing an analytic Riemannian manifold M by endowing TM with the family of adapted Kähler structures from Lempert and Szőke (Bull. Lond. Math. Soc. 44:367-374, 2012). This leads to a direct image problem. When M is homogeneous, we prove the direct image is an analytic field of Hilbert spaces. For certain such M—but not all—the direct image is even flat, which means that in those cases quantization is unique.

  4. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
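
    A simplified sketch in the spirit of the block-multiplier scheme described above: DCT quantization errors are divided by a (fixed) threshold matrix, pooled per block with a Minkowski norm, and each 8 x 8 block receives the largest multiplier that keeps its pooled perceptual error under a target. The base matrix, threshold matrix, pooling exponent, and target are placeholders, and the light-adaptation and contrast-masking adjustments of the full model are omitted.

      import numpy as np
      from scipy.fftpack import dct

      Q = np.full((8, 8), 16.0)                          # stand-in base quantization matrix
      T = np.linspace(1.0, 8.0, 64).reshape(8, 8)        # stand-in visual thresholds

      def dct2(block):
          return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

      def perceptual_error(coeffs, multiplier, beta=4.0):
          step = Q * multiplier
          err = coeffs - np.round(coeffs / step) * step  # DCT quantization error
          return np.sum(np.abs(err / T) ** beta) ** (1.0 / beta)   # Minkowski pooling

      def choose_multiplier(coeffs, target, candidates=np.arange(0.5, 8.5, 0.5)):
          ok = [m for m in candidates if perceptual_error(coeffs, m) <= target]
          return max(ok) if ok else candidates[0]

      rng = np.random.default_rng(8)
      image = rng.integers(0, 256, size=(64, 64)).astype(float)    # stand-in image
      multipliers = np.array([
          [choose_multiplier(dct2(image[i:i + 8, j:j + 8] - 128.0), target=3.0)
           for j in range(0, 64, 8)]
          for i in range(0, 64, 8)])
      print(multipliers)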

  5. Quantizing higher-spin gravity in free-field variables

    NASA Astrophysics Data System (ADS)

    Campoleoni, Andrea; Fredenhagen, Stefan; Raeymaekers, Joris

    2018-02-01

    We study the formulation of massless higher-spin gravity on AdS3 in a gauge in which the fundamental variables satisfy free field Poisson brackets. This gauge choice leaves a small portion of the gauge freedom unfixed, which should be further quotiented out. We show that doing so leads to a bulk version of the Coulomb gas formalism for W_N CFTs: the generators of the residual gauge symmetries are the classical limits of screening charges, while the gauge-invariant observables are classical W_N charges. Quantization in these variables can be carried out using standard techniques and makes manifest a remnant of the triality symmetry of W_∞[λ]. This symmetry can be used to argue that the theory should be supplemented with additional matter content which is precisely that of the Prokushkin-Vasiliev theory. As a further application, we use our formulation to quantize a class of conical surplus solutions and confirm the conjecture that these are dual to specific degenerate W_N primaries, to all orders in the large central charge expansion.

  6. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    NASA Astrophysics Data System (ADS)

    Fonseca, I. C.; Bakke, K.

    2016-01-01

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then, rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  7. Rotating effects on the Landau quantization for an atom with a magnetic quadrupole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, I. C.; Bakke, K., E-mail: kbakke@fisica.ufpb.br

    2016-01-07

    Based on the single particle approximation [Dmitriev et al., Phys. Rev. C 50, 2358 (1994) and C.-C. Chen, Phys. Rev. A 51, 2611 (1995)], the Landau quantization associated with an atom with a magnetic quadrupole moment is introduced, and then, rotating effects on this analogue of the Landau quantization are investigated. It is shown that rotating effects can modify the cyclotron frequency and break the degeneracy of the analogue of the Landau levels.

  8. Correspondence between quantization schemes for two-player nonzero-sum games and CNOT complexity

    NASA Astrophysics Data System (ADS)

    Vijayakrishnan, V.; Balakrishnan, S.

    2018-05-01

    The well-known quantization schemes for two-player nonzero-sum games are the Eisert-Wilkens-Lewenstein scheme and the Marinatto-Weber scheme. In this work, we establish the connection between the two schemes from the perspective of quantum circuits. Further, we provide the correspondence between any game quantization scheme and the CNOT complexity, where the CNOT complexity is defined up to local unitary operations. While CNOT complexity is known to be useful in the analysis of universal quantum circuits, in this work we find its applicability in quantum game theory.

  9. On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Tyutin, I. V.

    The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation being treated perturbatively possesses a local formal solution.

  10. Permutation modulation for quantization and information reconciliation in CV-QKD systems

    NASA Astrophysics Data System (ADS)

    Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar

    2017-08-01

    This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples (assuming that the Gaussian variable is zero mean, which is de facto the case) tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the coding efficiency necessary to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Ordered statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of the error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
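
    A minimal sketch of permutation-modulation quantization of a Gaussian vector: the codebook is the set of permutations of a fixed seed vector, and the nearest codeword is found by sorting, i.e., the seed values are placed in the rank order of the observed samples. The order-statistics seed used here is an illustrative choice, not necessarily the construction used in the paper.

      import math
      import numpy as np
      from scipy.stats import norm

      def pm_seed(d):
          # Approximate expected Gaussian order statistics as the seed vector.
          return norm.ppf((np.arange(1, d + 1) - 0.5) / d)

      def pm_quantize(x, seed):
          sorted_seed = np.sort(seed)
          ranks = np.argsort(np.argsort(x))      # rank of each sample within the vector
          return sorted_seed[ranks]              # nearest permutation of the seed

      rng = np.random.default_rng(9)
      d = 64
      seed = pm_seed(d)
      x = rng.standard_normal(d)
      q = pm_quantize(x, seed)
      rate = math.log2(math.factorial(d)) / d    # bits per sample of the PM codebook
      print(f"per-sample MSE {np.mean((x - q) ** 2):.3f}, rate {rate:.2f} bits/sample")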

  11. FAST TRACK COMMUNICATION: Quantization over boson operator spaces

    NASA Astrophysics Data System (ADS)

    Prosen, Tomaž; Seligman, Thomas H.

    2010-10-01

    The framework of third quantization—canonical quantization in the Liouville space—is developed for open many-body bosonic systems. We show how to diagonalize the quantum Liouvillean for an arbitrary quadratic n-boson Hamiltonian with arbitrary linear Lindblad couplings to the baths and, as an example, explicitly work out a general case of a single boson.

  12. Quantized Vector Potential and the Photon Wave-function

    NASA Astrophysics Data System (ADS)

    Meis, C.; Dahoo, P. R.

    2017-12-01

    The vector potential function α⃗_kλ(r⃗, t) for a k-mode and λ-polarization photon, with the quantized amplitude α_0k(ω_k) = ξω_k, satisfies the classical wave propagation equation as well as Schrödinger's equation with the relativistic massless Hamiltonian H̃ = -iħc∇⃗

  13. A Novel Fast Helical 4D-CT Acquisition Technique to Generate Low-Noise Sorting Artifact–Free Images at User-Selected Breathing Phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin

    2014-05-01

    Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact–free images at a patient dose similar to or less than current 4D-CT techniques.

  14. Dissipation and quantization for composite systems

    NASA Astrophysics Data System (ADS)

    Blasone, Massimo; Jizba, Petr; Scardigli, Fabio; Vitiello, Giuseppe

    2009-11-01

    In the framework of 't Hooft's quantization proposal, we show how to obtain from the composite system of two classical Bateman's oscillators a quantum isotonic oscillator. In a specific range of parameters, such a system can be interpreted as a particle in an effective magnetic field, interacting through a spin-orbit interaction term. In the limit of a large separation from the interaction region one can describe the system in terms of two irreducible elementary subsystems which correspond to two independent quantum harmonic oscillators.

  15. 't Hooft Quantization for Interacting Systems

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Scardigli, Fabio; Blasone, Massimo; Vitiello, Giuseppe

    2012-02-01

    In the framework of 't Hooft's "deterministic quantization" proposal, we show how to obtain from a composite system of two classical Bateman's oscillators a quantum isotonic oscillator. In a specific range of parameters, such a system can be also interpreted as a particle in an effective magnetic field, interacting through a spin-orbit interaction term. In the limit of a large separation from the interaction region, the system can be described in terms of two irreducible elementary subsystems, corresponding to two independent quantum harmonic oscillators.

  16. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
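
    A hedged sketch of the forward-adaptive variant described above: each input vector is normalized by an estimated gain (here simply its RMS value), coded against a gain-normalized codebook, and rescaled at the decoder. The codebook training (SciPy k-means on normalized vectors), the gain estimator, and the unquantized gain side-information are simplifications, not the optimized estimators of the paper.

      import numpy as np
      from scipy.cluster.vq import kmeans2, vq

      DIM = 8

      def to_vectors(signal):
          return signal[: len(signal) // DIM * DIM].reshape(-1, DIM)

      def gains(vecs):
          return np.sqrt(np.mean(vecs ** 2, axis=1, keepdims=True)) + 1e-12

      def train_codebook(training_signal, size=256):
          vecs = to_vectors(training_signal)
          codebook, _ = kmeans2(vecs / gains(vecs), size, minit="++")
          return codebook

      def encode_decode(signal, codebook):
          vecs = to_vectors(signal)
          g = gains(vecs)                                # transmitted as side information
          labels, _ = vq(vecs / g, codebook)
          return (codebook[labels] * g).reshape(-1)      # decoder output

      rng = np.random.default_rng(10)
      # Synthetic nonstationary source: white noise with a slowly varying envelope.
      envelope = 0.1 + np.abs(np.sin(np.linspace(0, 6 * np.pi, 64_000)))
      signal = envelope * rng.standard_normal(64_000)
      rec = encode_decode(signal, train_codebook(signal))
      err = signal[: len(rec)] - rec
      print(f"SNR {10 * np.log10(np.mean(signal[: len(rec)] ** 2) / np.mean(err ** 2)):.1f} dB")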

  17. Quantized transport and steady states of Floquet topological insulators

    NASA Astrophysics Data System (ADS)

    Esin, Iliya; Rudner, Mark S.; Refael, Gil; Lindner, Netanel H.

    2018-06-01

    Robust electronic edge or surface modes play key roles in the fascinating quantized responses exhibited by topological materials. Even in trivial materials, topological bands and edge states can be induced dynamically by a time-periodic drive. Such Floquet topological insulators (FTIs) inherently exist out of equilibrium; the extent to which they can host quantized transport, which depends on the steady-state population of their dynamically induced edge states, remains a crucial question. In this work, we obtain the steady states of two-dimensional FTIs in the presence of the natural dissipation mechanisms present in solid state systems. We give conditions under which the steady-state distribution resembles that of a topological insulator in the Floquet basis. In this state, the distribution in the Floquet edge modes exhibits a sharp feature akin to a Fermi level, while the bulk hosts a small density of excitations. We determine the regimes where topological edge-state transport persists and can be observed in FTIs.

  18. Minimum uncertainty and squeezing in diffusion processes and stochastic quantization

    NASA Technical Reports Server (NTRS)

    Demartino, S.; Desiena, S.; Illuminati, Fabrizo; Vitiello, Giuseppe

    1994-01-01

    We show that uncertainty relations, as well as minimum uncertainty coherent and squeezed states, are structural properties for diffusion processes. Through Nelson stochastic quantization we derive the stochastic image of the quantum mechanical coherent and squeezed states.

  19. BFV-BRST quantization of two-dimensional supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujiwara, T.; Igarashi, Y.; Kuriki, R.

    1996-01-01

    Two-dimensional supergravity theory is quantized as an anomalous gauge theory. In the Batalin-Fradkin (BF) formalism, the anomaly-canceling super-Liouville fields are introduced to identify the original second-class constrained system with a gauge-fixed version of a first-class system. The BFV-BRST quantization applies to formulate the theory in the most general class of gauges. A local effective action constructed in the configuration space contains two super-Liouville actions; one is a noncovariant but local functional written only in terms of two-dimensional supergravity fields, and the other contains the super-Liouville fields canceling the super-Weyl anomaly. Auxiliary fields for the Liouville and the gravity supermultiplets are introduced to make the BRST algebra close off-shell. Inclusion of them turns out to be essentially important especially in the super-light-cone gauge fixing, where the supercurvature equations (∂₋³g₊₊ = ∂₋²χ₊₊ = 0) are obtained as a result of BRST invariance of the theory. Our approach reveals the origin of the OSp(1,2) current algebra symmetry in a transparent manner. © 1996 The American Physical Society.

  20. Topos quantum theory on quantization-induced sheaves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakayama, Kunji, E-mail: nakayama@law.ryukoku.ac.jp

    2014-10-15

    In this paper, we construct a sheaf-based topos quantum theory. It is well known that a topos quantum theory can be constructed on the topos of presheaves on the category of commutative von Neumann algebras of bounded operators on a Hilbert space. Also, it is already known that quantization naturally induces a Lawvere-Tierney topology on the presheaf topos. We show that a topos quantum theory akin to the presheaf-based one can be constructed on sheaves defined by the quantization-induced Lawvere-Tierney topology. That is, starting from the spectral sheaf as a state space of a given quantum system, we construct sheaf-based expressions of physical propositions and truth objects, and thereby give a method of truth-value assignment to the propositions. Furthermore, we clarify the relationship to the presheaf-based quantum theory. We give translation rules between the sheaf-based ingredients and the corresponding presheaf-based ones. The translation rules have “coarse-graining” effects on the spaces of the presheaf-based ingredients; a lot of different proposition presheaves, truth presheaves, and presheaf-based truth-values are translated to a proposition sheaf, a truth sheaf, and a sheaf-based truth-value, respectively. We examine the extent of the coarse-graining made by translation.

  1. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
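
    The recovery algorithm itself is involved, but the measurement model is easy to state in code. The sketch below, with made-up sizes and an illustrative random frequency model, generates a spectrally sparse signal, takes noisy complex Gaussian linear measurements, and keeps only the quadrant (sign of real and imaginary parts) of each measurement; it is not the atomic-norm recovery procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, K = 64, 40, 3                      # signal dimension, measurements, spectral sparsity

# Spectrally sparse signal: K complex sinusoids with random frequencies and amplitudes.
freqs = rng.uniform(0, 1, K)
amps = rng.standard_normal(K) + 1j * rng.standard_normal(K)
t = np.arange(n)
x = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))

# Complex Gaussian random linear measurements plus noise.
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
noise = 0.05 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
y = A @ x + noise

# "Quadrant" (heavily quantized) measurements: only the signs of the real and
# imaginary parts are kept, i.e., two bits per complex sample.
y_quadrant = np.sign(y.real) + 1j * np.sign(y.imag)
```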

  2. Hamiltonian description and quantization of dissipative systems

    NASA Astrophysics Data System (ADS)

    Enz, Charles P.

    1994-09-01

    Dissipative systems are described by a Hamiltonian, combined with a “dynamical matrix” which generalizes the symplectic form of the equations of motion. Criteria for dissipation are given and the examples of a particle with friction and of the Lotka-Volterra model are presented. Quantization is first introduced by translating generalized Poisson brackets into commutators and anticommutators. Then a generalized Schrödinger equation expressed by a dynamical matrix is constructed and discussed.

  3. Development of Advanced Technologies for Complete Genomic and Proteomic Characterization of Quantized Human Tumor Cells

    DTIC Science & Technology

    2014-07-01

    establishment of Glioblastoma (GBM) cell lines from GBM patients' tumor samples and quantized cell populations of each of the parental GBM cell lines, we... GBM patients are now well established and form the basis of the molecular characterization of the tumor development and signatures presented by these... analysis of these quantized cell subpopulations and have begun to assemble the protein signatures of GBM tumors underpinned by the comprehensive

  4. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
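
    The sketch below is a minimal, hypothetical illustration of embedding a single watermark bit into a mid-frequency DCT coefficient by quantization-index modulation; it omits the QWT decomposition, the Arnold/chaotic scrambling, and the rotation detection of the proposed scheme, and the coefficient position and step size are arbitrary choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(2, 3), step=12.0):
    """Embed one watermark bit into a mid-frequency DCT coefficient of an 8x8 block
    by forcing the parity of the quantized coefficient index (QIM-style)."""
    coeffs = dctn(block.astype(float), norm="ortho")
    c = coeffs[pos]
    q = np.round(c / step)
    if int(q) % 2 != bit:                 # move to the adjacent lattice point with the right parity
        q += 1 if c >= q * step else -1
    coeffs[pos] = q * step
    return idctn(coeffs, norm="ortho")

def extract_bit(block, pos=(2, 3), step=12.0):
    """Recover the bit from the parity of the quantized mid-frequency coefficient."""
    coeffs = dctn(block.astype(float), norm="ortho")
    return int(np.round(coeffs[pos] / step)) % 2

rng = np.random.default_rng(2)
block = rng.integers(0, 256, (8, 8))
marked = embed_bit(block, bit=1)
assert extract_bit(marked) == 1
```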

  5. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether or not the current sampled sensor data should be broadcast and transmitted to the quantizer, which can save the limited communication resources. Second, a novel communication framework is employed in which a logarithmic quantizer quantizes the data and reduces the transmission rate in the network, which improves the communication efficiency of the network. Third, a stabilization criterion is derived based on a sufficient condition that guarantees a prescribed H∞ performance level for the estimation error system in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
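
    For readers unfamiliar with logarithmic quantizers, the sketch below implements a standard static logarithmic quantizer of the kind commonly used in quantized control, with levels ±u0·ρ^i and sector bound δ = (1 − ρ)/(1 + ρ); the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def log_quantizer(v, u0=1.0, rho=0.5):
    """Static logarithmic quantizer with levels ±u0*rho**i.
    A value v > 0 is mapped to the level u_i satisfying
    u_i/(1+delta) < v <= u_i/(1-delta), with delta = (1-rho)/(1+rho),
    so the relative quantization error never exceeds delta (a sector bound)."""
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    sign, mag = np.sign(v), abs(v)
    i = np.floor(np.log(mag * (1.0 - delta) / u0) / np.log(rho))
    return sign * u0 * rho ** i

# Only the sign and the integer level index need to be transmitted, which is
# how the quantizer reduces the data rate over the network.
samples = [0.03, 0.4, 1.7, -2.5]
quantized = [log_quantizer(s) for s in samples]
```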

  6. Model predictive control of non-linear systems over networks with data quantization and packet loss.

    PubMed

    Yu, Jimin; Nan, Liangsheng; Tang, Xiaoming; Wang, Ping

    2015-11-01

    This paper studies the approach of model predictive control (MPC) for non-linear systems under a networked environment where both data quantization and packet loss may occur. The non-linear controlled plant in the networked control system (NCS) is represented by a Takagi-Sugeno (T-S) model. The sensed data and control signal are quantized in both links and described as sector bound uncertainties by applying the sector bound approach. Then, the quantized data are transmitted in the communication networks and may suffer from the effect of packet losses, which are modeled as a Bernoulli process. A fuzzy predictive controller which guarantees the stability of the closed-loop system is obtained by solving a set of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.

  8. Quantized Chiral Magnetic Current from Reconnections of Magnetic Flux.

    PubMed

    Hirono, Yuji; Kharzeev, Dmitri E; Yin, Yi

    2016-10-21

    We introduce a new mechanism for the chiral magnetic effect that does not require an initial chirality imbalance. The chiral magnetic current is generated by reconnections of magnetic flux that change the magnetic helicity of the system. The resulting current is entirely determined by the change of magnetic helicity, and it is quantized.

  9. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  10. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061
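
    For orientation, the sketch below shows plain K-means (LBG-style) codebook design with a full nearest-neighbor search; it is not the accelerated fuzzy K-means variants of the paper, but it makes explicit the two costs the acceleration targets: the number of iterations and the nearest-neighbor search.

```python
import numpy as np

def kmeans_codebook(training_vectors, codebook_size, iterations=20, seed=0):
    """Plain K-means (LBG-style) codebook design: alternate nearest-codevector
    assignment and centroid update."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(training_vectors), codebook_size, replace=False)
    codebook = training_vectors[idx].copy()
    for _ in range(iterations):
        # Full nearest-neighbor search (the step that fast search techniques speed up).
        d2 = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for k in range(codebook_size):
            members = training_vectors[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)   # centroid update
    return codebook

# Example: design a 16-vector codebook for 4x4 image blocks drawn at random.
rng = np.random.default_rng(1)
blocks = rng.random((2000, 16))
cb = kmeans_codebook(blocks, codebook_size=16)
```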

  11. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design.

    PubMed

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-11-23

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms.

  12. Quantization of the Szekeres system

    NASA Astrophysics Data System (ADS)

    Paliathanasis, A.; Zampeli, Adamantia; Christodoulakis, T.; Mustafa, M. T.

    2018-06-01

    We study the quantum corrections to the Szekeres system in the context of canonical quantization in the presence of symmetries. We start from an effective point-like Lagrangian with two integrals of motion, one corresponding to the Hamiltonian and the other to a second-rank Killing tensor. Imposing their quantum version on the wave function results in a solution which is then interpreted in the context of Bohmian mechanics. In this semiclassical approach, it is shown that there are no quantum corrections, thus the classical trajectories of the Szekeres system are not affected at this level. Finally, we define a probability function which shows that a stationary surface of the probability corresponds to a classical exact solution.

  13. Adaptive robust fault tolerant control design for a class of nonlinear uncertain MIMO systems with quantization.

    PubMed

    Ao, Wei; Song, Yongdong; Wen, Changyun

    2017-05-01

    In this paper, we investigate the adaptive control problem for a class of nonlinear uncertain MIMO systems with actuator faults and quantization effects. Under some mild conditions, an adaptive robust fault-tolerant control is developed to compensate for the effects of uncertainties, actuator failures and errors caused by quantization, and a range of the parameters for these quantizers is established. Furthermore, a Lyapunov-like approach is adopted to demonstrate that the ultimately uniformly bounded output tracking error is guaranteed by the controller, and the signals of the closed-loop system are ensured to be bounded, even in the presence of at most m − q actuators being stuck or suffering outage. Finally, numerical simulations are provided to verify and illustrate the effectiveness of the proposed adaptive schemes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Second quantization techniques in the scattering of nonidentical composite bodies

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Townsend, L. W.; Deutchman, P. A.

    1986-01-01

    Second quantization techniques for describing elastic and inelastic interactions between nonidentical composite bodies are presented and are applied to nucleus-nucleus collisions involving ground-state and one-particle-one-hole excitations. Evaluations of the resultant collision matrix elements are made through use of Wick's theorem.

  15. Quantized Chiral Magnetic Current from Reconnections of Magnetic Flux

    DOE PAGES

    Hirono, Yuji; Kharzeev, Dmitri E.; Yin, Yi

    2016-10-20

    We introduce a new mechanism for the chiral magnetic effect that does not require an initial chirality imbalance. The chiral magnetic current is generated by reconnections of magnetic flux that change the magnetic helicity of the system. The resulting current is entirely determined by the change of magnetic helicity, and it is quantized.

  16. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
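
    The surplus-based directed-graph algorithm itself is not reproduced here; instead, the sketch below shows a simpler sum-preserving pairwise quantized gossip on an undirected graph, purely to illustrate integer-valued averaging dynamics. Function names and the example graph are illustrative.

```python
import numpy as np

def quantized_gossip(x, edges, steps=2000, seed=0):
    """Sum-preserving pairwise quantized gossip on an undirected graph:
    a random edge is chosen and the two integer states are replaced by the
    floor and ceil of their average, so the total (and hence the average)
    of the network is preserved while the states cluster around it."""
    rng = np.random.default_rng(seed)
    x = np.array(x, dtype=int)
    for _ in range(steps):
        i, j = edges[rng.integers(len(edges))]
        s = x[i] + x[j]
        x[i], x[j] = s // 2, s - s // 2
    return x

# Example: a 5-node ring with integer initial states; the sum stays 25, so the
# states end up within one unit of the true average 5.
states = quantized_gossip([1, 12, 3, 9, 0], edges=[(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```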

  17. Evaluation of the motion of lung tumors during stereotactic body radiation therapy (SBRT) with four-dimensional computed tomography (4DCT) using real-time tumor-tracking radiotherapy system (RTRT).

    PubMed

    Harada, Keiichi; Katoh, Norio; Suzuki, Ryusuke; Ito, Yoichi M; Shimizu, Shinichi; Onimaru, Rikiya; Inoue, Tetsuya; Miyamoto, Naoki; Shirato, Hiroki

    2016-02-01

    We investigated the usefulness of four-dimensional computed tomography (4DCT) performed before stereotactic body radiation therapy (SBRT) in determining the internal margins for peripheral lung tumors. The amplitude of the movement of a fiducial marker near a lung tumor measured using the maximum intensity projection (MIP) method in 4DCT imaging was acquired before the SBRT (AmpCT) and compared with the mean amplitude of the marker movement during SBRT (Ampmean) and with the maximum amplitude of the marker movement during SBRT (Ampmax) using a real-time tumor-tracking radiotherapy (RTRT) system with 22 patients. There were no significant differences between the means of the Ampmean and the means of the AmpCT in all directions (LR, P = 0.45; CC, P = 0.80; AP, P = 0.65). The means of the Ampmax were significantly larger than the means of the AmpCT in all directions (LR, P < 0.01; CC, P = 0.03; AP, P < 0.01). In the lower lobe, the mean difference of the AmpCT from the mean of the Ampmax was 5.7 ± 8.0 mm, 12.5 ± 16.7 mm, and 6.8 ± 8.5 mm in the LR, CC, and AP directions, respectively. Acquiring 4DCT MIP images before the SBRT treatment is useful for establishing the mean amplitude for a patient during SBRT, but it underestimates the maximum amplitude during actual SBRT. Caution must be taken when determining the margin with 4DCT, especially for tumors in the lower lobe, where it is potentially of the greatest benefit. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. Bfv Quantization of Relativistic Spinning Particles with a Single Bosonic Constraint

    NASA Astrophysics Data System (ADS)

    Rabello, Silvio J.; Vaidya, Arvind N.

    Using the BFV approach we quantize a pseudoclassical model of the spin-1/2 relativistic particle that contains a single bosonic constraint, contrary to the usual locally supersymmetric models that display first and second class constraints.

  19. Progress on the three-particle quantization condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briceno, Raul; Hansen, Maxwell T.; Sharpe, Stephen R.

    2016-10-01

    We report progress on extending the relativistic model-independent quantization condition for three particles, derived previously by two of us, to a broader class of theories, as well as progress on checking the formalism. In particular, we discuss the extension to include the possibility of 2->3 and 3->2 transitions and the calculation of the finite-volume energy shift of an Efimov-like three-particle bound state. The latter agrees with the results obtained previously using non-relativistic quantum mechanics.

  20. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reduction of data storage and computational effort. One of the most recent VQ techniques that handle the poor estimation of vector centroids due to biased data from undersampling is the fuzzy declustering-based vector quantization (FDVQ) technique. Therefore, in this paper, we are motivated to propose a justification of the FDVQ-based hidden Markov model (HMM) for investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. The performance evaluation and comparison of the recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and a well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) are carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are almost similar. Finally, we have justified the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using a hypothesis t-test. As a result, we have validated that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  1. Quantized Self-Assembly of Discotic Rings in a Liquid Crystal Confined in Nanopores

    NASA Astrophysics Data System (ADS)

    Sentker, Kathrin; Zantop, Arne W.; Lippmann, Milena; Hofmann, Tommy; Seeck, Oliver H.; Kityk, Andriy V.; Yildirim, Arda; Schönhals, Andreas; Mazza, Marco G.; Huber, Patrick

    2018-02-01

    Disklike molecules with aromatic cores spontaneously stack up in linear columns with high, one-dimensional charge carrier mobilities along the columnar axes, making them prominent model systems for functional, self-organized matter. We show by high-resolution optical birefringence and synchrotron-based x-ray diffraction that confining a thermotropic discotic liquid crystal in cylindrical nanopores induces a quantized formation of annular layers consisting of concentric circular bent columns, unknown in the bulk state. Starting from the walls this ring self-assembly propagates layer by layer towards the pore center in the supercooled domain of the bulk isotropic-columnar transition and thus allows one to switch on and off reversibly single, nanosized rings through small temperature variations. By establishing a Gibbs free energy phase diagram we trace the phase transition quantization to the discreteness of the layers' excess bend deformation energies in comparison to the thermal energy, even for this near room-temperature system. Monte Carlo simulations yielding spatially resolved nematic order parameters, density maps, and bond-orientational order parameters corroborate the universality and robustness of the confinement-induced columnar ring formation as well as its quantized nature.

  2. The fundamental role of quantized vibrations in coherent light harvesting by cryptophyte algae

    NASA Astrophysics Data System (ADS)

    Kolli, Avinash; O'Reilly, Edward J.; Scholes, Gregory D.; Olaya-Castro, Alexandra

    2012-11-01

    The influence of fast vibrations on energy transfer and conversion in natural molecular aggregates is an issue of central interest. This article shows the important role of high-energy quantized vibrations and their non-equilibrium dynamics for energy transfer in photosynthetic systems with highly localized excitonic states. We consider the cryptophyte antennae protein phycoerythrin 545 and show that coupling to quantized vibrations which are quasi-resonant with excitonic transitions is fundamental for biological function, as it generates non-cascaded transport with rapid and wider spatial distribution of excitation energy. Our work also indicates that the non-equilibrium dynamics of such vibrations can manifest itself in ultrafast beating of both excitonic populations and coherences at room temperature, with time scales in agreement with those reported in experiments. Moreover, we show that mechanisms supporting coherent excitonic dynamics assist coupling to selected modes that channel energy to preferential sites in the complex. We therefore argue that, in the presence of strong coupling between electronic excitations and quantized vibrations, a concrete and important advantage of quantum coherent dynamics is precisely to tune resonances that promote fast and effective energy distribution.

  3. Generalized centripetal force law and quantization of motion constrained on 2D surfaces

    NASA Astrophysics Data System (ADS)

    Liu, Q. H.; Zhang, J.; Lian, D. K.; Hu, L. D.; Li, Z.

    2017-03-01

    For a particle of mass μ moving on a 2D surface f(x) = 0 embedded in the 3D Euclidean space of coordinates x, there is an open and controversial problem whether Dirac's canonical quantization scheme for the constrained motion allows for the geometric potential that has been experimentally confirmed. We note that Dirac's scheme hypothesizes that the symmetries indicated by classical brackets among the positions x, the momenta p and the Hamiltonian Hc remain in quantum mechanics, i.e., the Dirac brackets [x, Hc]_D and [p, Hc]_D hold true after quantization, in addition to the fundamental ones [x, x]_D, [x, p]_D and [p, p]_D. This set of hypotheses implies that the Hamiltonian operator is simultaneously determined during the quantization. The quantum mechanical relations corresponding to the classical mechanical ones p/μ = [x, Hc]_D directly give the geometric momenta. The time derivative of the momenta, ṗ = [p, Hc]_D, in classical mechanics is in fact the generalized centripetal force law for a particle on the 2D surface, which in quantum mechanics permits both the geometric momenta and the geometric potential.

  4. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.

  5. Covariant spinor representation of iosp(d,2/2) and quantization of the spinning relativistic particle

    NASA Astrophysics Data System (ADS)

    Jarvis, P. D.; Corney, S. P.; Tsohantjis, I.

    1999-12-01

    A covariant spinor representation of iosp(d,2/2) is constructed for the quantization of the spinning relativistic particle. It is found that, with appropriately defined wavefunctions, this representation can be identified with the state space arising from the canonical extended BFV-BRST quantization of the spinning particle with admissible gauge fixing conditions after a contraction procedure. For this model, the cohomological determination of physical states can thus be obtained purely from the representation theory of the iosp(d,2/2) algebra.

  6. Validation of a quantized-current source with 0.2 ppm uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Friederike; Fricke, Lukas, E-mail: lukas.fricke@ptb.de; Scherer, Hansjörg

    2015-09-07

    We report on high-accuracy measurements of quantized current, sourced by a tunable-barrier single-electron pump at frequencies f up to 1 GHz. The measurements were performed with an ultrastable picoammeter instrument, traceable to the Josephson and quantum Hall effects. Current quantization according to I = ef, with e being the elementary charge, was confirmed at f = 545 MHz with a total relative uncertainty of 0.2 ppm, improving the state of the art by about a factor of 5. The accuracy of a possible future quantum current standard based on single-electron transport was experimentally validated to be better than the best (indirect) realization of the ampere within the present SI.

  7. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP of Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-in First-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and reduce the executing code. At the same time, proper addresses are assigned for the memories, which have different speeds; the memory structure is also optimized. In addition, this system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
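
    The core compression step described here (a 2-D DCT of an 8×8 block followed by coefficient quantization) can be sketched in a few lines of Python; the quantization matrix below is an illustrative stand-in, not the table or the fixed-point fast DCT used on the DSP.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative quantization matrix: coarser steps for higher spatial frequencies,
# standing in for a JPEG-style table.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 8 + 4 * (u + v)

def compress_block(block):
    """2-D DCT of an 8x8 block followed by coefficient quantization."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.round(coeffs / Q).astype(int)          # quantized DCT coefficients

def decompress_block(qcoeffs):
    """Dequantize and apply the inverse DCT to reconstruct the block."""
    return idctn(qcoeffs * Q, norm="ortho") + 128.0

rng = np.random.default_rng(3)
block = rng.integers(0, 256, (8, 8))
recon = decompress_block(compress_block(block))
```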

  8. The lattice and quantized Yang–Mills theory

    DOE PAGES

    Creutz, Michael

    2015-11-30

    Quantized Yang–Mills fields lie at the heart of our understanding of the strong nuclear force. To understand the theory at low energies, we must work in the strong coupling regime. The primary technique for this is the lattice. While basically an ultraviolet regulator, the lattice avoids the use of a perturbative expansion. In this paper, I discuss the historical circumstances that drove us to this approach, which has had immense success, convincingly demonstrating quark confinement and obtaining crucial properties of the strong interactions from first principles.

  9. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
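
    A minimal sketch of the transform-coding idea, assuming synthetic correlated band data: a Karhunen-Loève (PCA) transform across spectral bands followed by uniform quantization of the transform coefficients, with finer steps for the higher-variance components. It is not the encoders or bit-allocation rules evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
pixels = rng.standard_normal((10000, 6)) @ rng.standard_normal((6, 6))  # correlated 6-band pixels

# Karhunen-Loeve transform: eigenvectors of the band covariance decorrelate the bands.
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # strongest components first
klt = eigvecs[:, order]

coeffs = (pixels - mean) @ klt             # decorrelated transform coefficients

# Block quantization: allocate finer steps to the high-variance components.
steps = np.sqrt(eigvals[order]) / 16.0 + 1e-9
quantized = np.round(coeffs / steps).astype(int)

reconstructed = (quantized * steps) @ klt.T + mean
```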

  10. Superfield Hamiltonian quantization in terms of quantum antibrackets

    NASA Astrophysics Data System (ADS)

    Batalin, Igor A.; Lavrov, Peter M.

    2016-04-01

    We develop a new version of the superfield Hamiltonian quantization. The main new feature is that the BRST-BFV charge and the gauge fixing Fermion are introduced on equal footing within the sigma model approach, which provides for the actual use of the quantum/derived antibrackets. We study in detail the generating equations for the quantum antibrackets and their primed counterparts. We discuss the finite quantum anticanonical transformations generated by the quantum antibracket.

  11. The value of nodal information in predicting lung cancer relapse using 4DPET/4DCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heyse, E-mail: heyse.li@mail.utoronto.ca; Becker, Nathan; Raman, Srinivas

    2015-08-15

    Purpose: There is evidence that computed tomography (CT) and positron emission tomography (PET) imaging metrics are prognostic and predictive in nonsmall cell lung cancer (NSCLC) treatment outcomes. However, few studies have explored the use of standardized uptake value (SUV)-based image features of nodal regions as predictive features. The authors investigated and compared the use of tumor and node image features extracted from the radiotherapy target volumes to predict relapse in a cohort of NSCLC patients undergoing chemoradiation treatment. Methods: A prospective cohort of 25 patients with locally advanced NSCLC underwent 4DPET/4DCT imaging for radiation planning. Thirty-seven image features were derived from the CT-defined volumes and SUVs of the PET image from both the tumor and nodal target regions. The machine learning methods of logistic regression and repeated stratified five-fold cross-validation (CV) were used to predict local and overall relapses in 2 yr. The authors used well-known feature selection methods (Spearman’s rank correlation, recursive feature elimination) within each fold of CV. Classifiers were ranked on their Matthews correlation coefficient (MCC) after CV. Area under the curve, sensitivity, and specificity values are also presented. Results: For predicting local relapse, the best classifier found had a mean MCC of 0.07 and was composed of eight tumor features. For predicting overall relapse, the best classifier found had a mean MCC of 0.29 and was composed of a single feature: the volume greater than 0.5 times the maximum SUV (N). Conclusions: The best classifier for predicting local relapse had only tumor features. In contrast, the best classifier for predicting overall relapse included a node feature. Overall, the methods showed that nodes add value in predicting overall relapse but not local relapse.
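
    The ranking metric is easy to reproduce: the sketch below computes the Matthews correlation coefficient from a binary confusion matrix, with made-up relapse labels for illustration.

```python
import numpy as np

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient from the binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Example with made-up labels (1 = relapse within 2 years).
print(matthews_corrcoef([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0]))  # 0.5
```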

  12. Flux Quantization in Aperiodic and Periodic Networks

    NASA Astrophysics Data System (ADS)

    Behrooz, Angelika

    The normal-superconducting phase boundary, T_c(H), of a periodic wire network shows periodic oscillations with period H_0 = φ_0/A due to flux quantization around the individual plaquettes (of area A) of the network. The magnetic flux quantum is φ_0 = hc/2e. The phase boundary also shows fine structure at fields H = (p/q)H_0 (p, q integers), where the flux vortices can form commensurate superlattices on the periodic substrate. We have studied the phase boundary of quasicrystalline, quasiperiodic and random networks. We have found that if a network is composed of two different tiles whose areas are relatively irrational, then the T_c(H) curve shows large-scale structure at fields that approximate flux quantization around the tiles, i.e. when the ratio of fluxoids contained in the large tiles to those in the small tiles is a rational approximant to the irrational area ratio. The phase boundaries of quasicrystalline and quasiperiodic networks show fine structure indicating the existence of commensurate vortex superlattices on these networks. No such fine structure is found on the random array. For a quasicrystal whose quasiperiodic long-range order is characterized by the irrational number τ, the commensurate vortex lattices are all found at H = H_0|n + mτ| (n, m integers). We have found that the commensurate superlattices on quasicrystalline as well as on crystalline networks are related to the inflation symmetry. We propose a general definition of commensurability.

  13. Obliquely propagating ion acoustic solitary structures in the presence of quantized magnetic field

    NASA Astrophysics Data System (ADS)

    Iqbal Shaukat, Muzzamal

    2017-10-01

    The effect of linear and nonlinear propagation of electrostatic waves has been studied in degenerate magnetoplasma, taking into account the effects of electron trapping and finite temperature with a quantizing magnetic field. The formation of solitary structures has been investigated by employing the small amplitude approximation both for fully and partially degenerate quantum plasma. It is observed that the inclusion of the quantizing magnetic field significantly affects the propagation characteristics of the solitary wave. Importantly, the Zakharov-Kuznetsov equation under consideration has been found to allow the formation of compressive solitary structures only. The present investigation may be beneficial to understand the propagation of nonlinear electrostatic structures in dense astrophysical environments such as those found in white dwarfs.

  14. Treatment of constraints in the stochastic quantization method and covariantized Langevin equation

    NASA Astrophysics Data System (ADS)

    Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji

    1993-04-01

    We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path-integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) nonlinear σ model, and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.

  15. Quantization of higher abelian gauge theory in generalized differential cohomology

    NASA Astrophysics Data System (ADS)

    Szabo, R.

    We review and elaborate on some aspects of the quantization of certain classes of higher abelian gauge theories using techniques of generalized differential cohomology. Particular emphasis is placed on the examples of generalized Maxwell theory and Cheeger-Simons cohomology, and of Ramond-Ramond fields in Type II superstring theory and differential K-theory.

  16. Quantized Algebras of Functions on Homogeneous Spaces with Poisson Stabilizers

    NASA Astrophysics Data System (ADS)

    Neshveyev, Sergey; Tuset, Lars

    2012-05-01

    Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0 < q < 1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K, refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0 < q ≤ 1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra C[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K).

  17. Quantized visual awareness.

    PubMed

    Escobar, W A

    2013-01-01

    The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that correspond to basic aspects of vision like color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 select the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  18. SU-E-E-11: Novel Matching Module for Respiration-Gated Motion Tumor of Cone-Beam Computed Tomography (CBCT) to 4DCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, P; Tsai, Y; Nien, H

    2015-06-15

    Purpose: Four-dimensional computed tomography (4DCT) scans reliably record the whole respiratory phase and generate internal target volumes (ITV) for radiotherapy planning. However, image guiding with cone-beam computed tomography (CBCT) cannot acquire all or specific respiratory phases. This study was designed to investigate the correlation between average CT and Maximum Intensity Projection (MIP) images from 4DCT and CBCT. Methods: Retrospective respiratory gating was performed with a GE Discovery CT590 RT. 4DCT and CBCT data from a CIRS Dynamic Thorax Phantom with simulated breathing modes were analyzed. The lung tissue equivalent material encompassed a 3 cm sphere of tissue equivalent material. The simulated breathing cycle period was set to 4 seconds, 5 seconds and 6 seconds to represent variation in patient breathing cycle time, and the sphere material moved in the inferior-superior direction with 1 cm amplitude, simulating lung tumor motion during respiration. Results: Under the lung window, the volume ratio of CBCT scans to ITVs derived from 10-phase average scans was 1.00 ± 0.02, and 1.03 ± 0.03 for the ratio of CBCT scans to MIP scans. Under the abdomen window, the ratio of CBCT scans to ITVs derived from 10-phase average scans was 0.39 ± 0.06, and 0.06 ± 0.00 for the ratio of CBCT scans to MIP scans. There was a significant difference between the lung window results and the abdomen window results. To reduce image guiding uncertainty, the CBCT window was set with width 500 and level −250. The ratio of CBCT scans to ITVs derived from 4-phase average scans with the abdomen window was 1.19 ± 0.02, and 1.06 ± 0.01 for the ratio of CBCT to MIP scans. Conclusion: CBCT images with a suitable window width and level can efficiently reduce image guiding uncertainty for patients with mobile tumors. With our setting, we can match the moving tumor to the gated tumor location on the planning CT more accurately, neglecting other motion artifacts during CBCT scans.

  19. q-bosons and the q-analogue quantized field

    NASA Technical Reports Server (NTRS)

    Nelson, Charles A.

    1995-01-01

    The q-analogue coherent states are used to identify physical signatures for the presence of a q-analogue quantized radiation field in the q-CS classical limits where |z| is large. In this quantum-optics-like limit, the fractional uncertainties of most physical quantities (momentum, position, amplitude, phase) which characterize the quantum field are O(1). They only vanish as O(1/|z|) when q = 1. However, for the number operator, N, and the N-Hamiltonian for a free q-boson gas, H_N = ħω(N + 1/2), the fractional uncertainties do still approach zero. A signature for q-boson counting statistics is that (ΔN)²/⟨N⟩ → 0 as |z| → ∞. Except for its O(1) fractional uncertainty, the q-generalization of the Hermitian phase operator of Pegg and Barnett, φ_q, still exhibits normal classical behavior. The standard number-phase uncertainty relation, ΔN Δφ_q = 1/2, and the approximate commutation relation, [N, φ_q] = i, still hold for the single-mode q-analogue quantized field. So, N and φ_q are almost canonically conjugate operators in the q-CS classical limit. The q-analogue CS's minimize this uncertainty relation for moderate |z|².

  20. Effect of signal intensity and camera quantization on laser speckle contrast analysis

    PubMed Central

    Song, Lipei; Elson, Daniel S.

    2012-01-01

    Laser speckle contrast analysis (LASCA) is limited to being a qualitative method for the measurement of blood flow and tissue perfusion as it is sensitive to the measurement configuration. The signal intensity is one of the parameters that can affect the contrast values due to the quantization of the signals by the camera and analog-to-digital converter (ADC). In this paper we deduce the theoretical relationship between signal intensity and contrast values based on the probability density function (PDF) of the speckle pattern and simplify it to a rational function. A simple method to correct this contrast error is suggested. The experimental results demonstrate that this relationship can effectively compensate the bias in contrast values induced by the quantized signal intensity and correct for bias induced by signal intensity variations across the field of view. PMID:23304650
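
    The quantity being modeled is the local speckle contrast K = σ/μ. The sketch below computes it blockwise for an ideal exponential-intensity speckle pattern and for an 8-bit quantized version; the simulation parameters are illustrative, and the paper's correction formula is not reproduced.

```python
import numpy as np

def local_speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma/mu computed over square windows,
    here with a simple non-overlapping block implementation."""
    h, w = (np.array(image.shape) // window) * window
    blocks = image[:h, :w].reshape(h // window, window, w // window, window)
    mu = blocks.mean(axis=(1, 3))
    sigma = blocks.std(axis=(1, 3))
    return sigma / (mu + 1e-12)

# Fully developed speckle has K close to 1.  A low mean signal level relative to
# the ADC step makes the effect of intensity quantization on the measured
# contrast visible, which is the bias the cited correction addresses.
rng = np.random.default_rng(5)
speckle = rng.exponential(scale=3.0, size=(256, 256))    # ideal intensity statistics
quantized = np.clip(np.round(speckle), 0, 255)           # 8-bit camera + ADC
print(local_speckle_contrast(speckle).mean(), local_speckle_contrast(quantized).mean())
```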

  1. Efficient storage and management of radiographic images using a novel wavelet-based multiscale vector quantizer

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; Mitra, Sunanda

    2002-05-01

    Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that even at low bit rates one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, the set partitioning in hierarchical trees.

  2. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
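
    A toy version of the allocation idea, assuming made-up operational distortion tables: for each scalable layer, choose a (source rate, channel rate) pair so that total distortion is minimized under a total-rate budget, here by exhaustive search rather than the paper's rate-distortion-based procedure.

```python
from itertools import product

# Made-up operational tables: for each layer, candidate
# (source_kbps, channel_kbps, expected_distortion) triples measured off-line.
layers = [
    [(32, 16, 40.0), (48, 16, 30.0), (48, 32, 26.0)],   # base layer
    [(16, 8, 20.0), (32, 8, 14.0), (32, 16, 12.0)],     # enhancement layer 1
]
budget = 112  # total available bit rate in kbps

best = None
for choice in product(*layers):                      # exhaustive search over combinations
    rate = sum(s + c for s, c, _ in choice)
    distortion = sum(d for _, _, d in choice)
    if rate <= budget and (best is None or distortion < best[0]):
        best = (distortion, choice)

print(best)   # lowest total distortion and the per-layer (source, channel) rates
```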

  3. Integrability, Quantization and Moduli Spaces of Curves

    NASA Astrophysics Data System (ADS)

    Rossi, Paolo

    2017-07-01

    This paper has the purpose of presenting in an organic way a new approach to integrable (1+1)-dimensional field systems and their systematic quantization emerging from the intersection theory of the moduli space of stable algebraic curves and, in particular, cohomological field theories, Hodge classes and double ramification cycles. These methods are an alternative to the traditional Witten-Kontsevich framework and its generalizations by Dubrovin and Zhang and, among other advantages, have the merit of encompassing quantum integrable systems. Most of this material originates from an ongoing collaboration with A. Buryak, B. Dubrovin and J. Guéré.

  4. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
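
    The underlying JND idea can be sketched directly in the DCT domain: if each coefficient has a visibility threshold, a uniform rounding quantizer with step equal to twice that threshold keeps every coefficient's quantization error at or below its JND. The threshold map below is illustrative, not the ERJND or learned JNQD models of the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative JND map: visibility thresholds that grow with spatial frequency
# (a stand-in for a measured or learned model).
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
jnd = 1.0 + 0.75 * (u + v)
steps = 2.0 * jnd                       # rounding quantizer: max error = step/2 = JND

rng = np.random.default_rng(6)
block = rng.integers(0, 256, (8, 8)).astype(float)
coeffs = dctn(block, norm="ortho")
q = np.round(coeffs / steps)            # quantized DCT coefficients
reconstructed = idctn(q * steps, norm="ortho")

# Per-coefficient quantization error never exceeds its visibility threshold.
assert np.all(np.abs(coeffs - q * steps) <= jnd + 1e-9)
```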

  5. Photoinduced half-integer quantized conductance plateaus in topological-insulator/superconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Yap, Han Hoe; Zhou, Longwen; Lee, Ching Hua; Gong, Jiangbin

    2018-04-01

    The past few years have witnessed increased attention to the quest for Majorana-like excitations in the condensed matter community. As a promising candidate in this race, the one-dimensional chiral Majorana edge mode (CMEM) in topological insulator-superconductor heterostructures has gathered renewed interest after an experimental breakthrough [Q. L. He et al., Science 357, 294 (2017), 10.1126/science.aag2792]. In this work, we study computationally the quantum transport of topological insulator-superconductor hybrid devices subject to time-periodic modulation. We report half-integer quantized conductance plateaus at 1/2 e²/h and 3/2 e²/h upon applying the so-called sum rule in the theory of quantum transport in Floquet topological matter. In particular, in a photoinduced topological superconductor sandwiched between two Floquet Chern insulators, it is found that for each Floquet sideband, the CMEM admits equal probability for normal transmission and local Andreev reflection over a wide range of parameter regimes, yielding half-integer quantized plateaus that resist static and time-periodic disorder. While it is well-established that periodic driving fields can simultaneously create and manipulate multiple pairs of Majorana bound states, their detection scheme remains elusive, in part due to their being neutral excitations. Therefore the 3/2 e²/h plateau indicates the possibility to verify the generation of multiple pairs of photoinduced CMEMs via transport measurements. The robust and half-quantized conductance plateaus due to CMEMs are both fascinating and subtle because they only emerge after a summation over contributions from all Floquet sidebands. Our work may add insights into the transport properties of Floquet topological systems and stimulate further studies on the optical control of topological superconductivity.

  6. Exploratory research session on the quantization of the gravitational field. At the Institute for Theoretical Physics, Copenhagen, Denmark, June-July 1957

    NASA Astrophysics Data System (ADS)

    DeWitt, Bryce S.

    2017-06-01

    During the period June-July 1957 six physicists met at the Institute for Theoretical Physics of the University of Copenhagen in Denmark to work together on problems connected with the quantization of the gravitational field. A large part of the discussion was devoted to exposition of the individual work of the various participants, but a number of new results were also obtained. The topics investigated by these physicists are outlined in this report and may be grouped under the following main headings: The theory of measurement. Topographical problems in general relativity. Feynman quantization. Canonical quantization. Approximation methods. Special problems.

  7. Qiang-Dong proper quantization rule and its applications to exactly solvable quantum systems

    NASA Astrophysics Data System (ADS)

    Serrano, F. A.; Gu, Xiao-Yan; Dong, Shi-Hai

    2010-08-01

    We propose a proper quantization rule, ∫_{x_A}^{x_B} k(x) dx − ∫_{x_{0A}}^{x_{0B}} k_0(x) dx = nπ, where k(x) = √(2M[E − V(x)])/ℏ. Here x_A and x_B are the two turning points determined by E = V(x), and n is the number of nodes of the wave function ψ(x). We obtain exact solutions of solvable quantum systems with this rule and find that the energy spectrum of a solvable system can be determined from its ground-state energy alone. The complicated and tedious integral calculations involved in the previous exact quantization rule are greatly simplified. The beauty and simplicity of the rule come from its meaning: whenever the number of nodes of ϕ(x) or of the wave function ψ(x) increases by 1, the momentum integral ∫_{x_A}^{x_B} k(x) dx increases by π. We apply this proper quantization rule to solvable quantum systems such as the one-dimensional harmonic oscillator, the Morse potential and its generalization, the Hulthén potential, the Scarf II potential, the asymmetric trigonometric Rosen-Morse potential, the Pöschl-Teller type potentials, the Rosen-Morse potential, the Eckart potential, the harmonic oscillator in three dimensions, the hydrogen atom, and the Manning-Rosen potential in D dimensions.
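
    For the one-dimensional harmonic oscillator, V(x) = ½Mω²x² with E_n = (n + ½)ℏω, the momentum integral between the turning points evaluates to (n + ½)π, so subtracting the ground-state (n = 0) integral leaves nπ, as the rule states. A small numerical check of this case (a sketch with M = ω = ℏ = 1 chosen for convenience):

```python
import numpy as np
from scipy.integrate import quad

M = OMEGA = HBAR = 1.0                               # units chosen for convenience

def momentum_integral(E):
    """Integral of k(x) = sqrt(2M[E - V(x)])/hbar between the turning points of V = M w^2 x^2 / 2."""
    xt = np.sqrt(2.0 * E / (M * OMEGA**2))           # turning points are +-xt
    k = lambda x: np.sqrt(max(2.0 * M * (E - 0.5 * M * OMEGA**2 * x**2), 0.0)) / HBAR
    value, _ = quad(k, -xt, xt)
    return value

E0 = 0.5 * HBAR * OMEGA                              # ground-state energy
for n in range(5):
    En = (n + 0.5) * HBAR * OMEGA
    print(n, (momentum_integral(En) - momentum_integral(E0)) / np.pi)   # prints n, ~n
```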

  8. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide

    PubMed Central

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P. K. A.

    2014-01-01

    All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are promising candidates to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W⁻¹ m⁻¹ is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems. PMID:25417847

  9. CMOS-compatible 2-bit optical spectral quantization scheme using a silicon-nanocrystal-based horizontal slot waveguide.

    PubMed

    Kang, Zhe; Yuan, Jinhui; Zhang, Xianting; Wu, Qiang; Sang, Xinzhu; Farrell, Gerald; Yu, Chongxiu; Li, Feng; Tam, Hwa Yaw; Wai, P K A

    2014-11-24

    All-optical analog-to-digital converters based on third-order nonlinear effects in silicon waveguides are promising candidates to overcome the limitations of electronic devices and are suitable for photonic integration. In this paper, a 2-bit optical spectral quantization scheme for on-chip all-optical analog-to-digital conversion is proposed. The proposed scheme is realized by filtering the broadened and split spectrum induced by the self-phase modulation effect in a silicon horizontal slot waveguide filled with silicon nanocrystals. A nonlinear coefficient as high as 8708 W⁻¹ m⁻¹ is obtained because of the tight mode confinement of the horizontal slot waveguide and the high nonlinear refractive index of the silicon nanocrystals, which provides enhanced nonlinear interaction and accordingly a low power threshold. The results show that a required input peak power of less than 0.4 W can be achieved, along with an effective number of bits of 1.98 and Gray code output. The proposed scheme can find important applications in on-chip all-optical digital signal processing systems.

  10. [Features of maxillary and mandibular nerves imaging during stem regional blockades. From paresthesia to 3D-CT guidance].

    PubMed

    Zaytsev, A Yu; Nazaryan, D N; Kim, S Yu; Dubrovin, K V; Svetlov, V A; Khovrin, V V

    2014-01-01

    There are difficulties in performing regional blocks of the second and third branches of the trigeminal nerve despite the availability of many different methods of nerve imaging. The difficulties are connected with the complex anatomical structure. Neurostimulation is not always effective and, as a rule, is accompanied by incorrect interpretation of the motor response to stimulation. Changing the tactics to a paresthesia search improves the situation. The use of new methods of nerve imaging (3D-CT) also decreases the frequency of failures during regional blocks of the branches of the trigeminal nerve.

  11. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
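
    The paper's embedding details are not reproduced here. As a toy illustration of how a robust bit can ride on a multistage VQ structure, the sketch below forces the parity of the stage-1 codeword index to carry the watermark bit and lets a stage-2 codebook code the residual. The random codebooks and the parity rule are assumptions for illustration only, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
CB1 = rng.normal(128, 40, (64, 16))   # stage-1 codebook: 64 codewords for 4x4 blocks
CB2 = rng.normal(0, 10, (32, 16))     # stage-2 codebook for the residual

def nearest(codebook, v, parity=None):
    """Index of the nearest codeword, optionally restricted to indices of a given parity."""
    idx = np.arange(len(codebook))
    if parity is not None:
        idx = idx[idx % 2 == parity]
    dist = np.sum((codebook[idx] - v) ** 2, axis=1)
    return int(idx[np.argmin(dist)])

def embed_block(block, bit):
    """Stage 1 carries the robust bit via index parity; stage 2 codes the residual normally."""
    v = block.reshape(-1)
    i1 = nearest(CB1, v, parity=bit)
    i2 = nearest(CB2, v - CB1[i1])
    return i1, i2

def extract_bit(i1):
    return i1 % 2

block = rng.normal(128, 30, (4, 4))
i1, i2 = embed_block(block, bit=1)
reconstructed = (CB1[i1] + CB2[i2]).reshape(4, 4)
assert extract_bit(i1) == 1
```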

  12. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with the corresponding rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
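
    The trellis construction itself is too long for a snippet, but the underlying syndrome-coding idea (of which matrix embedding is the simplest case) can be shown with the [7,4] Hamming code: flip at most one of seven cover bits so that their syndrome equals a 3-bit message. This toy uses unweighted distortion, unlike the single-letter distortion profiles handled by the paper.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j-1 is the binary expansion of j.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def embed(cover_bits, message_bits):
    """Flip at most one of 7 cover bits so that the syndrome H @ stego (mod 2) equals the message."""
    stego = cover_bits.copy()
    s = (H @ stego + message_bits) % 2              # syndrome mismatch
    if s.any():
        col = int(s[0] + 2 * s[1] + 4 * s[2])       # 1-based index of the column equal to s
        stego[col - 1] ^= 1
    return stego

def extract(stego_bits):
    return (H @ stego_bits) % 2

rng = np.random.default_rng(2)
cover = rng.integers(0, 2, 7)
message = rng.integers(0, 2, 3)
stego = embed(cover, message)
assert np.array_equal(extract(stego), message)
assert np.sum(stego != cover) <= 1                  # at most one embedding change per 3 bits
```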

  13. A Variant of the Mukai Pairing via Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Ramadoss, Ajay C.

    2012-06-01

    Let X be a smooth projective complex variety. The Hochschild homology HH•(X) of X is an important invariant of X, which is isomorphic to the Hodge cohomology of X via the Hochschild-Kostant-Rosenberg isomorphism. On HH•(X), one has the Mukai pairing constructed by Caldararu. An explicit formula for the Mukai pairing at the level of Hodge cohomology was proven by the author in an earlier work (following ideas of Markarian). This formula implies a similar explicit formula for a closely related variant of the Mukai pairing on HH•(X). The latter pairing on HH•(X) is intimately linked to the study of Fourier-Mukai transforms of complex projective varieties. We give a new method to prove a formula computing the aforementioned variant of Caldararu's Mukai pairing. Our method is based on some important results in the area of deformation quantization. In particular, we use part of the work of Kashiwara and Schapira on Deformation Quantization modules together with an algebraic index theorem of Bressler, Nest and Tsygan. Our new method explicitly shows that the "Noncommutative Riemann-Roch" implies the classical Riemann-Roch. Further, it is hoped that our method would be useful for generalization to settings involving certain singular varieties.

  14. Disorder-induced half-integer quantized conductance plateau in quantum anomalous Hall insulator-superconductor structures

    NASA Astrophysics Data System (ADS)

    Huang, Yingyi; Setiawan, F.; Sau, Jay D.

    2018-03-01

    A weak superconducting proximity effect in the vicinity of the topological transition of a quantum anomalous Hall system has been proposed as a venue to realize a topological superconductor (TSC) with chiral Majorana edge modes (CMEMs). A recent experiment [Science 357, 294 (2017), 10.1126/science.aag2792] claimed to have observed such CMEMs in the form of a half-integer quantized conductance plateau in the two-terminal transport measurement of a quantum anomalous Hall-superconductor junction. Although the presence of a superconducting proximity effect generically splits the quantum Hall transition into two phase transitions with a gapped TSC in between, in this Rapid Communication we propose that a nearly flat conductance plateau, similar to that expected from CMEMs, can also arise from the percolation of quantum Hall edges well before the onset of the TSC or at temperatures much above the TSC gap. Our Rapid Communication, therefore, suggests that, in order to confirm the TSC, it is necessary to supplement the observation of the half-quantized conductance plateau with a hard superconducting gap (which is unlikely for a disordered system) from the conductance measurements or the heat transport measurement of the transport gap. Alternatively, the half-quantized thermal conductance would also serve as a smoking-gun signature of the TSC.

  15. Stochastic quantization of conformally coupled scalar in AdS

    NASA Astrophysics Data System (ADS)

    Jatkar, Dileep P.; Oh, Jae-Hyuk

    2013-10-01

    We further explore the relation between stochastic quantization and the holographic Wilsonian renormalization group flow by studying a conformally coupled scalar in AdS_{d+1}. We establish a one-to-one mapping between the radial flow of its double trace deformation and the stochastic 2-point correlation function. This map is shown to be identical, up to a suitable field redefinition of the bulk scalar, to the original proposal in arXiv:1209.2242.

  16. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates, at video rates, the best codevector in full-search or large tree-search VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video codec can be built on a single board that permits real-time experimentation with very large codebooks.
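
    A software sketch of the tree-search idea the chip set exploits is given below: each internal node stores two test vectors, the encoder descends by comparing squared distances, and a codevector is reached after O(log N) tests instead of N. The splitting heuristic and parameters are placeholders, not the chip set's actual codebook design procedure.

```python
import numpy as np

class TSVQNode:
    """Binary tree-structured VQ: internal nodes hold two test vectors, leaves hold a codevector."""

    def __init__(self, vectors, depth=0, max_depth=6):
        self.leaf = depth == max_depth or len(vectors) <= 1
        if self.leaf:
            self.codevector = vectors.mean(axis=0)
            return
        # Crude split of the training vectors (one 2-means-style step, for brevity).
        d0 = np.linalg.norm(vectors - vectors[0], axis=1)
        d1 = np.linalg.norm(vectors - vectors[-1], axis=1)
        left, right = vectors[d0 <= d1], vectors[d0 > d1]
        if len(left) == 0 or len(right) == 0:
            self.leaf, self.codevector = True, vectors.mean(axis=0)
            return
        self.test = (left.mean(axis=0), right.mean(axis=0))
        self.children = (TSVQNode(left, depth + 1, max_depth),
                         TSVQNode(right, depth + 1, max_depth))

    def encode(self, v):
        """Descend the tree: O(depth) distance tests instead of a full codebook search."""
        if self.leaf:
            return self.codevector
        t0, t1 = self.test
        branch = 0 if np.sum((v - t0) ** 2) <= np.sum((v - t1) ** 2) else 1
        return self.children[branch].encode(v)

rng = np.random.default_rng(3)
training = rng.normal(0, 1, (1024, 16))          # stand-in for K = 16 image vectors
tree = TSVQNode(training)
approximation = tree.encode(rng.normal(0, 1, 16))
```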

  17. Quantization of the Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Kozlowski, K.; Sklyanin, E. K.; Torrielli, A.

    2017-08-01

    We propose a quantization of the Kadomtsev-Petviashvili equation on a cylinder equivalent to an infinite system of nonrelativistic one-dimensional bosons with the masses m = 1, 2, …. The Hamiltonian is Galilei-invariant and includes the split and merge terms Ψ_{m_1}^† Ψ_{m_2}^† Ψ_{m_1+m_2} and Ψ_{m_1+m_2}^† Ψ_{m_1} Ψ_{m_2} for all combinations of particles with masses m_1, m_2, and m_1 + m_2 for a special choice of coupling constants. We construct the Bethe eigenfunctions for the model and verify the consistency of the coordinate Bethe ansatz and hence the quantum integrability of the model up to the mass M = 8 sector.

  18. Improved motion compensation in 3D-CT using respiratory-correlated segment reconstruction: diagnostic and radiotherapy applications.

    PubMed

    Mori, S; Endo, M; Kohno, R; Minohara, S

    2006-09-01

    Conventional respiratory-gated CT and four-dimensional CT (4DCT) are disadvantaged by their low temporal resolution, which results in the inclusion of anatomic motion-induced artefacts. These represent a significant source of error both in radiotherapy treatment planning for the thorax and upper abdomen and in diagnostic procedures. In particular, temporal resolution and image quality are vitally important to accurate diagnosis and the minimization of planning target volume margin due to respiratory motion. To improve both temporal resolution and signal-to-noise ratio (SNR), we developed a respiratory-correlated segment reconstruction method (RS) and adapted it to the Feldkamp-Davis-Kress algorithm (FDK) with a 256 multidetector row CT (256MDCT). The 256MDCT scans approximately 100 mm in the craniocaudal direction with a 0.5 mm slice thickness in one rotation. Data acquisition for the RS-FDK relies on the assistance of a respiratory sensing system operating in cine scan mode (continuous axial scan with the table stationary). We evaluated the RS-FDK for volume accuracy and image noise in a phantom study with the 256MDCT and compared results with those for a full scan (FS-FDK), which is usually employed in conventional 4DCT and in half scan (HS-FDK). Results showed that the RS-FDK gave a more accurate volume than the others and had the same SNR as the FS-FDK. In a subsequent animal study, we demonstrated a practical sorting process for projection data which was unaffected by variations in respiratory period, and found that the RS-FDK gave the clearest visualization among the three algorithms of the margins of the liver and pulmonary vessels. In summary, the RS-FDK algorithm provides multi-phase images with higher temporal resolution and better SNR. This method should prove useful when combined with new radiotherapeutic and diagnostic techniques.

  19. SU-E-J-90: Lobar-Level Lung Ventilation Analysis Using 4DCT and Deformable Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, K; Bayouth, J; Patton, T

    2015-06-15

    Purpose: To assess regional changes in human lung ventilation and mechanics using four-dimensional computed tomography (4DCT) and deformable image registration. This work extends our prior analysis of the entire lung to a lobe-based analysis. Methods: 4DCT images acquired from 20 patients prior to radiation therapy (RT) were used for this analysis. Jacobian ventilation and motion maps were computed from the displacement field after deformable image registration between the end of expiration breathing phase and the end of inspiration breathing phase. The lobes were manually segmented on the reference phase by a medical physicist expert. The voxel-by-voxel ventilation and motion magnitude for all subjects were grouped by lobes and plotted into cumulative voxel frequency curves respectively. In addition, to eliminate the effect of different breathing efforts across subjects, we applied the inter-subject equivalent lung volume (ELV) method on a subset of the cohort and reevaluated the lobar ventilation. Results: 95% of voxels in the lung are expanding during inspiration. However, some local regions of lung tissue show far more expansion than others. The greatest expansion with respiration occurs within the lower lobes; between exhale and inhale the median expansion in lower lobes is approximately 15%, while the median expansion in upper lobes is 10%. This appears to be driven by a subset of lung tissues within the lobe that have greater expansion; twice the number of voxels in the lower lobes (20%) expand by > 30% when compared to the upper lobes (10%). Conclusion: Lung ventilation and motion show significant difference on the lobar level. There are different lobar fractions of driving voxels that contribute to the major expansion of the lung. This work was supported by NIH grant CA166703.
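
    A minimal sketch of how a Jacobian ventilation map can be obtained from a registration displacement field is shown below: with J = det(I + ∂u/∂x), the quantity J − 1 approximates the local fractional volume change. The array layout, voxel spacing, and the uniform-expansion test case are assumptions for illustration; the study's registration and segmentation pipeline is not reproduced.

```python
import numpy as np

def jacobian_ventilation(u, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the mapping x -> x + u(x), minus one.

    u: displacement field with shape (3, nz, ny, nx), components ordered (z, y, x)
    and expressed in the same units as `spacing`. J - 1 is a simple surrogate for
    the local fractional volume change (ventilation).
    """
    grads = np.empty((3, 3) + u.shape[1:])
    for i in range(3):                                   # displacement component
        for j in range(3):                               # derivative direction
            grads[i, j] = np.gradient(u[i], spacing[j], axis=j)
    F = np.eye(3)[:, :, None, None, None] + grads        # deformation gradient per voxel
    J = (F[0, 0] * (F[1, 1] * F[2, 2] - F[1, 2] * F[2, 1])
         - F[0, 1] * (F[1, 0] * F[2, 2] - F[1, 2] * F[2, 0])
         + F[0, 2] * (F[1, 0] * F[2, 1] - F[1, 1] * F[2, 0]))
    return J - 1.0

# Toy check: a uniform 10% stretch along z gives J - 1 = 0.1 everywhere.
nz, ny, nx = 8, 8, 8
u = np.zeros((3, nz, ny, nx))
u[0] = 0.1 * np.arange(nz, dtype=float)[:, None, None]   # u_z = 0.1 * z
print(np.allclose(jacobian_ventilation(u), 0.1))         # True
```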

  20. Two dimensional topological insulator in quantizing magnetic fields

    NASA Astrophysics Data System (ADS)

    Olshanetsky, E. B.; Kvon, Z. D.; Gusev, G. M.; Mikhailov, N. N.; Dvoretsky, S. A.

    2018-05-01

    The effect of a quantizing magnetic field on electron transport is investigated in a two-dimensional topological insulator (2D TI) based on an 8 nm (013) HgTe quantum well (QW). The local resistance behavior is indicative of a metal-insulator transition at B ≈ 6 T. On the whole, the experimental data agree with the theory according to which helical edge state transport in a 2D TI persists from zero up to a critical magnetic field Bc, after which a gap opens up in the 2D TI spectrum.

  1. A heat kernel proof of the index theorem for deformation quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander

    2017-11-01

    We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.

  2. Closed almost-periodic orbits in semiclassical quantization of generic polygons

    PubMed

    Biswas

    2000-05-01

    Periodic orbits are the central ingredients of modern semiclassical theories and corrections to these are generally nonclassical in origin. We show here that, for the class of generic polygonal billiards, the corrections are predominantly classical in origin owing to the contributions from closed almost-periodic (CAP) orbit families. Furthermore, CAP orbit families outnumber periodic families but have comparable weights. They are hence indispensable for semiclassical quantization.

  3. Compact universal logic gates realized using quantization of current in nanodevices.

    PubMed

    Zhang, Wancheng; Wu, Nan-Jian; Yang, Fuhua

    2007-12-12

    This paper proposes novel universal logic gates using the current quantization characteristics of nanodevices. In nanodevices like the electron waveguide (EW) and single-electron (SE) turnstile, the channel current is a staircase quantized function of its control voltage. We use this unique characteristic to compactly realize Boolean functions. First we present the concept of the periodic-threshold threshold logic gate (PTTG), and we build a compact PTTG using EW and SE turnstiles. We show that an arbitrary three-input Boolean function can be realized with a single PTTG, and an arbitrary four-input Boolean function can be realized by using two PTTGs. We then use one PTTG to build a universal programmable two-input logic gate which can be used to realize all two-input Boolean functions. We also build a programmable three-input logic gate by using one PTTG. Compared with linear threshold logic gates, with the PTTG one can build digital circuits more compactly. The proposed PTTGs are promising for future smart nanoscale digital system use.

  4. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.

  5. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (−1.96 dB) when comparing to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
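
    The 2/π low-SNR factor quoted above can be reproduced with a short Monte Carlo sketch for the simplest case of a known zero threshold: estimating a weak constant level from sign measurements costs roughly a factor 2/π in effective SNR relative to using the unquantized samples. This only illustrates the known symmetric-case result; it is not the unknown-threshold analysis or the estimators of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
s, sigma, N, trials = 0.05, 1.0, 2000, 4000          # weak signal: low-SNR regime

mse_ideal = mse_1bit = 0.0
for _ in range(trials):
    y = s + sigma * rng.standard_normal(N)
    mse_ideal += (y.mean() - s) ** 2                 # infinite-resolution estimator: sample mean
    p = np.clip(np.mean(y > 0), 1e-6, 1 - 1e-6)      # 1-bit data with a known zero threshold
    mse_1bit += (sigma * norm.ppf(p) - s) ** 2       # invert P(y > 0) = Phi(s / sigma)

print("MSE ratio (ideal / 1-bit):", mse_ideal / mse_1bit)   # approaches 2/pi ~ 0.64
print("2/pi =", 2.0 / np.pi)
```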

  6. Assessment of regional ventilation and deformation using 4D-CT imaging for healthy human lungs during tidal breathing

    PubMed Central

    Jahani, Nariman; Choi, Jiwoong; Iyer, Krishna; Hoffman, Eric A.

    2015-01-01

    This study aims to assess regional ventilation, nonlinearity, and hysteresis of human lungs during dynamic breathing via image registration of four-dimensional computed tomography (4D-CT) scans. Six healthy adult humans were studied by spiral multidetector-row CT during controlled tidal breathing as well as during total lung capacity and functional residual capacity breath holds. Static images were utilized to contrast static vs. dynamic (deep vs. tidal) breathing. A rolling-seal piston system was employed to maintain consistent tidal breathing during 4D-CT spiral image acquisition, providing required between-breath consistency for physiologically meaningful reconstructed respiratory motion. Registration-derived variables including local air volume and anisotropic deformation index (ADI, an indicator of preferential deformation in response to local force) were employed to assess regional ventilation and lung deformation. Lobar distributions of air volume change during tidal breathing were correlated with those of deep breathing (R2 ≈ 0.84). Small discrepancies between tidal and deep breathing were shown to be likely due to different distributions of air volume change in the left and the right lungs. We also demonstrated an asymmetric characteristic of flow rate between inhalation and exhalation. With ADI, we were able to quantify nonlinearity and hysteresis of lung deformation that can only be captured in dynamic images. Nonlinearity quantified by ADI is greater during inhalation, and it is stronger in the lower lobes (P < 0.05). Lung hysteresis estimated by the difference of ADI between inhalation and exhalation is more significant in the right lungs than that in the left lungs. PMID:26316512

  7. The behavior of quantization spectra as a function of signal-to-noise ratio

    NASA Technical Reports Server (NTRS)

    Flanagan, M. J.

    1991-01-01

    An expression for the spectrum of quantization error in a discrete-time system whose input is a sinusoid plus white Gaussian noise is derived. This quantization spectrum consists of two components: a white-noise floor and spurious harmonics. The dithering effect of the input Gaussian noise in both components of the spectrum is considered. Quantitative results in a discrete Fourier transform (DFT) example show the behavior of spurious harmonics as a function of the signal-to-noise ratio (SNR). These results have strong implications for digital reception and signal analysis systems. At low SNRs, spurious harmonics decay exponentially on a log-log scale, and the resulting spectrum is white. As the SNR increases, the spurious harmonics figure prominently in the output spectrum. A useful expression is given that roughly bounds the magnitude of a spurious harmonic as a function of the SNR.
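
    A short numerical sketch in the spirit of this analysis is given below: a sinusoid plus Gaussian noise is passed through a uniform quantizer and the DFT of the quantization error is examined. At low SNR the noise dithers the quantizer and the error spectrum is nearly white; at high SNR spurious harmonics of the input frequency dominate. The parameter choices are arbitrary and are not taken from the report.

```python
import numpy as np

def quantization_error_spectrum(snr_db, n=4096, f0=0.1, step=0.25, seed=0):
    """DFT magnitude of the quantization error for a sinusoid in Gaussian noise."""
    rng = np.random.default_rng(seed)
    amplitude = 1.0
    sigma = amplitude / np.sqrt(2.0 * 10.0 ** (snr_db / 10.0))   # SNR = A^2 / (2 sigma^2)
    t = np.arange(n)
    x = amplitude * np.sin(2.0 * np.pi * f0 * t) + sigma * rng.standard_normal(n)
    xq = step * np.round(x / step)                               # uniform mid-tread quantizer
    error = xq - x
    return np.abs(np.fft.rfft(error * np.hanning(n))) / n

low_snr = quantization_error_spectrum(-10.0)    # heavy dither: error spectrum is nearly white
high_snr = quantization_error_spectrum(40.0)    # little dither: spurs at harmonics of f0 appear
print(low_snr.max() / np.median(low_snr), high_snr.max() / np.median(high_snr))
```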

  8. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  9. Finite-time H∞ control for a class of discrete-time switched time-delay systems with quantized feedback

    NASA Astrophysics Data System (ADS)

    Song, Haiyu; Yu, Li; Zhang, Dan; Zhang, Wen-An

    2012-12-01

    This paper is concerned with the finite-time quantized H∞ control problem for a class of discrete-time switched time-delay systems with time-varying exogenous disturbances. By using the sector bound approach and the average dwell time method, sufficient conditions are derived for the switched system to be finite-time bounded and ensure a prescribed H∞ disturbance attenuation level, and a mode-dependent quantized state feedback controller is designed by solving an optimization problem. Two illustrative examples are provided to demonstrate the effectiveness of the proposed theoretical results.

  10. Quantized Step-up Model for Evaluation of Internship in Teaching of Prospective Science Teachers.

    ERIC Educational Resources Information Center

    Sindhu, R. S.

    2002-01-01

    Describes the quantized step-up model developed for the evaluation purposes of internship in teaching which is an analogous model of the atomic structure. Assesses prospective teachers' abilities in lesson delivery. (YDS)

  11. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.

  12. Gold nanoparticles produced in situ mediate bioelectricity and hydrogen production in a microbial fuel cell by quantized capacitance charging.

    PubMed

    Kalathil, Shafeer; Lee, Jintae; Cho, Moo Hwan

    2013-02-01

    Oppan quantized style: By adding a gold precursor at its cathode, a microbial fuel cell (MFC) is demonstrated to form gold nanoparticles that can be used to simultaneously produce bioelectricity and hydrogen. By exploiting the quantized capacitance charging effect, the gold nanoparticles mediate the production of hydrogen without requiring an external power supply, while the MFC produces a stable power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  14. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    NASA Astrophysics Data System (ADS)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

    With the internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easily accessed by everyone. However, a problem arises when someone else claims that the creation is their property or modifies part of it. This creates a need for copyright protection; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility when the watermark is inserted into a carrier image. The carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off arises between the invisibility and the robustness of the image watermarking. In the embedding process, the image watermarking has good quality for scaling factors < 0.1. The quality of the image watermarking at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency sub-bands is robust to Gaussian blur attacks, rescaling, and JPEG compression, while embedding in high-frequency sub-bands is robust to Gaussian noise.
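
    As a compact illustration of SVD-based embedding in the DCT domain (one common variant, not necessarily the exact pipeline evaluated above), the sketch below adds a scaled watermark to the singular values of the host's 2-D DCT coefficients and recovers it with stored side information. The scaling factor 0.05 is simply a value within the < 0.1 range mentioned above, not a parameter taken from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(host, watermark, alpha=0.05):
    """Add a scaled watermark to the singular values of the host's 2-D DCT coefficients."""
    C = dctn(host, norm="ortho")
    U, S, Vt = np.linalg.svd(C)
    Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark)
    marked_coeffs = U @ np.diag(Sw) @ Vt
    key = (Uw, Vtw, S)                              # side information kept for extraction
    return idctn(marked_coeffs, norm="ortho"), key

def extract(marked, key, alpha=0.05):
    Uw, Vtw, S = key
    _, S_marked, _ = np.linalg.svd(dctn(marked, norm="ortho"))
    D = Uw @ np.diag(S_marked) @ Vtw
    return (D - np.diag(S)) / alpha

rng = np.random.default_rng(5)
host = rng.uniform(0, 255, (64, 64))
watermark = rng.integers(0, 2, (64, 64)).astype(float)
marked, key = embed(host, watermark)
recovered = extract(marked, key)
print(np.corrcoef(recovered.ravel(), watermark.ravel())[0, 1])   # close to 1 without attacks
```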

  15. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  16. Light-hole quantization in the optical response of ultra-wide GaAs/Al(x)Ga(1-x)As quantum wells.

    PubMed

    Solovyev, V V; Bunakov, V A; Schmult, S; Kukushkin, I V

    2013-01-16

    Temperature-dependent reflectivity and photoluminescence spectra are studied for undoped ultra-wide 150 and 250 nm GaAs quantum wells. It is shown that spectral features previously attributed to a size quantization of the exciton motion in the z-direction coincide well with energies of quantized levels for light holes. Furthermore, optical spectra reveal very similar properties at temperatures above the exciton dissociation point.

  17. MO-A-BRD-05: Evaluation of Composed Lung Ventilation with 4DCT and Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, K; Bayouth, J; Reinhardt, J

    Purpose: Regional pulmonary function can be derived using four-dimensional computed tomography (4DCT) combined with deformable image registration. However, only the peak inhale and exhale phases have been used thus far, while lung ventilation during intermediate phases is not considered. In our previous work, we investigated the spatiotemporal heterogeneity of lung ventilation and its dependence on respiratory effort. In this study, composed ventilation is introduced using all inspiration phases and compared to direct ventilation. Both methods are evaluated against Xe-CT derived ventilation. Methods: Using an in-house tissue-volume-preserving deformable image registration, unlike the direct ventilation method, which computes from end expiration to end inspiration, Jacobian ventilation maps were computed from one inhale phase to the next and then composed from all inspiration steps. The two methods were compared in both patients prior to RT and mechanically ventilated sheep subjects. In addition, they were assessed for correlation with Xe-CT derived ventilation in the sheep subjects. Annotated lung landmarks were used to evaluate the accuracy of the original and composed deformation fields. Results: After registration, the landmark distance for the composed deformation field was always higher than that for the direct deformation field (0IN to 100IN average in humans: 1.03 vs 1.53, p=0.001, and in sheep: 0.80 vs 0.94, p=0.009), and both increased with longer phase intervals. Direct and composed ventilation maps were similar in both sheep (gamma pass rate 87.6) and human subjects (gamma pass rate 71.9), and showed a consistent pattern from ventral to dorsal when compared to Xe-CT derived ventilation. The correlation coefficient between Xe-CT and composed ventilation was slightly better than that of the direct method, but not significantly (average 0.89 vs 0.85, p=0.135). Conclusion: Stricter breathing control in the sheep subjects may explain the higher similarity between direct and composed ventilation.

  18. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
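
    For reference, the dead-zone plus uniform threshold scalar quantizer has the simple closed form level = sign(c)·floor(|c|/Δ + f) with reconstruction level·Δ, where the rounding offset f sets the width of the dead zone around zero. A minimal sketch follows; the offset value 1/6 is an assumption (a commonly quoted inter-frame default), not a value taken from the paper.

```python
import numpy as np

def deadzone_quantize(coeffs, step, offset=1.0 / 6.0):
    """Dead-zone plus uniform threshold scalar quantization of transform coefficients.

    level = sign(c) * floor(|c| / step + offset); the zero bin (dead zone) has total
    width 2 * step * (1 - offset), wider than the other bins.
    """
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step + offset)

def dequantize(levels, step):
    """Nearly uniform reconstruction at level * step."""
    return levels * step

coeffs = np.array([-7.9, -1.1, -0.4, 0.3, 0.9, 2.6, 14.2])
levels = deadzone_quantize(coeffs, step=2.0)
print(levels)                      # [-4. -0. -0.  0.  0.  1.  7.]
print(dequantize(levels, 2.0))
```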

  19. Rapid Evolution of Citrate Utilization by Escherichia coli by Direct Selection Requires citT and dctA

    PubMed Central

    Van Hofwegen, Dustin J.; Hovde, Carolyn J.

    2016-01-01

    ABSTRACT The isolation of aerobic citrate-utilizing Escherichia coli (Cit+) in long-term evolution experiments (LTEE) has been termed a rare, innovative, presumptive speciation event. We hypothesized that direct selection would rapidly yield the same class of E. coli Cit+ mutants and follow the same genetic trajectory: potentiation, actualization, and refinement. This hypothesis was tested with wild-type E. coli strain B and with K-12 and three K-12 derivatives: an E. coli ΔrpoS::kan mutant (impaired for stationary-phase survival), an E. coli ΔcitT::kan mutant (deleted for the anaerobic citrate/succinate antiporter), and an E. coli ΔdctA::kan mutant (deleted for the aerobic succinate transporter). E. coli underwent adaptation to aerobic citrate metabolism that was readily and repeatedly achieved using minimal medium supplemented with citrate (M9C), M9C with 0.005% glycerol, or M9C with 0.0025% glucose. Forty-six independent E. coli Cit+ mutants were isolated from all E. coli derivatives except the E. coli ΔcitT::kan mutant. Potentiation/actualization mutations occurred within as few as 12 generations, and refinement mutations occurred within 100 generations. Citrate utilization was confirmed using Simmons, Christensen, and LeMaster Richards citrate media and quantified by mass spectrometry. E. coli Cit+ mutants grew in clumps and in long incompletely divided chains, a phenotype that was reversible in rich media. Genomic DNA sequencing of four E. coli Cit+ mutants revealed the required sequence of mutational events leading to a refined Cit+ mutant. These events showed amplified citT and dctA loci followed by DNA rearrangements consistent with promoter capture events for citT. These mutations were equivalent to the amplification and promoter capture CitT-activating mutations identified in the LTEE. IMPORTANCE E. coli cannot use citrate aerobically. Long-term evolution experiments (LTEE) performed by Blount et al. (Z. D. Blount, J. E. Barrick, C. J. Davidson, and

  20. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations simultaneously. Moreover, these controllers not only reduce the control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
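
    The logarithmic quantizer referred to above is commonly defined by the levels ±ρⁱu₀ together with zero, which yields a sector-bounded quantization error |q(x) − x| ≤ δ|x| with δ = (1 − ρ)/(1 + ρ). The sketch below implements this standard form with placeholder parameters; it is not tied to the controller design of the paper.

```python
import numpy as np

def log_quantize(x, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels {0} and {+/- u0 * rho**i, i integer}.

    Choosing the nearest level gives a sector-bounded error
    |q(x) - x| <= delta * |x| with delta = (1 - rho) / (1 + rho).
    """
    if x == 0.0:
        return 0.0
    mag = abs(x)
    i = int(np.floor(np.log(mag / u0) / np.log(rho)))   # u0*rho**(i+1) < mag <= u0*rho**i
    lo, hi = u0 * rho ** (i + 1), u0 * rho ** i
    level = hi if (hi - mag) <= (mag - lo) else lo
    return float(np.sign(x)) * level

rho = 0.5
delta = (1 - rho) / (1 + rho)
for x in [0.8, -0.3, 2.7, 1e-3, 0.0]:
    q = log_quantize(x, rho=rho)
    assert abs(q - x) <= delta * abs(x) + 1e-12         # sector bound holds
    print(x, "->", q)
```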

  1. Quantized Faraday and Kerr rotation and axion electrodynamics of a 3D topological insulator

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Salehi, M.; Koirala, N.; Moon, J.; Oh, S.; Armitage, N. P.

    2016-12-01

    Topological insulators have been proposed to be best characterized as bulk magnetoelectric materials that show response functions quantized in terms of fundamental physical constants. Here, we lower the chemical potential of three-dimensional (3D) Bi2Se3 films to ~30 meV above the Dirac point and probe their low-energy electrodynamic response in the presence of magnetic fields with high-precision time-domain terahertz polarimetry. For fields higher than 5 tesla, we observed quantized Faraday and Kerr rotations, whereas the dc transport is still semiclassical. A nontrivial Berry’s phase offset to these values gives evidence for axion electrodynamics and the topological magnetoelectric effect. The time structure used in these measurements allows a direct measure of the fine-structure constant based on a topological invariant of a solid-state system.

  2. Comparison of 4-Dimensional Computed Tomography Ventilation With Nuclear Medicine Ventilation-Perfusion Imaging: A Clinical Validation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Yevgeniy, E-mail: yevgeniy.vinogradskiy@ucdenver.edu; Koo, Phillip J.; Castillo, Richard

    Purpose: Four-dimensional computed tomography (4DCT) ventilation imaging provides lung function information for lung cancer patients undergoing radiation therapy. Before 4DCT-ventilation can be implemented clinically it needs to be validated against an established imaging modality. The purpose of this work was to compare 4DCT-ventilation to nuclear medicine ventilation, using clinically relevant global metrics and radiologist observations. Methods and Materials: Fifteen lung cancer patients with 16 sets of 4DCT and nuclear medicine ventilation-perfusion (VQ) images were used for the study. The VQ-ventilation images were acquired in planar mode using Tc-99m-labeled diethylenetriamine-pentaacetic acid aerosol inhalation. 4DCT data, spatial registration, and a density-change-based model were used to compute a 4DCT-based ventilation map for each patient. The percent ventilation was calculated in each lung and each lung third for both the 4DCT and VQ-ventilation scans. A nuclear medicine radiologist assessed the VQ and 4DCT scans for the presence of ventilation defects. The VQ and 4DCT-based images were compared using regional percent ventilation and radiologist clinical observations. Results: Individual patient examples demonstrate good qualitative agreement between the 4DCT and VQ-ventilation scans. The correlation coefficients were 0.68 and 0.45, using the percent ventilation in each individual lung and lung third, respectively. Using radiologist-noted presence of ventilation defects and receiver operating characteristic analysis, the sensitivity, specificity, and accuracy of the 4DCT-ventilation were 90%, 64%, and 81%, respectively. Conclusions: The current work compared 4DCT with VQ-based ventilation using clinically relevant global metrics and radiologist observations. We found good agreement between the radiologist's assessment of the 4DCT and VQ-ventilation images as well as the percent ventilation in each lung. The agreement lessened when the data were

  3. Hadron Spectra, Decays and Scattering Properties Within Basis Light Front Quantization

    NASA Astrophysics Data System (ADS)

    Vary, James P.; Adhikari, Lekha; Chen, Guangyao; Jia, Shaoyang; Li, Meijian; Li, Yang; Maris, Pieter; Qian, Wenyang; Spence, John R.; Tang, Shuo; Tuchin, Kirill; Yu, Anji; Zhao, Xingbo

    2018-07-01

    We survey recent progress in calculating properties of the electron and hadrons within the basis light front quantization (BLFQ) approach. We include applications to electromagnetic and strong scattering processes in relativistic heavy ion collisions. We present an initial investigation into the glueball states by applying BLFQ with multigluon sectors, introducing future research possibilities on multi-quark and multi-gluon systems.

  4. Nucleation of Quantized Vortices from Rotating Superfluid Drops

    NASA Technical Reports Server (NTRS)

    Donnelly, Russell J.

    2001-01-01

    The long-term goal of this project is to study the nucleation of quantized vortices in helium II by investigating the behavior of rotating droplets of helium II in a reduced gravity environment. The objective of this ground-based research grant was to develop new experimental techniques to aid in accomplishing that goal. The development of an electrostatic levitator for superfluid helium, described below, and the successful suspension of charged superfluid drops in modest electric fields was the primary focus of this work. Other key technologies of general low temperature use were developed and are also discussed.

  5. Quantum mechanics, gravity and modified quantization relations.

    PubMed

    Calmet, Xavier

    2015-08-06

    In this paper, we investigate a possible energy scale dependence of the quantization rules and, in particular, from a phenomenological point of view, an energy scale dependence of an effective ℏ (reduced Planck's constant). We set a bound on the deviation of the value of ℏ at the muon scale from its usual value using measurements of the anomalous magnetic moment of the muon. Assuming that inflation has taken place, we can conclude that nature is described by a quantum theory at least up to an energy scale of about 10¹⁶ GeV. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures

    NASA Astrophysics Data System (ADS)

    Bishop, Kevin P.; Roy, Pierre-Nicholas

    2018-03-01

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.

  7. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures.

    PubMed

    Bishop, Kevin P; Roy, Pierre-Nicholas

    2018-03-14

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.

  8. Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure similar to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with the ones found in the literature using the Dirac method.

  9. Particle localization, spinor two-valuedness, and Fermi quantization of tensor systems

    NASA Technical Reports Server (NTRS)

    Reifler, Frank; Morris, Randall

    1994-01-01

    Recent studies of particle localization show that square-integrable positive-energy bispinor fields in a Minkowski space-time cannot be physically distinguished from constrained tensor fields. In this paper we generalize this result by characterizing all classical tensor systems that admit Fermi quantization as those having unitary Lie-Poisson brackets. Examples include Euler's tensor equation for a rigid body and Dirac's equation in tensor form.

  10. A short essay on quantum black holes and underlying noncommutative quantized space-time

    NASA Astrophysics Data System (ADS)

    Tanaka, Sho

    2017-01-01

    We emphasize the importance of noncommutative geometry or Lorentz-covariant quantized space-time towards the ultimate theory of quantum gravity and Planck scale physics. We focus our attention on the statistical and substantial understanding of the Bekenstein-Hawking area-entropy law of black holes in terms of the kinematical holographic relation (KHR). KHR manifestly holds in Yang's quantized space-time as the result of kinematical reduction of spatial degrees of freedom caused by its own nature of noncommutative geometry, and plays an important role in our approach without any recourse to the familiar hypothesis, the so-called holographic principle. In the present paper, we find a unified form of KHR applicable to the whole region ranging from macroscopic to microscopic scales in spatial dimension d = 3. We notice a possibility of nontrivial modification of the area-entropy law of black holes which becomes most remarkable in the extremely microscopic system close to the Planck scale.

  11. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    NASA Astrophysics Data System (ADS)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture which is composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve additional N-stage spectral compression, so that single-stage soliton self-frequency shift (SSFS) and (2N − 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  12. There are many ways to spin a photon: Half-quantization of a total optical angular momentum

    PubMed Central

    Ballantine, Kyle E.; Donegan, John F.; Eastham, Paul R.

    2016-01-01

    The angular momentum of light plays an important role in many areas, from optical trapping to quantum information. In the usual three-dimensional setting, the angular momentum quantum numbers of the photon are integers, in units of the Planck constant ħ. We show that, in reduced dimensions, photons can have a half-integer total angular momentum. We identify a new form of total angular momentum, carried by beams of light, comprising an unequal mixture of spin and orbital contributions. We demonstrate the half-integer quantization of this total angular momentum using noise measurements. We conclude that for light, as is known for electrons, reduced dimensionality allows new forms of quantization. PMID:28861467

  13. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
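
    The following is a minimal, hypothetical sketch of the hybrid quantization idea summarized above: wavelet detail coefficients are split by a shrinkage threshold into an "edge" part, quantized with a fine scalar quantizer, and a "texture" part, quantized with a low-rate vector quantizer (k-means stands in here for a trained VQ codebook). The threshold, step size, codebook size, and 2x2 block shape are illustrative assumptions rather than values from the paper, and the local-mean component and HVS bit-allocation mask are omitted.

    ```python
    # Sketch: split one wavelet detail subband into "edge" and "texture" parts by
    # a shrinkage threshold, then quantize edges with a fine scalar quantizer and
    # texture with a coarse vector quantizer (k-means as a stand-in codebook).
    # All numeric settings below are illustrative assumptions.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    def scalar_quantize(x, step):
        """Uniform scalar quantizer (the high-rate path for edge coefficients)."""
        return np.round(x / step) * step

    def quantize_subband(coeffs, edge_thresh=10.0, edge_step=1.0, n_codewords=16):
        """Return a quantized reconstruction of one 2-D detail subband."""
        coeffs = np.asarray(coeffs, dtype=float)
        edge_mask = np.abs(coeffs) >= edge_thresh          # large coefficients ~ edges
        recon = np.zeros_like(coeffs)
        recon[edge_mask] = scalar_quantize(coeffs[edge_mask], edge_step)

        # Texture path: vector-quantize 2x2 blocks of the remaining coefficients.
        texture = np.where(edge_mask, 0.0, coeffs)
        h, w = (texture.shape[0] // 2) * 2, (texture.shape[1] // 2) * 2
        blocks = texture[:h, :w].reshape(h // 2, 2, w // 2, 2)
        vectors = blocks.transpose(0, 2, 1, 3).reshape(-1, 4)
        codebook, labels = kmeans2(vectors, n_codewords, minit='++')
        vq = codebook[labels].reshape(h // 2, w // 2, 2, 2)
        vq = vq.transpose(0, 2, 1, 3).reshape(h, w)
        recon[:h, :w] = np.where(edge_mask[:h, :w], recon[:h, :w], vq)
        return recon
    ```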

  14. Nonperturbative quantization of the electroweak model's electrodynamic sector

    NASA Astrophysics Data System (ADS)

    Fry, M. P.

    2015-04-01

    Consider the Euclidean functional integral representation of any physical process in the electroweak model. Integrating out the fermion degrees of freedom introduces 24 fermion determinants. These multiply the Gaussian functional measures of the Maxwell, Z, W, and Higgs fields to give an effective functional measure. Suppose the functional integral over the Maxwell field is attempted first. This paper is concerned with the large-amplitude behavior of the Maxwell effective measure. It is assumed that the large-amplitude variation of this measure is insensitive to the presence of the Z, W, and H fields; they are assumed to be a subdominant perturbation of the large-amplitude Maxwell sector. Accordingly, we need only examine the large-amplitude variation of a single QED fermion determinant. To facilitate this, the Schwinger proper-time representation of this determinant is decomposed into a sum of three terms. The advantage of this is that the separate terms can be nonperturbatively estimated for a measurable class of large-amplitude random fields in four dimensions. It is found that the QED fermion determinant grows faster than exp[c e^2 ∫ d^4x F_{μν}^2], c > 0, in the absence of zero-mode supporting random background potentials. This raises doubt about whether the QED fermion determinant is integrable with any Gaussian measure whose support does not include zero-mode supporting potentials. Including zero-mode supporting background potentials can result in a decaying exponential growth of the fermion determinant. This is prima facie evidence that Maxwellian zero modes are necessary for the nonperturbative quantization of QED and, by implication, for the nonperturbative quantization of the electroweak model.

  15. Optimization of Trade-offs in Error-free Image Transmission

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.

    1989-05-01

    The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p, while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
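
    A small numerical sketch of the two-pass idea described above: a coarsely quantized block-DCT approximation is sent first, followed by the exact integer residual, so the receiver can recover the original losslessly. The 8x8 block size, quantization step, and use of SciPy's DCT routines are assumptions for illustration and do not reproduce the ISO N800 scheme or its entropy coding.

    ```python
    # Sketch of progressive, error-free delivery: send a coarsely quantized
    # block-DCT approximation first, then the integer residual that restores
    # the original exactly. Block size and step are illustrative choices.
    import numpy as np
    from scipy.fft import dctn, idctn

    def blockwise(img, bs=8):
        h, w = img.shape
        return img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)

    def unblock(blocks):
        n1, n2, bs, _ = blocks.shape
        return blocks.swapaxes(1, 2).reshape(n1 * bs, n2 * bs)

    def progressive_encode(img, bs=8, step=16.0):
        """Return (quantized DCT coefficients, exact integer residual)."""
        blocks = blockwise(img.astype(float), bs)
        coeffs = dctn(blocks, type=2, norm='ortho', axes=(-2, -1))
        q = np.round(coeffs / step).astype(np.int32)           # coarse first pass

        # Receiver-side reconstruction of the first pass.
        approx = idctn(q * step, type=2, norm='ortho', axes=(-2, -1))
        approx = np.rint(unblock(approx)).astype(np.int32)

        residual = img.astype(np.int32) - approx               # second, lossless pass
        return q, residual

    def progressive_decode(q, residual, bs=8, step=16.0):
        approx = idctn(q * step, type=2, norm='ortho', axes=(-2, -1))
        return np.rint(unblock(approx)).astype(np.int32) + residual

    # Example: a synthetic 12-bit image patch survives the round trip exactly.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 4096, size=(64, 64), dtype=np.int32)
    q, res = progressive_encode(img)
    assert np.array_equal(progressive_decode(q, res), img)
    ```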

  16. Gauged BPS baby Skyrmions with quantized magnetic flux

    NASA Astrophysics Data System (ADS)

    Adam, C.; Wereszczynski, A.

    2017-06-01

    A new type of gauged BPS baby Skyrme model is presented, where the derivative term is just the Schroers current (i.e., the gauge-invariant and conserved version of the topological current) squared. This class of models has a topological bound saturated for solutions of the pertinent Bogomolnyi equations supplemented by a so-called superpotential equation. In contrast to the gauged BPS baby Skyrme models considered previously, the superpotential equation is linear and, hence, completely solvable. Furthermore, the magnetic flux is quantized in units of 2π, which in principle allows this theory to be defined on a compact manifold without boundary, unlike all gauged baby Skyrme models considered so far.

  17. Quantized conductance operation near a single-atom point contact in a polymer-based atomic switch

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Muruganathan, Manoharan; Tsuruoka, Tohru; Mizuta, Hiroshi; Aono, Masakazu

    2017-06-01

    Highly controlled conductance quantization is achieved near a single-atom point contact in a redox-based atomic switch device, in which a poly(ethylene oxide) (PEO) film is sandwiched between Ag and Pt electrodes. Current-voltage measurements revealed reproducible quantized conductance of ~1 G_0 for more than 10^2 continuous voltage sweep cycles under a specific condition, indicating the formation of a well-defined single-atom point contact of Ag in the PEO matrix. The device exhibited a conductance state distribution centered at 1 G_0, with distinct half-integer multiples of G_0 and small fractional variations. First-principles density functional theory simulations showed that the experimental observations could be explained by the existence of a tunneling gap and the structural rearrangement of an atomic point contact.

  18. Evaluation of Fractional Regional Ventilation Using 4D-CT and Effects of Breathing Maneuvers on Ventilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mistry, Nilesh N., E-mail: nmistry@som.umaryland.edu; Diwanji, Tejan; Shi, Xiutao

    2013-11-15

    Purpose: Current implementations of methods based on Hounsfield units to evaluate regional lung ventilation do not directly incorporate tissue-based mass changes that occur over the respiratory cycle. To overcome this, we developed a 4-dimensional computed tomography (4D-CT)-based technique to evaluate fractional regional ventilation (FRV) that uses an individualized ratio of tidal volume to end-expiratory lung volume for each voxel. We further evaluated the effect of different breathing maneuvers on regional ventilation. The results from this work will help elucidate the relationship between global and regional lung function. Methods and Materials: Eight patients underwent 3 sets of 4D-CT scans during 1 session using free-breathing, audiovisual guidance, and active breathing control. FRV was estimated using a density-based algorithm with mass correction. Internal validation between global and regional ventilation was performed by use of the imaging data collected during the use of active breathing control. The impact of breathing maneuvers on FRV was evaluated by comparing the tidal volume from the 3 breathing methods. Results: Internal validation through comparison between the global and regional changes in ventilation revealed a strong linear correlation (slope of 1.01, R^2 of 0.97) between the measured global lung volume and the regional lung volume calculated by use of the “mass corrected” FRV. A linear relationship was established between the tidal volume measured with the automated breathing control system and FRV based on 4D-CT imaging. Consistently larger breathing volumes were observed when coached breathing techniques were used. Conclusions: The technique presented improves density-based evaluation of lung ventilation and establishes a link between global and regional lung ventilation volumes. Furthermore, the results obtained are comparable with those of other techniques of functional evaluation such as spirometry and hyperpolarized
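
    As a rough illustration of a density-based ventilation estimate of this kind, the sketch below applies the widely used Hounsfield-unit formula for specific ventilation (change in air volume relative to end-expiratory air volume under conserved tissue mass) to spatially aligned inspiration/expiration CT volumes. This is a generic published estimate, not the authors' mass-corrected FRV algorithm; the inputs are assumed to be deformably registered lung volumes in HU.

    ```python
    # Hedged sketch: per-voxel fractional ventilation from registered end-inspiration
    # (hu_in) and end-expiration (hu_ex) CT volumes in Hounsfield units, assuming
    # tissue mass is conserved so that air-volume change can be inferred from density.
    import numpy as np

    def fractional_ventilation(hu_in, hu_ex, lung_mask):
        """Return (air volume change) / (end-expiratory air volume) per voxel."""
        # Clip toward the physical air/tissue range so no denominator can reach zero.
        hu_in = np.clip(np.asarray(hu_in, dtype=float), -999.0, -1.0)
        hu_ex = np.clip(np.asarray(hu_ex, dtype=float), -999.0, -1.0)
        frv = 1000.0 * (hu_ex - hu_in) / (-hu_ex * (1000.0 + hu_in))
        return np.where(lung_mask, frv, np.nan)
    ```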

  19. A quantized microwave quadrupole insulator with topologically protected corner states

    NASA Astrophysics Data System (ADS)

    Peterson, Christopher W.; Benalcazar, Wladimir A.; Hughes, Taylor L.; Bahl, Gaurav

    2018-03-01

    The theory of electric polarization in crystals defines the dipole moment of an insulator in terms of a Berry phase (geometric phase) associated with its electronic ground state. This concept not only solves the long-standing puzzle of how to calculate dipole moments in crystals, but also explains topological band structures in insulators and superconductors, including the quantum anomalous Hall insulator and the quantum spin Hall insulator, as well as quantized adiabatic pumping processes. A recent theoretical study has extended the Berry phase framework to also account for higher electric multipole moments, revealing the existence of higher-order topological phases that have not previously been observed. Here we demonstrate experimentally a member of this predicted class of materials—a quantized quadrupole topological insulator—produced using a gigahertz-frequency reconfigurable microwave circuit. We confirm the non-trivial topological phase using spectroscopic measurements and by identifying corner states that result from the bulk topology. In addition, we test the critical prediction that these corner states are protected by the topology of the bulk, and are not due to surface artefacts, by deforming the edges of the crystal lattice from the topological to the trivial regime. Our results provide conclusive evidence of a unique form of robustness against disorder and deformation, which is characteristic of higher-order topological insulators.

  20. Tomlinson-Harashima Precoding for Multiuser MIMO Systems With Quantized CSI Feedback and User Scheduling

    NASA Astrophysics Data System (ADS)

    Sun, Liang; McKay, Matthew R.

    2014-08-01

    This paper studies the sum rate performance of a low-complexity quantized CSI-based Tomlinson-Harashima (TH) precoding scheme for downlink multiuser MIMO transmission, employing greedy user selection. The asymptotic distribution of the output signal-to-interference-plus-noise ratio of each selected user and the asymptotic sum rate as the number of users K grows large are derived by using extreme value theory. For fixed finite signal-to-noise ratios and a finite number of transmit antennas n_T, we prove that as K grows large, the proposed approach can achieve the optimal sum rate scaling of the MIMO broadcast channel. We also prove that, if we ignore the precoding loss, the average sum rate of this approach converges to the average sum capacity of the MIMO broadcast channel. Our results provide insights into the effect of multiuser interference caused by quantized CSI on the multiuser diversity gain.

  1. Topology, edge states, and zero-energy states of ultracold atoms in one-dimensional optical superlattices with alternating on-site potentials or hopping coefficients

    NASA Astrophysics Data System (ADS)

    He, Yan; Wright, Kevin; Kouachi, Said; Chien, Chih-Chun

    2018-02-01

    One-dimensional superlattices with periodic spatial modulations of on-site potentials or tunneling coefficients can exhibit a variety of properties associated with topology or symmetry. Recent developments in ring-shaped optical lattices allow a systematic study of those properties in superlattices with or without boundaries. While superlattices with additional modulating parameters are shown to have quantized topological invariants in the augmented parameter space, we also find localized or zero-energy states associated with symmetries of the Hamiltonians. Probing those states in ultracold atoms is possible by utilizing recently proposed methods analyzing particle depletion or the local density of states. Moreover, we summarize feasible realizations of configurable optical superlattices using currently available techniques.

  2. Torus as phase space: Weyl quantization, dequantization, and Wigner formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ligabò, Marilena, E-mail: marilena.ligabo@uniba.it

    2016-08-15

    The Weyl quantization of classical observables on the torus (as phase space) without regularity assumptions is explicitly computed. The equivalence class of symbols yielding the same Weyl operator is characterized. The Heisenberg equation for the dynamics of general quantum observables is written through the Moyal brackets on the torus and the support of the Wigner transform is characterized. Finally, a dequantization procedure is introduced that applies, for instance, to the Pauli matrices. As a result we obtain the corresponding classical symbols.

  3. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
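
    The sketch below illustrates the generic block adaptive quantization idea (a per-block scale plus a fixed low-bit quantizer), not the ASIC design or the ASAR-specific mode logic. The 2-bit Lloyd-Max thresholds and levels are the standard values for a unit-variance Gaussian; the block length and the use of the standard deviation as the block statistic are illustrative assumptions.

    ```python
    # Minimal sketch of block adaptive quantization (BAQ) for zero-mean SAR
    # samples: estimate a scale per block, then apply a fixed 2-bit Lloyd-Max
    # quantizer designed for a unit-variance Gaussian.
    import numpy as np

    # 2-bit Lloyd-Max quantizer for N(0, 1): decision thresholds and output levels.
    THRESHOLDS = np.array([-0.9816, 0.0, 0.9816])
    LEVELS = np.array([-1.510, -0.4528, 0.4528, 1.510])

    def baq_encode(samples, block_len=128):
        """Return (2-bit codes, per-block scales) for a 1-D array of raw samples."""
        n_blocks = len(samples) // block_len
        blocks = samples[:n_blocks * block_len].reshape(n_blocks, block_len)
        scales = blocks.std(axis=1) + 1e-12               # transmitted as side information
        normalized = blocks / scales[:, None]
        codes = np.digitize(normalized, THRESHOLDS).astype(np.uint8)   # values 0..3
        return codes, scales

    def baq_decode(codes, scales):
        """Reconstruct samples from 2-bit codes and per-block scales."""
        return LEVELS[codes] * scales[:, None]

    # Example: raw Gaussian-like echo samples compressed to 2 bits plus side info.
    rng = np.random.default_rng(1)
    raw = rng.normal(scale=200.0, size=4096)
    codes, scales = baq_encode(raw)
    recon = baq_decode(codes, scales).ravel()
    ```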

  4. Dosimetric feasibility of 4DCT-ventilation imaging guided proton therapy for locally advanced non-small-cell lung cancer.

    PubMed

    Huang, Qijie; Jabbour, Salma K; Xiao, Zhiyan; Yue, Ning; Wang, Xiao; Cao, Hongbin; Kuang, Yu; Zhang, Yin; Nie, Ke

    2018-04-25

    The principal aim of this study is to incorporate 4DCT ventilation imaging into functional treatment planning that preserves high-functioning lung with both double scattering and scanning beam techniques in proton therapy. Eight patients with locally advanced non-small-cell lung cancer were included in this study. Deformable image registration was performed for each patient on their planning 4DCTs, and the resultant displacement vector field with Jacobian analysis was used to identify the high-, medium- and low-functional lung regions. Five plans were designed for each patient: a regular photon IMRT plan vs. anatomic proton plans without consideration of functional ventilation information using double scattering proton therapy (DSPT) and intensity modulated proton therapy (IMPT) vs. functional proton plans with avoidance of high-functional lung using both DSPT and IMPT. Dosimetric parameters were compared in terms of tumor coverage, plan heterogeneity, and avoidance of normal tissues. Our results showed that both DSPT and IMPT plans gave a superior dose advantage over photon IMRT in sparing low-dose regions of the total lung in terms of V5 (volume receiving 5 Gy). The functional DSPT showed only marginal benefit in sparing high-functioning lung in terms of V5 or V20 (volume receiving 20 Gy) compared to anatomical plans. Yet functional planning with IMPT delivery can further reduce the low dose to high-functioning lung without degrading the PTV dosimetric coverage, compared to anatomical proton planning. Although the doses to some critical organs might increase during functional planning, the necessary constraints were all met. Incorporating 4DCT ventilation imaging into functional proton therapy is feasible. Functional proton plans, with intensity modulated proton delivery, are effective in further preserving high-functioning lung regions without degrading the PTV coverage.
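
    A hedged sketch of the Jacobian analysis step mentioned above: the determinant of I + ∇u computed from a displacement vector field gives the local volume change of each voxel, and that value minus one serves as a ventilation surrogate that can be split into low/medium/high-function regions. The (z, y, x) array layout, voxel spacing, and tercile split are assumptions for illustration, not the authors' protocol.

    ```python
    # Sketch: per-voxel Jacobian determinant of a displacement vector field (DVF)
    # as a regional ventilation surrogate, followed by a simple tercile labelling.
    import numpy as np

    def jacobian_ventilation(dvf, spacing=(1.0, 1.0, 1.0)):
        """dvf: array of shape (3, nz, ny, nx), displacement in mm, (z, y, x) order.

        Returns the per-voxel Jacobian determinant of the deformation minus one.
        """
        dvf = np.asarray(dvf, dtype=float)
        grads = np.empty((3, 3) + dvf.shape[1:])
        for i in range(3):                  # displacement component
            for j in range(3):              # direction of differentiation
                grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
        jac = grads + np.eye(3)[:, :, None, None, None]      # F = I + grad(u)
        det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
        return det - 1.0

    def split_function_regions(vent, lung_mask):
        """Label lung voxels as low (0), medium (1), or high (2) function by terciles."""
        lo, hi = np.percentile(vent[lung_mask], [33.3, 66.7])
        labels = np.digitize(vent, [lo, hi])
        return np.where(lung_mask, labels, -1)
    ```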

  5. Investigating Students' Mental Models about the Quantization of Light, Energy, and Angular Momentum

    ERIC Educational Resources Information Center

    Didis, Nilüfer; Eryilmaz, Ali; Erkoç, Sakir

    2014-01-01

    This paper is the first part of a multiphase study examining students' mental models about the quantization of physical observables--light, energy, and angular momentum. Thirty-one second-year physics and physics education college students who were taking a modern physics course participated in the study. The qualitative analysis of data revealed…

  6. Quantized thermal transport in single-atom junctions

    NASA Astrophysics Data System (ADS)

    Cui, Longji; Jeong, Wonho; Hur, Sunghoon; Matt, Manuel; Klöckner, Jan C.; Pauly, Fabian; Nielaba, Peter; Cuevas, Juan Carlos; Meyhofer, Edgar; Reddy, Pramod

    2017-03-01

    Thermal transport in individual atomic junctions and chains is of great fundamental interest because of the distinctive quantum effects expected to arise in them. By using novel, custom-fabricated, picowatt-resolution calorimetric scanning probes, we measured the thermal conductance of gold and platinum metallic wires down to single-atom junctions. Our work reveals that the thermal conductance of gold single-atom junctions is quantized at room temperature and shows that the Wiedemann-Franz law relating thermal and electrical conductance is satisfied even in single-atom contacts. Furthermore, we quantitatively explain our experimental results within the Landauer framework for quantum thermal transport. The experimental techniques reported here will enable thermal transport studies in atomic and molecular chains, which will be key to investigating numerous fundamental issues that thus far have remained experimentally inaccessible.

  7. Quantization of a theory of 2D dilaton gravity

    NASA Astrophysics Data System (ADS)

    de Alwis, S. P.

    1992-09-01

    We discuss the quantization of the 2D gravity theory of Callan, Giddings, Harvey, and Strominger (CGHS), following the procedure of David and of Distler and Kawai. We find that the physics depends crucially on whether the number of matter fields is greater than or less than 24. In the latter case the singularity pointed out by several authors is absent, but the physical interpretation is unclear. In the former case (the one studied by CGHS), the quantum theory which gives CGHS in the linear dilaton semi-classical limit is different from that which gives CGHS in the extreme Liouville regime.

  8. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned, and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
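
    A simplified sketch of the quantized pair-distance version of the Debye summation described above: distances are histogrammed per atom-type pair, and the scattering intensity is accumulated over bin centres weighted by bin counts, so each form factor is applied once per pair of types rather than once per atom pair. Function names, the bin width, and the callable form-factor interface are illustrative assumptions; the paper's vectorization details and fibril-specific bookkeeping are not reproduced.

    ```python
    # Sketch: Debye-formula intensity I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij),
    # evaluated from per-atom-type-pair histograms of pair distances instead of
    # looping over every individual pair.
    import numpy as np
    from itertools import combinations_with_replacement

    def debye_from_quantized_pairs(coords, types, form_factors, q, bin_width=0.05):
        """coords: (N, 3) positions; types: length-N labels; form_factors: dict of
        type -> f(q) callables; q: 1-D array of scattering vector magnitudes."""
        coords, types, q = np.asarray(coords, float), np.asarray(types), np.asarray(q, float)
        intensity = np.zeros_like(q)

        for ta, tb in combinations_with_replacement(sorted(set(types)), 2):
            fa, fb = form_factors[ta](q), form_factors[tb](q)
            ra, rb = coords[types == ta], coords[types == tb]
            d = np.linalg.norm(ra[:, None, :] - rb[None, :, :], axis=-1).ravel()
            if ta == tb:
                intensity += fa * fb * len(ra)     # i = j self terms (sinc -> 1)
                d = d[d > 0]                       # remaining i != j pairs appear twice
            if d.size == 0:
                continue
            counts, edges = np.histogram(d, bins=np.arange(0.0, d.max() + 2 * bin_width, bin_width))
            centres = 0.5 * (edges[:-1] + edges[1:])
            keep = counts > 0
            weight = 1.0 if ta == tb else 2.0      # count both a-b and b-a orderings
            # sin(q r)/(q r) for every (q, bin-centre) combination, weighted by counts.
            sinc_qr = np.sinc(np.outer(q, centres[keep]) / np.pi)
            intensity += weight * fa * fb * (counts[keep] * sinc_qr).sum(axis=1)
        return intensity
    ```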

  9. Unique Fock quantization of a massive fermion field in a cosmological scenario

    NASA Astrophysics Data System (ADS)

    Cortez, Jerónimo; Elizaga Navascués, Beatriz; Martín-Benito, Mercedes; Mena Marugán, Guillermo A.; Velhinho, José M.

    2016-04-01

    It is well known that the Fock quantization of field theories in general spacetimes suffers from an infinite ambiguity, owing not only to the inequivalent possibilities in the selection of a representation of the canonical commutation or anticommutation relations, but also to the freedom in the choice of variables used to describe the field among all those related by linear time-dependent transformations, including dependence on the background through functions of it. In this work we remove this ambiguity (up to unitary equivalence) in the case of a massive free Dirac field propagating in a spacetime with homogeneous and isotropic spatial sections of spherical topology. Two physically reasonable conditions are imposed in order to arrive at this result: (a) the invariance of the vacuum under the spatial isometries of the background, and (b) the unitary implementability of the dynamical evolution that dictates the Dirac equation. We characterize the Fock quantizations with a nontrivial fermion dynamics that satisfy these two conditions. Then, we provide a complete proof of the unitary equivalence of the representations in this class under very mild requirements on the time variation of the background, once a criterion to discern between particles and antiparticles has been set.

  10. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned, and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. This algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.

  12. Pulmonary Ventilation Imaging Based on 4-Dimensional Computed Tomography: Comparison With Pulmonary Function Tests and SPECT Ventilation Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Tokihiro, E-mail: toyamamoto@ucdavis.edu; Department of Radiation Oncology, University of California Davis School of Medicine, Sacramento, California; Kabus, Sven

    Purpose: 4-dimensional computed tomography (4D-CT)-based pulmonary ventilation imaging is an emerging functional imaging modality. The purpose of this study was to investigate the physiological significance of 4D-CT ventilation imaging by comparison with pulmonary function test (PFT) measurements and single-photon emission CT (SPECT) ventilation images, which are the clinical references for global and regional lung function, respectively. Methods and Materials: In an institutional review board–approved prospective clinical trial, 4D-CT imaging and PFT and/or SPECT ventilation imaging were performed in thoracic cancer patients. Regional ventilation (V_4DCT) was calculated by deformable image registration of 4D-CT images and quantitative analysis of regional volume change. V_4DCT defect parameters were compared with the PFT measurements (forced expiratory volume in 1 second (FEV_1; % predicted) and FEV_1/forced vital capacity (FVC; %)). V_4DCT was also compared with SPECT ventilation (V_SPECT) (1) to test whether V_4DCT in V_SPECT defect regions is significantly lower than in nondefect regions by using the 2-tailed t test; (2) to quantify the spatial overlap between V_4DCT and V_SPECT defect regions with the Dice similarity coefficient (DSC); and (3) to test ventral-to-dorsal gradients by using the 2-tailed t test. Results: Of 21 patients enrolled in the study, 18 patients for whom 4D-CT and either PFT or SPECT were acquired were included in the analysis. V_4DCT defect parameters were found to have significant, moderate correlations with PFT measurements. For example, V_4DCT^HU defect volume increased significantly with decreasing FEV_1/FVC (R=−0.65, P<.01). V_4DCT in V_SPECT defect regions was significantly lower than in nondefect regions (mean V_4DCT^HU 0.049 vs 0.076, P<.01). The average DSCs for the spatial overlap with SPECT ventilation defect regions were only
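
    For reference, the Dice similarity coefficient used above to score the spatial overlap of defect regions can be computed directly from two co-registered binary masks; the short sketch below assumes boolean NumPy arrays of equal shape.

    ```python
    # Small sketch of the Dice similarity coefficient (DSC) between two defect masks.
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """DSC = 2 |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none."""
        mask_a = np.asarray(mask_a, dtype=bool)
        mask_b = np.asarray(mask_b, dtype=bool)
        denom = mask_a.sum() + mask_b.sum()
        if denom == 0:
            return 1.0                      # both masks empty: define as perfect agreement
        return 2.0 * np.logical_and(mask_a, mask_b).sum() / denom
    ```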

  13. Minimal Conductance Quantization in a Normal-Metal/Unconventional-Superconductor Junction

    NASA Astrophysics Data System (ADS)

    Ikegaya, Satoshi; Asano, Yasuhiro

    2018-04-01

    We discuss the minimum value of the zero-bias differential conductance in a normal-metal/unconventional-superconductor junction. A numerical simulation demonstrates that the zero-bias conductance is quantized at (4e^2/h) N_ZES in the limit of strong impurity scattering in the normal metal. The integer N_ZES represents the number of perfect transmission channels through the junction. By focusing on the chiral symmetry of the Hamiltonian, we prove the existence of N_ZES-fold degenerate resonant states in the dirty normal segment.

  14. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
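
    A hedged sketch of the alternating minimization described above, assuming a plain NumPy implementation: codes are assigned as the sign of the rotated PCA embedding, and the rotation is then updated by solving the orthogonal Procrustes problem with an SVD. The bit count, iteration count, and PCA-via-SVD shortcut are illustrative choices, not the authors' released code, and the CCA and kernel variants are omitted.

    ```python
    # Sketch of iterative quantization (ITQ): alternate between binary code
    # assignment B = sign(V R) and an orthogonal Procrustes update of R.
    import numpy as np

    def itq_binary_codes(X, n_bits=32, n_iter=50, seed=0):
        """X: (n_samples, n_features) data, n_features >= n_bits.
        Returns (binary codes, PCA projection, learned rotation)."""
        rng = np.random.default_rng(seed)
        X = X - X.mean(axis=0)                                 # zero-center the data

        # PCA projection to n_bits dimensions via SVD of the centered data.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        W = Vt[:n_bits].T                                      # (n_features, n_bits)
        V = X @ W                                              # embedded data

        # Random orthogonal initial rotation.
        R, _ = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))

        for _ in range(n_iter):
            B = np.sign(V @ R)                                 # code assignment step
            B[B == 0] = 1
            # Rotation update: orthogonal Procrustes, R = U @ Wt for svd(V^T B).
            U, _, Wt = np.linalg.svd(V.T @ B)
            R = U @ Wt

        codes = (np.sign(V @ R) > 0).astype(np.uint8)          # final 0/1 codes
        return codes, W, R
    ```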

  15. Photon induced non-linear quantized double layer charging in quaternary semiconducting quantum dots.

    PubMed

    Nair, Vishnu; Ananthoju, Balakrishna; Mohapatra, Jeotikanta; Aslam, M

    2018-03-15

    Room-temperature quantized double-layer charging was observed in 2 nm Cu2ZnSnS4 (CZTS) quantum dots. In addition, we observed a distinct non-linearity in the quantized double-layer charging arising from UV-light modulation of the double layer. UV-light irradiation resulted in a 26% increase in the integral capacitance at the semiconductor-dielectric (CZTS-oleylamine) interface of the quantum dot without any change in its core size, suggesting that the cause is photocapacitive. The increasing charge separation at the semiconductor-dielectric interface, due to highly stable and mobile photogenerated carriers, causes larger electrostatic forces between the quantum dot and the electrolyte, leading to an enhanced double layer. This idea was supported by a decrease in the differential capacitance, possibly due to an enhanced double layer. Furthermore, the UV-illumination-enhanced double layer gives an AC-excitation-dependent differential double-layer capacitance, which confirms that the charging process is non-linear. This ultimately illustrates the utility of a colloidal quantum dot-electrolyte interface as a non-linear photocapacitor. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Aharonov–Anandan quantum phases and Landau quantization associated with a magnetic quadrupole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, I.C.; Bakke, K., E-mail: kbakke@fisica.ufpb.br

    The arising of geometric quantum phases in the wave function of a moving particle possessing a magnetic quadrupole moment is investigated. It is shown that an Aharonov–Anandan quantum phase (Aharonov and Anandan, 1987) can be obtained in the quantum dynamics of a moving particle with a magnetic quadrupole moment. In particular, it is obtained as an analogue of the scalar Aharonov–Bohm effect for a neutral particle (Anandan, 1989). Besides, by confining the quantum particle to a hard-wall confining potential, the dependence of the energy levels on the geometric quantum phase is discussed and, as a consequence, persistent currents can arise from this dependence. Finally, an analogue of the Landau quantization is discussed. -- Highlights: •Scalar Aharonov–Bohm effect for a particle possessing a magnetic quadrupole moment. •Aharonov–Anandan quantum phase for a particle with a magnetic quadrupole moment. •Dependence of the energy levels on the Aharonov–Anandan quantum phase. •Landau quantization associated with a particle possessing a magnetic quadrupole moment.

  17. Manipulating and probing angular momentum and quantized circulation in optical fields and matter waves

    NASA Astrophysics Data System (ADS)

    Lowney, Joseph Daniel

    Methods to generate, manipulate, and measure optical and atomic fields with global or local angular momentum have a wide range of applications in both fundamental physics research and technology development. In optics, the engineering of angular momentum states of light can aid studies of orbital angular momentum (OAM) exchange between light and matter. The engineering of optical angular momentum states can also be used to increase the bandwidth of optical communications or serve as a means to distribute quantum keys, for example. Similar capabilities in Bose-Einstein condensates are being investigated to improve our understanding of superfluid dynamics, superconductivity, and turbulence, the last of which is widely considered to be one of the most ubiquitous yet poorly understood subjects in physics. The first part of this two-part dissertation presents an analysis of techniques for measuring and manipulating quantized vortices in BECs. The second part of this dissertation presents theoretical and numerical analyses of new methods to engineer the OAM spectra of optical beams. The superfluid dynamics of a BEC are often well described by a nonlinear Schrödinger equation. The nonlinearity arises from interatomic scattering and enables BECs to support quantized vortices, which have quantized circulation and are fundamental structural elements of quantum turbulence. With the experimental tools to dynamically manipulate and measure quantized vortices, BECs are proving to be a useful medium for testing the theoretical predictions of quantum turbulence. In this dissertation we analyze a method for making minimally destructive in situ observations of quantized vortices in a BEC. Secondly, we numerically study a mechanism to imprint vortex dipoles in a BEC. With these advancements, more robust experiments on vortex dynamics and quantum turbulence will be within reach. A more complete understanding of quantum turbulence will enable principles of microscopic fluid flow to be

  18. Field quantization and squeezed states generation in resonators with time-dependent parameters

    NASA Technical Reports Server (NTRS)

    Dodonov, V. V.; Klimov, A. B.; Nikonov, D. E.

    1992-01-01

    The problem of electromagnetic field quantization is usually considered in textbooks under the assumption that the field occupies some empty box. Here the case when a nonuniform, time-dependent dielectric medium is confined in some space region with time-dependent boundaries is studied. The basis of the subsequent consideration is the system of Maxwell's equations in a linear, passive, time-dependent dielectric and magnetic medium without sources.

  19. Quantized mode of a leaky cavity

    NASA Astrophysics Data System (ADS)

    Dutra, S. M.; Nienhuis, G.

    2000-12-01

    We use Thomson's classical concept of mode of a leaky cavity to develop a quantum theory of cavity damping. This theory generalizes the conventional system-reservoir theory of high-Q cavity damping to arbitrary Q. The small system now consists of damped oscillators corresponding to the natural modes of the leaky cavity rather than undamped oscillators associated with the normal modes of a fictitious perfect cavity. The formalism unifies semiclassical Fox-Li modes and the normal modes traditionally used for quantization. It also lays the foundations for a full quantum description of excess noise. The connection with Siegman's semiclassical work is straightforward. In a wider context, this theory constitutes a radical departure from present models of dissipation in quantum mechanics: unlike conventional models, system and reservoir operators no longer commute with each other. This noncommutability is an unavoidable consequence of having to use natural cavity modes rather than normal modes of a fictitious perfect cavity.

  20. Diverse magnetic quantization in bilayer silicene

    NASA Astrophysics Data System (ADS)

    Do, Thi-Nga; Shih, Po-Hsin; Gumbs, Godfrey; Huang, Danhong; Chiu, Chih-Wei; Lin, Ming-Fa

    2018-03-01

    The generalized tight-binding model is developed to investigate the rich and unique electronic properties of AB-bt (bottom-top) bilayer silicene under uniform perpendicular electric and magnetic fields. The first pair of conduction and valence bands, with an observable energy gap, displays unusual energy dispersions. Each group of conduction/valence Landau levels (LLs) is further classified into four subgroups, i.e., the sublattice- and spin-dominated LL subgroups. The magnetic-field-dependent LL energy spectra exhibit irregular behavior corresponding to the critical points of the band structure. Moreover, the electric field can induce many LL anticrossings. The main features of the LLs are uncovered, with many van Hove singularities in the density of states and nonuniform delta-function-like peaks in the magnetoabsorption spectra. The feature-rich magnetic quantization directly reflects the geometric symmetries, intralayer and interlayer atomic interactions, spin-orbit couplings, and field effects. The results of this work can be applied to novel designs of Si-based nanoelectronics and nanodevices with enhanced mobilities.