Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for compressing digital images using the discrete cosine transform, and for operating a discrete cosine transform-based digital image compression and decompression system.
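The rate-distortion trade-off described above can be sketched numerically. The following Python illustration is not the patent's dynamic program: for a single DCT coefficient position it computes a distortion entry and a rate entry for each candidate quantization value, then picks the value minimizing a Lagrangian cost. The sample statistics and the Lagrangian weighting are assumptions chosen for the demo.

```python
import numpy as np

def rd_optimal_q(coeffs, qs, lam):
    """Pick the quantization step minimizing D(q) + lam * R(q) for one
    DCT coefficient position.  Rate is estimated as the empirical
    zeroth-order entropy of the quantized symbols (bits/coefficient)."""
    best_q, best_cost = None, np.inf
    for q in qs:
        sym = np.round(coeffs / q).astype(int)
        recon = sym * q
        d = np.mean((coeffs - recon) ** 2)        # distortion array entry
        _, counts = np.unique(sym, return_counts=True)
        p = counts / counts.sum()
        r = -(p * np.log2(p)).sum()               # rate array entry
        cost = d + lam * r
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=8.0, size=4096)        # DCT-like coefficient stats
q_small = rd_optimal_q(coeffs, range(1, 65), lam=0.1)    # distortion dominates
q_large = rd_optimal_q(coeffs, range(1, 65), lam=500.0)  # rate dominates
```

As the rate penalty grows, the optimal step gets coarser, which is the trade-off the patent's table optimization balances across all 64 coefficient positions.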
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain from a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different lengths, which may help to develop the digital filter algorithm for the sine and cosine transforms and promote its application.
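The relationship mentioned above between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind is the standard pair sin(x) = sqrt(pi*x/2)*J_{1/2}(x) and cos(x) = sqrt(pi*x/2)*J_{-1/2}(x), and it can be checked directly with a small Python/SciPy sketch:

```python
import numpy as np
from scipy.special import jv

def sin_from_bessel(x):
    # sin(x) = sqrt(pi*x/2) * J_{1/2}(x)
    return np.sqrt(np.pi * x / 2) * jv(0.5, x)

def cos_from_bessel(x):
    # cos(x) = sqrt(pi*x/2) * J_{-1/2}(x)
    return np.sqrt(np.pi * x / 2) * jv(-0.5, x)

x = np.linspace(0.1, 20, 200)
err_sin = np.max(np.abs(sin_from_bessel(x) - np.sin(x)))
err_cos = np.max(np.abs(cos_from_bessel(x) - np.cos(x)))
```

This identity is what lets a Hankel-transform digital filter (defined for Bessel kernels) be reused for sine and cosine transforms.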
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform
NASA Astrophysics Data System (ADS)
Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An
2017-02-01
We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
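The measurement model can be sketched as follows: each projected pattern is a 2-D DCT basis function, the single-pixel (bucket) value is the scene's DCT coefficient for that pattern, and an inverse DCT of the (possibly truncated) spectrum recovers the image. A grayscale NumPy simulation, with the scene and truncation level chosen purely for illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def single_pixel_dct(scene, keep):
    """Simulate single-pixel measurements against 2-D DCT basis patterns,
    keeping only patterns with u + v < keep (sub-Nyquist sampling)."""
    h, w = scene.shape
    spectrum = np.zeros_like(scene)
    for u in range(h):
        for v in range(w):
            if u + v >= keep:                      # skip high-frequency patterns
                continue
            basis = np.zeros((h, w)); basis[u, v] = 1.0
            pattern = idctn(basis, norm='ortho')   # projected pattern
            spectrum[u, v] = np.sum(scene * pattern)  # bucket detector value
    return idctn(spectrum, norm='ortho')

scene = np.add.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
full = single_pixel_dct(scene, keep=31)            # all patterns: exact recovery
partial = single_pixel_dct(scene, keep=8)          # sub-Nyquist measurement
rel_err = np.linalg.norm(partial - scene) / np.linalg.norm(scene)
```

Because natural scenes concentrate energy at low DCT frequencies, the truncated measurement set still reconstructs the image with small error.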
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method addresses the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel-optimized quantizers yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for no channel errors.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding in a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up tables to perform the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.
2005-01-01
A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
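Clenshaw's algorithm, mentioned above for comparison, evaluates a cosine series sum_k a_k cos(k*theta) without computing each cosine separately, using the Chebyshev identity cos(k*theta) = T_k(cos(theta)) and the backward recurrence b_k = a_k + 2x b_{k+1} - b_{k+2}. A minimal sketch:

```python
import math

def clenshaw_cosine_series(a, theta):
    """Evaluate sum_{k=0}^{n} a[k] * cos(k*theta) with Clenshaw's recurrence,
    using cos(k*theta) = T_k(cos(theta))."""
    x = math.cos(theta)
    b1 = b2 = 0.0
    for ak in reversed(a[1:]):        # b_k = a_k + 2*x*b_{k+1} - b_{k+2}
        b1, b2 = ak + 2 * x * b1 - b2, b1
    return a[0] + x * b1 - b2         # S = a_0 + x*b_1 - b_2

theta = 0.7
a = [1.0, 2.0, 3.0, 4.0]
direct = sum(c * math.cos(k * theta) for k, c in enumerate(a))
val = clenshaw_cosine_series(a, theta)
```

Evaluating such a series at theta_m = pi*m/N for each m is exactly an inverse-DCT-style computation, which is why polynomial-evaluation schemes like Clenshaw's and Bakhvalov's apply to the DCT.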
NASA Technical Reports Server (NTRS)
Jones, H. W.; Hein, D. N.; Knauer, S. C.
1978-01-01
A general class of even/odd transforms is presented that includes the Karhunen-Loeve transform, the discrete cosine transform, the Walsh-Hadamard transform, and other familiar transforms. The more complex even/odd transforms can be computed by combining a simpler even/odd transform with a sparse matrix multiplication. A theoretical performance measure is computed for some even/odd transforms, and two image compression experiments are reported.
A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.
Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif
2012-08-01
This paper proposes a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information hiding technique (for audio, video, color images, and gray images) and has become commonly used on digital objects with the developing technology of the last few years. One of the common methods for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
NASA Astrophysics Data System (ADS)
Strang, Gilbert
1994-06-01
Several methods are compared that are used to analyze and synthesize a signal. Three ways are mentioned to transform a symphony: into cosine waves (Fourier transform), into pieces of cosines (short-time Fourier transform), and into wavelets (little waves that start and stop). Choosing the best basis, higher dimensions, the fast wavelet transform, and Daubechies wavelets are discussed. High-definition television is described. The prospective use of wavelets in fingerprint identification is also discussed.
High-speed real-time image compression based on all-optical discrete cosine transformation
NASA Astrophysics Data System (ADS)
Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong
2017-02-01
In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) which is three orders of magnitude higher than the switching rate of a digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which implicates broad applications in industrial quality control and biomedical imaging.
2015-12-01
group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on ... log2-transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using cosine correlation as the similarity metric. For ... differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform (DFrRT) is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
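The zigzag scanning step used to splice spectra can be sketched as follows. This is the generic JPEG-style zigzag ordering, assumed here for illustration; the paper's exact incision rule may differ.

```python
import numpy as np

def zigzag(block):
    """Read an N x N coefficient block in JPEG-style zigzag order:
    traverse anti-diagonals, alternating direction on odd/even diagonals."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block[i, j] for i, j in order])

zz = zigzag(np.arange(16).reshape(4, 4))
```

Reading a DCT block in this order places low-frequency coefficients first, so truncating the tail of the sequence implements spectrum cutting while keeping most of the image energy.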
Testing Fixture For Microwave Integrated Circuits
NASA Technical Reports Server (NTRS)
Romanofsky, Robert; Shalkhauser, Kurt
1989-01-01
Testing fixture facilitates radio-frequency characterization of microwave and millimeter-wave integrated circuits. Includes base onto which two cosine-tapered ridge waveguide-to-microstrip transitions fastened. Length and profile of taper determined analytically to provide maximum bandwidth and minimum insertion loss. Each cosine taper provides transformation from high impedance of waveguide to characteristic impedance of microstrip. Used in conjunction with automatic network analyzer to provide user with deembedded scattering parameters of device under test. Operates from 26.5 to 40.0 GHz, but operation extends to much higher frequencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balsa Terzic, Gabriele Bassi
In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2-D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with the property of reduced multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that could occur in the integer computation.
Vesicle sizing by static light scattering: a Fourier cosine transform approach
NASA Astrophysics Data System (ADS)
Wang, Jianhong; Hallett, F. Ross
1995-08-01
A Fourier cosine transform method, based on the Rayleigh-Gans-Debye thin-shell approximation, was developed to retrieve the vesicle size distribution directly from the angular dependence of the scattered light intensity. Its feasibility for real vesicles was partially tested on scattering data generated by the exact Mie solutions for isotropic vesicles. The noise tolerance of the method in recovering unimodal and bimodal distributions was studied with the simulated data. Applicability of this approach to vesicles with weak anisotropy was examined using Mie theory for anisotropic hollow spheres. A primitive theory about the first four moments of the radius distribution about the origin, excluding the mean radius, was obtained as an alternative to the direct retrieval of size distributions.
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Zhang, Shaozhong; Chen, Fangni; Wu, Ming-Wei; Qiu, Weiwei
2017-11-01
A physical encryption scheme for orthogonal frequency-division multiplexing (OFDM) visible light communication (VLC) systems using chaotic discrete cosine transform (DCT) is proposed. In the scheme, the row of the DCT matrix is permutated by a scrambling sequence generated by a three-dimensional (3-D) Arnold chaos map. Furthermore, two scrambling sequences, which are also generated from a 3-D Arnold map, are employed to encrypt the real and imaginary parts of the transmitted OFDM signal before the chaotic DCT operation. The proposed scheme enhances the physical layer security and improves the bit error rate (BER) performance for OFDM-based VLC. The simulation results prove the efficiency of the proposed encryption method. The experimental results show that the proposed security scheme not only protects image data from eavesdroppers but also keeps the good BER and peak-to-average power ratio performances for image-based OFDM-VLC systems.
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Transform coding for space applications
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit-rate driven, with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as the requirement for perfect-quality reconstruction runs into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction with the new transforms is compared with the Walsh-Hadamard (WHT), discrete cosine (DCT), and integer cosine (ICT) transforms.
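The energy-compaction comparison described above can be reproduced in miniature: for smooth, highly correlated signals the DCT concentrates more energy into a few coefficients than the Walsh-Hadamard transform. A sketch with synthetic random-walk signals standing in for correlated image rows (the signal model and coefficient budget are assumptions for the demo):

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def compaction(transform_matrix, x, k):
    """Fraction of signal energy captured by the k largest transform coefficients."""
    y = transform_matrix @ x
    e = np.sort(y ** 2)[::-1]
    return e[:k].sum() / e.sum()

n = 32
C = dct(np.eye(n), axis=0, norm='ortho')   # orthonormal DCT matrix
H = hadamard(n) / np.sqrt(n)               # orthonormal Walsh-Hadamard matrix

rng = np.random.default_rng(2)
trials = 200
dct_mean = wht_mean = 0.0
for _ in range(trials):
    x = np.cumsum(rng.standard_normal(n))  # smooth, highly correlated signal
    dct_mean += compaction(C, x, 4) / trials
    wht_mean += compaction(H, x, 4) / trials
```

Better compaction means fewer coefficients need to be coded for the same reconstruction quality, which is the property traded against the WHT's and ICT's lower computational cost.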
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Some frequently-used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
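The DCT-domain cosine distance step can be sketched as follows. This is a generic illustration of comparing two samples by cosine similarity over their low-frequency DCT coefficients, not the authors' full progressive elimination procedure; the signal and truncation length are assumptions for the demo.

```python
import numpy as np
from scipy.fft import dct

def dct_cosine_similarity(a, b, keep=16):
    """Cosine similarity between two samples compared on their first
    `keep` DCT coefficients (low-frequency content)."""
    fa = dct(a.astype(float), norm='ortho')[:keep]
    fb = dct(b.astype(float), norm='ortho')[:keep]
    return fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb))

x = np.sin(np.linspace(0, 3, 64))
sim_self = dct_cosine_similarity(x, x)       # identical samples
sim_scaled = dct_cosine_similarity(x, 2 * x) # cosine metric ignores scale
```

Because the cosine metric is scale-invariant and the DCT truncation discards high-frequency noise, such a similarity is robust to illumination-like amplitude changes between samples.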
Santos, Rui; Pombo, Nuno; Flórez-Revuelta, Francisco
2018-01-01
An increase in the accuracy of identification of Activities of Daily Living (ADL) is very important for different goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although this is usually used to identify the location, ADL recognition can be improved with the identification of the sound in that particular environment. This paper reviews audio fingerprinting techniques that can be used with the acoustic data acquired from mobile devices. A comprehensive literature search was conducted in order to identify relevant English language works aimed at the identification of the environment of ADLs using data acquired with mobile devices, published between 2002 and 2017. In total, 40 studies were analyzed and selected from 115 citations. The results highlight several audio fingerprinting techniques, including Modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), Principal Component Analysis (PCA), Fast Fourier Transform (FFT), Gaussian mixture models (GMM), likelihood estimation, logarithmic modulated complex lapped transform (LMCLT), support vector machine (SVM), constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and discrete cosine transform (DCT). PMID:29315232
Qualitative and semiquantitative Fourier transformation using a noncoherent system.
Rogers, G L
1979-09-15
A number of authors have pointed out that a system of zone plates combined with a diffuse source, transparent input, lens, and focusing screen will display on the output screen the Fourier transform of the input. Strictly speaking, the transform normally displayed is the cosine transform, and the bipolar output is superimposed on a dc gray level to give a positive-only intensity variation. By phase-shifting one zone plate the sine transform is obtained. Temporal modulation is possible. It is also possible to redesign the system to accept a diffusely reflecting input at the cost of introducing a phase gradient in the output. Results are given of the sine and cosine transforms of a small circular aperture. As expected, the sine transform is a uniform gray. Both transforms show unwanted artifacts beyond 0.1 rad off-axis. An analysis shows this is due to unwanted circularly symmetrical moire patterns between the zone plates.
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required, the proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Zuo, Chao; Idir, Mourad
A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed that places an arbitrarily-shaped aperture into the optical wavefield. Within this aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurement. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which makes TIE with a hard aperture a more flexible phase retrieval tool in practical measurements.
Huang, Lei; Zuo, Chao; Idir, Mourad; ...
2015-04-21
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.
Ye, Jun
2015-03-01
In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved measures were compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis can be found from the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. The method was then applied to two medical diagnosis problems to show its applications and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses using the various similarity measures of SNSs gave identical results, demonstrating the effectiveness and rationality of the proposed diagnosis method.
The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information.
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
method for base-station antenna radiation patterns. IEEE Antennas and Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD ... algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine ... patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another
Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization
NASA Astrophysics Data System (ADS)
Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing
2015-05-01
Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
Integer cosine transform compression for Galileo at Jupiter: A preliminary look
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.; Cheung, K.-M.
1993-01-01
The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.
Removing tidal-period variations from time-series data using low-pass digital filters
Walters, Roy A.; Heston, Cynthia
1982-01-01
Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water surface elevation for San Francisco Bay. The most efficient filter is one applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increases in the following order: 1) cosine-Lanczos filter; 2) cosine-Lanczos squared filter; 3) Godin filter; and 4) transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of 2–3 day variations in surface elevation resulting from weather events.
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to smaller interpolation error, which can be further eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value restrains more transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while only increasing the uncertainty slightly (about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and short data requirement; the actual FRF calculated from balance data is consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
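A window whose non-zero DFT bins sit at a few integer offsets, as described above, is a short sum of cosine terms. The sketch below builds such cosine-sum windows; the dual-cosine coefficients are illustrative placeholders, not the paper's actual values.

```python
import math

def cosine_sum_window(n_samples, coeffs):
    """Window built as w[n] = sum_k a_k * cos(2*pi*k*n/N).

    Each cosine term contributes a pair of DFT bins at +k and -k, so the
    window's non-zero bins are exactly the keys of `coeffs` (and their
    negatives). A Hann(ing) window is coeffs = {0: 0.5, 1: -0.5}, with
    non-zero bins at -1, 0, 1.
    """
    N = n_samples
    return [sum(a * math.cos(2 * math.pi * k * n / N) for k, a in coeffs.items())
            for n in range(N)]

# Hann window for reference: zero at the edges, unity at the centre.
hann = cosine_sum_window(8, {0: 0.5, 1: -0.5})

# Illustrative dual-cosine window with an extra k=3 term (placeholder
# coefficients, not the paper's): the k=3 term lets the front-end value
# be shaped independently of the main lobe.
dual = cosine_sum_window(8, {0: 0.5, 1: -0.55, 3: 0.05})
```

The coefficients are chosen to sum to zero at n = 0, giving a zero front-end value as the abstract's analysis favours.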
NASA Astrophysics Data System (ADS)
Terzić, Balša; Bassi, Gabriele
2011-07-01
In this paper we discuss representations of charged-particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The density is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)] and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem that determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is developed. Four main indicators, including total gray, gray grade curve, characteristic direction, and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
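The loop cosine similarity mentioned above builds on ordinary cosine similarity between feature vectors, e.g. two gray-grade curves sampled at the same points. A minimal sketch of the base measure (the loop variant, which handles cyclic shifts of direction indicators, is not reproduced here):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equally sampled feature vectors.

    Returns 1.0 for vectors pointing in the same direction (identical
    curve shape up to scale) and 0.0 for orthogonal vectors.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Two gray-grade curves that differ only by overall scale are maximally similar.
same_shape = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
unrelated = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```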
A low-power and high-quality implementation of the discrete cosine transformation
NASA Astrophysics Data System (ADS)
Heyne, B.; Götze, J.
2007-06-01
In this paper a computationally efficient and high-quality preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT based on the Cordic algorithm. The computational complexity is reduced from 11 multiply and 29 add operations (Loeffler DCT) to 38 add and 16 shift operations (which is similar to the complexity of the binDCT). The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. Therefore, the proposed Cordic based Loeffler DCT is especially suited for low-power and high-quality CODECs in battery-based systems.
Determination of Fourier Transforms on an Instructional Analog Computer
ERIC Educational Resources Information Center
Anderson, Owen T.; Greenwood, Stephen R.
1974-01-01
An analog computer program to find and display the Fourier transform of some real, even functions is described. Oscilloscope traces are shown for Fourier transforms of a rectangular pulse, a Gaussian, a cosine wave, and a delayed narrow pulse. Instructional uses of the program are discussed briefly. (DT)
Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M
2016-05-01
The different discrete transform techniques such as discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and MFCC techniques. Linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using MFCC technique for AD recognition has a great improvement for the system performance with small number of significant extracted features, as compared with the CAD system based on DCT, DST, DWT, and the hybrid combination methods of the different transform techniques. © The Author(s) 2015.
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to merge the source images into a single compact image containing a more accurate depiction of the scene than any individual source image. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its R, G, and B channels for each source image; (2) the DCT algorithm is applied to each channel; (3) the variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared based on their variance values, and the block with the maximum variance is selected as the block in the new image, a process repeated for all channels of the source images; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that reduce the quality of the fused image. The approach is evaluated using three measures: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique show good performance compared with older techniques. © 2016 Wiley Periodicals, Inc.
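Steps 2 through 4 of the scheme above, transforming each block, measuring coefficient variance, and keeping the higher-variance block, can be sketched for a single channel as follows. The naive O(N^4) DCT and the test blocks are for illustration only; a real implementation would use a fast DCT and process each colour channel as described.

```python
import math

def dct2(block):
    """Naive orthonormal 2D DCT-II of an NxN block (O(N^4); fine for 8x8)."""
    N = len(block)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def block_variance(coeffs):
    """Variance of the DCT coefficients, used as a focus/activity measure."""
    flat = [c for row in coeffs for c in row]
    mean = sum(flat) / len(flat)
    return sum((c - mean) ** 2 for c in flat) / len(flat)

def fuse_blocks(block_a, block_b):
    """Step 4: keep the block whose DCT coefficients vary more."""
    va = block_variance(dct2(block_a))
    vb = block_variance(dct2(block_b))
    return block_a if va >= vb else block_b

# A sharp (checkerboard) block should win over a flat, defocused one.
sharp = [[(x + y) % 2 * 255 for y in range(8)] for x in range(8)]
flat = [[128] * 8 for _ in range(8)]
chosen = fuse_blocks(sharp, flat)
```

Step 5 would apply the inverse DCT to each fused channel, which is unnecessary here since whole pixel blocks are selected.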
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-05-16
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.
Infrared images target detection based on background modeling in the discrete cosine domain
NASA Astrophysics Data System (ADS)
Ye, Han; Pei, Jihong
2018-02-01
Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between background and target are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian model to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform those of other background modeling methods in the spatial domain.
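A single Gaussian per frequency point amounts to a running mean and variance with a deviation test on each new coefficient. A minimal sketch follows; the update rate `alpha` and threshold `k` are illustrative values, not parameters from the paper.

```python
class FrequencyGaussian:
    """One Gaussian model per DCT frequency point (illustrative sketch).

    A coefficient is treated as background if it lies within k standard
    deviations of the running mean; target energy shows up as a large
    deviation and would be kept rather than suppressed.
    """

    def __init__(self, mean=0.0, var=1.0, alpha=0.05, k=2.5):
        self.mean, self.var = mean, var
        self.alpha, self.k = alpha, k

    def is_background(self, coeff):
        return abs(coeff - self.mean) <= self.k * self.var ** 0.5

    def update(self, coeff):
        # Exponential running update of the mean and variance.
        d = coeff - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * self.var + self.alpha * d * d

# One model per frequency point: a small sea-clutter fluctuation is
# background, a large target-induced deviation is not.
model = FrequencyGaussian(mean=0.0, var=1.0)
```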
Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E
2007-01-01
To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or different sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating an infrared image with a visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors, which are used to fuse the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is then obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes
2018-01-01
an example where the signal is non-sparse in the standard basis, but sparse in the discrete cosine basis. The top plot shows the signal from the...previous example, now used as sparse discrete cosine transform (DCT) coefficients. The next plot shows the non-sparse signal in the standard...Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207–1223. 3. Donoho DL
Discrete Cosine Transform Image Coding With Sliding Block Codes
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Pearlman, William A.
1989-11-01
A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256×256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
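Mandela ordering as described above, grouping identically indexed coefficients from each block into one-dimensional sequences, can be sketched as:

```python
def mandela_order(blocks):
    """Group identically indexed transform coefficients across blocks.

    Given a cluster of blocks (each flattened to a coefficient list),
    sequence k collects coefficient k of every block, so coefficients of
    similar statistics (e.g. all DC terms) are encoded together.
    """
    n_coeffs = len(blocks[0])
    return [[blk[k] for blk in blocks] for k in range(n_coeffs)]

# Three 4-coefficient "blocks" (flattened for illustration).
blocks = [[10, 1, 2, 3], [20, 4, 5, 6], [30, 7, 8, 9]]
sequences = mandela_order(blocks)
# sequences[0] collects the first (DC) coefficient of every block.
```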
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with the MPEG-4 Visual and H.264/AVC standards. The inverse quantised discrete cosine and integer transform can perform the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade off video compression quality against data throughput. The implementations are embedded in the publicly available software XviD codec 1.2.2 for MPEG-4 Visual and the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
Improved method of step length estimation based on inverted pendulum model.
Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui
2017-04-01
Step length estimation is an important issue in areas such as gait analysis, sport training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device record body movement from the trunk rather than the extremities. Two signal-processing techniques are applied in our algorithm design: the direction cosine matrix transforms the vertical acceleration from the device coordinates to the topocentric coordinates, and empirical mode decomposition removes the zero- and first-order skew effects resulting from the integration process. Our experimental results show that the algorithm performs well in step length estimation, with the improvement contributed by the direction cosine matrix algorithm increasing from 1.69% to 3.56% as walking speed increases.
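A direction cosine matrix maps device-frame sensor readings into an earth-fixed (topocentric) frame. The sketch below uses a single rotation about the vertical axis, a simplifying assumption for illustration only; the actual device orientation would generally involve all three attitude angles.

```python
import math

def dcm_z(theta):
    """Direction cosine matrix for a rotation by theta about the z axis.

    Illustrative assumption: only the heading differs between the device
    frame and the topocentric frame, so the vertical axis is shared.
    """
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A device-frame acceleration with the device yawed 90 degrees: the
# horizontal components rotate, while the vertical component (the one
# integrated for step-length estimation) is unchanged.
accel_device = [0.0, 1.0, 9.81]
accel_topo = apply(dcm_z(math.pi / 2), accel_device)
```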
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coders code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM
NASA Astrophysics Data System (ADS)
Shima, Yoshihiro
2018-04-01
Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, AlexNet, is used as a pattern-feature extractor; instead of being trained here, it is used as pre-trained on the large-scale object-image dataset ImageNet. An SVM is used as the trainable classifier, with feature vectors passed to it from AlexNet. The STL-10 dataset, with ten classes and clearly split training and test samples, provides the object images. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing, and elastic distortion. By using the cosine function, the original patterns were left-justified, right-justified, top-justified, or bottom-justified; patterns were also center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation, whereas error rates increase with the other augmentation methods (rotation, skewing, and elastic distortion) compared with no augmentation. The number of augmented samples is 30 times that of the original STL-10 5K training samples. The experimental test error rate for the 8K STL-10 test object images was 15.620%, which shows that image augmentation is effective for image category classification.
Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series
Zhang, Zhihua
2014-01-01
Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
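Quantization matrices designed this way are applied in a standard DCT coder as uniform per-coefficient quantization. The sketch below illustrates that step; the table values are placeholders, not visibility thresholds from the model.

```python
def quantize(coeffs, qtable):
    """Uniform quantization of DCT coefficients: index = round(c / q).

    In the model described above, each q would be chosen so the resulting
    error stays below the luminance-based visibility threshold for that
    DCT basis function under the target display conditions.
    """
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]

def dequantize(indices, qtable):
    """Reconstruction: coefficient = index * q."""
    return [[i * q for i, q in zip(irow, qrow)]
            for irow, qrow in zip(indices, qtable)]

# Illustrative 2x2 corner of a coefficient block and quantization table.
coeffs = [[1024.0, 12.0], [-7.0, 3.0]]
qtable = [[16, 11], [11, 10]]          # placeholder values, not the model's
recon = dequantize(quantize(coeffs, qtable), qtable)
# Each reconstruction error is bounded by half the quantizer step q/2,
# which is what the visibility model constrains.
```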
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance to shifted, rotated, and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph incurs a high algorithmic cost. In this article we discuss ways to reduce computing time. An alternative solution based on image compression concepts is provided and evaluated: the model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results obtained on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with acceptable computing time.
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images, which mix textual, graphical, and pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, based on metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide a compound image into non-overlapping blocks of fixed size; a frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is then applied over each block. The mean and standard deviation are computed for each 8 × 8 block and used as the feature set to classify blocks of the compound image into text/graphics or picture/background. The classification accuracy of the block classification based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time for both smooth and complex background images.
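The mean/standard-deviation feature described above reduces block classification to a threshold test. A minimal sketch follows; for brevity the features are computed directly on pixel blocks rather than on DCT or DWT coefficients as in the paper, and the threshold value is illustrative.

```python
import math

def block_features(block):
    """Mean and standard deviation of a block's values (the feature set)."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    return mean, std

def classify_block(block, std_threshold):
    """Text/graphics blocks have strong edges, hence high spread;
    smooth picture/background blocks have low spread."""
    _, std = block_features(block)
    return "text/graphics" if std > std_threshold else "picture/background"

# An 8x8 checkerboard (text-like edges) vs. a nearly flat block.
text_block = [[(x + y) % 2 * 255 for y in range(8)] for x in range(8)]
smooth_block = [[100 + (x + y) % 3 for y in range(8)] for x in range(8)]
```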
Information Hiding In Digital Video Using DCT, DWT and CvT
NASA Astrophysics Data System (ADS)
Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb
2018-05-01
The video format used in the proposed information-hiding technique is .AVI. The technique embeds secret information into video frames using the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Curvelet Transform (CvT). Each pixel consists of three color components (RGB); the secret information is embedded in the red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extraction, the robustness of the proposed technique is measured by computing the degradation of the extracted secret information relative to the original via the normalized cross-correlation (NC). The experiments show that the error ratio of the proposed technique is 8% and the accuracy ratio 92% when the Curvelet Transform (CvT) is used; with the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), the error ratios are 11% and 14%, and the accuracy ratios 89% and 86%, respectively. The experiments also show that Poisson noise gives better results than other types of noise, while speckle noise gives the worst results. The proposed technique was implemented in the MATLAB R2016a programming language.
2013-10-01
correct group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on... centering of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using cosine correlation as the similarity met... A) The 108 differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; Wapenaar, Kees
2014-05-01
In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, used to untangle the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data but to crosscorrelated data, the sampling is effectively increased. We show that CBF, owing to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the other side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
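The conventional beamforming baseline against which CBF is compared can be sketched as a phase-and-sum over sensor spectra (the function name and test geometry are illustrative, not from the paper):

```python
import numpy as np

def fk_beamform(data, coords, freqs, slownesses):
    """Conventional frequency-domain beamforming: phase-align each
    sensor spectrum for a candidate slowness vector, sum, and stack
    power over frequencies.
    data: (n_sensors, n_freqs) complex spectra
    coords: (n_sensors, 2) sensor positions
    """
    out = np.zeros(len(slownesses))
    for k, s in enumerate(slownesses):
        delays = coords @ s                      # travel-time delays
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        out[k] = np.sum(np.abs(np.sum(data * steer, axis=0)) ** 2)
    return out
```

The beam power peaks at the slowness of the incoming plane wave, since only there do all sensors sum coherently.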
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and to support quantizations from 2 to 16 bits.
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve high compression ratio (~20:1) while eliminating block artifacts which leads to loss of diagnostic accuracy. This method adopts motion picture experts group's (MPEGs) motion compensated prediction to takes advantage of frame to frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by discrete wavelet transform (DWT) rather than block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which label each block in an image as intra, error, or background type and encode it accordingly. This hybrid coding can significantly improve the compression efficiency in certain eases. This method can be generalized for any dynamic image sequences applications sensitive to block artifacts.
1982-09-17
The convolution of two transforms in the time domain is the inverse transform of the product in the frequency domain. ... In order to make use of a very accurate numerical method to compute Fourier ... transform. When the inverse transform is taken by using Eq. (15), the cosine transform, because it converges faster than the sine transform ...
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. This watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the `SHA-1' hash function. The purpose of embedding a fragile watermark is to prove the authenticity of the image (i.e. tamper-proofing). The proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership, which have led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5-meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats reference and non-reference video frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms achieve high compression and acceptable quality, and resist statistical and brute-force attacks with low computational processing.
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
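A data-dependent orthogonal basis of the kind described can be sketched with the SVD (a generic transform-coding illustration, not the authors' exact construction; quantization and entropy coding are omitted):

```python
import numpy as np

def transform_code(clip, k):
    """Compress one mocap clip (frames x channels matrix) with a
    data-dependent orthogonal basis from the SVD, keeping k components."""
    U, s, Vt = np.linalg.svd(clip, full_matrices=False)
    coeffs = U[:, :k] * s[:k]   # transform coefficients (decorrelated)
    basis = Vt[:k]              # per-clip basis, transmitted with coeffs
    return coeffs, basis

def reconstruct(coeffs, basis):
    """Decoder: project coefficients back through the stored basis."""
    return coeffs @ basis
```

Because the basis is computed from the clip itself rather than fixed (as the DCT is), the coefficients carry far less inter-channel dependency, which is the property the abstract highlights.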
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu
2005-03-01
The similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in the chromatogram of TCM are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent similarity method, using simulated and experimental data. The correlation coefficient and the cosine of the angle are not sensitive to differences between the data sets, even after normalization. Based on similarity system theory, an improved extent similarity method is proposed. The improved extent similarity is more sensitive to differences between data sets than the correlation coefficient and the cosine of the angle, and, unlike log-transformation, it does not require the character of the data sets to be changed. The improved extent similarity can be used to evaluate the similarity of the chromatographic fingerprints of TCM.
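For reference, the two baseline measures whose insensitivity motivates the improved method, the cosine of the angle and the correlation coefficient, can be computed as:

```python
import numpy as np

def cosine_sim(x, y):
    """Cosine of the angle between two fingerprint vectors."""
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def corr(x, y):
    """Pearson correlation coefficient of two fingerprint vectors."""
    return np.corrcoef(x, y)[0, 1]
```

Both are scale-invariant, so two chromatograms with proportional peak heights score a perfect 1, and even a noticeably different peak moves the score only slightly, which is the insensitivity the abstract describes.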
Natural convection heat transfer in an oscillating vertical cylinder
Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang
2018-01-01
This paper studies the heat transfer analysis caused due to free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for temperature distribution and velocity field corresponding to cosine and sine oscillations are obtained. The solutions that have been obtained for velocity are presented in the forms of transient and post-transient solutions. Moreover, these solutions satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used in order to study the effects of Prandtl and Grashof numbers on velocity and temperature for various times. The transient solutions for both cosine and sine oscillations are also computed in tables. It is found that the transient solutions are of considerable interest up to times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments, the transient solutions can be neglected and the fluid moves in accordance with the post-transient solutions. PMID:29304161
Zamli, Kamal Z.; Din, Fakhrud; Bures, Miroslav
2018-01-01
The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. PMID:29771918
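The basic SCA position update that the QLSCA modifies, including the fixed switch probability criticized above, can be sketched as follows (parameter names follow common SCA descriptions; the details are illustrative):

```python
import numpy as np

def sca_minimize(f, lb, ub, n_agents=20, iters=200, a=2.0, seed=0):
    """Basic sine-cosine algorithm: agents move toward the best-known
    solution along sine- or cosine-shaped steps."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_agents, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        r1 = a - t * a / iters                  # shrinks exploration over time
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)
            step = np.where(r4 < 0.5,           # the fixed switch probability
                            np.sin(r2), np.cos(r2))
            X[i] += r1 * step * np.abs(r3 * best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best
```

Because sin/cos are bounded in [-1, 1] and the switch is a coin flip, steps can stagnate near local optima, which is exactly what the Q-learning operation selection in the QLSCA is meant to avoid.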
Area and power efficient DCT architecture for image compression
NASA Astrophysics Data System (ADS)
Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan
2014-12-01
The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones and requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it enables efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
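The family of multiplier-free approximate DCTs can be illustrated with the signed DCT, a related member whose matrix holds only ±1 entries (this is a stand-in for illustration, not the proposed 0/1 matrix, which the abstract does not reproduce):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

# Signed DCT: keep only the signs of the exact basis, so the forward
# transform needs additions/subtractions only; the diagonal scaling D
# can be folded into the quantization step in a real codec.
T = np.sign(dct_matrix(8))
D = np.diag(1 / np.sqrt(np.diag(T @ T.T)))   # row-normalizing scale

def approx_dct2(block):
    """2-D approximate DCT of an 8x8 block via the sign matrix."""
    return D @ T @ block @ T.T @ D
```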
A simplified Integer Cosine Transform and its application in image compression
NASA Technical Reports Server (NTRS)
Costa, M.; Tong, K.
1994-01-01
A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
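The power-of-two normalization/quantization idea can be sketched as follows (a generic illustration; the actual combined factors depend on the ICT in use):

```python
def quantize_pow2(coeff, shift):
    """Combined normalize + quantize when the factor is 2**shift:
    a binary right shift replaces an integer division."""
    return coeff >> shift if coeff >= 0 else -((-coeff) >> shift)

def dequantize_pow2(level, shift, correction=1.0):
    """Decoder side: undo the shift; `correction` stands in for the
    floating-point compensation of the factor approximation."""
    return (level << shift) * correction
```

The asymmetry the abstract describes falls out naturally: the encoder uses only shifts, while the decoder (or a high-precision ground/transmitter stage) applies the floating-point correction.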
Haldar, Justin P.; Leahy, Richard M.
2013-01-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
Subjective evaluations of integer cosine transform compressed Galileo solid state imagery
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry
1994-01-01
This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. When the VHDL simulation has been satisfactorily completed, the design is synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices. The design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG International Standards. The algorithm for the 2-d DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
Personalized Medicine in Veterans with Traumatic Brain Injuries
2013-05-01
Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile... clustering was performed by the UPGMA method using cosine correlation as the similarity metric. For comparative purposes, clustered heat maps included... non-mTBI cases were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity
Personalized Medicine in Veterans with Traumatic Brain Injuries
2014-07-01
9 control cases are subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity... in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row... of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using cosine correlation as the similarity
A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru
Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
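The JPEG-style zigzag selection of low-to-medium-frequency coefficients can be sketched as follows (the block size and coefficient count are examples, not the paper's tuned values):

```python
import numpy as np

def zigzag_indices(n=8):
    """JPEG-style zigzag order of the (i, j) positions of an n x n block:
    anti-diagonals of increasing frequency, alternating direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def block_features(dct_block, n_coeffs=15):
    """Keep the first low/medium-frequency DCT coefficients as a
    compact per-block feature vector."""
    order = zigzag_indices(len(dct_block))[:n_coeffs]
    return np.array([dct_block[i, j] for i, j in order])
```

Feature vectors from all blocks would then be concatenated, as the abstract describes, before matching.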
Initial performance of the COSINE-100 experiment
NASA Astrophysics Data System (ADS)
Adhikari, G.; Adhikari, P.; de Souza, E. Barbosa; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W. G.; Kang, W.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, M. C.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Prihtiadi, H.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.
2018-02-01
COSINE is a dark matter search experiment based on an array of low background NaI(Tl) crystals located at the Yangyang underground laboratory. The assembly of COSINE-100 was completed in the summer of 2016 and the detector is currently collecting physics quality data aimed at reproducing the DAMA/LIBRA experiment that reported an annual modulation signal. Stable operation has been achieved and will continue for at least 2 years. Here, we describe the design of COSINE-100, including the shielding arrangement, the configuration of the NaI(Tl) crystal detection elements, the veto systems, and the associated operational systems, and we show the current performance of the experiment.
Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach
ERIC Educational Resources Information Center
Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer
2009-01-01
Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…
Stability of strongly nonlinear normal modes
NASA Astrophysics Data System (ADS)
Recktenwald, Geoffrey; Rand, Richard
2007-10-01
It is shown that a transformation of time can allow the periodic solution of a strongly nonlinear oscillator to be written as a simple cosine function. This enables the stability of strongly nonlinear normal modes in multidegree of freedom systems to be investigated by standard procedures such as harmonic balance.
A 16X16 Discrete Cosine Transform Chip
NASA Astrophysics Data System (ADS)
Sun, M. T.; Chen, T. C.; Gottlieb, A.; Wu, L.; Liou, M. L.
1987-10-01
Among various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered to be the most effective method and has been widely used in the laboratory as well as in the marketplace. DCT is computationally intensive. For video application at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single-chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture is presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of the required memory. The resultant circuit uses only memory, shift registers, and adders. No multipliers are required. It achieves high-speed performance with a very regular and efficient integrated circuit realization. The chip accepts 9-bit input and produces 14-bit DCT coefficients. 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out using a 2-μm CMOS technology with the symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
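The threshold-unit conversion and error-pooling steps can be illustrated compactly. The function names and the Minkowski exponent beta=4 below are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def to_threshold_units(dct_coeffs, visual_thresholds):
    # Divide each DCT coefficient by its visual threshold, so a value of
    # 1.0 means "just at the threshold of visibility".
    return dct_coeffs / visual_thresholds

def minkowski_pool(errors, beta=4.0):
    # Pool masked errors into a single number; a large beta emphasizes
    # the worst errors over the average ones.
    e = np.abs(np.asarray(errors, dtype=float))
    return (e ** beta).mean() ** (1.0 / beta)
```

Pooling a uniform error field returns that error unchanged, while a single large error dominates the pooled value as beta grows.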
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
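Of the evaluated techniques, block truncation coding is simple enough to sketch. A minimal BTC round trip in NumPy, assuming the common moment-preserving two-level variant rather than the exact coder tested on the Seasat data:

```python
import numpy as np

def btc_encode(block):
    # Keep the block mean, standard deviation, and a 1-bit map of which
    # pixels lie at or above the mean.
    m, s = block.mean(), block.std()
    return m, s, block >= m

def btc_decode(m, s, bits):
    # Reconstruct with two levels chosen so that the output preserves
    # the original mean and standard deviation.
    n, q = bits.size, int(bits.sum())
    if q in (0, n):
        return np.full(bits.shape, m)
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return np.where(bits, high, low)

rng = np.random.default_rng(0)
block = rng.random((4, 4))
m, s, bits = btc_encode(block)
recon = btc_decode(m, s, bits)
```

The two reconstruction levels are derived so that the decoded block has exactly the encoded mean and standard deviation, which is what makes BTC attractive at 1-2 bits/pixel.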
Blood perfusion construction for infrared face recognition based on bio-heat transfer.
Xie, Zhihua; Liu, Guodong
2014-01-01
To improve the performance of infrared face recognition for time-lapse data, a new construction of blood perfusion is proposed based on bio-heat transfer. Firstly, by quantifying the blood perfusion based on the Pennes equation, the thermal information is converted into the blood perfusion rate, which is a stable biological feature of the face image. Then, the separability discriminant criterion in the Discrete Cosine Transform (DCT) domain is applied to extract the discriminative features of the blood perfusion information. Experimental results demonstrate that the features of blood perfusion are more concentrated and discriminative for recognition than those of thermal information. The infrared face recognition based on the proposed blood perfusion is robust and can achieve better recognition performance compared with other state-of-the-art approaches.
Haldar, Justin P; Leahy, Richard M
2013-05-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transform coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
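The three lossy steps (transformation, quantization, coding) can be sketched for one 8x8 block. The DCT matrix and step size below are illustrative choices; the long runs of zeros produced by quantization are what the coding step then exploits:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
# Step 1: transformation of a smooth 8x8 block (a diagonal ramp).
block = np.add.outer(np.arange(8.0), np.arange(8.0))
coeffs = C @ block @ C.T
# Step 2: quantization. Dividing by a step size and rounding sends
# most low-magnitude coefficients to zero.
step = 16.0
quantized = np.round(coeffs / step)
# Step 3: coding (not shown). Entropy coding of the sparse, mostly-zero
# quantized array is where the actual bit savings happen.
recon = C.T @ (quantized * step) @ C
zeros = int((quantized == 0).sum())
```

For this smooth block, the overwhelming majority of the 64 quantized coefficients are zero, yet the reconstruction error stays below one quantization step.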
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
The conventional local binary pattern (LBP) histogram feature still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to capture the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns which are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
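For reference, a basic 3x3 LBP micro-pattern code looks like the following. The clockwise neighbor ordering is an assumption; published variants differ in start point and direction:

```python
def lbp_code(patch):
    # 8-bit LBP micro-pattern for a 3x3 patch: each neighbor, visited
    # clockwise from the top-left, contributes one bit (1 if >= center).
    center = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum(1 << k for k, v in enumerate(neighbors) if v >= center)
```

A histogram of these codes over each sub-block is the feature whose dimensionality the KW selection then reduces.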
Embedding multiple watermarks in the DFT domain using low- and high-frequency bands
NASA Astrophysics Data System (ADS)
Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.
2005-03-01
Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
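The conjugate symmetry the scheme relies on is easy to verify numerically:

```python
import numpy as np

# The DFT of a real image is conjugate symmetric:
# F[u, v] == conj(F[(-u) % N, (-v) % M]).
rng = np.random.default_rng(0)
img = rng.random((8, 8))
F = np.fft.fft2(img)
N, M = img.shape
u, v = 2, 5
symmetric_partner = np.conj(F[(-u) % N, (-v) % M])
```

This symmetry is why any watermark energy placed at (u, v) must be mirrored at the conjugate position, which shapes where multiple watermarks can be embedded.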
Proposed data compression schemes for the Galileo S-band contingency mission
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Tong, Kevin
1993-01-01
The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been and will be expended to attempt to open the HGA. Also, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA will not open at Jupiter arrival. Among all viable options the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight re-programming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. 
Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
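The advocated cosine distance can be sketched as follows; the point of the directional-scattering argument is that it ignores supervector magnitude, unlike the Euclidean distance:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cos(angle between supervectors); depends only on direction.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
sv = rng.random(64)        # stand-in for a GMM mean supervector
scaled = 3.0 * sv          # same direction, different magnitude
d_cos = cosine_distance(sv, scaled)
d_euc = np.linalg.norm(sv - scaled)
```

Two supervectors pointing the same way are identical under the cosine metric even though they are far apart in Euclidean terms, which is the property exploited for clustering in the supervector space.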
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced wavelet frame regularizer shows promise for SPECT image reconstruction using the PAPA method.
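The regularizer's effect can be illustrated with a toy comparison: for equal image energy, a smooth image has a much smaller ℓ1-norm of DCT coefficients than a noisy one, so penalizing that norm favors smooth reconstructions. A sketch, with a plain 2D DCT standing in for the non-decimated framelet decomposition:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct_l1(image):
    # l1-norm of the 2D DCT coefficients, used as a smoothness penalty.
    c = dct_matrix(image.shape[0])
    return np.abs(c @ image @ c.T).sum()

rng = np.random.default_rng(0)
smooth = np.ones((8, 8))
noisy = rng.standard_normal((8, 8))
noisy *= np.linalg.norm(smooth) / np.linalg.norm(noisy)  # equalize energy
```

The constant image is one-sparse in the DCT domain, so its ℓ1-norm equals its ℓ2-norm; the noise spreads its energy across all 64 coefficients and pays a much larger penalty.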
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
Decomposition of ECG by linear filtering.
Murthy, I S; Niranjan, U C
1992-01-01
A simple method is developed for the delineation of a given electrocardiogram (ECG) signal into its component waves. The properties of discrete cosine transform (DCT) are exploited for the purpose. The transformed signal is convolved with appropriate filters and the component waves are obtained by computing the inverse transform (IDCT) of the filtered signals. The filters are derived from the time signal itself. Analysis of continuous strips of ECG signals with various arrhythmias showed that the performance of the method is satisfactory both qualitatively and quantitatively. The small amplitude P wave usually had a high percentage rms difference (PRD) compared to the other large component waves.
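The transform-filter-inverse pipeline can be sketched with a fixed low-pass truncation; the paper derives its filters adaptively from the signal itself, so this is only the skeleton of the method:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (1D).
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct_lowpass(signal, keep):
    # Transform, zero every coefficient past `keep`, invert: the
    # component wave carried by the low-frequency coefficients.
    c = dct_matrix(len(signal))
    coeffs = c @ signal
    coeffs[keep:] = 0.0
    return c.T @ coeffs

n = 64
i = np.arange(n)
slow = np.cos(np.pi * (2 * i + 1) * 2 / (2 * n))   # exactly DCT bin 2
fast = np.cos(np.pi * (2 * i + 1) * 20 / (2 * n))  # exactly DCT bin 20
recovered = dct_lowpass(slow + fast, keep=10)
```

Because both test components are exact DCT basis vectors, the truncation separates them perfectly; on a real ECG the filter supports are chosen from the signal's own DCT spectrum.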
Experimental Observation and Theoretical Description of Multisoliton Fission in Shallow Water
NASA Astrophysics Data System (ADS)
Trillo, S.; Deng, G.; Biondini, G.; Klein, M.; Clauss, G. F.; Chabchoub, A.; Onorato, M.
2016-09-01
We observe the dispersive breaking of cosine-type long waves [Phys. Rev. Lett. 15, 240 (1965)] in shallow water, characterizing the highly nonlinear "multisoliton" fission over variable conditions. We provide new insight into the interpretation of the results by analyzing the data in terms of the periodic inverse scattering transform for the Korteweg-de Vries equation. In a wide range of dispersion and nonlinearity, the data compare favorably with our analytical estimate, based on a rigorous WKB approach, of the number of emerging solitons. We are also able to observe experimentally the universal Fermi-Pasta-Ulam recurrence in the regime of moderately weak dispersion.
The theory of the gravitational potential applied to orbit prediction
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
A complete derivation of the geopotential function and its gradient is presented. Also included is the transformation of Laplace's equation from Cartesian to spherical coordinates. The analytic solution to Laplace's equation is obtained from the transformed version, in the classical manner of separating the variables. A cursory introduction to the method devised by Pines, using direction cosines to express the orientation of a point in space, is presented together with sample computer program listings for computing the geopotential function and the components of its gradient. The use of the geopotential function is illustrated.
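Direction cosines are just the components of the unit position vector, as a quick sketch shows:

```python
import numpy as np

def direction_cosines(point):
    # Cosines of the angles between the position vector and each axis;
    # they always satisfy l^2 + m^2 + n^2 = 1.
    p = np.asarray(point, dtype=float)
    return p / np.linalg.norm(p)

l, m, n = direction_cosines([3.0, 4.0, 12.0])
```

Expressing a point's orientation this way, as in Pines' method, avoids the singularities that spherical angles introduce at the poles.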
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and a cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.
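The core phase-truncation idea can be demonstrated with a plain Fourier transform; this sketch omits the paper's DCT stage and the cube/cube-root nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((8, 8))                        # plaintext image
mask = np.exp(2j * np.pi * rng.random((8, 8)))  # random phase mask (encryption key)

field = np.fft.fft2(img * mask)
ciphertext = np.abs(field)                  # phase-truncated amplitude
phase_key = np.exp(1j * np.angle(field))    # reserved as the decryption key

# Decryption: re-attach the phase key and invert the transform.
recovered = np.abs(np.fft.ifft2(ciphertext * phase_key))
```

Truncating the phase makes the system asymmetric: the encryption mask alone cannot invert the ciphertext, because the discarded phase is what the decryption key carries.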
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier; the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6 % compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, Q- and the Hotelling’s \\documentclass[12pt]{minimal} \\usepackage{amsmath} \\usepackage{wasysym} \\usepackage{amsfonts} \\usepackage{amssymb} \\usepackage{amsbsy} \\usepackage{upgreek} \\usepackage{mathrsfs} \\setlength{\\oddsidemargin}{-69pt} \\begin{document} }{}$T^{2}$ \\end{document} statistics of the transformed EEG and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
Genetic algorithm optimization of DWT-DCT based image watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Hiding data in image content is mandatory for establishing the ownership of the image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer where the watermark is embedded can also be selected. Next, 2D-DWT transforms the selected layer, obtaining 4 subbands, of which only one is selected. Block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based coefficient selection. A delta parameter replacing the coefficients in each range represents the embedded bit: +delta represents bit “1” and -delta represents bit “0”. The parameters optimized by the Genetic Algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that GA is able to determine parameters that achieve optimum imperceptibility and robustness for any watermarked image condition, whether attacked or not. The DWT stage in DCT-based image watermarking, optimized by GA, improves the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER=0, and the watermarked image quality, measured by PSNR, is about 5 dB higher than that of the previous method.
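The delta embedding rule can be sketched as follows; the block, positions, and delta value are illustrative, not the GA-optimized parameters:

```python
import numpy as np

def embed_bit(coeffs, positions, bit, delta=8.0):
    # Replace selected AC coefficients with +delta for bit 1, -delta for bit 0.
    out = coeffs.copy()
    for i, j in positions:
        out[i, j] = delta if bit else -delta
    return out

def extract_bit(coeffs, positions):
    # Majority vote on the signs of the selected coefficients.
    signs = [coeffs[i, j] for i, j in positions]
    return int(np.sum(signs) > 0)

coeffs = np.zeros((8, 8))
positions = [(0, 1), (1, 0), (1, 1)]
marked = embed_bit(coeffs, positions, bit=1)
noisy = marked + np.random.default_rng(0).normal(0, 2.0, marked.shape)
```

Spreading one bit over several coefficients is what buys robustness: moderate additive noise flips individual signs rarely enough that the majority vote still recovers the bit.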
Sines and Cosines. Part 1 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1992-01-01
Applying the concept of similarities, the mathematical principles of circular motion and sine and cosine waves are presented utilizing both film footage and computer animation in this 'Project Mathematics' series video. Concepts presented include: the symmetry of sine waves; the cosine (complementary sine) and cosine waves; the use of sines and cosines on coordinate systems; the relationship they have to each other; the definitions and uses of periodic waves, square waves, sawtooth waves; the Gibbs phenomena; the use of sines and cosines as ratios; and the terminology related to sines and cosines (frequency, overtone, octave, intensity, and amplitude).
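The Gibbs phenomenon mentioned above can be reproduced numerically: partial sine-series sums of a unit square wave overshoot the jump by roughly 9%:

```python
import numpy as np

# Partial Fourier (sine) sums of a unit square wave on (0, pi).
x = np.linspace(1e-3, np.pi - 1e-3, 20001)
k = np.arange(50)                       # 50 odd harmonics
harmonics = np.sin(np.outer(2 * k + 1, x)) / (2 * k + 1)[:, None]
partial_sum = (4 / np.pi) * harmonics.sum(axis=0)
overshoot = partial_sum.max()           # about 1.179 for a unit square wave
```

Adding more harmonics narrows the overshoot spike but never shrinks its height, which is the essence of the Gibbs phenomenon.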
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
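As a correctness reference for such filters, a transform with a known closed form can be evaluated by brute-force quadrature; the digital-filter method computes the same integral far more cheaply on log-spaced abscissae:

```python
import numpy as np

# The filters evaluate integrals of the form F(b) = \int_0^inf f(t) cos(b t) dt.
# Brute-force check for f(t) = exp(-t), whose cosine transform is 1 / (1 + b^2).
b = 2.0
t = np.linspace(0.0, 50.0, 500001)
integrand = np.exp(-t) * np.cos(b * t)
h = t[1] - t[0]
numeric = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * h  # trapezoid rule
exact = 1.0 / (1.0 + b * b)
```

Half a million samples for one output value is exactly the cost the filter-convolution approach avoids when the transform must be evaluated over a whole parameter set.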
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
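A related closed form is easy to check numerically: the hemispherical integral of a cosine-power lobe weighted by the projected solid angle is 2π/(n+2). This is the classical normalization integral for such lobes, not necessarily the paper's specific black-sky/white-sky expressions:

```python
import numpy as np

def lobe_hemisphere_integral(n, samples=200000):
    # Integral of cos^n(theta) * cos(theta) over the hemisphere,
    # i.e. 2*pi * int_0^{pi/2} cos^{n+1}(theta) sin(theta) dtheta,
    # evaluated by trapezoid quadrature in theta.
    theta = np.linspace(0.0, np.pi / 2, samples)
    f = np.cos(theta) ** n * np.cos(theta) * np.sin(theta)
    h = theta[1] - theta[0]
    return 2 * np.pi * (f.sum() - 0.5 * (f[0] + f[-1])) * h

numeric = lobe_hemisphere_integral(4)
exact = 2 * np.pi / (4 + 2)
```

Integrals of this shape, with the BRDF weighted by the cosine of the outgoing angle, are the building blocks of the albedo expressions the paper derives analytically.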
An iris recognition algorithm based on DCT and GLCM
NASA Astrophysics Data System (ADS)
Feng, G.; Wu, Ye-qing
2008-04-01
As the range of human activity expands, personal identity verification is becoming more and more important, and many different techniques have been proposed for this practical purpose. Conventional identification methods such as passwords and identification cards are not always reliable. A wide variety of biometrics has been developed for this challenge. Among these biological characteristics, the iris pattern gains increasing attention for its stability, reliability, uniqueness, noninvasiveness, and difficulty to counterfeit. The distinct merits of the iris lead to its high reliability for personal identification, so iris identification has become a hot research topic in the past several years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the Discrete Cosine Transform (DCT). To obtain more representative iris features, features from both the spatial domain and the DCT domain are extracted. Both GLCM and DCT are applied to the iris image to form the feature sequence. The combination of GLCM and DCT makes the iris features more distinct: the extracted feature vector reflects both spatial and frequency-domain characteristics. Experimental results show that the algorithm is effective and feasible for iris recognition.
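A minimal GLCM for one offset can be computed directly; the offset (dx, dy) = (1, 0) counts horizontal neighbor pairs:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    # Count co-occurrences of gray levels at offset (dx, dy).
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2]])
m = glcm(img, levels=3)
```

Texture statistics (contrast, homogeneity, entropy) computed from this matrix give the spatial half of the combined GLCM+DCT feature.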
Yuan, Soe-Tsyr; Sun, Jerry
2005-10-01
Development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most text-clustering methods are grounded in term-based measurement of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling on document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (which lack rich document features) obtained from wireless oral experience sharing conducted by the mobile workforce of enterprises, fulfilling audio-based knowledge management. In other words, this problem aims to facilitate knowledge acquisition and sharing by speech. The evaluations also show fairly promising results for our method of structured cosine similarity.
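The term-based cosine similarity that SCS extends is the standard baseline below; SCS additionally weights by document structure, which this sketch does not attempt:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    # Plain term-vector cosine similarity over word counts.
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

sim = cosine_similarity("speech audio knowledge", "speech audio sharing")
```

Two of the three terms overlap here, giving a similarity of 2/3; the baseline's weakness is that it treats all positions in a document identically.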
A Fourier transform method for Vsin i estimations under nonlinear Limb-Darkening laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levenhagen, R. S., E-mail: ronaldo.levenhagen@gmail.com
Star rotation offers us a large horizon for the study of many important physical issues pertaining to stellar evolution. Currently, four methods are widely used to infer rotation velocities, namely those based on line width calibrations, on the fitting of synthetic spectra, on interferometry, and on Fourier transforms (FTs) of line profiles. Almost all estimations of stellar projected rotation velocities using the Fourier method in the literature have been addressed with the use of linear limb-darkening (LD) approximations during the evaluation of rotation profiles and their cosine FTs, which in certain cases lead to discrepant velocity estimates. In this work, we introduce new mathematical expressions for rotation profiles and their Fourier cosine transforms assuming three nonlinear LD laws (quadratic, square-root, and logarithmic) and study their applications with and without gravity-darkening (GD) and geometrical flattening (GF) effects. Through an analysis of He I models in the visible range accounting for both limb and GD, we find that, for classical models without rotationally driven effects, all the Vsin i values are very close to each other. On the other hand, taking into account GD and GF, the linear law results in Vsin i values that are systematically smaller than those obtained with the other laws. As a rule of thumb, we apply these expressions to the FT method to evaluate the projected rotation velocity of the emission B-type star Achernar (α Eri).
NASA Astrophysics Data System (ADS)
Li, Xiangyu; Huang, Zhanhua; Zhu, Meng; He, Jin; Zhang, Hao
2014-12-01
Hilbert transform (HT) is widely used in temporal speckle pattern interferometry, but errors from low modulations might propagate and corrupt the calculated phase. A spatio-temporal method for phase retrieval using temporal HT and spatial phase unwrapping is presented. In time domain, the wrapped phase difference between the initial and current states is directly determined by using HT. To avoid the influence of the low modulation intensity, the phase information between the two states is ignored. As a result, the phase unwrapping is shifted from time domain to space domain. A phase unwrapping algorithm based on discrete cosine transform is adopted by taking advantage of the information in adjacent pixels. An experiment is carried out with a Michelson-type interferometer to study the out-of-plane deformation field. High quality whole-field phase distribution maps with different fringe densities are obtained. Under the experimental conditions, the maximum number of fringes resolvable in a 416×416 frame is 30, which indicates a 15λ deformation along the direction of loading.
NASA Astrophysics Data System (ADS)
Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan
2010-08-01
Collaborative filtering is one of the most successful recommendation techniques; it can effectively predict the possible future likes of users based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach is to use the correlation between the ratings that two users give to a set of objects, such as the Cosine index and the Pearson correlation coefficient. However, the cost of computing such indices is relatively high, making them impractical in huge systems. To solve this problem, in this paper we introduce six local-structure-based similarity indices and compare their performance with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data are dense, the structure-based indices perform as competitively as the Cosine index, but with lower computational complexity. Furthermore, when the data are sparse, the structure-based indices give even better results than the Cosine index.
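The contrast between a correlation-type index and a local-structure index can be sketched on a toy rating matrix. The Cosine index below follows the standard definition; the common-neighbor count is only one illustrative structural index (the paper's six indices are not specified here).

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 = unrated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5]], dtype=float)

def cosine_index(u, v):
    """Cosine similarity between two users' full rating vectors."""
    num = R[u] @ R[v]
    den = np.linalg.norm(R[u]) * np.linalg.norm(R[v])
    return num / den if den else 0.0

def common_neighbors(u, v):
    """A simple local-structure index: the number of co-rated items,
    which needs only the bipartite user-item structure, not the ratings."""
    return int(np.sum((R[u] > 0) & (R[v] > 0)))
```

The structural index avoids the floating-point dot products and norms of the Cosine index, which is the source of the complexity advantage discussed in the abstract.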
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method can achieve a trade-off between high robustness and good image quality.
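The idea of quantization-based embedding in a low-frequency DCT coefficient can be sketched with plain quantization index modulation. This is a simplified stand-in for DC-DM (no distortion compensation), and the coefficient position (1, 1) and step size are arbitrary choices for illustration.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit, step=8.0):
    """Embed one bit in a low-frequency DCT coefficient by shifting it
    to one of two dither offsets around the quantization lattice."""
    c = dct2(block)
    q = np.round(c[1, 1] / step) * step              # snap to lattice
    c[1, 1] = q + (step / 4 if bit else -step / 4)   # dither encodes the bit
    return idct2(c)

def extract_bit(block, step=8.0):
    """Recover the bit from the sign of the quantization residual."""
    c = dct2(block)
    r = c[1, 1] - np.round(c[1, 1] / step) * step
    return 1 if r > 0 else 0
```

A real scheme would also clip pixels to the valid range and use DC-DM's compensation term to trade robustness against distortion.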
Artificial intelligence systems based on texture descriptors for vaccine development.
Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra
2011-02-01
The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.
Statistical Characterization of MP3 Encoders for Steganalysis: 'CHAMP3'
2004-04-27
compression exceeds those of typical steganographic tools (e.g., LSB image embedding), the availability of commented source code for MP3 encoders... developed by testing the approach on known and unknown reference data. Subject terms: EOARD, Steganography, Digital Watermarking... kbps, kilobits per second; LGPL, Lesser General Public License; LSB, least significant bit; MB, megabyte; MDCT, Modified Discrete Cosine Transformation; MP3
NASA Astrophysics Data System (ADS)
Cintra, Renato J.; Bayer, Fábio M.
2017-12-01
In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in the literature in terms of additive complexity. We could not verify the above results, and we offer corrections for their work.
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
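The first step of the pipeline above, taking a noise level sample from the high-frequency DCT coefficients of a smooth patch, can be sketched as follows. This is a simplified single-patch version; the paper additionally uses nonlocal grouping, confidence weighting, and recovery under a trained sparse basis, none of which are reproduced here, and the diagonal cutoff fraction is an assumed parameter.

```python
import numpy as np
from scipy.fftpack import dct

def noise_level_sample(patch, hf_fraction=0.5):
    """Estimate the noise standard deviation of a low-variation (smooth)
    patch from its high-frequency 2-D DCT coefficients, whose energy is
    dominated by noise when the underlying signal is smooth."""
    c = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
    n = patch.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    hf = (u + v) >= int(2 * n * (1 - hf_fraction))   # anti-diagonal cut
    return float(np.sqrt(np.mean(c[hf] ** 2)))       # RMS of HF coefficients
```

Because the orthonormal DCT preserves white-noise variance, the RMS of the high-frequency coefficients of a flat patch approximates the noise standard deviation.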
Study of Fourier transform spectrometer based on Michelson interferometer wave-meter
NASA Astrophysics Data System (ADS)
Peng, Yuexiang; Wang, Liqiang; Lin, Li
2008-03-01
A wave-meter based on a Michelson interferometer consists of a reference channel and a measurement channel. A voice-coil motor under PID control provides stable motion. The wavelength of the measurement laser is obtained by counting the interference fringes of the reference and measurement lasers. The frequency-stabilized reference laser creates a cosine interferogram signal whose frequency is proportional to the velocity of the moving motor. The interferogram of the reference laser is converted to a pulse signal, which is subdivided by a factor of 16. To obtain the optical spectrum, the analog signal of the measurement channel must be sampled. The analog-to-digital converter (ADC) for the measurement channel is triggered by the 16-times-subdivided pulse signal of the reference laser, so the sampling rate depends only on the frequency of the reference laser and is independent of the motor velocity. This means the measurement-channel signal is sampled on a uniform scale. The optical spectrum of the measurement channel is then computed with the Fast Fourier Transform (FFT) by a DSP and displayed on an LCD.
Algebraic signal processing theory: 2-D spatial hexagonal lattice.
Püschel, Markus; Rötteler, Martin
2007-06-01
We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform-a fact that will be made rigorous in this paper.
Flow to a well in a water-table aquifer: An improved Laplace transform solution
Moench, A.F.
1996-01-01
An alternative Laplace transform solution for the problem, originally solved by Neuman, of constant discharge from a partially penetrating well in a water-table aquifer was obtained. The solution differs from existing solutions in that it is simpler in form and can be numerically inverted without the need for time-consuming numerical integration. The derivation involves the use of the Laplace transform and a finite Fourier cosine series and avoids the Hankel transform used in prior derivations. The solution allows for water in the overlying unsaturated zone to be released either instantaneously in response to a declining water table, as assumed by Neuman, or gradually, as approximated by Boulton's convolution integral. Numerical evaluation yields results identical to those obtained by previously published methods with the advantage, under most well-aquifer configurations, of much reduced computation time.
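Integration-free numerical inversion of a Laplace transform, of the kind the abstract refers to, can be illustrated with the Gaver-Stehfest algorithm. The abstract does not name the inversion scheme used; Stehfest is simply one widely used method of this class, shown here on a transform with a known inverse.

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s)
    at time t (N must be even). Accurate for smooth, non-oscillatory
    f(t); needs only N evaluations of F on the real axis."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V *= (-1) ** (k + N // 2)
        total += V * F(k * ln2 / t)
    return total * ln2 / t

# Invert F(s) = 1/(s + 1), whose exact inverse is f(t) = exp(-t):
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

The method replaces a numerical Bromwich integral with a short weighted sum, which is why such closed-form Laplace-domain solutions invert quickly.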
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
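The basic DCT-plus-quantization-matrix mechanism that the patent builds on can be sketched as follows. The matrix Q below is a toy ramp that merely quantizes high frequencies more coarsely; it is not the perceptually optimized matrix of the invention.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Toy quantization matrix: coarser steps toward high frequencies.
Q = 16 + 2 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def quantize_block(block):
    """2-D DCT of an 8x8 block, divided elementwise by Q and rounded."""
    c = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(c / Q).astype(int)

def dequantize_block(qc):
    """Approximate reconstruction: rescale by Q and invert the DCT."""
    return idct(idct(qc * Q, axis=0, norm='ortho'), axis=1, norm='ortho')
```

Adapting the entries of Q to the image, as the patent does via masking and error pooling, controls exactly this rounding step and hence the rate-quality trade-off.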
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
A Kinect based sign language recognition system using spatio-temporal features
NASA Astrophysics Data System (ADS)
Memiş, Abbas; Albayrak, Songül
2013-12-01
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses a motion-difference and accumulation approach for temporal gesture analysis. The motion accumulation method, which is an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on RGB images and depth maps separately. The DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of TSL in three different categories. The proposed sign language recognition system achieves promising success rates.
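The zigzag scanning step mentioned above, which orders 2-D DCT coefficients so that low frequencies come first, can be sketched generically (the paper does not specify how many leading coefficients it keeps):

```python
import numpy as np

def zigzag(block):
    """Read an N x N coefficient block in JPEG-style zigzag order:
    anti-diagonals of increasing u+v, alternating traversal direction,
    so low-frequency DCT coefficients come first."""
    n = block.shape[0]
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[u, v] for u, v in order])
```

Truncating the zigzag sequence after the first few entries yields a compact feature vector dominated by the coarse structure of the accumulated motion image.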
NASA Astrophysics Data System (ADS)
Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.
2015-03-01
Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
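The hard-thresholding mechanism whose efficiency is being predicted above can be sketched in a whole-image form. The conventional DCT filter studied in the paper operates on overlapping blocks; this single-transform version, with an assumed threshold factor of 2.6 times the noise standard deviation, only illustrates the principle.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_hard_threshold(img, sigma, beta=2.6):
    """Whole-image DCT denoising by hard thresholding: zero every
    coefficient whose magnitude is below beta * sigma."""
    c = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    dc = c[0, 0]                          # preserve the mean (DC term)
    c[np.abs(c) < beta * sigma] = 0.0     # kill noise-dominated coefficients
    c[0, 0] = dc
    return idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')
```

The DCT-spectrum statistics discussed in the abstract essentially measure how much image energy survives this threshold, which is why they predict denoising efficiency.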
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. From the selected bands, it then extracts linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach in feature extraction. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
Collaborative Wideband Compressed Signal Detection in Interplanetary Internet
NASA Astrophysics Data System (ADS)
Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei
2014-07-01
With the development of autonomous radio in deep space networks, it becomes possible to communicate between explorers, aircraft, rovers, and satellites, e.g. from different countries, adopting different signal modes. The first task of autonomous radio is to detect signals of the explorer autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of IPN Internet communication signals, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a certain fusion rule to gain spatial diversity. A couple of novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of our proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, our DCT and WHT based methods reduce computational complexity, decrease processing time, save energy, and enhance the probability of detection.
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
On the application of under-decimated filter banks
NASA Astrophysics Data System (ADS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-11-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on the DFT measurement matrix (FGI) is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT matrix is the same as that of the random measurement matrix; the PSNRs of the images reconstructed by the FGI and PGI algorithms are then similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNRs of the PGI and CGI reconstructions decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, realizing denoising during reconstruction with a higher denoising capability than the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
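The core measure-then-pseudo-invert step can be sketched in one dimension with a full-sampling DFT measurement matrix. This toy sketch uses a random 1-D "object" and full sampling only; the paper's 2-D imaging, undersampled cases, and noise filtering are not reproduced.

```python
import numpy as np

n = 16
rng = np.random.default_rng(1)
obj = rng.random(n)                              # unknown 1-D "object"

k = np.arange(n)
A = np.exp(-2j * np.pi * np.outer(k, k) / n)     # DFT measurement matrix
y = A @ obj                                      # bucket measurements
recon = np.linalg.pinv(A) @ y                    # pseudo-inverse reconstruction
```

With full sampling the DFT matrix has full rank, so the pseudo-inverse recovers the object exactly (up to floating-point error); reducing the number of rows of A reproduces the undersampled regime studied in the abstract.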
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing works with non-adaptive linear projections, and its number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with a conventional selection of compressive measurements is described. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively and provides a practical basis for the selection of the number of compressive observations. The results also show that, since the number of observations is selected based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
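The sparsity estimate described above (sort DCT coefficient energies, count how many are needed to reach an energy threshold) can be sketched as follows; the 0.99 threshold is an assumed example value, not one taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def estimate_sparsity(img, energy_threshold=0.99):
    """Fraction of 2-D DCT coefficients (sorted by energy, descending)
    needed to retain the given share of the total energy."""
    c = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    e = np.sort(c.ravel() ** 2)[::-1]
    e = e / e.sum()                                   # energy normalization
    k = int(np.searchsorted(np.cumsum(e), energy_threshold)) + 1
    return k / e.size
```

A smooth image concentrates its energy in a few low-frequency coefficients and scores low; white noise spreads energy evenly and scores near one, which is what makes the score a usable proxy for the number of measurements required.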
A Classroom Note on Generating Examples for the Laws of Sines and Cosines from Pythagorean Triangles
ERIC Educational Resources Information Center
Sher, Lawrence; Sher, David
2007-01-01
By selecting certain special triangles, students can learn about the laws of sines and cosines without wrestling with long decimal representations or irrational numbers. Since the law of cosines requires only one of the three angles of a triangle, there are many examples of triangles with integral sides and a cosine that can be represented exactly…
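The kind of example the note advocates can be computed exactly with rational arithmetic. The 13-14-15 triangle, formed by gluing the Pythagorean triangles 5-12-13 and 9-12-15 along their common leg 12, has integral sides and exactly representable cosines.

```python
from fractions import Fraction

def cos_opposite(a, b, c):
    """Exact cosine of the angle opposite side c, by the law of cosines:
    cos C = (a^2 + b^2 - c^2) / (2ab)."""
    return Fraction(a * a + b * b - c * c, 2 * a * b)

# Cosines of the angles opposite sides 13, 14, and 15 respectively:
cosines = [cos_opposite(14, 15, 13), cos_opposite(13, 15, 14),
           cos_opposite(13, 14, 15)]
```

Each cosine is a simple fraction (3/5, 33/65, and 5/13), so students can check the law of cosines without long decimals or irrational numbers; a right triangle such as 3-4-5 gives cosine 0 for the right angle.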
A Thin Lens Model for Charged-Particle RF Accelerating Gaps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
Presented is a thin-lens model for an RF accelerating gap that considers general axial fields without energy dependence or other a priori assumptions. Both the cosine and sine transit time factors (i.e., Fourier transforms) are required, plus two additional functions: the Hilbert transforms of the transit-time factors. The combination yields a complex-valued Hamiltonian rotating in the complex plane with the synchronous phase. Using the Hamiltonians, the phase and energy gains are computed independently in the pre-gap and post-gap regions and then aligned using the asymptotic values of the wave number. Derivations of these results are outlined, examples are shown, and simulations with the model are presented.
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to ensuring the protection of information transmitted over a network, but a steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of embedding data into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete wavelet transform is used instead of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen as the quality assessment of data embedding. Experiments confirm that satisfying the requirements allows achieving a high quality assessment of data embedding.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal 4×4 and 8×8 matrices, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation using a pseudo-entropy code is proposed to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
Personalized Medicine in Veterans with Traumatic Brain Injuries
2012-05-01
UPGMA) based on cosine correlation of row-mean-centered log2 signal values; this was the top 50%-tile, 3) In the DA top 50%-tile, selected probe sets... GeneMaths XT following row mean centering of log2-transformed MAS5.0 signal values; probe-set clustering was performed by the UPGMA method using... hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left
Latent semantic analysis cosines as a cognitive similarity measure: Evidence from priming studies.
Günther, Fritz; Dudschig, Carolin; Kaup, Barbara
2016-01-01
In distributional semantics models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
NASA Astrophysics Data System (ADS)
Liang, Ruiyu; Xi, Ji; Bao, Yongqiang
2017-07-01
To improve the performance of gain compensation based on a three-segment sound pressure level (SPL) scheme in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL scheme is proposed. First, a uniform cosine modulated filter bank is designed. Adjacent channels with low or gradual audiogram slopes are then adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Second, the input speech is decomposed into sub-band signals and the SPL of every sub-band signal is computed. Meanwhile, the audible range from 0 dB SPL to 120 dB SPL is equally divided into eight segments, and for each segment a different prescription formula is designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal is synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed the proposed algorithm can effectively improve the speech recognition of six hearing-impaired persons.
Vibration Power Flow In A Constrained Layer Damping Cylindrical Shell
NASA Astrophysics Data System (ADS)
Wang, Yun; Zheng, Gangtie
2012-07-01
In this paper, the vibration power flow in a constrained layer damping (CLD) cylindrical shell is investigated using the wave propagation approach. The dynamic equations of the shell are derived with the Hamilton principle in conjunction with the Donnell shell assumption. With these equations, the dynamic responses of the system under a line circumferential cosine harmonic exciting force are obtained by employing the Fourier transform and the residue theorem. The vibration power flows input to the system and transmitted along the shell axial direction are both studied. The results show that the input power flow varies with driving frequency and circumferential mode order, and that the constrained damping layer can markedly restrict the exciting force from inputting power flow into the base shell, especially for a thicker viscoelastic layer, a thicker or stiffer constraining layer (CL), or a higher circumferential mode order, and can rapidly attenuate the vibration power flow transmitted along the base shell axial direction.
Resolution enhancement of low-quality videos using a high-resolution frame
NASA Astrophysics Data System (ADS)
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.
A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"
NASA Astrophysics Data System (ADS)
Das, Rig; Tuithung, Themrichon
2013-03-01
This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding", International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible with the algorithm proposed in [3]. An 8 bit cover image of size is divided into non-joint blocks and a two-dimensional Discrete Cosine Transformation (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8 bit secret image of size, and each bit of the Huffman encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman encoded bit stream and Huffman table
Alternate forms of the associated Legendre functions for use in geomagnetic modeling.
Alldredge, L.R.; Benton, E.R.
1986-01-01
An inconvenience attending traditional use of associated Legendre functions in global modeling is that the functions are not separable with respect to the 2 indices (order and degree). In 1973 Merilees suggested a way to avoid the problem by showing that associated Legendre functions of order m and degree m+k can be expressed in terms of elementary functions. This note calls attention to some possible gains in time savings and accuracy in geomagnetic modeling based upon this form. For this purpose, expansions of associated Legendre polynomials in terms of sines and cosines of multiple angles are displayed up to degree and order 10. Examples are also given explaining how some surface spherical harmonics can be transformed into true Fourier series for selected polar great circle paths. -from Authors
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
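The block DCT plus uniform quantization step at the heart of such compressors can be sketched as follows. This is a minimal illustration, not the paper's method: the 4x4 block, the flat quantization table, and the direct O(N^4) transform are all simplifications chosen for clarity.

```python
import math

def dct_2d(block):
    """Orthonormal 2-D DCT-II of a square block (direct, unoptimized form)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, q):
    """JPEG-style uniform quantization: each coefficient divided by its table entry and rounded."""
    return [[round(cv / qv) for cv, qv in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, q)]

# A flat 4x4 block: all energy lands in the DC coefficient.
block = [[128] * 4 for _ in range(4)]
q = [[16] * 4 for _ in range(4)]      # hypothetical flat quantization table
coeffs = dct_2d(block)
print(quantize(coeffs, q)[0][0])      # DC term: 128*4/16 = 32
```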
On the Use of Quartic Force Fields in Variational Calculations
NASA Technical Reports Server (NTRS)
Fortenberry, Ryan C.; Huang, Xinchuan; Yachmenev, Andrey; Thiel, Walter; Lee, Timothy J.
2013-01-01
The use of quartic force fields (QFFs) has been shown to be one of the most effective ways to efficiently compute vibrational frequencies for small molecules. In this paper we outline and discuss how the simple-internal or bond-length bond-angle (BLBA) coordinates can be transformed into Morse-cosine(-sine) coordinates which produce potential energy surfaces from QFFs that possess proper limiting behavior and can effectively describe the vibrational (or rovibrational) energy levels of an arbitrary molecular system. We investigate parameter scaling in the Morse coordinate, symmetry considerations, and examples of transformed QFFs making use of the MULTIMODE, TROVE, and VTET variational vibrational methods. Cases are referenced where variational computations coupled with transformed QFFs produce accuracies compared to experiment for fundamental frequencies on the order of 5 cm(exp -1) and often as good as 1 cm(exp -1).
Coordinate transformation by minimizing correlations between parameters
NASA Technical Reports Server (NTRS)
Kumar, M.
1972-01-01
This investigation was aimed at determining the transformation parameters (three rotations, three translations, and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. The above objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters in a solution in which the rotations and the scale factor enter as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
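The preliminary separation of the scale factor described above rests on the fact that chord distances between common points are invariant to rotation and translation. A minimal sketch with synthetic coordinates (the point sets and the 1.5 scale factor are made up; a real adjustment would also weight the ratios and solve for the rotations):

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def estimate_scale(sys1, sys2):
    """Scale factor from ratios of chord distances between common points.
    Chord distances do not depend on rotation or translation, so the scale
    can be separated out before solving for the remaining parameters."""
    ratios = []
    n = len(sys1)
    for i in range(n):
        for j in range(i + 1, n):
            ratios.append(dist(sys2[i], sys2[j]) / dist(sys1[i], sys1[j]))
    return sum(ratios) / len(ratios)

# Synthetic data: system 2 = 1.5 * system 1 + translation (10, 20, 30).
pts1 = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 3)]
pts2 = [tuple(1.5 * c + t for c, t in zip(p, (10, 20, 30))) for p in pts1]
print(round(estimate_scale(pts1, pts2), 6))  # 1.5
```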
A new DWT/MC/DPCM video compression framework based on EBCOT
NASA Astrophysics Data System (ADS)
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate versus distortion.
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
NASA Astrophysics Data System (ADS)
Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun
2018-03-01
Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high frequency noises and power frequency noises (50 Hz), which can considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low pass filters, which are ineffective when seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results show that the EMD-cosine function denoising can effectively estimate high frequency and power frequency noises and can easily adapt to signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the AIC picker, and also yields more reliable source localization results when the ECD-AIC picker is applied, showing the potential of this combined P-phase picking technique.
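The AIC picking stage (without the EMD-cosine denoising front end) can be illustrated with the standard Maeda-style AIC formulation on a synthetic trace; the noise levels and the onset index below are made up for the example:

```python
import math
import random

def aic_pick(x):
    """Maeda-style AIC picker: the arrival estimate is the split index k that
    minimizes AIC(k) = k*ln(var(x[:k])) + (N-k-1)*ln(var(x[k:]))."""
    n = len(x)
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((s - m) ** 2 for s in seg) / len(seg) or 1e-12  # guard against log(0)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):
        aic = k * math.log(var(x[:k])) + (n - k - 1) * math.log(var(x[k:]))
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

# Synthetic trace: low-amplitude noise, then a high-amplitude "arrival" at index 200.
random.seed(1)
trace = [random.gauss(0, 0.05) for _ in range(200)] + \
        [random.gauss(0, 1.0) for _ in range(200)]
pick = aic_pick(trace)
print(abs(pick - 200) < 10)  # the AIC minimum lands at the variance change point
```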
NASA Astrophysics Data System (ADS)
Jiang, Fan; Rossi, Mathieu; Parent, Guillaume
2018-05-01
Accurately modeling the anisotropic behavior of electrical steel is mandatory in order to perform reliable simulations. Several approaches can be found in the literature for that purpose, but more often than not those methods are not able to deal with grain-oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain-oriented laminations. In particular, two solutions are proposed in order to increase the accuracy of the results. The first consists in increasing the number of terms retained in the cosine series on which the method is based. The second consists in modifying the method used to determine the terms belonging to this cosine series.
Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.
Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang
2017-07-01
It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution
NASA Astrophysics Data System (ADS)
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-09-01
In order to improve data hiding in multimedia data formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are both used. The results show the method to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for those values, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n<=4, because of the better PSNR achieved by this technique. The final results obtained after the steganography process do not reveal the presence of any hidden message, thus satisfying the criterion of an imperceptible message.
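The LSB substitution underlying the image branch of such methods can be sketched in a few lines. The pixel values and message bits below are made up, and the paper's redundant-noise secret key is not reproduced; this only shows why the distortion stays within one gray level per pixel:

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each byte with a message bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [137, 200, 54, 91, 18, 255, 0, 76]   # hypothetical 8-bit pixel values
message = [1, 0, 1, 1, 0]
stego = embed_lsb(cover, message)
print(extract_lsb(stego, 5) == message)                    # True: message recovered
print(all(abs(a - b) <= 1 for a, b in zip(stego, cover)))  # True: distortion <= 1 level
```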
Sayago, Ana; Asuero, Agustin G
2006-09-14
A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation applied for linearization. The method may be considered a useful alternative to methods in which one variable appears on both sides of the basic equation (i.e. those of Heller and Schwarzenbach, Likussar, and Adsul and Ramanathan). Classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are that it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log beta11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error at a minimum in both cases.
A method for optimizing the cosine response of solar UV diffusers
NASA Astrophysics Data System (ADS)
Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki
2013-07-01
Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 109 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f2=1.4% and 0.66%, respectively.
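The integrated cosine error f2 quoted at the end can be computed by numerical integration of the deviation of the angular response from the ideal cosine. The sketch below assumes one common CIE-style definition of f2 and a hypothetical diffuser response cos^1.05(θ); the paper's Monte Carlo simulation of radiation transport is not reproduced:

```python
import math

def integrated_cosine_error(response, steps=10000):
    """Integrated cosine error (one common CIE-style definition):
    f2 = integral over [0, pi/2] of |r(t)/cos(t) - 1| * sin(2t) dt,
    where r is the angular response normalized so that r(0) = 1.
    The sin(2t) weight corresponds to isotropic sky radiance."""
    total = 0.0
    h = (math.pi / 2) / steps
    for i in range(steps):
        t = (i + 0.5) * h            # midpoint rule
        total += abs(response(t) / math.cos(t) - 1.0) * math.sin(2 * t) * h
    return total

ideal = math.cos                     # perfect cosine response

def slightly_off(t):
    return math.cos(t) ** 1.05       # hypothetical non-ideal diffuser response

print(integrated_cosine_error(ideal) < 1e-9)           # True: no error for ideal response
print(0.01 < integrated_cosine_error(slightly_off) < 0.05)  # True: a few percent
```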
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao
2018-03-01
In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. In the proposed scheme, the transmitted image is first encrypted in the upper layer by a chaos scrambling sequence generated from a hybrid of a 4-D hyper-chaotic map and the Arnold map. After that, the encrypted image is converted into a digital QAM modulation signal, which is re-encrypted in the physical layer by a chaos scrambling sequence based on the Arnold map to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The experimental results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC. Furthermore, DCT precoding can reduce the PAPR and improve the BER performance of OFDM-based VLC.
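Arnold-map scrambling of the kind used in the physical layer is an invertible permutation of pixel coordinates, which is why the receiver can recover the image exactly. A toy sketch on a 5x5 "image" (the hyper-chaotic upper layer and the QAM mapping are not reproduced):

```python
def arnold_map(img, iterations=1):
    """Arnold cat map scrambling of an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N).
    The map matrix has determinant 1, so it is a bijection on the grid."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img

def arnold_unmap(img, iterations=1):
    """Inverse scrambling: read each pixel back from where the map sent it."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[x][y] = img[(x + y) % n][(x + 2 * y) % n]
        img = out
    return img

img = [[10 * r + c for c in range(5)] for r in range(5)]  # toy 5x5 "image"
scrambled = arnold_map(img, iterations=3)
print(scrambled != img)                               # True: pixels moved
print(arnold_unmap(scrambled, iterations=3) == img)   # True: perfectly recovered
```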
A closed form solution for constant flux pumping in a well under partial penetration condition
NASA Astrophysics Data System (ADS)
Yang, Shaw-Yang; Yeh, Hund-Der; Chiu, Pin-Yuan
2006-05-01
An analytical model for the constant flux pumping test is developed in a radial confined aquifer system with a partially penetrating well. The Laplace domain solution is derived by the application of the Laplace transforms with respect to time and the finite Fourier cosine transforms with respect to the vertical coordinates. A time domain solution is obtained using the inverse Laplace transforms, convolution theorem, and Bromwich integral method. The effect of partial penetration is apparent if the test well is completed with a short screen. An aquifer thickness 100 times larger than the screen length of the well can be considered as infinite. This solution can be used to investigate the effects of screen length and location on the drawdown distribution in a radial confined aquifer system and to produce type curves for the estimation of aquifer parameters with field pumping drawdown data.
Interaction phenomenon to dimensionally reduced p-gBKP equation
NASA Astrophysics Data System (ADS)
Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing
2018-02-01
Based on searching for combinations of a quadratic function and an exponential (or hyperbolic cosine) function from the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of these obtained solutions, is observed in three-dimensional plots and density plots with particular choices of the involved parameters for the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interference between the two solitary waves is inelastic.
CONSTRUCTING AND DERIVING RECIPROCAL TRIGONOMETRIC RELATIONS: A FUNCTIONAL ANALYTIC APPROACH
Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer
2009-01-01
Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force is still the dominant time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
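The explicitly zero-padded convolution that the third optimization replaces can be illustrated with a naive DFT: padding both sequences to length len(a)+len(b)-1 makes the circular convolution of the transform domain equal the linear convolution needed for the Green's function. This is the classical baseline, not the paper's optimized routine:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT, sufficient to illustrate the idea."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def conv_via_dft(a, b):
    """Linear convolution by explicit zero padding to len(a)+len(b)-1,
    multiplying spectra, and transforming back."""
    n = len(a) + len(b) - 1
    fa = dft(a + [0.0] * (n - len(a)))
    fb = dft(b + [0.0] * (n - len(b)))
    return [round(v.real, 9)
            for v in dft([p * q for p, q in zip(fa, fb)], inverse=True)]

def conv_direct(a, b):
    """Direct linear convolution, for comparison."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, av in enumerate(a):
        for j, bv in enumerate(b):
            out[i + j] += av * bv
    return out

a, b = [1.0, 2.0, 3.0], [4.0, 5.0]
print(conv_via_dft(a, b) == [round(v, 9) for v in conv_direct(a, b)])  # True
```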
Comparison Of The Performance Of Hybrid Coders Under Different Configurations
NASA Astrophysics Data System (ADS)
Gunasekaran, S.; Raina J., P.
1983-10-01
Picture bandwidth reduction employing DPCM and Orthogonal Transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics, and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive by retaining the special advantages of both techniques. In recent times interest has shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders at different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine, and Haar transforms have been considered for the present work.
Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features
NASA Astrophysics Data System (ADS)
Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng
Face recognition is one of the most active research areas in pattern recognition, not only because the face is a key biometric characteristic of human beings, but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector, using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
Map-invariant spectral analysis for the identification of DNA periodicities
2012-01-01
Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are, however, obtained using a very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented that provides a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and give a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10(-4) on the available data sets.
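The fractional Hamming distance and threshold verification described above can be sketched with toy bit codes. The codes below are made up and only 8 bits long (real iris codes have thousands of bits), and the paper's product-of-sum matching variant is not reproduced:

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two binary iris codes."""
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def verify(code_a, code_b, threshold):
    """Accept the identity claim if the distance falls below the threshold."""
    return hamming_distance(code_a, code_b) < threshold

enrolled  = [1, 0, 1, 1, 0, 0, 1, 0]
same_eye  = [1, 0, 1, 0, 0, 0, 1, 0]   # 1 differing bit out of 8
other_eye = [0, 1, 0, 1, 1, 1, 0, 1]   # 7 differing bits out of 8

print(verify(enrolled, same_eye, threshold=0.3))    # True: accepted
print(verify(enrolled, other_eye, threshold=0.3))   # False: rejected
```

Varying the threshold trades False Acceptance Rate against False Rejection Rate, which is exactly the ROC sweep the abstract reports.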
NASA Astrophysics Data System (ADS)
Gamil, A. M.; Gilani, S. I.; Al-Kayiem, H. H.
2013-06-01
Solar energy is the most available, clean, and inexpensive source of energy among the renewable sources of energy. Malaysia is an encouraging location for the development of solar energy systems due to abundant sunshine (10 hours daily, with average solar energy received between 1400 and 1900 kWh/m2). In this paper the design of a heliostat field of 3 dual-axis heliostat units located in Ipoh, Malaysia is introduced. A mathematical model was developed to estimate the sun position and calculate the cosine losses in the field. The study includes calculating the incident solar power on a fixed target on the tower by analysing the tower height and the ground distance between the heliostat and the tower base. The cosine efficiency was found for each heliostat according to the sun movement. TRNSYS software was used to simulate the cosine efficiencies and the field's hourly incident solar power input to the fixed target. The results show the heliostat field parameters and the total incident solar input to the receiver.
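The cosine loss for each heliostat follows from the requirement that the mirror normal bisect the sun direction and the heliostat-to-receiver direction, so the effective mirror area is scaled by the cosine of the incidence angle. A minimal sketch with made-up direction vectors (not the paper's TRNSYS model):

```python
import math

def unit(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cosine_efficiency(sun_dir, to_receiver):
    """Heliostat cosine efficiency: the mirror normal bisects the sun
    direction and the heliostat-to-receiver direction, and the effective
    mirror area is reduced by cos(incidence angle) = sun_hat . normal_hat."""
    s, r = unit(sun_dir), unit(to_receiver)
    normal = unit(tuple(a + b for a, b in zip(s, r)))  # bisector of s and r
    return sum(a * b for a, b in zip(s, normal))

# Sun along the receiver direction: no cosine loss.
print(round(cosine_efficiency((0, 0, 1), (0, 0, 1)), 6))   # 1.0
# Sun at 60 degrees to the receiver direction: efficiency = cos(30 degrees).
eff = cosine_efficiency((math.sin(math.radians(60)), 0, math.cos(math.radians(60))),
                        (0, 0, 1))
print(round(eff, 4) == round(math.cos(math.radians(30)), 4))  # True
```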
Analysis of Science Attitudes for K2 Planet Hunter Mission
2015-03-01
Table of contents excerpt: 1. International Astronomical Union; 2. IAU Planet Definition; 3. Planet Definition Relevant to Kepler Mission; B. STAR...; a. Definition Based on Direction Cosine Matrix; b. Definition Based
NASA Astrophysics Data System (ADS)
Lee, Sungman; Kim, Jongyul; Moon, Myung Kook; Lee, Kye Hong; Lee, Seung Wook; Ino, Takashi; Skoy, Vadim R.; Lee, Manwoo; Kim, Guinyun
2013-02-01
For use as a neutron spin polarizer or analyzer in the neutron beam lines of the HANARO (High-flux Advanced Neutron Application ReactOr) nuclear research reactor, a 3He polarizer was designed based on both a compact solenoid coil and a VBG (volume Bragg grating) diode laser with a narrow spectral linewidth of 25 GHz. The nuclear magnetic resonance (NMR) signal was measured and analyzed using both a built-in cosine radio-frequency (RF) coil and a pick-up coil. Using a neutron transmission measurement, we estimated the polarization ratio of the 3He cell as 18% for an optical pumping time of 8 hours.
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in real time, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells that use a modified Booth recoding algorithm for multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and resizing.
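The modified (radix-4) Booth recoding mentioned above halves the number of partial products a MAC cell must accumulate by rewriting the multiplier as digits in {-2, -1, 0, 1, 2}. A software model of that recoding, purely illustrative of the arithmetic (the paper's hardware details are not reproduced):

```python
def booth_radix4_digits(b, bits=16):
    """Recode a two's-complement integer into radix-4 Booth digits in
    {-2,-1,0,1,2}: each digit is b[2i-1] + b[2i] - 2*b[2i+1]."""
    b &= (1 << bits) - 1          # view b as a `bits`-wide two's-complement word
    prev = 0                      # b[-1] is taken to be 0
    digits = []
    for i in range(0, bits, 2):
        pair = (b >> i) & 3
        b1, b0 = (pair >> 1) & 1, pair & 1
        digits.append(b0 + prev - 2 * b1)   # standard recoding rule
        prev = b1
    return digits

def booth_multiply(a, b, bits=16):
    """Multiply via Booth digits: sum of a * digit_i * 4**i, i.e. one
    shifted partial product per digit pair instead of per bit."""
    total = 0
    for i, d in enumerate(booth_radix4_digits(b, bits)):
        total += (a * d) << (2 * i)
    return total
```

Because each digit covers two multiplier bits, a 16-bit multiply needs only 8 partial products, which is the regularity the MAC-cell layout exploits.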
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. As in the Point Trajectory Approach (PTA), the trajectories of characteristic points are described by a predefined Discrete Cosine Transform (DCT) basis, and the structure matrix is calculated by a factorization method. To further optimize the estimate, a rank-minimization problem for the structure matrix is formulated by introducing a basic low-rank condition. The Accelerated Proximal Gradient (APG) algorithm is then used to solve this rank-minimization problem, optimizing the initial structure matrix computed by the PTA method. The APG algorithm converges quickly to efficient solutions and noticeably reduces the reconstruction error. Reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
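The predefined DCT basis for point trajectories can be sketched directly: each length-F trajectory is represented by its first K DCT-II coefficients, which is what makes smooth motion compactly parameterizable. A minimal illustration (names and shapes are ours, not from the paper):

```python
import numpy as np

def dct_basis(F, K):
    """First K orthonormal DCT-II basis vectors for length-F trajectories,
    returned as a (K, F) matrix whose rows are the basis functions."""
    t = np.arange(F)
    B = np.array([np.cos(np.pi * k * (2 * t + 1) / (2 * F)) for k in range(K)])
    B[0] *= np.sqrt(1.0 / F)      # orthonormal scaling of the DC row
    B[1:] *= np.sqrt(2.0 / F)     # and of the remaining rows
    return B

# A smooth trajectory lives in the span of a few low-frequency rows:
B = dct_basis(50, 5)
coeffs = np.array([1.0, 0.5, -0.3, 0.2, 0.1])
traj = B.T @ coeffs               # trajectory synthesized from 5 coefficients
recovered = B @ traj              # projection recovers the coefficients
```

Stacking such coefficient vectors for all tracked points gives the low-rank structure matrix that the APG step then refines.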
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. This phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches can help in detecting and locating the tampered region. However, current SDJPEG detection methods do not provide satisfactory results, especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches are analyzed, and a discriminative feature is proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach achieves much better results than existing approaches in SDJPEG patch detection, especially when the patch size is small.
One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.
Biswas, Sujoy Kumar; Milanfar, Peyman
2016-03-01
One-shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low-dimensional but discriminatory subspace. Unlike principal components, which preserve the global structure of feature space, we seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high-dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding-window search. We leverage the power of the Fourier transform combined with integral images to achieve superior runtime efficiency, which allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one-shot detection is training-free, and experiments on standard data sets confirm the efficacy of our model. In addition, the low computational cost of the proposed (codebook-free) object detector facilitates straightforward query detection in large data sets, including movie videos.
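Matrix cosine similarity, the decision rule named above, is the normalized Frobenius inner product between two feature matrices. A direct, unaccelerated sketch of the measure itself (the paper's contribution is the exact FFT/integral-image speedup, not reproduced here):

```python
import numpy as np

def matrix_cosine_similarity(A, B):
    """Cosine of the angle between two matrices under the Frobenius
    inner product: <A, B>_F / (||A||_F * ||B||_F). Equals 1 when B is a
    positive scalar multiple of A, and 0 when they are orthogonal."""
    num = float(np.sum(A * B))
    den = np.linalg.norm(A) * np.linalg.norm(B)
    return num / den
```

In a sliding-window detector this score would be evaluated at every window position, which is exactly the cost the FFT-based computation removes.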
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form easily accessed by everyone. A problem arises, however, when someone else claims a creation as their own or modifies part of it. Copyright protection therefore becomes necessary; one approach is watermarking of digital images. Applying a watermarking technique to digital data, especially images, can make the embedded mark effectively invisible in the carrier image: the carrier image should suffer no loss of quality, and the embedded image should survive attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off arises between the invisibility and the robustness of the image watermark. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The watermark quality at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
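The role of the scaling factor noted above (< 0.1 for good quality) can be illustrated with a much-simplified, non-blind SVD embedding step. This sketch operates on the raw cover matrix rather than on DWT/DCT subbands as the paper does, and the function names are illustrative:

```python
import numpy as np

def embed_svd_watermark(cover, watermark, alpha=0.05):
    """Hypothetical minimal SVD embedding: add the watermark, scaled by
    alpha, to the cover's singular values and recombine. Small alpha
    keeps the mark invisible; larger alpha makes it more robust."""
    U, S, Vt = np.linalg.svd(cover, full_matrices=False)
    S_marked = S + alpha * watermark      # watermark: 1-D, same length as S
    return U @ np.diag(S_marked) @ Vt, S  # also return S for extraction

def extract_svd_watermark(marked, S_orig, alpha=0.05):
    """Non-blind extraction: the watermark is the change in singular
    values, divided back by alpha."""
    S_marked = np.linalg.svd(marked, compute_uv=False)
    return (S_marked - S_orig) / alpha
```

This assumes the perturbation is small enough not to reorder the singular values, which is one reason small scaling factors behave well in the embedding step.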
Authentication Based on Pole-zero Models of Signature Velocity
Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad
2013-01-01
With the increase of communication and financial transactions over the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modern society. Fast and precise signature verification algorithms are therefore very attractive. The goal of this paper is to model the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise modeling method is proposed, and features are then extracted from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database and the SVC2004 and Sabanci University signature database (SUSIG) benchmark databases. Experimental results on the Persian, SVC2004, and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62%, and 3.91% on skilled forgeries, respectively. PMID:24696797
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post-filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
A novel method of the image processing on irregular triangular meshes
NASA Astrophysics Data System (ADS)
Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta
2018-04-01
The paper describes a novel image-processing method based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. Triangular numbers are used to simplify the use of local (barycentric) coordinates in further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g., with "for" or "while" loops. Moreover, the proposed representation allows a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as required by shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on the triangular mesh. Results of applying the method are presented. The advantage of the proposed method is that it combines the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
NASA Astrophysics Data System (ADS)
Chang, Chien-Chieh; Chen, Chia-Shyun
2002-06-01
A flowing partially penetrating well with infinitesimal well skin is a mixed boundary because a Cauchy condition is prescribed along the screen length and a Neumann condition of no flux is stipulated over the remaining unscreened part. An analytical approach based on the integral transform technique is developed to determine the Laplace domain solution for such a mixed boundary problem in a confined aquifer of finite thickness. First, the mixed boundary is changed into a homogeneous Neumann boundary by substituting the Cauchy condition with a Neumann condition in terms of well bore flux that varies along the screen length and is time dependent. Despite the well bore flux being unknown a priori, the modified model containing this homogeneous Neumann boundary can be solved with the Laplace and the finite Fourier cosine transforms. To determine well bore flux, screen length is discretized into a finite number of segments, to which the Cauchy condition is reinstated. This reinstatement also restores the relation between the original model and the solutions obtained. For a given time, the numerical inversion of the Laplace domain solution yields the drawdown distributions, well bore flux, and the well discharge. This analytical approach provides an alternative for dealing with the mixed boundary problems, especially when aquifer thickness is assumed to be finite.
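The finite Fourier cosine transform used in the derivation above expands a function on [0, L] in cosines whose derivatives vanish at both ends, matching the no-flux Neumann boundaries. A numerical sketch of the transform pair (simple quadrature, purely illustrative; not the paper's analytical machinery):

```python
import numpy as np

def _trapezoid(y, dz):
    """Composite trapezoidal rule on a uniform grid."""
    return dz * (y.sum() - 0.5 * (y[0] + y[-1]))

def finite_cosine_coeffs(f, L, N, n_quad=4001):
    """Finite Fourier cosine coefficients on [0, L]:
    a_0 = (1/L) * int f dz, and a_n = (2/L) * int f(z) cos(n pi z / L) dz."""
    z = np.linspace(0.0, L, n_quad)
    dz = z[1] - z[0]
    fz = f(z)
    a = np.empty(N)
    a[0] = _trapezoid(fz, dz) / L
    for n in range(1, N):
        a[n] = 2.0 / L * _trapezoid(fz * np.cos(n * np.pi * z / L), dz)
    return a

def cosine_series(a, z, L):
    """Inverse transform: f(z) ~ a_0 + sum_n a_n cos(n pi z / L)."""
    n = np.arange(len(a))
    return a @ np.cos(np.outer(n, np.pi * np.asarray(z, float) / L))
```

In the mixed-boundary problem, the drawdown in the vertical coordinate is expanded in exactly such a cosine series, with the unknown well bore flux entering through the coefficients.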
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of cosine solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power-law of cosine viewing zenith angle.
CD-Based Indices for Link Prediction in Complex Network.
Wang, Tao; Wang, Hongjue; Wang, Xiaoxia
2016-01-01
Many similarity-based algorithms have been designed to deal with the problem of link prediction over the past decade. To improve prediction accuracy, a novel cosine similarity index CD, based on distances between nodes and cosine values between vectors, is proposed in this paper. First, a node coordinate matrix is obtained from node distances (distinct from the distance matrix), and the row vectors of this matrix are regarded as node coordinates. The cosine value between node coordinates is then used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortativity coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k improve prediction accuracy regardless of whether the network's assortativity coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positively assortative networks than on negatively assortative ones. For negatively assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index with the evolutionary mechanism of the BA network model. Experimental results reveal that the CDI index increases the prediction accuracy of CD on negatively assortative networks.
PMID:26752405
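The core of the CD index described above reduces to cosine similarity between node coordinate vectors, with non-adjacent pairs ranked by score. A toy sketch (the distance-based construction of the coordinates and the LD refinements follow the abstract only loosely; names are ours):

```python
import numpy as np

def cosine_score_matrix(coords):
    """Pairwise cosine similarity between node coordinate row vectors;
    a higher score suggests a higher likelihood of a (missing) link."""
    unit = coords / np.linalg.norm(coords, axis=1, keepdims=True)
    return unit @ unit.T

def top_predicted_links(coords, existing_edges, k=3):
    """Rank non-adjacent node pairs (i < j) by cosine score and return
    the k best candidates as (score, i, j) tuples."""
    S = cosine_score_matrix(coords)
    n = len(S)
    pairs = [(S[i, j], i, j) for i in range(n) for j in range(i + 1, n)
             if (i, j) not in existing_edges]
    return sorted(pairs, reverse=True)[:k]
```

A CD-style pipeline would build `coords` from network distances first; here any row-vector embedding serves to illustrate the scoring step.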
NASA Astrophysics Data System (ADS)
di Lauro, C.
2018-03-01
Transformations of vector or tensor properties from a space-fixed to a molecule-fixed axis system are often required in the study of rotating molecules. Spherical components λ_{μ,ν} of a first-rank irreducible tensor can be obtained from the direction cosines between the two axis systems, and a second-rank tensor with spherical components λ^(2)_{μ,ν} can be built from the direct product λ × λ. It is shown that the treatment of the interaction between molecular rotation and the electric quadrupole of a nucleus is greatly simplified if the coefficients in the axis-system transformation of the gradient of the electric field of the outer charges at the coupled nucleus are arranged as spherical components λ^(2)_{μ,ν}. Then the reduced matrix elements of the field gradient operators in a symmetric-top eigenfunction basis, including their dependence on the molecule-fixed z-angular momentum component k, can be determined from knowledge of those of λ^(2). The hyperfine structure Hamiltonian H_q is expressed as the sum of terms characterized each by a value of the molecule-fixed index ν, whose matrix elements obey the rule Δk = ν. Some of these terms may vanish because of molecular symmetry, and the specific cases of linear and symmetric-top molecules, orthorhombic molecules, and molecules with symmetry lower than orthorhombic are considered. Each ν-term consists of a contraction of the rotational tensor λ^(2) and the nuclear quadrupole tensor in the space-fixed frame, and its matrix elements in the rotation-nuclear spin coupled representation can be determined by standard spherical tensor methods.
Geometric comparison of popular mixture-model distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.
2010-09-01
Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
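The three distances compared above have short closed forms, and the near-equivalence the abstract reports can be checked numerically: for nearby mixture points, the symmetric chi-square divided by 4, the squared Hellinger distance, and the Jensen-Shannon divergence agree to second order. A sketch of the standard formulas:

```python
import numpy as np

def chi2_sym(p, q):
    """Symmetric chi-square distance between discrete distributions."""
    return float(np.sum((p - q) ** 2 / (p + q)))

def hellinger_sq(p, q):
    """Square of the Hellinger distance."""
    return float(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def js_divergence(p, q):
    """Jensen-Shannon divergence (natural log)."""
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Evaluating these on two close points of the simplex shows the three values clustering tightly, which is the "nearly equivalent" behavior the paper makes precise with series expansions and contour comparisons.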
Harrison, John A
2008-09-04
RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X ¹Σg⁺ are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code-excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 bits/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Increases to Biogenic Secondary Organic Aerosols from SO2 and NOx in the Southeastern US
NASA Astrophysics Data System (ADS)
Russell, L. M.; Liu, J.; Ruggeri, G.; Takahama, S.; Claflin, M. S.; Ziemann, P. J.; Lee, A.; Murphy, B.; Pye, H. O. T.; Ng, N. L.; McKinney, K. A.; Surratt, J. D.
2017-12-01
During the 2013 Southern Oxidant and Aerosol Study, Fourier Transform Infrared Spectroscopy (FTIR) and Aerosol Mass Spectrometer (AMS) measurements of submicron mass were collected at Look Rock, Tennessee, and Centreville, Alabama. The low NOx, low wind, little rain, and increased daytime isoprene emissions led to multi-day stagnation events at Look Rock that provided clear evidence of particle-phase sulfate enhancing biogenic secondary organic aerosol (bSOA) by selective uptake. Organic mass (OM) sources were apportioned as 42% "vehicle-related" and 54% bSOA, with the latter including "sulfate-related bSOA" that correlated to sulfate (r=0.72) and "nitrate-related bSOA" that correlated to nitrate (r=0.65). Single-particle mass spectra showed three composition types that corresponded to the mass-based factors with spectra cosine similarity of 0.93 and time series correlations of r>0.4. The vehicle-related OM with m/z 44 was correlated to black carbon, "sulfate-related bSOA" was on particles with high sulfate, and "nitrate-related bSOA" was on all particles. The similarity of the m/z spectra (cosine similarity=0.97) and the time series correlation (r=0.80) of the "sulfate-related bSOA" to the sulfate-containing single-particle type provide evidence for particle composition contributing to selective uptake of isoprene oxidation products onto particles that contain sulfate from power plants. Since Look Rock had much less NOx than Centreville, comparing the bSOA at the two sites provides an evaluation of the role of NOx for bSOA. CO and submicron sulfate and OM concentrations were 15-60 % higher at Centreville than at Look Rock but their time series had moderate correlations of r= 0.51, 0.54, and 0.47, respectively. However, NOx had no correlation (r=0.08) between the two sites. OM correlated with the higher NOx levels at Centreville but with O3 at Look Rock. 
OM sources identified by Positive Matrix Factorization of the FTIR measurements had three very similar factors at both sites, one of which was Biological Organic Aerosols. The FTIR spectrum of this factor is similar (cosine similarity > 0.6) to that of lab-generated particle mass from chamber experiments with both isoprene and monoterpenes under high-NOx conditions, providing verification of the reactions relevant to atmospheric conditions.
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; and 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
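Blockwise 3-D DCT feature extraction can be sketched with a separable transform over spatiotemporal blocks. The two statistics kept below are simple stand-ins for the paper's NVS features, and the blocking scheme is illustrative only:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * t + 1) / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def block_3d_dct_features(video, b=4):
    """Partition a (frames, H, W) volume into b x b x b blocks, apply a
    separable 3-D DCT to each, and keep two toy statistics of the AC
    coefficients per block."""
    C = dct_matrix(b)
    T, H, W = video.shape
    feats = []
    for t in range(0, T - b + 1, b):
        for i in range(0, H - b + 1, b):
            for j in range(0, W - b + 1, b):
                cube = video[t:t+b, i:i+b, j:j+b]
                # transform along time, then rows, then columns
                coef = np.einsum('at,bi,cj,tij->abc', C, C, C, cube)
                ac = coef.ravel()[1:]          # drop the DC coefficient
                feats.append([ac.std(), np.abs(ac).mean()])
    return np.array(feats)
```

Distortions perturb the distribution of these AC coefficients, and a regressor such as linear SVR maps the resulting feature statistics to a quality score.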
A Robust Zero-Watermarking Algorithm for Audio
NASA Astrophysics Data System (ADS)
Chen, Ning; Zhu, Jie
2007-12-01
In traditional watermarking algorithms, the insertion of a watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. Zero-watermarking techniques solve these problems successfully. Instead of embedding a watermark, a zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most available zero-watermarking schemes are designed for still images, and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulants are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
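The quantization step being modeled is the standard JPEG-style uniform quantizer: each DCT coefficient is divided by its table entry and rounded, so the per-coefficient reconstruction error is at most half the table entry. The visual model chooses each entry so that this error sits at the visibility threshold. A minimal sketch of the quantize/dequantize pair:

```python
import numpy as np

def quantize(coeffs, qtable):
    """Uniform quantization of DCT coefficients: larger table entries
    give coarser steps and stronger compression."""
    return np.round(coeffs / qtable).astype(int)

def dequantize(qcoeffs, qtable):
    """Reconstruct coefficients from quantization indices."""
    return qcoeffs * qtable

# The reconstruction error per coefficient is bounded by qtable / 2,
# which is the quantity the model holds at the threshold of visibility.
```

Designing the matrix then amounts to computing, for each frequency and display condition, the largest step size whose half-step error remains invisible.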
A Fourier transform with speed improvements for microprocessor applications
NASA Technical Reports Server (NTRS)
Lokerson, D. C.; Rochelle, R.
1980-01-01
A fast Fourier transform algorithm for the RCA 1802 microprocessor was developed for spacecraft instrument applications. The computations were tailored to the restrictions an eight-bit machine imposes. The algorithm incorporates some aspects of Walsh-function sequency to improve operational speed. The method uses a register to add a value proportional to the period of the band being processed before each computation is considered. If the result overflows into the DF register, the data sample is used in the computation; otherwise the computation is skipped. This operation is repeated for each of the 64 data samples, and the technique is used for both the sine and cosine portions of the computation. The processing uses eight-bit data, but because the many computations can increase the size of the coefficients, floating-point form is used. A method to reduce the alias problem in the lower bands is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. S.; Cai, F.; Xu, W. M.
2011-09-28
The ship motion equation with a cosine wave excitation force describes ship rolling in regular waves. A new wave excitation force model, expressed as a sum of cosine functions, is proposed to describe ship rolling in irregular waves. Ship rolling time series were obtained by solving the ship motion equation with the fourth-order Runge-Kutta method. These rolling time series were analyzed with phase-space tracks, power spectra, principal component analysis, and the largest Lyapunov exponent. Simulation results show that ship rolling presents some chaotic characteristics when the wave excitation force is given by sums of cosine functions. The result explains the chaotic mechanism of ship rolling and is useful for ship hydrodynamics studies.
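The simulation loop described above, fourth-order Runge-Kutta integration of a rolling equation driven by a sum of cosines, can be sketched as follows. The damping, stiffness, and forcing coefficients here are illustrative, not the paper's:

```python
import numpy as np

def roll_rhs(t, y, forcing):
    """Simplified rolling equation (illustrative form only):
    phi'' + 2*d*phi' + w0**2*phi + eps*phi**3 = F(t)."""
    phi, dphi = y
    d, w0, eps = 0.05, 1.0, 0.2
    return np.array([dphi, forcing(t) - 2*d*dphi - w0**2*phi - eps*phi**3])

def rk4_step(f, t, y, h, *args):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y, *args)
    k2 = f(t + h/2, y + h/2*k1, *args)
    k3 = f(t + h/2, y + h/2*k2, *args)
    k4 = f(t + h, y + h*k3, *args)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def simulate(T=200.0, h=0.01):
    """Roll angle time series under a sum-of-cosines wave excitation
    (the irregular-sea forcing model from the abstract)."""
    amps, freqs, phases = [0.6, 0.3, 0.2], [0.9, 1.3, 0.5], [0.0, 1.0, 2.0]
    F = lambda t: sum(a*np.cos(w*t + p) for a, w, p in zip(amps, freqs, phases))
    y = np.array([0.0, 0.0])       # start at rest
    out = []
    for i in range(int(round(T / h))):
        y = rk4_step(roll_rhs, i*h, y, h, F)
        out.append(y[0])
    return np.array(out)
```

The resulting time series is what the phase-space, spectral, and Lyapunov-exponent analyses would then be applied to.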
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is proposed for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are used to estimate the source's range with a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and SNR on the performance of ambiguity resolution and demonstrate the satisfactory estimation performance of the proposed method.
Ultrafast Dephasing and Incoherent Light Photon Echoes in Organic Amorphous Systems
NASA Astrophysics Data System (ADS)
Yano, Ryuzi; Matsumoto, Yoshinori; Tani, Toshiro; Nakatsuka, Hiroki
1989-10-01
Incoherent light photon echoes were observed in organic amorphous systems (cresyl violet in polyvinyl alcohol and 1,4-dihydroxyanthraquinone in polymethacrylic acid) by using temporally-incoherent nanosecond laser pulses. It was found that an echo decay curve of an organic amorphous system is composed of a sharp peak which decays very rapidly and a slowly decaying wing at the tail. We show that the persistent hole burning (PHB) spectra were reproduced by the Fourier-cosine transforms of the echo decay curves. We claim that in general, we must take into account the multi-level feature of the system in order to explain ultrafast dephasing at very low temperatures.
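The relationship claimed above, hole-burning spectra reproduced by Fourier-cosine transforms of the echo decay curves, can be illustrated numerically. A minimal sketch with a single-exponential decay (the measured curves actually combine a fast peak and a slowly decaying wing); the decay constant is a made-up value:

```python
import numpy as np

def cosine_transform(curve, dt, freqs):
    """Numerical Fourier-cosine transform of a sampled decay curve."""
    t = np.arange(len(curve))*dt
    return np.array([(curve*np.cos(2*np.pi*f*t)).sum()*dt for f in freqs])

dt = 0.01
t = np.arange(0.0, 20.0, dt)
decay = np.exp(-t/2.0)                       # hypothetical echo decay, time constant 2.0
freqs = np.linspace(0.0, 2.0, 201)
spectrum = cosine_transform(decay, dt, freqs)  # Lorentzian-like line shape
```

The cosine transform of an exponential decay is a Lorentzian peaked at zero detuning, the single-level analogue of the hole-burning line shape discussed above.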
NASA Astrophysics Data System (ADS)
Kuai, Xiao-yan; Sun, Hai-xin; Qi, Jie; Cheng, En; Xu, Xiao-ka; Guo, Yu-hui; Chen, You-gan
2014-06-01
In this paper, we investigate the performance of an adaptive modulation (AM) orthogonal frequency division multiplexing (OFDM) system in underwater acoustic (UWA) communications. The aim is to solve the problem of large feedback overhead for channel state information (CSI) on every subcarrier. A novel CSI feedback scheme is proposed based on the theory of compressed sensing (CS): the receiver feeds back only the sparse channel parameters. Additionally, the channel state is predicted every few symbols to make AM practical, and we describe a linear channel prediction algorithm used in adaptive transmission. The system has been tested in a real underwater acoustic channel. The linear channel prediction makes AM transmission techniques more feasible for acoustic channel communications. Simulations and experiments show that significant improvements can be obtained in both bit error rate (BER) and throughput with the AM scheme compared with a fixed Quadrature Phase Shift Keying (QPSK) modulation scheme. Moreover, the performance with standard CS outperforms the Discrete Cosine Transform (DCT) method.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid, each element of which corresponds to an area of 1 mm² in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, and the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data, with the weights correcting for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes, using profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm. PMID:28886048
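Inverse transform sampling, the mechanism used above to realize the position and energy PDFs, draws a uniform variate and maps it through the inverse of the cumulative distribution. A generic sketch over hypothetical histogram bins, not the paper's actual PDFs:

```python
import numpy as np

def inverse_transform_sampler(bin_edges, pdf_values, u):
    """Draw samples from a tabulated PDF by inverting its piecewise-linear
    CDF; `u` holds uniform variates in [0, 1]. A generic sketch of the
    inverse-transform-sampling step described above."""
    p = np.asarray(pdf_values, dtype=float)
    cdf = np.concatenate([[0.0], np.cumsum(p)])
    cdf /= cdf[-1]                       # normalize to a proper CDF
    return np.interp(u, cdf, bin_edges)  # invert the CDF by interpolation

edges = np.linspace(0.0, 1.0, 11)        # hypothetical bin edges
pdf = np.ones(10)                        # uniform toy PDF over the bins
samples = inverse_transform_sampler(edges, pdf, np.array([0.0, 0.5, 1.0]))
```

For a uniform PDF the inverse CDF is the identity, so the uniform variates map straight onto the bin range; a peaked PDF would concentrate samples near its mode.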
An accurate surface topography restoration algorithm for white light interferometry
NASA Astrophysics Data System (ADS)
Yuan, He; Zhang, Xiangchao; Xu, Min
2017-10-01
As an important measuring technique, white light interferometry enables fast, non-contact measurement and is therefore widely used in ultra-precision engineering. However, traditional algorithms for recovering surface topography have flaws and limitations. In this paper, we propose a new algorithm to solve these problems: a combination of the Fourier transform and an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with scanning position. The interference signal is first processed by a Fourier transform; the positive-frequency part is then selected and moved back to the center of the amplitude-frequency curve. To restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is compared with the traditional algorithms, and it is shown that the aforementioned drawbacks are effectively overcome. The relative error is less than 0.8%.
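The Fourier-transform step above, keeping only the positive-frequency part of a Gaussian-modulated cosine signal, can be sketched as follows; the polynomial-fitting refinement is omitted and the fringe parameters are made up:

```python
import numpy as np

def envelope_via_fft(signal):
    """Recover the fringe envelope by discarding the negative-frequency
    half of the spectrum and taking the magnitude of the inverse
    transform (a sketch of the Fourier-transform step described above)."""
    n = len(signal)
    spec = np.fft.fft(signal)
    spec[n//2:] = 0                      # keep DC + positive frequencies only
    return 2.0*np.abs(np.fft.ifft(spec)) # magnitude approximates the envelope

z = np.linspace(-10.0, 10.0, 2048)       # scan positions, arbitrary units
z0 = 1.5                                 # hypothetical surface height
fringe = np.exp(-(z - z0)**2)*np.cos(20.0*z)  # Gaussian-modulated cosine
env = envelope_via_fft(fringe)
peak = z[np.argmax(env)]                 # envelope peak locates the surface
```

The position of the recovered envelope maximum gives the height estimate that the paper's polynomial fit then refines to sub-sample precision.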
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew
2015-07-01
The paper presents the first results from the trigger based on the Discrete Cosine Transform (DCT) operating in the new Front-End Boards with Cyclone V FPGAs deployed in 8 test surface detectors in the Pierre Auger Engineering Array. The patterns of the ADC traces generated by very inclined showers were obtained from the Auger database and from the CORSIKA simulation package, supported by the Auger Offline reconstruction platform, which gives predicted digitized signal profiles. Simulations for many variants of the initial shower angle, the initialization depth in the atmosphere, the type of particle, and its initial energy gave boundaries on the DCT coefficients used for the on-line pattern recognition in the FPGA. Preliminary results have validated the approach: we registered several showers triggered by the DCT at 120 MSps and 160 MSps.
Blind technique using blocking artifacts and entropy of histograms for image tampering detection
NASA Astrophysics Data System (ADS)
Manu, V. T.; Mehtre, B. M.
2017-06-01
The tremendous technological advancements of recent times have enabled people to create, edit, and circulate images more easily than ever before. As a result, ensuring the integrity and authenticity of images has become challenging. Malicious editing of images to deceive the viewer is referred to as image tampering. A widely used image tampering technique is image splicing or compositing, in which regions from different images are copied and pasted. In this paper, we propose a tamper detection method utilizing the blocking and blur artifacts that are the footprints of splicing. Images are classified as tampered or not based on the standard deviations of the entropy histograms and block discrete cosine transforms. If an image is classified as tampered, we can detect the exact boundaries of the tampered area. Experimental results on publicly available image tampering datasets show that the proposed method outperforms existing methods in terms of accuracy.
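A blockwise 2-D DCT of the kind whose coefficient statistics the method above examines can be sketched as follows; the 8×8 block size follows the common JPEG convention and is an assumption here:

```python
import numpy as np
from scipy.fft import dctn

def block_dct(img, bs=8):
    """Blockwise orthonormal type-II 2-D DCT over bs-by-bs tiles, a sketch
    of the block transform whose coefficient statistics feed the
    tamper-detection features described above."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            out[i:i+bs, j:j+bs] = dctn(img[i:i+bs, j:j+bs], norm='ortho')
    return out

coeffs = block_dct(np.ones((8, 8)))   # flat block: all energy in the DC term
```

For a flat block, all energy lands in the block's DC coefficient; the spread of the AC coefficients across blocks is the kind of statistic a splicing detector can compare between regions.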
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter-mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with a conventional CS-based cryptosystem that regenerates all the random entries of the measurement matrix, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.
Mohades Deylami, Ali; Mohammadzadeh Asl, Babak
2017-06-01
The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions than in the standard BS. This is because the beams in the proposed BS have symmetric beampatterns, compared with the asymmetric ones in the standard BS. This lets us decrease the dimensionality of the data to two, so a highly complex algorithm such as the MVB can be applied faster in this BS. The results indicate that by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB) with only 0.44% of the needed flops. This beamformer is also more robust against sound speed estimation errors than the SMVB.
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew; Wiedeński, Michał
2017-06-01
We present first results from a trigger based on the discrete cosine transform (DCT) operating in new front-end boards with a Cyclone V E field-programmable gate array (FPGA) deployed in seven test surface detectors in the Pierre Auger Test Array. The patterns of the ADC traces generated by very inclined showers (arriving at 70° to 90° from the vertical) were obtained from the Auger database and from the CORSIKA simulation package supported by the Auger OffLine event reconstruction platform that gives predicted digitized signal profiles. Simulations for many values of the initial cosmic ray angle of arrival, the shower initialization depth in the atmosphere, the type of particle, and its initial energy gave a boundary on the DCT coefficients used for the online pattern recognition in the FPGA. Preliminary results validated the approach used. We recorded several showers triggered by the DCT for 120 Msamples/s and 160 Msamples/s.
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time, allowing adaptive optical signal processing.
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency is still the obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node consisting of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945
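The distortion measure quoted above, the percentage root-mean-square difference (PRD), can be sketched together with a toy DCT-domain recovery; the coefficient-thresholding step below is an illustrative stand-in, not the SBM/BSBL recovery chain used in the paper:

```python
import numpy as np
from scipy.fft import dct, idct

def prd(x, x_rec):
    """Percentage root-mean-square difference between a signal and its
    reconstruction, the distortion measure quoted above."""
    return 100.0*np.linalg.norm(x - x_rec)/np.linalg.norm(x)

# Hypothetical illustration: a signal that is sparse in the DCT basis is
# recovered essentially undistorted by keeping its dominant coefficients.
coef = np.zeros(256)
coef[4], coef[20] = 1.0, 0.5             # made-up DCT-sparse signal
x = idct(coef, norm='ortho')
c = dct(x, norm='ortho')
c[np.abs(c) < 0.1*np.abs(c).max()] = 0   # discard small coefficients
x_rec = idct(c, norm='ortho')
```

A PRD of zero means perfect reconstruction, while discarding the whole signal gives a PRD of 100%; the paper's sub-6% figures sit near the former end of that scale.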
Zuo, Chao; Chen, Qian; Li, Hongru; Qu, Weijuan; Asundi, Anand
2014-07-28
Boundary conditions play a crucial role in the solution of the transport of intensity equation (TIE). If not appropriately handled, they can create significant boundary artifacts across the reconstruction result. In a previous paper [Opt. Express 22, 9220 (2014)], we presented a new boundary-artifact-free TIE phase retrieval method using the discrete cosine transform (DCT). Here we report its experimental investigation, with applications to micro-optics characterization. The experimental setup is based on a tunable-lens 4f system attached to a non-modified inverted bright-field microscope. We establish inhomogeneous Neumann boundary values by placing a rectangular aperture in the intermediate image plane of the microscope. The boundary values are then applied to solve the TIE with our DCT-based TIE solver. Experimental results on microlenses highlight the importance of boundary conditions that are often overlooked in simplified models, and confirm that our approach effectively avoids boundary errors even when objects are located at the image borders. It is further demonstrated that our technique is non-interferometric, accurate, fast, full-field, and flexible, rendering it a promising metrological tool for micro-optics inspection.
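A DCT-based solver of the kind referenced above works because the DCT diagonalizes the discrete Laplacian under reflective (Neumann) boundaries, so a Poisson-type equation can be inverted spectrally without boundary artifacts. The sketch below solves a plain Neumann Poisson problem as a simplified stand-in for the paper's TIE solver:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_neumann_dct(f):
    """Solve the discrete Neumann Poisson problem lap(phi) = f via the
    orthonormal type-II DCT, which diagonalizes the reflective-boundary
    five-point Laplacian. Simplified stand-in for the DCT-based TIE
    solver described above."""
    M, N = f.shape
    F = dctn(f, norm='ortho')
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    lam = (2*np.cos(np.pi*m/M) - 2) + (2*np.cos(np.pi*n/N) - 2)
    lam[0, 0] = 1.0            # DC mode: solution defined up to a constant
    Phi = F/lam
    Phi[0, 0] = 0.0            # fix the free constant to zero mean
    return idctn(Phi, norm='ortho')
```

Because the transform is a fast orthogonal one, the solve costs two DCTs plus an elementwise division, which is what makes the method practical at full image resolution.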
Time-Frequency-Wavenumber Analysis of Surface Waves Using the Continuous Wavelet Transform
NASA Astrophysics Data System (ADS)
Poggi, V.; Fäh, D.; Giardini, D.
2013-03-01
A modified approach to surface wave dispersion analysis using active sources is proposed. The method is based on continuous recordings and uses the continuous wavelet transform to analyze the phase velocity dispersion of surface waves. This makes it possible to accurately localize the phase information in time and to isolate the most significant contribution of the surface waves. To extract the dispersion information, a hybrid technique is then applied to the narrowband-filtered seismic recordings. The technique combines the flexibility of the slant stack method in identifying waves that propagate in space and time with the resolution of f-k approaches. This is particularly beneficial for higher-mode identification in cases of high noise levels. To compute the continuous wavelet transform, a new mother wavelet is presented and compared to the classical and widely used Morlet type. The proposed wavelet is obtained from a raised-cosine envelope function (Hanning type). The proposed approach is particularly suitable when using continuous recordings (e.g., from seismological-like equipment), since it does not require any hardware-based source triggering; triggering can subsequently be done with the proposed method. Estimation of the surface wave phase delay is performed in the frequency domain by means of a covariance matrix averaging procedure over successive wave field excitations. Thus, no record stacking is necessary in the time domain, and a large number of consecutive shots can be used, which simplifies the field procedures. To demonstrate the effectiveness of the method, we tested it on synthetic as well as real field data. For the real case we also combine dispersion curves from ambient vibrations and active measurements.
On the Symmetry of Molecular Flows Through the Pipe of an Arbitrary Shape (I) Diffusive Reflection
NASA Astrophysics Data System (ADS)
Kusumoto, Yoshiro
Molecular gas flow through a pipe of arbitrary shape is considered mathematically based on a diffusive reflection model. To avoid a perpetual motion, the magnitude of the molecular flow rate must remain invariant under the exchange of inlet and outlet pressures. For this flow symmetry, cosine-law reflection at the pipe wall was found to be necessary and sufficient, under the assumption that the molecular flux is conserved in a collision with the wall. It was also shown that a spontaneous flow occurs in a hemispherical apparatus if the reflection obeys the n-th power of the cosine law with n other than unity. Such an apparatus could work as a molecular pump with no moving parts.
Electro-mechanical sine/cosine generator
NASA Technical Reports Server (NTRS)
Flagge, B. (Inventor)
1972-01-01
An electromechanical device for generating both sine and cosine functions is described. A motor rotates a cylinder about an axis parallel to, and a slight distance from, the central axis of the cylinder. Two noncontacting displacement-sensing devices are placed ninety degrees apart, at equal distances from the axis of rotation and short distances above the surface of the cylinder. Each sensing device produces an electrical signal proportional to its distance from the cylinder. Consequently, as the cylinder rotates, the outputs of the two sensing devices are the sine and cosine functions.
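To first order in the offset, the two sensor gaps vary as the cosine and sine of the shaft angle, which is what makes the device a sine/cosine generator. A sketch with hypothetical dimensions:

```python
import math

def sensor_gaps(theta, eccentricity=0.1, nominal_gap=1.0):
    """Gaps seen by two displacement sensors mounted 90 degrees apart above
    a cylinder rotating about an axis offset by `eccentricity` from its
    center (small-eccentricity approximation; dimensions are made up)."""
    gap_cos = nominal_gap - eccentricity*math.cos(theta)   # sensor at 0 deg
    gap_sin = nominal_gap - eccentricity*math.sin(theta)   # sensor at 90 deg
    return gap_sin, gap_cos

# Subtracting the nominal gap and rescaling recovers the quadrature pair.
def recover_sin_cos(theta, eccentricity=0.1, nominal_gap=1.0):
    gs, gc = sensor_gaps(theta, eccentricity, nominal_gap)
    return (nominal_gap - gs)/eccentricity, (nominal_gap - gc)/eccentricity
```

The recovered pair satisfies sin²θ + cos²θ = 1 at every shaft angle, which is the property a resolver-style readout relies on.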
Cosine-Gauss plasmon beam: a localized long-range nondiffracting surface wave.
Lin, Jiao; Dellinger, Jean; Genevet, Patrice; Cluzel, Benoit; de Fornel, Frederique; Capasso, Federico
2012-08-31
A new surface wave is introduced, the cosine-Gauss beam, which does not diffract as it propagates in a straight line, remaining tightly bound to the metallic surface for distances up to 80 μm. The generation of this highly localized wave is shown to be straightforward and highly controllable, with varying degrees of transverse confinement and directionality, by fabricating a plasmon launcher consisting of intersecting metallic gratings. Cosine-Gauss beams have potential for applications in plasmonics, notably for efficient coupling to nanophotonic devices, opening up new design possibilities for next-generation optical interconnects.
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic and the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. The algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using the switching embedding technique proposed in this article. The proposed system offers high capacity while withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates improved retention of first-order statistics compared with existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e., the low- and high-frequency matrices, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed-data probabilities using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC-values and decoded AC-coefficients are combined in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches, and is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
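Step (1) of the algorithm above chains a DWT with a DCT. A minimal sketch of one such stage, using a single-level 2-D Haar DWT (an illustrative stand-in for the paper's two-level wavelet) followed by an orthonormal DCT on the resulting low-frequency band:

```python
import numpy as np
from scipy.fft import dctn

def haar2d(img):
    """One level of a 2-D Haar DWT (averaging convention): returns the
    low-frequency LL band and the three high-frequency bands. A sketch of
    the wavelet stage in step (1) above."""
    a = (img[:, 0::2] + img[:, 1::2])/2    # horizontal average
    d = (img[:, 0::2] - img[:, 1::2])/2    # horizontal detail
    ll = (a[0::2] + a[1::2])/2             # low-low (approximation)
    lh = (a[0::2] - a[1::2])/2             # vertical detail
    hl = (d[0::2] + d[1::2])/2             # horizontal detail
    hh = (d[0::2] - d[1::2])/2             # diagonal detail
    return ll, lh, hl, hh

img = np.ones((8, 8))                      # flat toy image
ll, lh, hl, hh = haar2d(img)
dc_matrix = dctn(ll, norm='ortho')         # DCT of the LL band ("DC-Matrix")
```

For a flat image, all detail bands vanish and the LL band carries the whole signal, so the subsequent DCT concentrates everything into a single coefficient — the compaction the method exploits.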
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data such as 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by a displacement vector field indicating, for each voxel of a slice at a fixed position in the third dimension, from which position in the previous slice it has moved. Our deformation model represents the motion compactly using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current slice and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can then be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method achieves better compression ratios for medical volume data than conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, whole-image transforms like wavelet decomposition, as well as intra-slice prediction methods, can particularly benefit from this approach. We show that with the discrete cosine transform as well as the Karhunen-Loève transform, the method can achieve better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
Long-life electromechanical sine-cosine generator
NASA Technical Reports Server (NTRS)
Flagge, B.
1971-01-01
A sine-cosine generator with no sliding parts is capable of withstanding a 20 Hz oscillation for more than 14 hours. Tests show that the generator is electrically equivalent to a potentiometer and that it has excellent dynamic characteristics. The generator shows promise for higher-speed applications than were previously possible.
Detecting Disease in Radiographs with Intuitive Confidence
2015-01-01
This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433
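The proposed confidence is simply the sine/cosine pair of an angle represented on the unit circle, with 45° as the equilibrium state where both components are equal. A sketch:

```python
import math

def confidence(angle_deg):
    """Sine/cosine confidence pair for an angle on the unit circle, the
    representation proposed above; 45 degrees is the equilibrium
    (nonduality) state where the two components coincide."""
    a = math.radians(angle_deg)
    return math.sin(a), math.cos(a)
```

Because the pair always lies on the unit circle, the two components trade off smoothly between the extremes (0° and 90°) rather than jumping between binary labels.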
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
SAR data compression: Application, requirements, and designs
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of the data stream, from the sensor downlink to electronic delivery of browse data products, are explored. The factors influencing the design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.
Personalized Medicine in Veterans with Traumatic Brain Injuries
2011-05-01
UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as heat maps (left panel) demonstrating that the panel of 18... the efficacy of using all 13
Principle and analysis of a rotational motion Fourier transform infrared spectrometer
NASA Astrophysics Data System (ADS)
Cai, Qisheng; Min, Huang; Han, Wei; Liu, Yixuan; Qian, Lulu; Lu, Xiangning
2017-09-01
Fourier transform infrared spectroscopy is an important technique for studying molecular energy levels, analyzing material compositions, and detecting environmental pollutants. A novel rotational-motion Fourier transform infrared spectrometer with high stability and ultra-rapid scanning is proposed in this paper. The basic principle, the optical path difference (OPD) calculation, and a tolerance analysis are elaborated. The OPD of this spectrometer is produced by the continuous rotation of a pair of parallel mirrors, instead of the translational motion used in a traditional Michelson interferometer. Because of the rotational motion, it avoids the tilt problems that occur in a translational-motion Michelson interferometer. The OPD is a cosine function of the rotating angle of the parallel mirrors. An optical model is set up in the non-sequential mode of the ZEMAX software, and the interferogram of a monochromatic light is simulated by ray tracing. The simulated interferogram is consistent with the theoretically calculated one. As the rotating mirrors are the only moving elements in this spectrometer, the parallelism of the rotating mirrors and the vibration during the scan are analyzed; vibration of the parallel mirrors is the main error source during rotation. This high-stability, ultra-rapid-scanning Fourier transform infrared spectrometer is a suitable candidate for airborne and space-borne remote sensing.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method for constructing a two-body potential model for ionic materials from a Fourier (cosine) series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force, and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result indicates mathematically that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential provides virtually the minimum error from the reference data attainable within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction is applied to revise errors expected in the reference data, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
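The coefficient-determination step described above can be sketched as an ordinary linear least-squares problem; the target function, grid, and basis size below are illustrative stand-ins, not the paper's MgO reference data:

```python
import numpy as np

# Reference data (illustrative): sample a smooth pair potential on a grid.
r = np.linspace(0.5, 5.0, 200)
energy_ref = np.exp(-r) / r  # stand-in for first-principles energies

# Cosine basis on the sampled interval, truncated at n_terms.
n_terms = 12
L = r.max() - r.min()
basis = np.stack(
    [np.cos(k * np.pi * (r - r.min()) / L) for k in range(n_terms)], axis=1
)

# The coefficients follow from a linear least-squares solve, analogous to
# minimizing the (here unweighted) mean square error against the reference.
coeffs, *_ = np.linalg.lstsq(basis, energy_ref, rcond=None)
energy_fit = basis @ coeffs
rmse = np.sqrt(np.mean((energy_fit - energy_ref) ** 2))
```

Increasing `n_terms` shrinks the residual, which mirrors the paper's observation that the errors converge with the truncation of the cosine series.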
The Law of Cosines for an "n"-Dimensional Simplex
ERIC Educational Resources Information Center
Ding, Yiren
2008-01-01
Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
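In one common formulation (following the vector-area argument that the divergence-theorem proof formalizes; the notation here is mine, not necessarily the article's), the result reads:

```latex
% Facets F_0, \dots, F_n of an n-simplex, with (n-1)-volumes |F_i| and
% dihedral angle \theta_{jk} between facets F_j and F_k:
|F_i|^2 \;=\; \sum_{j \neq i} |F_j|^2
\;-\; 2 \sum_{\substack{j < k \\ j,\,k \neq i}} |F_j|\,|F_k| \cos\theta_{jk}
```

For n = 2 this reduces to the familiar planar law of cosines, c² = a² + b² − 2ab cos C, and when all dihedral angles at a "right-angle corner" are 90° it reduces to the n-dimensional Pythagorean theorem of Eifler and Rhee.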
An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision
ERIC Educational Resources Information Center
Johansson, B. Tomas
2018-01-01
Evaluation of the cosine function is done via a simple CORDIC-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented, with a discussion of errors and implementation.
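To illustrate the role of the arbitrary-precision package, here is a minimal sketch in Python rather than Matlab, using a plain truncated Taylor series in the standard-library `decimal` module; this is not the paper's CORDIC-like algorithm, only a demonstration that high-precision arithmetic yields far more correct decimals than hardware floats:

```python
from decimal import Decimal, getcontext

def cos_decimal(x: Decimal, digits: int = 110) -> Decimal:
    """Taylor-series cosine in arbitrary-precision decimal arithmetic.
    Assumes |x| <= 2 so the series converges quickly; a production
    routine would first reduce the argument modulo 2*pi."""
    getcontext().prec = digits + 10  # guard digits during summation
    term = Decimal(1)
    total = Decimal(0)
    k = 0
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        total += term
        k += 1
        # cos(x) = sum_k (-1)^k x^(2k) / (2k)!  -- build each term from the last
        term *= -x * x / ((2 * k - 1) * (2 * k))
    getcontext().prec = digits
    return +total  # unary plus rounds to the working precision
```

With `digits=110`, `cos_decimal(Decimal(1))` agrees with the true cos 1 to roughly a hundred decimals, versus about 16 for `math.cos`.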
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near-lossless, high-compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8-bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
Low complexity 1D IDCT for 16-bit parallel architectures
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2007-09-01
This paper shows that, using the Loeffler, Ligtenberg, and Moschytz factorization of the 8-point one-dimensional (1-D) IDCT [2] as a fast approximation of the Discrete Cosine Transform (DCT) and using only 16-bit numbers, it is possible to create an IEEE 1180-1990-compliant, multiplierless algorithm with low computational complexity. Owing to its structure, the algorithm is efficiently implemented on parallel high-performance architectures, and its low complexity makes it suitable for a wide range of other architectures. An additional constraint on this work was the requirement of compliance with the existing MPEG standards. Hardware implementation complexity and low resource use were also part of the design criteria for this algorithm. The implementation is also compliant with the precision requirements described in the MPEG IDCT precision specification ISO/IEC 23002-1. Complexity analysis is performed as an extension of the simple shifts-and-adds measure for multiplierless algorithms: additional operations are included in the complexity measure to better describe the actual implementation complexity of the transform.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods of evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that extends these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
NASA Astrophysics Data System (ADS)
Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis
1997-11-01
Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite-precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical imaging field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, also leads to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components, and thus deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we analyze the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.
Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan
2004-06-05
Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. The interleaving is carried out in the spatial domain and in the frequency domain: the performance of interleaving in the spatial domain and in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. The results show that the process does not affect picture quality, which is attributed to the fact that changing the LSB of a pixel changes its brightness by only 1 part in 256. Spatial- and DFT-domain interleaving gave much lower %NRMSE than the DCT and DWT domains; for spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency-domain interleaving methods, DFT was found to be very efficient.
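The 1-part-in-256 claim can be checked with a minimal spatial-domain embedding sketch (the `embed_lsb` helper is hypothetical and omits the compression and encryption steps of the paper's actual scheme):

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Interleave a bit stream into the least significant bit of the
    first pixels of an 8-bit image (spatial-domain embedding sketch)."""
    flat = image.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = rng.integers(0, 2, size=16, dtype=np.uint8)
stego = embed_lsb(img, payload)

# Each pixel changes by at most 1 gray level out of 256.
max_change = np.abs(stego.astype(int) - img.astype(int)).max()
```

Reading the payload back is just `stego.flatten()[:16] & 1`, and the maximum per-pixel brightness change is indeed one gray level.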
Numerical methods for comparing fresh and weathered oils by their FTIR spectra.
Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen
2007-08-01
Four comparison statistics ('similarity indices') for identifying the source of a petroleum oil spill, based on the ASTM standard test method D3414, were investigated: (1) first-difference correlation coefficient squared, (2) correlation coefficient squared, (3) first-difference Euclidean cosine squared, and (4) Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions: fingerprint (900-700 cm⁻¹), generic (1350-900 cm⁻¹), and supplementary (1770-1685 cm⁻¹), the same three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first-difference-based indices worked better than their parent indices. To provide samples that reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. The Euclidean cosine and correlation coefficients both worked well to maintain identification of a match in the fingerprint region, and the two first-difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for matching decisions using the fingerprint region showed that two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
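Two of the four indices can be sketched as follows; the toy Gaussian-peak "spectra" and the function names are mine, not ASTM D3414 data. The example also shows why a first-difference index helps: differencing removes a constant baseline offset entirely.

```python
import numpy as np

def euclidean_cosine_sq(a, b) -> float:
    """Squared cosine of the angle between two spectra viewed as vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) ** 2 / (np.dot(a, a) * np.dot(b, b)))

def first_diff_corr_sq(a, b) -> float:
    """Squared Pearson correlation of the first differences, which
    suppresses baseline offsets between spectra."""
    da, db = np.diff(np.asarray(a, float)), np.diff(np.asarray(b, float))
    return float(np.corrcoef(da, db)[0, 1] ** 2)

# Toy spectra: identical peaks, one with an added constant baseline.
x = np.linspace(0.0, 10.0, 400)
spec = np.exp(-((x - 3.0) ** 2)) + 0.5 * np.exp(-((x - 7.0) ** 2) / 0.5)
shifted = spec + 0.3  # baseline offset
```

The plain cosine index is degraded by the offset, while the first-difference correlation still reports a perfect match.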
Mathematics Education Graduate Students' Understanding of Trigonometric Ratios
ERIC Educational Resources Information Center
Yigit Koyunkaya, Melike
2016-01-01
This study describes mathematics education graduate students' understanding of relationships between sine and cosine of two base angles in a right triangle. To explore students' understanding of these relationships, an elaboration of Skemp's views of instrumental and relational understanding using Tall and Vinner's concept image and concept…
Transparency of the ab Planes of Bi2Sr2CaCu2O8+δ to Magnetic Fields
NASA Astrophysics Data System (ADS)
Kossler, W. J.; Dai, Y.; Petzinger, K. G.; Greer, A. J.; Williams, D. Ll.; Koster, E.; Harshman, D. R.; Mitzi, D. B.
1998-01-01
A sample composed of many Bi2Sr2CaCu2O8+δ single crystals was cooled to 2 K in a magnetic field of 100 G at 45° from the c axis. Muon-spin-rotation measurements were made for which the polarization was initially approximately in the ab plane. The time dependent polarization components along this initial direction and along the c axis were obtained. Cosine transforms of these and subsequent measurements were made. Upon removing the applied field, still at 2 K, only the c axis component of the field remained in the sample, thus providing microscopic evidence for extreme 2D behavior for the vortices even at this temperature.
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential for handling a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at compression ratios as high as 20:1.
Performance of customized DCT quantization tables on scientific data
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh; Livny, Miron
1994-01-01
We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). The DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
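The role a quantization table plays can be sketched with an orthonormal 8×8 DCT-II in NumPy; this is a generic JPEG-style forward/inverse pair, not the authors' table-optimization procedure:

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def quantize_block(block: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    """2-D DCT of an 8x8 pixel block, scaled by a quantization table and
    rounded; larger table entries discard more of that frequency."""
    D = dct_matrix(8)
    return np.round(D @ (block - 128.0) @ D.T / qtable).astype(int)

def dequantize_block(q: np.ndarray, qtable: np.ndarray) -> np.ndarray:
    """Inverse of quantize_block, up to the rounding loss."""
    D = dct_matrix(8)
    return D.T @ (q * qtable) @ D + 128.0

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
flat_table = np.ones((8, 8))  # mild, uniform quantization: unit step size
recon = dequantize_block(quantize_block(block, flat_table), flat_table)
```

Customizing `qtable` per data set is exactly the knob the paper tunes: growing an entry trades reconstruction error at that spatial frequency for bits saved.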
Some Cosine Relations and the Regular Heptagon
ERIC Educational Resources Information Center
Osler, Thomas J.; Heng, Phongthong
2007-01-01
The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)
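One classical identity of this kind (whether it is among the article's specific relations is an assumption on my part) states that the cosines at the heptagon's three distinct central-angle multiples sum to −1/2:

```python
import math

# cos(2*pi/7) + cos(4*pi/7) + cos(6*pi/7) = -1/2: the six nontrivial
# seventh roots of unity sum to -1, and they pair up by conjugation.
s = sum(math.cos(2 * math.pi * k / 7) for k in (1, 2, 3))
```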
Similarity Measures in Scientometric Research: The Jaccard Index versus Salton's Cosine Formula.
ERIC Educational Resources Information Center
Hamers, Lieve; And Others
1989-01-01
Describes two similarity measures used in citation and co-citation analysis--the Jaccard index and Salton's cosine formula--and investigates the relationship between the two measures. It is shown that Salton's formula yields a numerical value that is twice Jaccard's index in most cases, and an explanation is offered. (13 references) (CLB)
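The "twice Jaccard's index" observation is easy to check numerically for equally sized sets with small overlap, where Salton's cosine equals 2J/(1 + J) exactly (a sketch using plain Python sets; the example sets are mine):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: intersection size over union size."""
    return len(a & b) / len(a | b)

def salton_cosine(a: set, b: set) -> float:
    """Salton's cosine: intersection size over the geometric mean of sizes."""
    return len(a & b) / (len(a) * len(b)) ** 0.5

# Two equally sized citation sets with a small overlap of 5 items.
A = set(range(0, 100))
B = set(range(95, 195))
j, c = jaccard(A, B), salton_cosine(A, B)
```

Here `c` is 0.05 against `j` of about 0.0256, so the ratio is close to 2, in line with the relationship the paper explains.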
Spectral Target Detection using Schroedinger Eigenmaps
NASA Astrophysics Data System (ADS)
Dorado-Munoz, Leidy P.
Applications of optical remote sensing include environmental monitoring, military monitoring, meteorology, mapping, surveillance, etc. Many of these tasks include the detection of specific objects or materials, usually few or small, which are surrounded by other materials that clutter the scene and hide the relevant information. This target detection process has lately been boosted by the use of hyperspectral imagery (HSI), since its high spectral dimension provides detailed spectral information that is desirable in data exploitation. Typical spectral target detectors rely on statistical or geometric models to characterize the spectral variability of the data. However, in many cases these parametric models do not fit HSI data well, which impacts the detection performance. On the other hand, non-linear transformation methods, mainly based on manifold learning algorithms, have shown potential in HSI transformation, dimensionality reduction, and classification. In target detection, non-linear transformation algorithms are used as preprocessing techniques that transform the data to a more suitable lower-dimensional space, where the statistical or geometric detectors are applied. One of these non-linear manifold methods is the Schroedinger Eigenmaps (SE) algorithm, which was introduced as a technique for semi-supervised classification. The core tool of the SE algorithm is the Schroedinger operator, which includes a potential term that encodes prior information about the materials present in a scene and enables the embedding to be steered in convenient directions in order to cluster similar pixels together. A completely novel target detection methodology based on the SE algorithm is proposed for the first time in this thesis. The proposed methodology includes not just the transformation of the data to a lower-dimensional space but also the definition of a detector that capitalizes on the theory behind SE.
The fact that target pixels and spectrally similar pixels cluster in a predictable region of the low-dimensional representation is used to define a decision rule that allows one to identify target pixels among the rest of the pixels in a given image. In addition, a knowledge propagation scheme is used to combine spectral and spatial information as a means to propagate the "potential constraints" to nearby points. The propagation scheme is introduced to reinforce weak connections and improve the separability between most of the target pixels and the background. Experiments using different HSI data sets are carried out to test the proposed methodology. The assessment is performed from quantitative and qualitative points of view, and by comparing the SE-based methodology against two other detection methodologies that use linear/non-linear algorithms as transformations, as well as the well-known Adaptive Coherence/Cosine Estimator (ACE) detector. Overall results show that the SE-based detector outperforms the other two detection methodologies, which indicates the usefulness of the SE transformation in spectral target detection problems.
Video Transmission for Third Generation Wireless Communication Systems
Gharavi, H.; Alamouti, S. M.
2001-01-01
This paper presents a twin-class, unequally protected video transmission system for wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate (CBR) transmission. In the splitting process, the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Partitioning is then applied to the ITU-T H.263 coding standard. As a transport vehicle, we consider one of the leading third-generation cellular radio standards, known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system, where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate visual attention modeling for stereoscopic video from the following two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye tracking database and on one other database (DML-ITRACK-3D).
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with differing characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
NASA Astrophysics Data System (ADS)
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality of Service (QoS) guarantees in real-time communication are significantly important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes the Moving Picture Experts Group (MPEG) standards, cannot be neglected because it is a standard. In this paper, we first design an MPEG transcoder that converts an MPEG coded stream into variable-rate packet sequences to be used in our joint source/channel coding (JSCC) scheme. Second, we design a classification scheme to partition the packet stream into multiple substreams, each with its own QoS requirements. Finally, we design a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We show that our JSCC scheme is better than two other popular techniques, through simulation and real video experiments in a TCP/IP environment.
Analysis of the impact of digital watermarking on computer-aided diagnosis in medical imaging.
Garcia-Hernandez, Jose Juan; Gomez-Flores, Wilfrido; Rubio-Loyola, Javier
2016-01-01
Medical images (MI) are relevant sources of information for detecting and diagnosing a large number of illnesses and abnormalities. Due to their importance, this study is focused on breast ultrasound (BUS), which is the main adjunct for mammography to detect common breast lesions among women worldwide. On the other hand, aiming to enhance data security, image fidelity, authenticity, and content verification in e-health environments, MI watermarking has been widely used, whose main goal is to embed patient meta-data into MI so that the resulting image keeps its original quality. In this sense, this paper deals with the comparison of two watermarking approaches, namely spread spectrum based on the discrete cosine transform (SS-DCT) and the high-capacity data-hiding (HCDH) algorithm, so that the watermarked BUS images are guaranteed to be adequate for a computer-aided diagnosis (CADx) system, whose two principal outcomes are lesion segmentation and classification. Experimental results show that HCDH algorithm is highly recommended for watermarking medical images, maintaining the image quality and without introducing distortion into the output of CADx. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data in the DSP are stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and reduce the executable code. At the same time, proper addresses are assigned to the memories, which have different speeds, and the memory structure is optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, resulting in stable, high performance.
Video and accelerometer-based motion analysis for automated surgical skills assessment.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan
2018-03-01
Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contains video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing Sequential Motion Texture, Discrete Cosine Transform, and Discrete Fourier Transform methods for surgical skills assessment. We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system could significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
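The entropy features named above can be illustrated with a compact approximate-entropy sketch; the parameter choices m = 2 and r = 0.2 are common defaults, not necessarily the paper's settings, and the test signals are mine:

```python
import math

def approx_entropy(series, m: int = 2, r: float = 0.2) -> float:
    """Approximate entropy (ApEn): lower values mean more regular,
    more predictable fluctuations in the time series."""
    n = len(series)

    def phi(mm: int) -> float:
        templates = [series[i : i + mm] for i in range(n - mm + 1)]
        logs = []
        for t in templates:
            # Count templates within Chebyshev distance r (self-match included).
            matches = sum(
                1 for u in templates
                if max(abs(a - b) for a, b in zip(t, u)) <= r
            )
            logs.append(math.log(matches / len(templates)))
        return sum(logs) / len(logs)

    return phi(m) - phi(m + 1)

# A strictly periodic signal vs. a chaotic logistic-map trajectory.
regular = [float(i % 4) for i in range(60)]
x, chaotic = 0.4, []
for _ in range(60):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
```

The periodic signal, whose next sample is fully determined by the previous ones, scores near zero, while the chaotic trajectory scores markedly higher, which is the regularity contrast the paper exploits for skill assessment.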
Learning to predict where human gaze is using quaternion DCT based regional saliency detection
NASA Astrophysics Data System (ADS)
Li, Ting; Xu, Yi; Zhang, Chongyang
2014-09-01
Many current visual attention approaches use semantic features to accurately capture human gaze. However, these approaches demand high computational cost and can hardly be applied to daily use. Recently, some quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirement of human gaze tracking tasks. However, these saliency detection methods use global PQFT and QDCT to locate jump edges of the input, which can hardly detect object boundaries accurately. To address the problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our measure, called saliency confidence, distinguishes the patches belonging to the salient object from those of the background by deciding whether image patches belong to the same region: when an image patch belongs to a region consisting of other salient patches, this patch should be salient as well. We therefore use the saliency confidence map to derive background and foreground weights for optimizing the saliency map obtained by QDCT; the optimization is accomplished by the least-squares method. The proposed optimization approach unifies local and global saliency by combining QDCT with similarity measurements between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall curves (PR curves), the mean absolute error (MAE) and area under curve (AUC) measures. In comparison with most state-of-the-art models, our approach achieves higher consistency with human perception without training. It localizes human gaze accurately even in cluttered backgrounds. Furthermore, it achieves a better compromise between speed and accuracy.
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhan, Hongbin; Zhang, You-Kuan; Schilling, Keith
2017-09-01
Unsaturated flow is an important process in base flow recessions and its effect is rarely investigated. A mathematical model for coupled unsaturated-saturated flow in a horizontally unconfined aquifer with time-dependent infiltration is presented. The effects of the lateral discharge of the unsaturated zone and aquifer compressibility are specifically taken into consideration. Semianalytical solutions for hydraulic heads and discharges are derived using the Laplace transform and the Cosine transform. The solutions are compared with solutions of the linearized Boussinesq equation (LB solution) and the linearized Laplace equation (LL solution), respectively. A larger dimensionless constitutive exponent κD (a smaller retention capacity) of the unsaturated zone leads to a smaller discharge during the infiltration period and a larger discharge after the infiltration. The lateral discharge of the unsaturated zone is significant when κD ≤ 1, and becomes negligible when κD ≥ 100. The compressibility of the aquifer has a nonnegligible impact on the discharge at early times. For late times, the power index b of the recession curve -dQ/dt ~ aQ^b is 1 and independent of κD, where Q is the base flow and a is a constant lumped aquifer parameter. For early times, b is approximately equal to 3, but it approaches infinity when t → 0. The present solution is applied to synthetic and field cases. It matched the synthetic data better than both the LL and LB solutions, with a minimum relative error of 16% for the estimate of hydraulic conductivity. Applied to observed streamflow discharge in Iowa, it yielded reasonable estimates of the aquifer parameters.
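The late-time result b = 1 in the recession law -dQ/dt ~ aQ^b corresponds to a linear reservoir with exponential recession. A minimal numerical check, with entirely arbitrary parameter values (this is an illustration of the fitting convention, not the paper's solution):

```python
import numpy as np

# Synthetic exponential recession Q(t) = Q0 * exp(-t / tau): a linear
# reservoir, for which -dQ/dt = (1/tau) * Q, i.e. b = 1 and a = 1/tau.
t = np.linspace(0.0, 10.0, 200)
tau, Q0 = 3.0, 50.0
Q = Q0 * np.exp(-t / tau)
dQdt = np.gradient(Q, t)
# Fit the power law in log space: ln(-dQ/dt) = ln(a) + b * ln(Q).
b_hat, ln_a = np.polyfit(np.log(Q), np.log(-dQdt), 1)
```

The fitted exponent b_hat comes out close to 1 and exp(ln_a) close to 1/tau, as the linear-reservoir model requires.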
Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan
2016-08-01
A simple noninterferometric optical probe is developed to estimate the wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with a small space-bandwidth product (SBP), FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.
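DCT-based phase unwrapping is commonly done by solving a discrete Poisson equation in the least-squares sense (the Ghiglia-Romero approach). The sketch below is a generic unweighted version under Neumann boundary conditions, assumed for illustration; it is not necessarily the exact procedure used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_dct(psi):
    """Least-squares phase unwrapping via a DCT Poisson solver."""
    M, N = psi.shape
    # Wrapped phase gradients (zero beyond the last row/column).
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    dx[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])
    dy[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])
    # Divergence of the wrapped gradient field (Poisson driving term).
    rho = dx.copy()
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]
    # Invert the discrete Laplacian in the DCT domain (Neumann BCs).
    R = dctn(rho, norm='ortho')
    m = np.arange(M)[:, None]; n = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * m / M) + np.cos(np.pi * n / N) - 2.0)
    denom[0, 0] = 1.0          # the mean (DC) mode is arbitrary
    phi = R / denom
    phi[0, 0] = 0.0
    return idctn(phi, norm='ortho')
```

For a wrapped phase whose true gradients stay below pi, this recovers the original surface up to an additive constant.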
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
Discovering Trigonometric Relationships Implied by the Law of Sines and the Law of Cosines
ERIC Educational Resources Information Center
Skurnick, Ronald; Javadi, Mohammad
2006-01-01
The Law of Sines and The Law of Cosines are of paramount importance in the field of trigonometry because these two theorems establish relationships satisfied by the three sides and the three angles of any triangle. In this article, the authors use these two laws to discover a host of other trigonometric relationships that exist within any…
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1977-01-01
A tachometer in which sine and cosine signals responsive to the angular position of a shaft as it rotates are each multiplied by like, sine or cosine, functions of a carrier signal, the products summed, and the resulting frequency signal converted to fixed height, fixed width pulses of a like frequency. These pulses are then integrated, and the resulting dc output is an indication of shaft speed.
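The mixing the abstract describes rests on the identity sin θ · sin ωt + cos θ · cos ωt = cos(ωt − θ): summing the like-function products yields a single tone whose frequency is the carrier offset by the shaft rate. A numerical sketch, with all signal parameters chosen for illustration only:

```python
import numpy as np

fs = 10000.0                               # sample rate, Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
f_carrier, f_shaft = 400.0, 25.0           # carrier and shaft rates, Hz
theta = 2 * np.pi * f_shaft * t            # shaft angle vs. time
carrier = 2 * np.pi * f_carrier * t
# Sum of like-function products collapses to one tone: cos(carrier - theta).
mixed = np.sin(theta) * np.sin(carrier) + np.cos(theta) * np.cos(carrier)
# Its frequency (here counted by zero crossings, as the pulse-forming
# stage of the tachometer effectively does) encodes the shaft speed.
crossings = np.count_nonzero(np.diff(np.signbit(mixed)))
f_est = crossings / 2.0                    # crossings per second / 2
```

The estimated frequency is f_carrier − f_shaft, so the shaft speed is recovered as the deviation from the known carrier.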
An Ellipse Morphs to a Cosine Graph!
ERIC Educational Resources Information Center
King, L .R.
2013-01-01
We produce a continuum of curves all of the same length, beginning with an ellipse and ending with a cosine graph. The curves in the continuum are made by cutting and unrolling circular cones whose section is the ellipse; the initial cone is degenerate (it is the plane of the ellipse); the final cone is a circular cylinder. The curves of the…
Enabling Technologies for Medium Additive Manufacturing (MAAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Bradley S.; Love, Lonnie J.; Chesser, Phillip C.
ORNL has worked with Cosine Additive, Inc. on the design of MAAM extrusion components. The objective is to improve the print speed and part quality. A pellet extruder has been procured and integrated into the MAAM printer. Print speed has been greatly enhanced. In addition, ORNL and Cosine Additive have worked on alternative designs for a pellet drying and feed system.
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficient of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of a surface spherical harmonic expansion to those of the double Fourier series so as to be applicable to arbitrarily high degree and order. Next, we created a new method to transform inversely a given double Fourier series to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, and fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order and fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are analytically prepared. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable at extremely high degree/order/wavenumber, as high as 2^{30} ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.
Flows of Newtonian and Power-Law Fluids in Symmetrically Corrugated Capillary Fissures and Tubes
NASA Astrophysics Data System (ADS)
Walicka, A.
2018-02-01
In this paper, an analytical method for deriving the relationships between the pressure drop and the volumetric flow rate in laminar flow regimes of Newtonian and power-law fluids through symmetrically corrugated capillary fissures and tubes is presented. This method, which is general with regard to fluid and capillary shape, can also be used as a foundation for other fluids, fissures and tubes. It can also be a good basis for numerical integration when analytical expressions are hard to obtain due to mathematical complexities. Five converging-diverging or diverging-converging geometries, viz. wedge and cone, parabolic, hyperbolic, hyperbolic cosine and cosine curve, are used as examples to illustrate the application of this method. For the wedge and cone geometry, the present results for the power-law fluid were compared with results obtained by another method; this comparison indicates good agreement between the two results.
Domain similarity based orthology detection.
Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich
2015-05-13
Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, when the number of sequences for comparison increases, the number of comparisons to compute grows quadratically. A current challenge of bioinformatic research, especially when taking into account the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine score and a maximal weight matching score based on domain content similarity, and new software, named porthoDom. The qualities of the cosine and the maximal weight matching similarity measures are compared against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom makes use of domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as a concatenation of their unique identifiers, allows a drastic simplification of the search space. porthoDom has the advantage of speeding up orthology detection while maintaining a degree of accuracy similar to proteinortho. The implementation of porthoDom is released in the Python and C++ languages and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda .
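A cosine score over domain content can be sketched as the cosine similarity between domain-count vectors. This is a generic illustration of the idea, not porthoDom's exact measure; the Pfam-style identifiers below are hypothetical.

```python
import numpy as np
from collections import Counter

def domain_cosine(domains_a, domains_b):
    """Cosine similarity between the domain-content vectors of two
    proteins, each given as a list of domain identifiers; the count of
    each domain acts as one vector component."""
    ca, cb = Counter(domains_a), Counter(domains_b)
    keys = sorted(set(ca) | set(cb))
    va = np.array([ca[k] for k in keys], dtype=float)
    vb = np.array([cb[k] for k in keys], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Identical domain strings score 1, disjoint ones 0, which is what lets the measure pre-group proteins cheaply before the expensive sequence comparison.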
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, save and disseminate them. Lossy compression is becoming more popular in such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information contained in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze the possibilities of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to compressing single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
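The direct dependence between DCT-coefficient quantization error and output distortion follows from the orthonormality of the DCT (Parseval's relation): the MSE measured on quantized coefficients equals the pixel-domain MSE after decompression. A minimal single-block sketch with a uniform quantizer; the step size is an arbitrary illustrative choice, not one of the paper's coder settings:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)   # one 8x8 image block
q = 16.0                                             # uniform quantization step
C = dctn(block, norm='ortho')
Cq = q * np.round(C / q)                             # quantized DCT coefficients
# Predicted MSE straight from the coefficient-domain quantization error:
mse_pred = np.mean((C - Cq) ** 2)
# Actual pixel-domain MSE after the inverse transform:
mse_true = np.mean((idctn(Cq, norm='ortho') - block) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse_true)
```

Because the two MSE values coincide, PSNR can be predicted from coefficient statistics alone, and only a subset of blocks needs to be transformed to estimate it.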
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09 µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
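The multiplier-less idea in miniature: a fixed-point transform constant can be applied with shifts and adds only, which is what an adder/multiplexer datapath exploits. The constant below, 181 = round(2^8 · cos(π/4)), is a typical scaled DCT constant chosen for illustration; it is not taken from the paper's datapath, and a common sub-expression elimination pass would further reduce the adder count.

```python
def times_181(x: int) -> int:
    """x * 181 using shifts and adds only (no hardware multiplier).
    181 = 0b10110101 = 128 + 32 + 16 + 4 + 1."""
    return (x << 7) + (x << 5) + (x << 4) + (x << 2) + x
```

Each shift is free in hardware (wiring), so the cost of this constant multiplication is four adders, which resource re-use and CSE then try to share across coefficients.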
Cubic Equations and the Ideal Trisection of the Arbitrary Angle
ERIC Educational Resources Information Center
Farnsworth, Marion B.
2006-01-01
In the year 1837, mathematical proof was set forth authoritatively stating that it is impossible to trisect an arbitrary angle with a compass and an unmarked straightedge in the classical sense. The famous proof depends on an incompatible cubic equation having the cosine of an angle of 60 degrees and the cube of the cosine of one-third of an angle of 60 degrees as…
NASA Astrophysics Data System (ADS)
Rais, Muhammad H.
2010-06-01
This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as the finite impulse response (FIR) filter and the discrete cosine transform (DCT). Remarkable reductions in FPGA resources, delay, and power can be achieved using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results show that the anomalies in average connection delay and maximum pin delay observed on the Spartan-3AN device are efficiently reduced on the Virtex-4 device.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
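An ideal band-pass filter bank implemented in the DCT domain is free of aliasing because the bands simply partition the coefficient plane, so the subbands sum back to the original image exactly. A sketch with brick-wall masks on the diagonal frequency index; the band edges are hypothetical, not the paper's pyramid parameters:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_subbands(img, edges=(4, 16)):
    """Split an image into low/mid/high subbands using ideal
    ("brick-wall") masks in the 2-D DCT domain."""
    C = dctn(img, norm='ortho')
    # diagonal frequency index u + v for each coefficient
    u = np.arange(img.shape[0])[:, None] + np.arange(img.shape[1])[None, :]
    bands, lo = [], 0
    for hi in (*edges, u.max() + 1):
        mask = (u >= lo) & (u < hi)
        bands.append(idctn(np.where(mask, C, 0.0), norm='ortho'))
        lo = hi
    return bands
```

Since the masks are disjoint and cover every coefficient, linearity of the inverse DCT guarantees perfect reconstruction when the subbands are summed; only the high-frequency subbands then need to be vector quantized.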
FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression
2015-01-01
A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information, in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. Besides, the discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
NASA Technical Reports Server (NTRS)
Fymat, A. L.; Smith, C. B.
1979-01-01
It is shown that the inverse analytical solutions, provided separately by Fymat and Box-McKellar, for reconstructing particle size distributions from remote spectral transmission measurements under the anomalous diffraction approximation can be derived using a cosine and a sine transform, respectively. Sufficient conditions of validity of the two formulas are established. Their comparison shows that the former solution is preferable to the latter in that it requires less a priori information (knowledge of the particle number density is not needed) and has wider applicability. For gamma-type distributions, and either a real or a complex refractive index, explicit expressions are provided for retrieving the distribution parameters; such expressions are, interestingly, proportional to the geometric area of the polydispersion.
The influence of finite cavities on the sound insulation of double-plate structures.
Brunskog, Jonas
2005-06-01
Lightweight walls are often designed as frameworks of studs with plates on each side--a double-plate structure. The studs constitute boundaries for the cavities, thereby both affecting the sound transmission directly by short-circuiting the plates, and indirectly by disturbing the sound field between the plates. The paper presents a deterministic prediction model for airborne sound insulation including both effects of the studs. A spatial transform technique is used, taking advantage of the periodicity. The acoustic field inside the cavities is expanded by means of cosine-series. The transmission coefficient (angle-dependent and diffuse) and transmission loss are studied. Numerical examples are presented and comparisons with measurement are performed. The result indicates that a reasonably good agreement between theory and measurement can be achieved.
Air Force Academy Aeronautics Digest, Spring/Summer 1980
1980-10-01
The transformation matrix developed under the direction cosine method can now be simplified to four equations (USAFA-TR-80-17; the remainder of this scanned excerpt is unrecoverable OCR output).
Simultaneous storage of medical images in the spatial and frequency domain: A comparative study
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan
2004-01-01
Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial domain and in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by only 1 part in 256. Spatial- and DFT-domain interleaving gave much lower %NRMSE than the DCT and DWT domains. Conclusion The results show that with spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency-domain interleaving methods, DFT was found to be very efficient. PMID:15180899
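The 1-part-in-256 claim corresponds to least-significant-bit interleaving: replacing a pixel's LSB changes its 8-bit intensity by at most 1. A minimal spatial-domain sketch (the paper's full pipeline additionally encrypts and DPCM-compresses the payload; that is omitted here):

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Interleave a bit stream into the LSBs of 8-bit pixels; each
    altered pixel's brightness changes by at most 1 part in 256."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

def extract_lsb(pixels, n):
    """Read back the first n interleaved bits."""
    return pixels[:n] & 1
```

The payload round-trips exactly while the maximum per-pixel intensity change stays at 1, which is why the picture quality is visually unaffected.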
Effect of Diffuse Backscatter in Cassini Datasets on the Inferred Properties of Titan's surface
NASA Astrophysics Data System (ADS)
Sultan-Salem, A. K.; Tyler, G. L.
2006-12-01
Microwave (2.18 cm-λ) backscatter data for the surface of Titan obtained with the Cassini Radar instrument exhibit a significant diffuse scattering component. An empirical scattering law of the form A cos^n(θ), with free parameters A and n, is often employed to model diffuse scattering, which may involve one or more unidentified mechanisms and processes, such as volume scattering and scattering from surface structure that is much smaller than the electromagnetic wavelength used to probe the surface. The cosine law in general is not explicit in its dependence on either the surface structure or the electromagnetic parameters. Further, the cosine law is often only a poor representation of the observed diffuse scattering, as can be inferred from computation of standard goodness-of-fit measures such as the statistical significance. We fit four Cassini datasets (TA Inbound and Outbound, T3 Outbound, and T8 Inbound) with a linear combination of a cosine law and a generalized fractal-based quasi-specular scattering law (A. K. Sultan-Salem and G. L. Tyler, J. Geophys. Res., 111, E06S08, doi:10.1029/2005JE002540, 2006), in order to demonstrate how the presence of diffuse scattering considerably increases the uncertainty in surface parameters inferred from the quasi-specular component, typically the dielectric constant of the surface material and the surface root-mean-square slope. This uncertainty impacts inferences concerning the physical properties of surfaces that display mixed scattering properties.
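Fitting the empirical law σ0 = A cos^n(θ) is typically done after linearization in log space. A noise-free synthetic sketch; A, n, and the incidence-angle grid are arbitrary illustrative values, not Cassini fit results:

```python
import numpy as np

# Synthetic diffuse component following sigma0 = A * cos(theta)**n.
A_true, n_true = 0.5, 1.7
theta = np.deg2rad(np.linspace(5.0, 75.0, 40))
sigma0 = A_true * np.cos(theta) ** n_true
# Linearize: ln(sigma0) = ln(A) + n * ln(cos(theta)), then fit a line.
n_hat, lnA_hat = np.polyfit(np.log(np.cos(theta)), np.log(sigma0), 1)
```

With real data the fit residuals quantify how poorly the cosine law represents the observations, which is the goodness-of-fit issue the abstract raises.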
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.
2013-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937
Bai, Zhiliang; Chen, Shili; Jia, Lecheng; Zeng, Zhoumo
2018-01-01
Embracing the fact that one can recover certain signals and images from far fewer measurements than traditional methods use, compressive sensing (CS) provides solutions to the huge amounts of data collected in phased array-based material characterization. This article describes how a CS framework can be utilized to effectively compress ultrasonic phased array images in the time and frequency domains. By projecting the image onto its Discrete Cosine Transform domain, a novel scheme was implemented to verify the potential of CS for data reduction, as well as to explore its reconstruction accuracy. The results from CIVA simulations indicate that both time- and frequency-domain CS can accurately reconstruct array images using fewer samples than the minimum requirement of the Nyquist theorem. For experimental verification on three types of artificial flaws, although a considerable data reduction can be achieved with defects clearly preserved, it is currently impossible to break the Nyquist limitation in the time domain. Fortunately, qualified recovery in the frequency domain makes it happen, meaning a real breakthrough for phased array image reconstruction. As a case study, the proposed CS procedure is applied to the inspection of an engine cylinder cavity containing different pit defects, and the results show that orthogonal matching pursuit (OMP)-based CS guarantees the performance for real application. PMID:29738452
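Orthogonal matching pursuit, the recovery algorithm named above, greedily picks the dictionary column most correlated with the current residual and then re-fits by least squares on the accumulated support. This is a generic textbook form, not the authors' implementation; the sensing matrix and sparse vector below are synthetic.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    n = Phi.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        # Column most correlated with the residual joins the support.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```

For a well-conditioned random sensing matrix and sufficiently few nonzeros, OMP recovers the sparse coefficient vector exactly, which is the property CS image reconstruction relies on.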
Arduino-Based Experiment Demonstrating Malus's Law
ERIC Educational Resources Information Center
de Freitas, Welica P. S.; Cena, Cicero R.; Alves, Diego C. B.; Goncalves, Alem-Mar B.
2018-01-01
Malus's law states that the intensity of light after passing through two polarizers is proportional to the square of the cosine of the angle between the polarizers. We present a simple setup demonstrating this law. The novelty of our work is that we use a multi-turn potentiometer mechanically linked to one of the polarizers to measure the…
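The law being demonstrated is a one-line computation: transmitted intensity falls as the squared cosine of the analyzer angle. A minimal numeric illustration (intensity units arbitrary):

```python
import numpy as np

def malus_intensity(I0, theta_deg):
    """Malus's law: I = I0 * cos^2(theta), theta = angle between polarizers."""
    theta = np.radians(theta_deg)
    return I0 * np.cos(theta) ** 2

I0 = 1.0                                   # intensity after the first polarizer
angles = np.array([0.0, 30.0, 45.0, 60.0, 90.0])
intensities = malus_intensity(I0, angles)  # e.g. half the light passes at 45 degrees
```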
A fast estimation of shock wave pressure based on trend identification
NASA Astrophysics Data System (ADS)
Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing
2018-04-01
In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
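The trend-selection criterion in the second step, picking the component set that maximizes correlation with the signal while minimizing the normalized Euclidean distance, can be sketched on a synthetic record. In this toy, moving-average smoothers of different windows stand in for the EMD low-frequency components (EMD itself needs an external library), and the two measures are combined as a simple score; the signal and windows are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
# synthetic step-like sensor output with noise (stands in for a shock-tube record)
signal = np.where(t < 0.2, 0.0, 1.0) + 0.05 * rng.standard_normal(t.size)

def candidate_trend(x, window):
    # moving average as a stand-in for a partial sum of EMD low-frequency components
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def score(x, trend):
    # correlation coefficient high and normalized Euclidean distance low => good trend
    corr = np.corrcoef(x, trend)[0, 1]
    dist = np.linalg.norm(x - trend) / np.linalg.norm(x)
    return corr - dist

windows = [5, 25, 75, 151]
scores = [score(signal, candidate_trend(signal, w)) for w in windows]
best_window = windows[int(np.argmax(scores))]   # analogous to the optimal component number
```

The stable value would then be read off the flat interval of the selected trend, as in the third step of the abstract.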
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, important compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
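One of the baselines the abstract compares against, DCT-based dimensionality reduction, is easy to sketch: keep a small block of low-frequency 2D DCT coefficients and invert. The toy below uses a striped binary image as a crude stand-in for a channelized medium; the image, size, and retained block are illustrative, and the compression ratio here (16) is far below the 200-500 the paper reports for its VAE.

```python
import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II matrix
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0] /= np.sqrt(2.0)
    return C

N = 32
C = dct_matrix(N)
# toy binary "channel" image: horizontal stripes
img = (np.sin(np.linspace(0, 6 * np.pi, N))[:, None] > 0) * np.ones((N, N))

coeffs = C @ img @ C.T                 # 2D DCT analysis
k = 8                                  # keep a k x k block of low-frequency coefficients
trunc = np.zeros_like(coeffs)
trunc[:k, :k] = coeffs[:k, :k]
recon = C.T @ trunc @ C                # 2D DCT synthesis from the reduced parameterization
compression_ratio = img.size / (k * k) # 1024 parameters -> 64
```

A VAE replaces the fixed cosine basis with a learned nonlinear decoder, which is why it can preserve channel structure at much higher compression.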
Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong
2018-01-31
Synthetic aperture radar (SAR) mounted on a hypersonic air vehicle in near space has many advantages over conventional airborne SAR. However, its high-speed maneuvering along a curved trajectory results in serious range migration and exacerbates the contradiction between high resolution and wide swath. To solve this problem, this paper establishes an imaging geometrical model matched to the flight trajectory of the hypersonic platform and a multichannel azimuth sampling model based on displaced phase center antenna (DPCA) technology. Furthermore, based on multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using the discrete Fourier transform is proposed to obtain uniformly sampled azimuth data. Because of the high complexity of the slant-range model, it is difficult to derive a processing algorithm for SAR imaging. Thus, an approximate range model is derived under the minimax criterion, and the optimal second-order approximation coefficients of the cosine function are obtained using a two-population coevolutionary algorithm. On this basis, because the traditional Omega-K algorithm cannot compensate the residual phase owing to the difficulty of Stolt mapping along the range-frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging and presents a range-division method to achieve wide-swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.
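The minimax cosine approximation of the slant range can be sketched as follows: fit R(t) ≈ a + b·cos(wt) so that the maximum absolute error is minimized. A plain grid search stands in for the paper's two-population coevolutionary algorithm, and the trajectory parameters (range, speed, acceleration) are hypothetical toy values, not the paper's geometry.

```python
import numpy as np

# hypothetical slant-range history for a curved trajectory (toy values)
t = np.linspace(-1.0, 1.0, 401)            # slow time (s)
R0, v, a_acc = 20e3, 3000.0, 50.0          # range (m), speed (m/s), acceleration (m/s^2)
R = np.sqrt(R0**2 + (v * t + 0.5 * a_acc * t**2) ** 2)

def minimax_fit(R, t, b_grid, w_grid):
    # for fixed b, w the minimax-optimal offset a centers the residual:
    # a = (max g + min g) / 2 with g = R - b cos(wt), error = (max g - min g) / 2
    best = (np.inf, None)
    for w in w_grid:
        cw = np.cos(w * t)
        for b in b_grid:
            g = R - b * cw
            err = (g.max() - g.min()) / 2.0
            if err < best[0]:
                a = (g.max() + g.min()) / 2.0
                best = (err, (a, b, w))
    return best

b_grid = np.linspace(-2000.0, 0.0, 81)
w_grid = np.linspace(0.5, 3.0, 51)
err, (a, b, w) = minimax_fit(R, t, b_grid, w_grid)
approx = a + b * np.cos(w * t)             # second-order-accurate cosine range model
```

Even this coarse search drives the peak range-model error well below that of a constant-range approximation, which is what makes the subsequent transfer-function-based focusing tractable.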
NASA Astrophysics Data System (ADS)
Dutta, Jaideep; Kundu, Balaram
2018-05-01
This paper develops an analytical study of heat propagation in biological tissues under constant and variable heat flux at the skin surface, correlated with hyperthermia treatment. We impose two distinct oscillating boundary conditions relevant to practical biomedical engineering, while the initial condition is constructed as spatially dependent according to a realistic situation. We implement the Laplace transform method (LTM) and the Green's function (GF) method to solve the single-phase-lag (SPL) thermal wave model of the bioheat equation (TWMBHE). This work focuses on non-invasive therapy employing oscillating heat flux. The heat flux at the skin surface is considered in constant, sinusoidal, and cosine forms. A comparative study of the impact of the different heat-flux forms on the temperature field in living tissue shows that sinusoidal heat flux is more effective when the therapeutic heating time is long. Cosine heating is also applicable in hyperthermia treatment because of the precision of its thermal waveform. The results also emphasize that the phase angle and frequency of the oscillating heat flux must be selected carefully. Comparison with published experimental and mathematical studies yields temperature-distribution differences of 5.33% and 4.73%, respectively. A parametric analysis suggests an appropriate procedure for selecting the important design variables for effective heating in hyperthermia treatment.
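The effect of a cosine-form surface heat flux can be sketched with a 1D explicit finite-difference bioheat solve. This toy uses the classical (parabolic) Pennes equation rather than the paper's SPL thermal-wave model, representative literature-style tissue properties, and a hypothetical flux amplitude and frequency; it only illustrates the boundary-condition form, not the paper's analytical LTM/GF solutions.

```python
import numpy as np

# 1D explicit Pennes bioheat sketch (classical diffusion, not the SPL model);
# property values are representative, the flux parameters are hypothetical.
k, rho, c = 0.5, 1000.0, 4000.0        # W/m/K, kg/m^3, J/kg/K (tissue)
wb, rhob, cb = 5e-4, 1000.0, 4000.0    # blood perfusion rate and blood properties
Ta = 37.0                              # arterial temperature (C)
L, N = 0.02, 101                       # 2 cm slab, grid points
dx = L / (N - 1)
alpha = k / (rho * c)
dt = 0.4 * dx**2 / alpha               # explicit stability: dt <= dx^2 / (2 alpha)
q0, f = 1000.0, 0.05                   # flux amplitude (W/m^2) and frequency (Hz)

T = np.full(N, 37.0)
for n in range(2000):
    q = q0 * np.cos(2 * np.pi * f * n * dt)     # cosine surface heat flux
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    Tn = T + dt * (alpha * lap + wb * rhob * cb * (Ta - T) / (rho * c))
    Tn[0] = Tn[1] + q * dx / k                  # -k dT/dx = q at the skin surface
    Tn[-1] = Ta                                 # core held at body temperature
    T = Tn

skin_temp = T[0]                       # oscillates about body temperature
```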
B-spline goal-oriented error estimators for geometrically nonlinear rods
2011-04-01
[Only fragments of this record survive extraction. Recoverable details: error results are reported for the output functionals q2-q4 (linear, and nonlinear with the trigonometric functions sine and cosine) in all tests considered, for polynomial degrees p = 1, 2.]
Muon detector for the COSINE-100 experiment
NASA Astrophysics Data System (ADS)
Prihtiadi, H.; Adhikari, G.; Adhikari, P.; Barbosa de Souza, E.; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W.; Kang, W. G.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.
2018-02-01
The COSINE-100 dark matter search experiment has started taking physics data with the goal of performing an independent measurement of the annual modulation signal observed by DAMA/LIBRA. A muon detector was constructed by using plastic scintillator panels in the outermost layer of the shield surrounding the COSINE-100 detector. It detects cosmic ray muons in order to understand the impact of the muon annual modulation on dark matter analysis. Assembly and initial performance tests of each module have been performed at a ground laboratory. The installation of the detector in the Yangyang Underground Laboratory (Y2L) was completed in the summer of 2016. Using three months of data, the muon underground flux was measured to be 328 ± 1 (stat.) ± 10 (syst.) muons/m²/day. In this report, the assembly of the muon detector and the results from the analysis are presented.
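A flux quoted as counts per unit area per day, with a Poisson statistical error, reduces to simple arithmetic. The counts, panel area, and live time below are hypothetical round numbers chosen for illustration, not the COSINE-100 values.

```python
from math import sqrt

# hypothetical exposure (not the COSINE-100 numbers)
counts = 885_600            # detected muons over the full live time
area_m2 = 30.0              # effective detector area
live_days = 90.0            # live time (~three months)

flux = counts / (area_m2 * live_days)            # muons / m^2 / day
stat_err = sqrt(counts) / (area_m2 * live_days)  # Poisson statistical uncertainty
# systematic uncertainty would come from efficiency/area modeling, not from counts
```

With these toy inputs the flux is 328 muons/m²/day with a sub-muon statistical error, illustrating why the statistical term in such measurements is much smaller than the systematic one.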
Optimization of Darrieus turbines with an upwind and downwind momentum model
NASA Astrophysics Data System (ADS)
Loth, J. L.; McCoy, H.
1983-08-01
This paper presents a theoretical aerodynamic performance optimization for two-dimensional vertical-axis wind turbines. A momentum-type wake model is introduced with separate cosine-type interference coefficients for the upwind and downwind halves of the rotor. The cosine-type loading permits the rotor blades to become unloaded near the junction of the upwind and downwind rotor halves. Both the optimum and the off-design magnitudes of the interference coefficients are obtained by equating the drag on each rotor half to that on each of two cosine-loaded actuator discs in series. The values of the optimum rotor efficiency, solidity, and corresponding interference coefficients are obtained in a closed-form analytic solution by maximizing the power extracted from the downwind rotor half as well as from the entire rotor. A numerical solution was required when viscous effects were incorporated in the rotor optimization.
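The two-discs-in-series idea can be sketched numerically with the textbook tandem actuator-disc model (uniform rather than cosine loading, so this is a simplified stand-in for the paper's model): the downwind disc sees the wake velocity of the upwind disc, and the combined power coefficient is maximized over both induction factors.

```python
import numpy as np

# induction factors for the upwind (a1) and downwind (a2) actuator discs
a1 = np.linspace(0.0, 0.4, 401)[:, None]
a2 = np.linspace(0.0, 0.4, 401)[None, :]

cp_up = 4 * a1 * (1 - a1) ** 2        # classical single-disc power coefficient
wake = 1 - 2 * a1                     # wake / free-stream velocity ratio behind disc 1
cp_down = 4 * a2 * (1 - a2) ** 2 * wake ** 3   # disc 2 referenced to the free stream

cp_total = cp_up + cp_down
i, j = np.unravel_index(np.argmax(cp_total), cp_total.shape)
best_cp = float(cp_total[i, j])       # known tandem-disc optimum is 16/25 = 0.64
```

The optimum (Cp = 0.64 at unequal loadings) exceeds the single-disc Betz limit of 16/27 ≈ 0.593, which is the motivation for treating the upwind and downwind rotor halves with separate interference coefficients.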
Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
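The LR-JNQD idea, a linear regression from handcrafted features plus the quantization step to a JND level, can be sketched on synthetic data. Everything below is hypothetical: the two features (block variance, mean gradient), the "ground-truth" linear rule, and its coefficients are illustrative stand-ins for the paper's actual features and training data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
variance = rng.uniform(0, 50, n)       # handcrafted feature 1: block variance (hypothetical)
gradient = rng.uniform(0, 10, n)       # handcrafted feature 2: mean gradient (hypothetical)
qstep = rng.uniform(8, 64, n)          # quantization step size
# synthetic "ground-truth" JND levels: a hypothetical linear rule plus noise
jnd = 0.05 * variance + 0.2 * gradient + 0.1 * qstep + rng.normal(0.0, 0.1, n)

# least-squares fit of the linear-regression JNQD model
X = np.column_stack([variance, gradient, qstep, np.ones(n)])
w, *_ = np.linalg.lstsq(X, jnd, rcond=None)

def predict_jnd(var, grad, q):
    # JND level automatically adapts to the quantization step q
    return w[0] * var + w[1] * grad + w[2] * q + w[3]
```

The key property mirrored here is that the predicted JND level scales with the quantization step, which is what lets the preprocessor suppress only distortions the encoder would hide anyway.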
Average Cosine Meter and High Spectral Resolution Measurements at the Marine Light Mixed Layer Site.
1994-09-30
[Only fragments of this record survive extraction. Recoverable details: an electromechanical release was used in this novel and inexpensive method to eliminate shadowing from the ship; the ACM measured irradiance at 490 nm using cosine collectors, about three times more accurately than traditional methods; a mathematical simulation of the absorption coefficient of phytoplankton was derived.]
Azimuthal decorrelation of jets widely separated in rapidity in pp collisions at √{s}=7 TeV
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; Krammer, M.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Cornelis, T.; de Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Ochesanu, S.; Rougny, R.; van de Klundert, M.; van Haevermaet, H.; van Mechelen, P.; van Remortel, N.; van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; de Bruyn, I.; Deroover, K.; Heracleous, N.; Keaveney, J.; Lowette, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; van Doninck, W.; van Mulders, P.; van Onsem, G. P.; van Parijs, I.; Barria, P.; Caillol, C.; Clerbaux, B.; de Lentdecker, G.; Delannoy, H.; Fasanella, G.; Favart, L.; Gay, A. P. R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Maerschalk, T.; Marinov, A.; Perniè, L.; Randle-Conde, A.; Reis, T.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Beernaert, K.; Benucci, L.; Cimmino, A.; Crucy, S.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; McCartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva, S.; Sigamani, M.; Strobbe, N.; Tytgat, M.; van Driessche, W.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Castello, R.; Caudron, A.; Ceard, L.; da Silveira, G. G.; Delaere, C.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Mertens, A.; Nuttens, C.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Beliy, N.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, G. 
A.; Brito, L.; Correa Martins Junior, M.; Hamer, M.; Hensel, C.; Mora Herrera, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; da Costa, E. M.; de Jesus Damiao, D.; de Oliveira Martins, C.; Fonseca de Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; de Souza Santos, A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Shaheen, S. M.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Zou, W.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Micanovic, S.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M.; El-Khateeb, E.; Elkafrawy, T.; Mohamed, A.; Radi, A.; Salama, E.; Calpas, B.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Dahms, T.; Davignon, O.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Lisniak, S.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Merlin, J. A.; Skovpen, K.; van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Xiao, H.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Edelhoff, M.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schulte, J. 
F.; Verlage, T.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behnke, O.; Behrens, U.; Bell, A. J.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Trippkewitz, K. D.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Gonzalez, D.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Junkes, A.; Klanner, R.; Kogler, R.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Nowatschin, D.; Ott, J.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schwandt, J.; Seidel, M.; Sola, V.; Stadie, H.; Steinbrück, G.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; Colombo, F.; de Boer, W.; Descroix, A.; Dierlamm, A.; Fink, S.; Frensch, F.; Giffels, M.; Gilbert, A.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Kornmayer, A.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, T.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hazi, A.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Szillasi, Z.; Bartók, M.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Mal, P.; Mandal, K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. 
B.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutta, S.; Jain, Sa.; Majumdar, N.; Modak, A.; Mondal, K.; Mukherjee, S.; Mukhopadhyay, S.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Mahakud, B.; Maity, M.; Majumder, G.; Mazumdar, K.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sarkar, T.; Sudhakar, K.; Sur, N.; Sutar, B.; Wickramage, N.; Chauhan, S.; Dube, S.; Sharma, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; de Filippis, N.; de Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Travaglini, R.; Cappello, G.; Chiorboli, M.; Costa, S.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Fanzago, F.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Maron, G.; Meneguzzo, A. T.; Montecassiano, F.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Ventura, S.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Spiezia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. 
G.; Barone, L.; Cavallari, F.; D'Imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Mariotti, C.; Maselli, S.; Mazza, G.; Migliore, E.; Monaco, V.; Monteil, E.; Musich, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Zanetti, A.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Sakharov, A.; Son, D. C.; Brochero Cifuentes, J. A.; Kim, H.; Kim, T. J.; Ryu, M. S.; Song, S.; Choi, S.; Go, Y.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Ali, M. A. B. Md; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Casimiro Linares, E.; Castilla-Valdez, H.; de La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão da Cruz E Silva, C.; di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Leonardo, N.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. 
V.; Vinogradov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Ershov, A.; Gribushin, A.; Khein, L.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Lukina, O.; Myagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; de La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro de Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Palencia Cortezon, E.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras de Saa, J. R.; de Castro Manzano, P.; Duarte Campderros, J.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Berruti, G. 
M.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Colafranceschi, S.; D'Alfonso, M.; D'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; de Gruttola, M.; de Guio, F.; de Roeck, A.; de Visscher, S.; di Marco, E.; Dobson, M.; Dordevic, M.; Dorney, B.; Du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kirschenmann, H.; Kortelainen, M. J.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Magini, N.; Malgeri, L.; Mannelli, M.; Martelli, A.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Nemallapudi, M. V.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Piparo, D.; Racz, A.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Sharma, A.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Triossi, A.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Buchmann, M. A.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Eller, P.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz Del Arbol, P.; Masciovecchio, M.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Starodumov, A.; Takahashi, M.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. 
F.; Chiochia, V.; de Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Ronga, F. J.; Salerno, D.; Yang, Y.; Cardaci, M.; Chen, K. H.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Yu, S. S.; Kumar, Arun; Bartek, R.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Grundler, U.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Petrakou, E.; Tsai, J. F.; Tzeng, Y. M.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Cerci, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Sunar Cerci, D.; Topakli, H.; Vergili, M.; Zorbilmez, C.; Akin, I. V.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Albayrak, E. A.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, T.; Cankocak, K.; Sen, S.; Vardarlı, F. I.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-Storey, S.; Senkin, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Thomas, L.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Worm, S. 
D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Cripps, N.; Dauncey, P.; Davies, G.; de Wit, A.; Della Negra, M.; Dunne, P.; Elwood, A.; Ferguson, W.; Fulcher, J.; Futyan, D.; Hall, G.; Iles, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Gastler, D.; Lawson, P.; Rankin, D.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Zou, D.; Alimena, J.; Berry, E.; Bhattacharya, S.; Cutts, D.; Dhingra, N.; Ferapontov, A.; Garabedian, A.; Hakala, J.; Heintz, U.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Sinthuprasith, T.; Syarif, R.; Breedon, R.; Breto, G.; Calderon de La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Paneva, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Wei, H.; Wimpenny, S.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. 
T.; Holzner, A.; Kelley, R.; Klein, D.; Letts, J.; MacNeill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Barge, D.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Incandela, J.; Justus, C.; McColl, N.; Mullin, S. D.; Richman, J.; Stuart, D.; Suarez, I.; To, W.; West, C.; Yoo, J.; Anderson, D.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Pierini, M.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Nauenberg, U.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Sun, W.; Tan, S. M.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Wittich, P.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Hu, Z.; Jindariani, S.; Johnson, M.; Joshi, U.; Jung, A. W.; Klima, B.; Kreis, B.; Kwan, S.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lopes de Sá, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. 
I.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Weber, H. A.; Whitbeck, A.; Yang, F.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; di Giovanni, G. P.; Field, R. D.; Furic, I. K.; Hugon, J.; Konigsberg, J.; Korytov, A.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Rossin, R.; Shchutska, L.; Snowball, M.; Sperka, D.; Wang, J.; Wang, S.; Yelton, J.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Hohlmann, M.; Kalakhety, H.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Silkworth, C.; Turner, P.; Varelas, N.; Wu, Z.; Zakaria, M.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tan, P.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. P.; Majumder, D.; Malek, M.; Murray, M.; Sanders, S.; Stringer, R.; Wang, Q.; Wood, J. 
S.; Ivanov, A.; Kaadze, K.; Khalil, S.; Makouski, M.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Lange, D.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Baty, A.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; McGinn, C.; Mironov, C.; Niu, X.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Dahmes, B.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Lazo-Flores, J.; Meier, F.; Monroy, J.; Ratnikov, F.; Siado, J. E.; Snow, G. R.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira de Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Hahn, K. A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Trovato, M.; Velasco, M.; Brinkerhoff, A.; Dev, N.; Hildreth, M.; Jessop, C.; Karmgard, D. 
J.; Kellams, N.; Lannon, K.; Lynch, S.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Pearson, T.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Kotov, K.; Ling, T. Y.; Liu, B.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Quan, X.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Malik, S.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Petrillo, G.; Verzetti, M.; Demortier, L.; Arora, S.; Barker, A.; Chou, J. 
P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Lath, A.; Nash, K.; Panwalkar, S.; Park, M.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Riley, G.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Dalchenko, M.; de Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Krutelyov, V.; Montalvo, R.; Mueller, R.; Osipenkov, I.; Pakhotin, Y.; Patel, R.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Undleeb, S.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Ni, H.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wolfe, E.; Wood, J.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Christian, A.; Dasu, S.; Dodd, L.; Duric, S.; Friis, E.; Gomber, B.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Ruggles, T.; Sarangi, T.; Savin, A.; Sharma, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.
2016-08-01
The decorrelation in the azimuthal angle between the most forward and the most backward jets (Mueller-Navelet jets) is measured in data collected in pp collisions with the CMS detector at the LHC at √s = 7 TeV. The measurement is presented in the form of distributions of azimuthal-angle differences, Δϕ, between the Mueller-Navelet jets, the average cosines of (π − Δϕ), 2(π − Δϕ), and 3(π − Δϕ), and ratios of these cosines. The jets are required to have transverse momenta, pT, in excess of 35 GeV and rapidities, |y|, of less than 4.7. The results are presented as a function of the rapidity separation, Δy, between the Mueller-Navelet jets, reaching Δy up to 9.4 for the first time. The results are compared to predictions of various Monte Carlo event generators and to analytical predictions based on the DGLAP and BFKL parton evolution schemes.
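The observable described above, the average cosines of n(π − Δϕ) and their ratios, can be sketched in a few lines of numpy. This is an illustrative computation only; the function name and the sample values are ours, not taken from the CMS analysis.

```python
import numpy as np

def average_cosines(dphi, orders=(1, 2, 3)):
    """Mean of cos(n*(pi - dphi)) over a sample of azimuthal-angle
    differences dphi (radians), one entry per Mueller-Navelet jet pair."""
    dphi = np.asarray(dphi, dtype=float)
    return {n: float(np.mean(np.cos(n * (np.pi - dphi)))) for n in orders}

# Perfectly back-to-back jets (dphi = pi) give <cos n(pi - dphi)> = 1 for
# every n; azimuthal decorrelation drives the average cosines toward 0.
c = average_cosines(np.full(50, np.pi))
ratio_21 = c[2] / c[1]   # one of the measured cosine ratios
```

In the measurement these averages are binned in the rapidity separation Δy; the sketch above corresponds to a single bin.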
Khachatryan, Vardan
2016-08-24
The decorrelation in the azimuthal angle between the most forward and the most backward jets (Mueller-Navelet jets) is measured in data collected in pp collisions with the CMS detector at the LHC at √s = 7 TeV. The measurement is presented in the form of distributions of azimuthal-angle differences, Δϕ, between the Mueller-Navelet jets, the average cosines of (π − Δϕ), 2(π − Δϕ), and 3(π − Δϕ), and ratios of these cosines. The jets are required to have transverse momenta, pT, in excess of 35 GeV and rapidities, |y|, of less than 4.7. The results are presented as a function of the rapidity separation, Δy, between the Mueller-Navelet jets, reaching Δy up to 9.4 for the first time. Lastly, the results are compared to predictions of various Monte Carlo event generators and to analytical predictions based on the DGLAP and BFKL parton evolution schemes.
Prediction of break-out sound from a rectangular cavity via an elastically mounted panel.
Wang, Gang; Li, Wen L; Du, Jingtao; Li, Wanyou
2016-02-01
The break-out sound from a cavity via an elastically mounted panel is predicted in this paper. The vibroacoustic system model is derived with the so-called spectro-geometric method, in which the solution over each sub-domain is invariably expressed as a modified Fourier series expansion. Unlike in traditional modal superposition methods, the continuity of the normal velocities is faithfully enforced on the interfaces between the flexible panel and the (interior and exterior) acoustic media. A fully coupled vibro-acoustic system is obtained by taking into account the strong coupling between the vibration of the elastic panel and the sound fields on both sides. The typically time-consuming evaluation of the quadruple integrals encountered in determining the sound power radiated from a panel is avoided by reducing them, via the discrete cosine transform, to a number of single integrals that are subsequently calculated analytically in closed form. Several numerical examples are presented to validate the system model, examine the effects of panel mounting conditions on the sound transmission, and demonstrate the dependence of the "measured" transmission loss on the size of the source room.
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a new signal modeling-based methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into the respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
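The idea of splitting an EEG trace into rhythm bands with DCT basis vectors can be illustrated with a simple DCT-domain band split: transform, zero the coefficients outside each band, and invert. This is a minimal sketch, not the paper's multirate filterbank; the band edges are the standard EEG conventions and everything else (names, sampling rate) is ours.

```python
import numpy as np
from scipy.fft import dct, idct

# Conventional EEG rhythm band edges in Hz (delta, theta, alpha, beta, gamma).
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 64)}

def dct_band_split(x, fs):
    """Split x (sampled at fs Hz) into rhythm bands by zeroing DCT-II
    coefficients outside each band; coefficient k represents frequency
    k*fs/(2N), so the bands partition the DCT spectrum."""
    n = len(x)
    X = dct(x, norm="ortho")
    freqs = np.arange(n) * fs / (2.0 * n)
    out = {}
    for name, (lo, hi) in BANDS.items():
        Xb = np.where((freqs >= lo) & (freqs < hi), X, 0.0)
        out[name] = idct(Xb, norm="ortho")
    return out

rng = np.random.default_rng(0)
fs = 128.0                      # assumed sampling rate
x = rng.standard_normal(512)    # stand-in for an EEG segment
bands = dct_band_split(x, fs)
recon = sum(bands.values())     # bands partition the spectrum exactly
```

Because the bands partition the DCT coefficients, summing the five band signals reconstructs the original segment, a convenient sanity check for any such decomposition.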
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, S.; Liao, S.; Elmer, T.
This paper analyzes heart rate (HR) information from physiological tracings collected with a remote millimeter wave (mmW) I-Q sensor for biometric monitoring applications. A parameter optimization method based on the nonlinear Levenberg-Marquardt algorithm is used. The mmW sensor works at 94 GHz and can detect the vital signs of a human subject from a few to tens of meters away. The reflected mmW signal is typically affected by respiration, body movement, background noise, and electronic system noise. Processing of the mmW radar signal is, thus, necessary to obtain the true HR. The down-converted received signal in this case consists of both the real part (I-branch) and the imaginary part (Q-branch), which can be considered as the cosine and sine of the received phase of the HR signal. Instead of fitting the converted phase angle signal, the method directly fits the real and imaginary parts of the HR signal, which circumvents the need for phase unwrapping. This is particularly useful when the SNR is low. Also, the method identifies both beat-to-beat HR and individual heartbeat magnitude, which is valuable for some medical diagnosis applications. The mean HR here is compared to that obtained using the discrete Fourier transform.
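The key point, fitting the I and Q branches directly rather than the unwrapped phase, can be sketched with scipy's Levenberg-Marquardt solver. This is a toy single-tone model under assumptions of our own (constant amplitude and frequency, noise-free synthetic data); the paper's actual signal model is richer.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_iq_heartbeat(t, i_sig, q_sig, f0):
    """Jointly fit I = A*cos(2*pi*f*t + phi) and Q = A*sin(2*pi*f*t + phi).
    Stacking both branches in one residual avoids phase unwrapping;
    method='lm' selects Levenberg-Marquardt."""
    def residuals(p):
        a, f, phi = p
        arg = 2.0 * np.pi * f * t + phi
        return np.concatenate([a * np.cos(arg) - i_sig,
                               a * np.sin(arg) - q_sig])
    sol = least_squares(residuals, x0=[1.0, f0, 0.0], method="lm")
    return sol.x  # amplitude, heart-rate frequency (Hz), phase

# Synthetic 75 beats-per-minute tracing (1.25 Hz), 5 s at 200 Hz.
t = np.linspace(0.0, 5.0, 1000)
true_a, true_f, true_phi = 0.8, 1.25, 0.3
i_sig = true_a * np.cos(2 * np.pi * true_f * t + true_phi)
q_sig = true_a * np.sin(2 * np.pi * true_f * t + true_phi)
a, f, phi = fit_iq_heartbeat(t, i_sig, q_sig, f0=1.2)
```

Starting the frequency guess within the main correlation lobe (here 1.2 Hz for a 1.25 Hz truth over a 5 s record) is what lets the local LM solver converge.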
Optimization of self-acting step thrust bearings for load capacity and stiffness.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1972-01-01
A linearized analysis of a finite-width rectangular step thrust bearing is presented. Dimensionless load capacity and stiffness are expressed in terms of a Fourier cosine series. The dimensionless load capacity and stiffness were found to be functions of the dimensionless bearing number, the pad length-to-width ratio, the film thickness ratio, the step location parameter, and the feed groove parameter. The equations obtained in the analysis were verified, and the assumptions imposed were substantiated, by comparing the results with an existing exact solution for the infinite-width bearing. A digital computer program was developed which determines the optimal bearing configuration for maximum load capacity or stiffness. Simple design curves are presented. Results are shown for both compressible and incompressible lubrication. Through a parameter transformation the results are directly usable in designing optimal step sector thrust bearings.
Wide band stepped frequency ground penetrating radar
Bashforth, M.B.; Gardner, D.; Patrick, D.; Lewallen, T.A.; Nammath, S.R.; Painter, K.D.; Vadnais, K.G.
1996-03-12
A wide band ground penetrating radar system is described embodying a method wherein a series of radio frequency signals is produced by a single radio frequency source and provided to a transmit antenna for transmission to a target and reflection therefrom to a receive antenna. A phase modulator modulates those portions of the radio frequency signals to be transmitted, and the reflected modulated signal is combined in a mixer with the original radio frequency signal to produce a resultant signal which is demodulated to produce a series of direct current voltage signals, the envelope of which forms a cosine-wave-shaped plot which is processed by a Fast Fourier Transform unit into frequency domain data wherein the position of a preponderant frequency is indicative of distance to the target and magnitude is indicative of the signature of the target. 6 figs.
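The range-from-FFT-peak idea in this stepped-frequency scheme can be sketched numerically: the DC voltage at frequency step f is proportional to cos(4πfR/c) for a point target at range R, so an FFT over the frequency steps peaks at a bin proportional to R. The parameter values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def range_from_fft(voltages, df):
    """FFT the stepped-frequency voltage samples; magnitude-spectrum bin k
    corresponds to range k*C/(2*N*df), so the preponderant bin gives the
    target distance."""
    n = len(voltages)
    spec = np.abs(np.fft.rfft(voltages))
    k = int(np.argmax(spec[1:])) + 1   # skip the DC bin
    return k * C / (2 * n * df)

# Simulate N frequency steps of df against a point target at range R
# (R chosen on an FFT bin; unambiguous range here is C/(2*df) = 75 m).
N, df, f0, R = 128, 2e6, 1e9, 23.4375
freqs = f0 + df * np.arange(N)
volts = np.cos(4 * np.pi * freqs * R / C)
est = range_from_fft(volts, df)
```

The range resolution of such a system is C/(2·N·df); targets between bins would show spectral leakage rather than a single clean peak.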
Liu, Sheng-jin; Yang, Huan; Wu, De-kang; Xu, Chun-xiang; Lin, Rui-chao; Tian, Jin-gai; Fang, Fang
2015-04-01
In the present paper, the FTIR fingerprint of Limonitum (a mineral Chinese medicine) was established, and the spectrograms of crude, processed, and adulterant samples were compared. Eighteen batches of Limonitum samples from different production areas were analyzed, and the angle cosine of the transmittance (%) at the common peaks was calculated to quantify the similarity of the FTIR fingerprints. The results showed that the similarity coefficients of the samples were all greater than 0.90. The processed samples revealed significant differences compared with the crude ones. This study analyzed the composition characteristics of Limonitum in the FTIR fingerprint, providing a simple and fast way to distinguish crude, processed, and counterfeit samples. The FTIR fingerprints provide a new method for evaluating the quality of Limonitum.
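The angle-cosine similarity used above is ordinary cosine similarity between two vectors of transmittance values sampled at the common peaks. A minimal sketch (the peak values below are invented placeholders, not Limonitum data):

```python
import numpy as np

def angle_cosine(a, b):
    """Angle cosine (cosine similarity) between two transmittance (%)
    profiles sampled at the same common peaks: 1.0 means identical shape."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical transmittance (%) at four common peaks.
crude = [72.1, 55.3, 80.4, 63.2]
processed = [70.8, 57.0, 78.9, 61.5]
sim = angle_cosine(crude, processed)
```

A value above 0.90, as reported for the eighteen batches, indicates that the spectra share the same overall peak pattern even if absolute intensities differ.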
Jiao, Yong; Zhang, Yu; Wang, Yu; Wang, Bei; Jin, Jing; Wang, Xingyu
2018-05-01
Multiset canonical correlation analysis (MsetCCA) has been successfully applied to optimize the reference signals by extracting common features from multiple sets of electroencephalogram (EEG) for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface applications. To avoid extracting possible noise components as common features, this study proposes a sophisticated extension of MsetCCA, called the multilayer correlation maximization (MCM) model, for further improving SSVEP recognition accuracy. MCM combines advantages of both CCA and MsetCCA by carrying out three layers of correlation maximization. The first layer extracts the stimulus frequency-related information using CCA between EEG samples and sine-cosine reference signals. The second layer learns reference signals by extracting the common features with MsetCCA. The third layer re-optimizes the reference signal set using CCA with the sine-cosine reference signals again. An experimental study was conducted to validate the effectiveness of the proposed MCM model in comparison with the standard CCA and MsetCCA algorithms. The superior performance of MCM demonstrates its promising potential for the development of an improved SSVEP-based brain-computer interface.
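The first layer, CCA between an EEG segment and sine-cosine reference signals at a candidate stimulus frequency, can be sketched directly: build the reference matrix, then take the first canonical correlation via QR and SVD. This is a generic illustration (sampling rate, harmonic count, and the synthetic signal are our assumptions), not the MCM implementation.

```python
import numpy as np

def first_canonical_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y
    (samples in rows): QR-orthonormalize each block, then take the top
    singular value of the cross product."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def sine_cosine_refs(f, fs, n_samples, n_harmonics=2):
    """Standard SSVEP reference set: sin/cos pairs at f and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

fs, n = 250.0, 500                       # 2 s of data at 250 Hz (assumed)
t = np.arange(n) / fs
eeg = np.sin(2 * np.pi * 10.0 * t + 0.7)  # synthetic 10 Hz SSVEP component
rho_10 = first_canonical_corr(eeg[:, None], sine_cosine_refs(10.0, fs, n))
rho_12 = first_canonical_corr(eeg[:, None], sine_cosine_refs(12.0, fs, n))
```

Recognition then amounts to picking the candidate frequency with the largest canonical correlation; here the matched 10 Hz reference scores near 1 while the mismatched 12 Hz reference scores near 0.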
High-Aperture-Efficiency Horn Antenna
NASA Technical Reports Server (NTRS)
Pickens, Wesley; Hoppe, Daniel; Epp, Larry; Kahn, Abdur
2005-01-01
A horn antenna (see Figure 1) has been developed to satisfy requirements specific to its use as an essential component of a high-efficiency Ka-band amplifier. The combination of the horn antenna and an associated microstrip-patch antenna array is required to function as a spatial power divider that feeds 25 monolithic microwave integrated-circuit (MMIC) power amplifiers. The foregoing requirement translates to, among other things, a further requirement that the horn produce a uniform, vertically polarized electromagnetic field in its aperture, feeding the patches identically so that the MMICs can operate at maximum efficiency. The horn is fed from a square waveguide of 5.9436-mm-square cross section via a transition piece. The horn features cosine-tapered, dielectric-filled longitudinal corrugations in its vertical walls to create a hard boundary condition; this aspect of the horn design causes the field in the horn aperture to be substantially vertically polarized and nearly uniform in amplitude and phase. As used here, "cosine-tapered" signifies that the depth of the corrugations is a cosine function of distance along the horn. Preliminary results of finite-element simulations have shown that, by virtue of the cosine taper, the impedance response of this horn can be expected to be better than has been achieved previously in a similar horn having linearly tapered dielectric-filled longitudinal corrugations. It is possible to create a hard boundary condition by use of a single dielectric-filled corrugation in each affected wall, but better results can be obtained with more corrugations. Simulations were performed for a one-corrugation and a three-corrugation cosine-taper design. For comparison, a simulation was also performed for a linear-taper design (see Figure 2). The three-corrugation design was chosen to minimize the cost of fabrication while still affording acceptably high performance.
Future designs using more corrugations per wavelength are expected to provide better field responses and, hence, greater aperture efficiencies.
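One plausible reading of "the depth of the corrugations is a cosine function of distance along the horn" is a raised-cosine profile rising smoothly from zero depth at the throat to full depth at the aperture. The exact profile and the dimensions below are our assumptions for illustration, not the published design.

```python
import numpy as np

def corrugation_depth(z, length, d_max):
    """Assumed cosine taper: depth rises smoothly from 0 at the throat
    (z = 0) to d_max at the aperture (z = length), with zero slope at
    both ends, unlike a linear taper's abrupt start and stop."""
    z = np.asarray(z, dtype=float)
    return d_max * (1.0 - np.cos(np.pi * z / length)) / 2.0

L_horn, d_max = 0.10, 1.5e-3   # 10 cm horn, 1.5 mm max depth (illustrative)
z = np.linspace(0.0, L_horn, 11)
depth = corrugation_depth(z, L_horn, d_max)
```

The zero-slope end conditions are one intuition for why a cosine taper yields a gentler impedance transition than a linear taper.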
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
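The quadrature trick at the heart of the technique is the identity sin(x)·cos(ωt) + cos(x)·sin(ωt) = sin(x + ωt): counterphase-modulating the two halftoned base images in temporal quadrature yields a single drifting grating. A minimal numpy sketch (one-dimensional, full precision rather than 1-bit halftones):

```python
import numpy as np

# One spatial period of the sine- and cosine-phase base images.
n = 256
x = 2 * np.pi * np.arange(n) / n
sin_img, cos_img = np.sin(x), np.cos(x)

def frame(t, tf=1.0):
    """Weighted sum of the two base images with quadrature temporal
    weights; the result is a grating drifting at temporal frequency tf.
    In the hardware these weights live in the color lookup table."""
    w = 2 * np.pi * tf * t
    return np.cos(w) * sin_img + np.sin(w) * cos_img  # = sin(x + w)

drifted = frame(0.1)   # grating phase-shifted by 0.2*pi radians
```

Because only the two scalar weights change per frame, animation requires lookup-table updates rather than image transfers, which is exactly what makes the method fast.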
Sines and Cosines. Part 2 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1993-01-01
The Law of Sines and the Law of Cosines are introduced and demonstrated in this 'Project Mathematics' series video using both film footage and computer animation. This video deals primarily with the mathematical field of Trigonometry and explains how these laws were developed and their applications. One significant use is geographical and geological surveying. This includes both the triangulation method and the spirit leveling method. With these methods, it is shown how the height of the tallest mountain in the world, Mt. Everest, was determined.
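The two laws demonstrated in the video solve a triangle from partial measurements, which is the core of the surveying applications mentioned. A small worked sketch using the Law of Cosines (the 3-4-5 triangle is our example, not from the video):

```python
import math

def third_side(a, b, gamma):
    """Law of Cosines: length of the side opposite the angle gamma
    enclosed between sides a and b."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

def angles_from_sides(a, b, c):
    """Recover all three angles from the three sides (Law of Cosines,
    rearranged); they must sum to pi."""
    alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
    beta = math.acos((a * a + c * c - b * b) / (2 * a * c))
    return alpha, beta, math.pi - alpha - beta

# A 3-4-5 triangle: the angle opposite the longest side is a right angle.
c = third_side(3.0, 4.0, math.pi / 2)
alpha, beta, gamma = angles_from_sides(3.0, 4.0, c)
```

Triangulation surveys chain such solutions together: measured baselines and angles determine unmeasurable sides, such as the distance to, and ultimately the height of, a peak like Everest.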
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of rate-1/2 coding onto a 4-ary CPM is emphasized, with short-constraint-length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase in bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
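The double- and triple-raised-cosine schemes named above are CPM variants whose frequency pulse is a raised cosine spread over L symbol intervals (L = 2 and L = 3 respectively). A sketch of the standard LRC pulse, normalized so its integral is 1/2 as usual for CPM (the function name and grid are ours):

```python
import numpy as np

def lrc_pulse(t, L, T=1.0):
    """L-raised-cosine (LRC) CPM frequency pulse: nonzero on [0, L*T] and
    normalized so its time integral equals 1/2, the standard CPM phase
    normalization. L=2 is 'double-raised-cosine', L=3 'triple'."""
    t = np.asarray(t, dtype=float)
    g = (1.0 - np.cos(2.0 * np.pi * t / (L * T))) / (2.0 * L * T)
    return np.where((t >= 0) & (t <= L * T), g, 0.0)

# Check the normalization for the double-raised-cosine pulse (L = 2).
t = np.linspace(0.0, 2.0, 20001)
dt = t[1] - t[0]
area = float(np.sum(lrc_pulse(t, L=2)) * dt)
```

Spreading the pulse over more symbol intervals (larger L) smooths the phase trajectory and narrows the spectrum, which is one side of the power-bandwidth tradeoff the paper studies.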
Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.
Sun, Dong; Zhao, Daomu
2005-08-01
By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.
Star tracker error analysis: Roll-to-pitch nonorthogonality
NASA Technical Reports Server (NTRS)
Corson, R. W.
1979-01-01
An error analysis is described for an anomaly isolated in the star tracker software line-of-sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star-tracker-measured end-point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of the error.
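The failure mode described, a cosine exceeding one because the "unit" vectors are slightly over-length, and the usual numerical defenses against it, can be shown in a few lines. This sketch is our illustration of the arithmetic, not the shuttle software.

```python
import numpy as np

def los_rate_cosine(u1, u2):
    """Cosine of the angle between two measured end-point vectors.
    Dividing by the actual norms guards against |u| > 1 from measurement
    or roundoff error, and clipping keeps the result a legal cosine."""
    u1 = np.asarray(u1, dtype=float)
    u2 = np.asarray(u2, dtype=float)
    c = np.dot(u1, u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))
    return float(np.clip(c, -1.0, 1.0))

# A slightly over-length "unit" vector pushes the raw dot product past 1,
# which would make the subsequent arccos fail; the normalized form cannot.
u = np.array([1.000004, 0.0, 0.0])
raw = float(np.dot(u, u))
c = los_rate_cosine(u, u)
```

In the anomaly described above, the over-length vectors traced back to the roll-to-pitch nonorthogonality correction rather than to roundoff, but the symptom, |cos| > 1, is the same.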
Sines and Cosines. Part 3 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1994-01-01
In this 'Project Mathematics' series video, the addition formulas for sines and cosines are explained and their real-life applications are demonstrated. Both film footage and computer animation are used. Several mathematical concepts are discussed, including: Ptolemy's theorem on quadrilaterals; the difference between a central angle and an inscribed angle; sines and chord lengths; special angles; subtraction formulas; and an application to simple harmonic motion. A brief history of the city of Alexandria, its mathematicians, and their contributions to the field of mathematics is also given.
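The addition formulas discussed in the video can be checked numerically in a few lines:

```python
import math

def sin_sum(a, b):
    # sin(a + b) = sin a cos b + cos a sin b
    return math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b)

def cos_sum(a, b):
    # cos(a + b) = cos a cos b - sin a sin b
    return math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b)

# Arbitrary test angles (radians); both identities should hold exactly.
a, b = 0.7, 1.9
```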
Application of DFT Filter Banks and Cosine Modulated Filter Banks in Filtering
NASA Technical Reports Server (NTRS)
Lin, Yuan-Pei; Vaidyanathan, P. P.
1994-01-01
None given. This is a proposal for a paper to be presented at APCCAS '94 in Taipei, Taiwan. (From outline): This work is organized as follows: Sec. II is devoted to the construction of the new 2m-channel under-decimated DFT filter bank. Implementation and complexity of this DFT filter bank are discussed therein. In a similar manner, the new 2m-channel cosine modulated filter bank is discussed in Sec. III. Design examples are given in Sec. IV.
Canonic FFT flow graphs for real-valued even/odd symmetric inputs
NASA Astrophysics Data System (ADS)
Lao, Yingjie; Parhi, Keshab K.
2017-12-01
Canonic real-valued fast Fourier transform (RFFT) algorithms have been proposed to reduce arithmetic complexity by eliminating redundancies. In a canonic N-point RFFT, the number of signal values at each stage is canonic with respect to the transform size N. The major advantage of canonic RFFTs is that they require the fewest butterfly operations and only real datapaths when mapped to architectures. In this paper, we consider FFT computations whose inputs are not only real but also even/odd symmetric, which indeed lead to the well-known discrete cosine and sine transforms (DCTs and DSTs). Novel algorithms for generating the flow graphs of canonic RFFTs with even/odd symmetric inputs are proposed. It is shown that the proposed algorithms lead to canonic structures with N/2 + 1 signal values at each stage for an N-point real even symmetric FFT (REFFT), or N/2 - 1 signal values at each stage for an N-point real odd symmetric FFT (ROFFT). In order to remove butterfly operations, several twiddle factor transformations are proposed in this paper. We also discuss the design of canonic REFFTs for any composite length. Performance of the canonic REFFT/ROFFT is also discussed. It is shown that the flow graph of a canonic REFFT/ROFFT has fewer interconnections, fewer butterfly operations, and fewer twiddle factor operations than prior works.
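The symmetry properties that the canonic REFFT/ROFFT exploit can be verified directly: the DFT of a real even-symmetric sequence is purely real (cosine terms only), while that of a real odd-symmetric sequence is purely imaginary (sine terms only). A quick numerical check of this fact, not the authors' flow-graph algorithm:

```python
import numpy as np

# Even-symmetric real input: x[(N - n) % N] == x[n]. Its DFT is purely real,
# which is why an N-point REFFT needs only N/2 + 1 signal values per stage.
x_even = np.array([1.0, 2.0, -0.5, 3.0, 4.0, 3.0, -0.5, 2.0])
X_even = np.fft.fft(x_even)

# Odd-symmetric real input: x[(N - n) % N] == -x[n] (with x[0] = x[N/2] = 0).
# Its DFT is purely imaginary, leaving N/2 - 1 independent values per stage.
x_odd = np.array([0.0, 1.0, -2.0, 0.5, 0.0, -0.5, 2.0, -1.0])
X_odd = np.fft.fft(x_odd)
```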
A drawdown solution for constant-flux pumping in a confined anticline aquifer
NASA Astrophysics Data System (ADS)
Chen, Yen-Ju; Yeh, Hund-Der; Kuo, Chia-Chen
2011-08-01
An anticline, known as a convex-upward fold in layers of rock, is commonly formed during lateral compression and may be selected as a potential site for carbon sequestration. A mathematical model is developed in this study for describing the steady-state drawdown distribution in an anticline aquifer in response to constant-flux pumping. The topographical shape of the anticline is mimicked by three successive blocks. The solution is obtained by applying the infinite Fourier transform and the finite Fourier cosine transform in each block and enforcing hydraulic continuity between the blocks. Simulated results reveal that a thin-limbed or narrow-ridged anticline produces a much greater head drop in the ridge zone. For a well of constant pumping rate, the dimensionless drawdown around the well increases with decreasing well screen length and/or aquifer anisotropy ratio. An examination of the effect of well location on the drawdown reveals that a partially penetrating well located at the top-middle of the ridge zone produces the largest drawdown. Simulating the flow in an anticline aquifer with MODFLOW results in slightly smaller drawdown values in most regions when compared with those predicted by the present solution. The present solution can also be used to simulate the flow in a slab-shaped aquifer or a hillslope aquifer. It can be applied to determine aquifer parameters if coupled with an optimization scheme, and to provide a basis for selecting potential carbon sequestration sites in the future.
Random Matrix Theory in molecular dynamics analysis.
Palese, Luigi Leonardo
2015-01-01
It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of the correlation matrices makes it easy to differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of cosine-shaped low-index projections on the principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
Khan, Ilyas; Shah, Nehad Ali; Dennis, L C C
2017-03-15
This scientific report investigates the heat transfer analysis in mixed convection flow of Maxwell fluid over an oscillating vertical plate with constant wall temperature. The problem is modelled in terms of coupled partial differential equations with initial and boundary conditions. Some suitable non-dimensional variables are introduced in order to transform the governing problem into dimensionless form. The resulting problem is solved via Laplace transform method and exact solutions for velocity, shear stress and temperature are obtained. These solutions are greatly influenced with the variation of embedded parameters which include the Prandtl number and Grashof number for various times. In the absence of free convection, the corresponding solutions representing the mechanical part of velocity reduced to the well known solutions in the literature. The total velocity is presented as a sum of both cosine and sine velocities. The unsteady velocity in each case is arranged in the form of transient and post transient parts. It is found that the post transient parts are independent of time. The solutions corresponding to Newtonian fluids are recovered as a special case and comparison between Newtonian fluid and Maxwell fluid is shown graphically.
Dynamic Forms. Part 1: Functions
NASA Technical Reports Server (NTRS)
Meyer, George; Smith, G. Allan
1993-01-01
The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives, to various orders, of the large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis and nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations on scalar dynamic forms are developed first. Then vector and matrix operations and transformations between parameterizations of rotations are developed at the next level of the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
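The core idea, carrying a variable together with its time derivatives through nested function calls, can be sketched in a few lines. This toy class truncates at first order and is only illustrative of the formalism, not the FORTRAN implementation described above:

```python
import math

class DForm:
    """Minimal 'dynamic form': a scalar value paired with its first time
    derivative. Arithmetic propagates the derivative through composite
    functions via the chain rule (truncated to order 1 for brevity; the
    formalism itself carries derivatives to arbitrary order)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        # Product rule: (fg)' = f'g + fg'
        return DForm(self.val * other.val,
                     self.dot * other.val + self.val * other.dot)

    def sin(self):
        # Chain rule: (sin f)' = cos(f) * f'
        return DForm(math.sin(self.val), math.cos(self.val) * self.dot)

# d/dt sin(t^2) at t = 0.5 is 2*t*cos(t^2) = cos(0.25)
t = DForm(0.5, 1.0)
result = (t * t).sin()
```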
How the laser-induced ionization of transparent solids can be suppressed
NASA Astrophysics Data System (ADS)
Gruzdev, Vitaly
2013-12-01
A capability to suppress laser-induced ionization of dielectric crystals in a controlled and predictable way could substantially improve the laser damage threshold of optical materials. Traditional models that employ the Keldysh formula do not predict any suppression of the ionization because of the oversimplified description of the electronic energy bands underlying the formula. To close this gap, we performed numerical simulations of the time evolution of the conduction-band electron density for a realistic cosine model of the electronic bands characteristic of wide-band-gap cubic crystals. The simulations include contributions from photo-ionization (evaluated by the Keldysh formula and by the formula for the cosine band of volume-centered cubic crystals) and from avalanche ionization (evaluated by the Drude model). The maximum conduction-band electron density is evaluated from a single rate equation as a function of the peak intensity of femtosecond laser pulses for alkali halide crystals. Results obtained for high-intensity femtosecond laser pulses demonstrate that the ionization can be suppressed by a proper choice of laser parameters. In the case of the Keldysh formula, the peak electron density exhibits saturation followed by a gradual increase. For the cosine band, the electron density increases with irradiance within the low-intensity multiphoton regime and begins to decrease as the intensity approaches the threshold of the strong singularity of the ionization rate characteristic of the cosine band. These trends are explained by specific modifications of the band structure by the electric field of the laser pulses.
The tall letters represent the highly conserved bases in DNA binding sites of several prokaryotic repressors and activators. Conservation is strongest where major grooves of the double helical DNA (represented by crests of a cosine wave) face the protein. This shows that conservation analysis alone can be used to predict the face of DNA that contacts the proteins.
Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image
NASA Astrophysics Data System (ADS)
Chao, LIANG; Ling-yao, JIA; Dong-cheng, SHI
2017-01-01
This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. Kernel Fisher analysis combined with a kernel matrix is proposed to select dominant features. A nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.
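A plain cosine-distance nearest neighbor step can be sketched as follows. The whitening transform and Gabor feature extraction are assumed to have been applied upstream, and the feature vectors and subject labels here are invented for illustration:

```python
import math

def cosine_distance(u, v):
    # 1 - cos(angle): 0 for identical directions, up to 2 for opposite ones.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def nearest_neighbor(query, gallery):
    """gallery: list of (feature_vector, label) pairs; returns the label
    of the sample with the smallest cosine distance to the query."""
    return min(gallery, key=lambda s: cosine_distance(query, s[0]))[1]

# Hypothetical (already whitened) gait features for two enrolled subjects:
gallery = [((1.0, 0.1), "subject_A"), ((0.1, 1.0), "subject_B")]
label = nearest_neighbor((0.9, 0.2), gallery)
```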
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete fast sine/cosine transform can be applied to simulate two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, and the schemes lend themselves with equal ease to the higher-dimensional case. Numerical simulations are implemented, and the results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
Similarities between principal components of protein dynamics and random diffusion
NASA Astrophysics Data System (ADS)
Hess, Berk
2000-12-01
Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
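The resemblance to cosines can be quantified with the "cosine content" of a principal component, following the idea described above: project the component onto a half-period cosine and normalize. This is a generic sketch of that measure applied to a high-dimensional random walk; the half-period cosine form for the first mode is an assumption consistent with the analysis:

```python
import numpy as np

def cosine_content(p):
    """Cosine content of mode 1: compares the time series p(t) with a
    half-period cosine over the trajectory length; 1.0 means a perfect
    cosine shape, and Cauchy-Schwarz bounds the value by 1."""
    T = len(p)
    c = np.cos(np.pi * (np.arange(T) + 0.5) / T)
    return 2.0 / T * np.dot(c, p) ** 2 / np.dot(p, p)

# 20-dimensional random diffusion (cumulative sum of Gaussian steps).
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal((500, 20)), axis=0)
walk -= walk.mean(axis=0)

# First principal component via SVD of the centered trajectory.
pc1 = np.linalg.svd(walk, full_matrices=False)[0][:, 0]
content = cosine_content(pc1)
```

For random diffusion, `content` is typically close to 1, which is exactly the noise signature the abstract warns about.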
Cosine-Gaussian Schell-model sources.
Mei, Zhangrong; Korotkova, Olga
2013-07-15
We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.
Krienin, Frank
1990-01-01
A magnetic field generating device provides a useful magnetic field within a specific region, while keeping nearby surrounding regions virtually field free. By placing an appropriate current density along a flux line of the source, the stray field effects of the generator may be contained. One current carrying structure may support a truncated cosine distribution, and it may be surrounded by a current structure which follows a flux line that would occur in a full coaxial double cosine distribution. Strong magnetic fields may be generated and contained using superconducting cables to approximate the required current surfaces.
Use Correlation Coefficients in Gaussian Process to Train Stable ELM Models
2015-05-22
...confidence interval of prediction y′. There are two parameters that need to be determined in BELM: σ^2_N and α > 0. BELM effectively controls the over... similarity between h(u) and h(v) is measured with the vectorial angle cosine rather than the distance between them. The increase of vector dimension will not cause the... vectorial angle cosine approaches 0. Then, we can know that Q... I with the increase of L. This reduces the chance of over-fitting.
NASA Technical Reports Server (NTRS)
Currie, J. R.; Kissel, R. R.
1986-01-01
A system for the measurement of shaft angles is disclosed wherein a synchro resolver is sequentially pulsed and, alternately, a sine and then a cosine representative voltage output of it are sampled. Two succeeding outputs of like type, sine or cosine (V sub S1, V sub S2), are averaged and algebraically related to the opposite-type output pulse (V sub c) occurring between the averaged pulses to provide a precise indication of the angle of a shaft coupled to the resolver at the instant of the intermediately occurring pulse (V sub c).
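The averaging-and-relating step can be sketched as follows. The function name and the use of a two-argument arctangent are illustrative assumptions, not the patent's circuitry; the point is that averaging the two like-type samples cancels the first-order error from shaft motion between pulses:

```python
import math

def shaft_angle(v_s1, v_s2, v_c, sine_pair=True):
    """Shaft angle from three sequential resolver samples: two like-type
    outputs (v_s1, v_s2) averaged and related to the opposite-type pulse
    v_c taken between them (a sketch of the disclosed method)."""
    avg = 0.5 * (v_s1 + v_s2)
    if sine_pair:                    # averaged samples are sine, middle pulse cosine
        return math.atan2(avg, v_c)
    return math.atan2(v_c, avg)      # averaged samples are cosine

# Slowly rotating shaft near 30 deg: sine sampled just before and after
# the cosine pulse; averaging cancels the first-order motion error.
angle = shaft_angle(math.sin(math.radians(29.0)),
                    math.sin(math.radians(31.0)),
                    math.cos(math.radians(30.0)))
```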
Method of detecting system function by measuring frequency response
NASA Technical Reports Server (NTRS)
Morrison, John L. (Inventor); Morrison, William H. (Inventor); Christophersen, Jon P. (Inventor)
2012-01-01
Real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. An excitation current to a battery is a sum of equal amplitude sine waves of frequencies that are octave harmonics spread over a range of interest. A sample frequency is also octave and harmonically related to all frequencies in the sum. The time profile of this signal has a duration that is a few periods of the lowest frequency. The voltage response of the battery, average deleted, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave and harmonically related, a simple algorithm, FST, processes the time record by rectifying relative to the sine and cosine of each frequency. Another algorithm yields real and imaginary components for each frequency.
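The rectification step described above, multiplying the time record by the sine and cosine of each known excitation frequency and averaging, can be sketched as below. This is a simplified single-frequency synchronous detector with an invented test signal, not the FST algorithm itself:

```python
import math

def synchronous_detect(signal, freq, sample_rate):
    """Real and imaginary response at one excitation frequency, obtained
    by rectifying the time record against cos and sin of that frequency.
    Exact when the record spans an integer number of periods."""
    n = len(signal)
    re = im = 0.0
    for k, v in enumerate(signal):
        phase = 2.0 * math.pi * freq * k / sample_rate
        re += v * math.cos(phase)
        im += v * math.sin(phase)
    return 2.0 * re / n, 2.0 * im / n

# Synthetic "response": two octave-harmonic tones, as in the excitation
# scheme described above; each is recovered independently.
fs, n = 1024.0, 4096
sig = [1.5 * math.sin(2 * math.pi * 8 * k / fs)
       + 0.5 * math.cos(2 * math.pi * 16 * k / fs) for k in range(n)]
amp8 = synchronous_detect(sig, 8.0, fs)    # expect (~0.0, ~1.5)
amp16 = synchronous_detect(sig, 16.0, fs)  # expect (~0.5, ~0.0)
```

Octave-harmonic frequencies and an octave-related sample rate make the components exactly orthogonal over the record, which is why the simple rectification suffices.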
Wide band stepped frequency ground penetrating radar
Bashforth, Michael B.; Gardner, Duane; Patrick, Douglas; Lewallen, Tricia A.; Nammath, Sharyn R.; Painter, Kelly D.; Vadnais, Kenneth G.
1996-01-01
A wide band ground penetrating radar system (10) embodying a method wherein a series of radio frequency signals (60) is produced by a single radio frequency source (16) and provided to a transmit antenna (26) for transmission to a target (54) and reflection therefrom to a receive antenna (28). A phase modulator (18) modulates that portion of the radio frequency signals (62) to be transmitted, and the reflected modulated signal (62) is combined in a mixer (34) with the original radio frequency signal (60) to produce a resultant signal (53), which is demodulated to produce a series of direct current voltage signals (66). The envelope of these signals forms a cosine-wave-shaped plot (68), which is processed by a Fast Fourier Transform unit (44) into frequency domain data (70), wherein the position of a preponderant frequency is indicative of distance to the target (54) and its magnitude is indicative of the signature of the target (54).
Detecting double compression of audio signal
NASA Astrophysics Data System (ADS)
Yang, Rui; Shi, Yun Q.; Huang, Jiwu
2010-01-01
MP3 is the most popular audio format nowadays in our daily life; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Also, audio recordings in digital recorders can be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
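The first-digit feature vector can be sketched as follows. The quantized MDCT coefficients are assumed to come from an MP3 decoder; the values below are invented for illustration:

```python
def first_digit_histogram(coeffs):
    """Normalized distribution of the leading digits 1-9 of the nonzero
    quantized coefficients; a sketch of the feature vector fed to the
    SVM classifier (the MDCT itself is computed elsewhere)."""
    counts = [0] * 9
    total = 0
    for c in coeffs:
        c = abs(int(c))
        if c == 0:
            continue           # zero coefficients carry no leading digit
        while c >= 10:
            c //= 10           # strip trailing digits to get the first one
        counts[c - 1] += 1
        total += 1
    return [n / total for n in counts]

# Hypothetical quantized MDCT coefficients from one audio frame:
hist = first_digit_histogram([12, -3, 905, 7, 0, 118, 2, 41])
```

Double compression perturbs this distribution away from its single-compression shape, which is what the classifier exploits.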
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
Quantum communication through an unmodulated spin chain.
Bose, Sougato
2003-11-14
We propose a scheme for using an unmodulated and unmeasured spin chain as a channel for short distance quantum communications. The state to be transmitted is placed on one spin of the chain and received later on a distant spin with some fidelity. We first obtain simple expressions for the fidelity of quantum state transfer and the amount of entanglement sharable between any two sites of an arbitrary Heisenberg ferromagnet using our scheme. We then apply this to the realizable case of an open ended chain with nearest neighbor interactions. The fidelity of quantum state transfer is obtained as an inverse discrete cosine transform and as a Bessel function series. We find that in a reasonable time, a qubit can be directly transmitted with better than classical fidelity across the full length of chains of up to 80 spins. Moreover, our channel allows distillable entanglement to be shared over arbitrary distances.
Some fundamentals regarding kinematics and generalized forces for multibody dynamics
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.
1990-01-01
In order to illustrate the various forms in which generalized forces can arise from diverse subsystem analyses in multibody dynamics, intrinsic dynamical equations for the rotational dynamics of a rigid body are derived from Hamilton's principle. Two types of generalized forces are derived: (1) those associated with the virtual rotation vector in some orthogonal basis, and (2) those associated with variations of the generalized coordinates. Since a physical or kinematical result (such as a frequency or a specific direction cosine) cannot depend on this choice, a 'blind' coupling of two models in which generalized forces are calculated in different ways would be wrong. Both models should either express the virtual rotation in the same basis, as in method 1, or use common rotational coordinates and their variations, as in method 2. Alternatively, the generalized forces and coordinates of one model may be transformed to those of the other.
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier that has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, no study has examined the classification performance of k-NN under different distance functions, especially for medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowsky, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, the cosine and Euclidean (and Minkowsky) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For medical domain datasets including categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
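Three of the compared distance functions can be written compactly. The chi-square form below is one common variant for non-negative features and is an assumption on our part; the paper does not spell out its exact formula:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_dist(u, v):
    # 1 - cosine similarity; smaller means more similar directions.
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (math.sqrt(sum(a * a for a in u)) *
                        math.sqrt(sum(b * b for b in v)))

def chi_square(u, v):
    # One common chi-square distance for non-negative features
    # (a modeling assumption here, not necessarily the paper's form).
    return sum((a - b) ** 2 / (a + b) for a, b in zip(u, v) if a + b > 0)

# Two hypothetical feature vectors:
u, v = (1.0, 2.0, 3.0), (2.0, 2.0, 1.0)
```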
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite-differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs makes it possible to reconstruct an image from significantly undersampled k-space after transformation. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the sparsely represented image, and the recovery algorithms in the literature are not capable of fully removing these artifacts. It is therefore necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It is illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
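The singular value thresholding step can be sketched generically: soft-threshold the singular values so that only the dominant ones survive, yielding a low-rank denoised estimate. This is a textbook form of SVT on a synthetic matrix, not the authors' exact MRI pipeline:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: soft-threshold the singular values of A
    by tau, keeping only the dominant ones. This is the generic low-rank
    denoising step described above."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s = np.maximum(s - tau, 0.0)        # shrink; small (noise) values vanish
    return U @ np.diag(s) @ Vt

# Noisy rank-1 matrix: thresholding recovers a nearly rank-1 estimate.
rng = np.random.default_rng(1)
u = rng.standard_normal((30, 1))
v = rng.standard_normal((1, 20))
noisy = 10.0 * u @ v + 0.1 * rng.standard_normal((30, 20))
denoised = svt(noisy, 5.0)
```

The threshold trades a small shrinkage bias on the dominant component for complete suppression of the noise singular values.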
NASA Astrophysics Data System (ADS)
Tang, Miaomiao; Zhao, Daomu; Li, Xinzhong; Wang, Jingge
2018-01-01
Recently, we introduced a new class of radially polarized beams with a multi-cosine Gaussian Schell-model (MCGSM) correlation function based on partially coherent theory (Tang et al., 2017). In this manuscript, we extend that work to study statistical properties such as the spectral density, the degree of coherence, the degree of polarization, and the state of polarization of the beam propagating in isotropic turbulence with a non-Kolmogorov power spectrum. Analytical formulas for the cross-spectral density matrix elements of a radially polarized MCGSM beam in non-Kolmogorov turbulence are derived. Numerical results show that the lattice-like intensity pattern of the beam, which remains propagation-invariant in free space, is destroyed by the turbulence once the beam has propagated sufficiently far from the source. It is also shown that the polarization properties are mainly affected by the source correlation functions, while changes in the turbulence statistics play a relatively small role. In addition, the polarization state exhibits a self-splitting property, and each beamlet evolves into a radially polarized structure upon propagation.
Analysis of nulling phase functions suitable to image plane coronagraphy
NASA Astrophysics Data System (ADS)
Hénault, François; Carlotti, Alexis; Vérinaud, Christophe
2016-07-01
Coronagraphy is a very efficient technique for identifying and characterizing extra-solar planets orbiting in the habitable zone of their parent star, especially in a space environment. An important family of coronagraphs is based on phase plates located at an intermediate image plane of the optical system that spread the starlight outside the "Lyot" exit pupil plane of the instrument. In this communication we present a set of candidate phase functions generating a central null at the Lyot plane, and study how it propagates to the image plane of the coronagraph. These functions include linear azimuthal phase ramps (the well-known optical vortex), azimuthally cosine-modulated phase profiles, and circular phase gratings. Numerical simulations of the expected null depth, inner working angle, sensitivity to pointing errors, effect of a central obscuration located at the pupil or image planes, and effective throughput including image mask and Lyot stop transmissions are presented and discussed. The preliminary conclusion is that azimuthal cosine functions appear to be an interesting alternative to the classical optical vortex of integer topological charge.
Impact of conditions at start-up on thermovibrational convective flow.
Melnikov, D E; Shevtsova, V M; Legros, J C
2008-11-01
The development of thermovibrational convection in a cubic cell filled with a high Prandtl number liquid (isopropanol) is studied. Direct nonlinear simulations are performed by solving the three-dimensional Navier-Stokes equations in the Boussinesq approximation. The cell is subjected to high-frequency periodic oscillations perpendicular to the applied temperature gradient under zero gravity. Two types of vibrations are imposed: either a sine or a cosine function of time. It is shown that the initial vibrational phase plays a significant role in the transient behavior of the thermovibrational convective flow. Such knowledge is important for correctly interpreting short-duration experiments performed in microgravity, among which the most accessible are drop towers (approximately 5 s) and parabolic flights (approximately 20 s). It is found that under sine vibrations the flow reaches steady state within less than one thermal time; under cosine acceleration, this time is twice as long. For cosine excitations, the Nusselt number is approximately 10 times smaller than in the sine case; moreover, in the cosine case the Nusselt number oscillates at double the frequency. However, at steady state, the time-averaged and oscillatory characteristics of the flow are independent of the vibrational start-up. The only feature that always distinguishes the two cases is the phase difference between the velocity, temperature, and accelerations. We have found that, due to the nonlinear response of the system to the imposed vibrations, the phase shift between velocity and temperature is never exactly equal to π/2, at least in weightlessness. Thus, heat transport exists from the onset of vibrations, although it might be weak.
Kochiya, Yuko; Hirabayashi, Akari; Ichimaru, Yuhei
2017-09-16
To evaluate the dynamic nature of nocturnal heart rate variability, RR intervals recorded with a wearable heart rate sensor were analyzed using the Least Square Cosine Spectrum Method. Six 1-year-old infants participated in the study. A wearable heart rate sensor was placed on their chest to measure RR intervals and 3-axis acceleration. Heartbeat time series were analyzed every 30 s using the Least Square Cosine Spectrum Method, and an original parameter quantifying the regularity of the respiratory-related heart rate rhythm was extracted, referred to as "RA" (RA-COSPEC: Respiratory Area obtained by COSPEC). The RA value is higher when a cosine curve fits the original data series well. The time-sequential changes of RA showed cyclic changes with a significant rhythm during the night. The mean cycle length of RA was 70 ± 15 min, shorter than the young adults' cycle in our previous study. When RA exceeded a threshold of 3, HR was significantly lower than when RA was below 3. The regularity of the heart rate rhythm showed dynamic changes during the night in 1-year-old infants. The significant decrease of HR at times of higher RA suggests increased parasympathetic activity. We suspect that higher RA reflects a regular respiratory pattern during the night. This analysis system may be useful for quantitative assessment of the regularity and dynamic changes of nocturnal heart rate variability in infants.
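A fixed-frequency least-squares cosine fit of the kind underlying a COSPEC-style analysis can be sketched as follows. The closed form below is exact only when the analysis window spans an integer number of cycles; the 0.4 Hz "respiratory" frequency and the RR baseline values are illustrative, not the study's data:

```python
import math

def cosine_fit(y, dt, freq):
    """Least-squares fit of y(t) ~ mean + a*cos(2*pi*f*t) + b*sin(2*pi*f*t).

    The closed form (discrete Fourier projection) is exact when the window
    spans an integer number of cycles of freq; returns (mean, amplitude)."""
    n = len(y)
    w = 2 * math.pi * freq
    a = sum(v * math.cos(w * k * dt) for k, v in enumerate(y)) * 2 / n
    b = sum(v * math.sin(w * k * dt) for k, v in enumerate(y)) * 2 / n
    mean = sum(y) / n
    amplitude = math.sqrt(a * a + b * b)
    return mean, amplitude

# Synthetic RR-like series: 0.4 Hz modulation on an 800 ms baseline,
# sampled every 0.1 s for 30 s (12 whole cycles in the window).
dt, f = 0.1, 0.4
y = [800 + 25 * math.cos(2 * math.pi * f * k * dt + 0.3) for k in range(300)]
mean, amp = cosine_fit(y, dt, f)
```

A regularity index in the spirit of RA could then be built from how large `amp` is relative to the residual variation, larger when a cosine curve fits the series well.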
Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin
2017-01-01
This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
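The encoding layer's idea of turning a real-valued feature into a spatio-temporal spike pattern can be sketched for the Gaussian receptive field scheme; the square cosine variant is not reproduced here. This is one common form of latency coding, with all parameters (neuron count, receptive-field width, time window) illustrative rather than the paper's settings:

```python
import math

def gaussian_rf_spike_times(x, n_neurons=8, x_min=0.0, x_max=1.0, t_max=10.0):
    """Encode a real value as first-spike times of a neuron population.

    Each neuron has a Gaussian receptive field over [x_min, x_max]; stronger
    activation maps to an earlier spike (activation 1 fires at t = 0)."""
    width = (x_max - x_min) / (n_neurons - 1)
    times = []
    for i in range(n_neurons):
        center = x_min + i * width
        activation = math.exp(-((x - center) ** 2) / (2 * (0.5 * width) ** 2))
        times.append((1.0 - activation) * t_max)
    return times

times = gaussian_rf_spike_times(0.5)
```

Neurons whose centers are closest to the input value fire first, which is exactly the early-spike/large-weight-change regime SpikeTemp exploits.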
Low frequency AC waveform generator
Bilharz, Oscar W.
1986-01-01
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
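The sine-synthesis step described above (raise the triangle wave's absolute value to a power, multiply by the triangle wave, scale, and subtract) can be sketched in software. The patent does not state its scaling constants, so the power 2 and scale 0.5 below are illustrative choices that yield the classical cubic shaper, accurate to about 2%:

```python
import math

def triangle(phase):
    """Triangle wave in [-1, 1], phase in cycles, aligned with sin(2*pi*phase)."""
    p = phase % 1.0
    if p < 0.25:
        return 4 * p
    if p < 0.75:
        return 2 - 4 * p
    return 4 * p - 4

def shaped_sine(phase):
    """Approximate sine: subtract a scaled |t|^2 * t term from the triangle
    wave t. Equivalent to (3t - t^3)/2, a standard cubic sine shaper."""
    t = triangle(phase)
    return t - 0.5 * (abs(t) ** 2 * t - t)
```

In the analog circuit the same arithmetic is done with an absolute-value stage, a multiplier, and a summing amplifier instead of code.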
Selecting a proper design period for heliostat field layout optimization using Campo code
NASA Astrophysics Data System (ADS)
Saghafifar, Mohammad; Gadalla, Mohamed
2016-09-01
In this paper, different approaches are considered for calculating the cosine factor, which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, and it is therefore more reliable to select one of the recommended time-averaged methods to optimize the field layout. Optimum annual weighted efficiencies for the heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
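As a geometric illustration of the cosine factor: a heliostat's mirror normal must bisect the sun direction and the heliostat-to-receiver direction, so the incidence angle is half the angle between those two rays, and the cosine efficiency is the cosine of that incidence angle. This is a simplified sketch of the geometry, not the Campo code's implementation:

```python
import math

def cosine_factor(sun_dir, to_receiver):
    """Cosine efficiency of a heliostat given two unit vectors: the direction
    toward the sun and the direction from the heliostat to the receiver.

    The mirror normal bisects the two rays, so the incidence angle is half
    the angle between them."""
    dot = sum(s * r for s, r in zip(sun_dir, to_receiver))
    theta = math.acos(max(-1.0, min(1.0, dot)))  # angle between the two rays
    return math.cos(theta / 2)                   # incidence angle = theta / 2
```

An instantaneous design-point method evaluates this at one sun position; the time-averaged methods in the paper average it over many sun positions before optimizing the layout.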
Content fragile watermarking for H.264/AVC video authentication
NASA Astrophysics Data System (ADS)
Ait Sadi, K.; Guessoum, A.; Bouridane, A.; Khelifi, F.
2017-04-01
The discrete cosine transform is exploited in this work to generate the authentication data, which are treated as a fragile watermark embedded in the motion vectors. The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content-fragile video watermarking method using an independent authentication of each group of pictures (GOP) within the video. This technique uses robust visual features, extracted from the video, pertaining to the set of selected macroblocks (MBs) which hold the best partition mode in a tree-structured motion compensation process. An additional degree of security is offered by the proposed method through the use of the keyed function HMAC-SHA-256 and the random choice of candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves accurate authentication with high fragility and fidelity whilst maintaining the original bitrate and perceptual quality. Furthermore, its ability to detect tampered frames under spatial, temporal and colour manipulations is confirmed.
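The keyed authentication step can be sketched with Python's standard library. The byte string below stands in for the visual features the paper extracts from the selected macroblocks; the feature-extraction itself is not reproduced:

```python
import hashlib
import hmac

def authentication_watermark(gop_features: bytes, key: bytes) -> bytes:
    """Keyed digest of per-GOP feature bytes, usable as a fragile watermark."""
    return hmac.new(key, gop_features, hashlib.sha256).digest()

def verify(gop_features: bytes, key: bytes, embedded: bytes) -> bool:
    """Recompute the digest from the received GOP and compare in constant time."""
    return hmac.compare_digest(authentication_watermark(gop_features, key), embedded)

tag = authentication_watermark(b"gop-features", b"secret-key")
```

Because HMAC-SHA-256 changes completely when a single feature bit changes, any tampering inside the GOP breaks verification, which is exactly the fragility the scheme relies on.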
Personal recognition using hand shape and texture.
Kumar, Ajay; Zhang, David
2006-08-01
This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can represent the hand shape and improve performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be obtained directly from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has paid attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate model for identification. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.
Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.
Cheng, Ching-Hsue; Liu, Wei-Xiang
2018-05-28
Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic resonance imaging is one of the most advanced methods of medical imaging and is especially suitable for brain scans. From the literature, although automatic segmentation is less laborious and time-consuming, it is restricted to several specific types of images. In addition, hybrid segmentation techniques mitigate the shortcomings of single segmentation methods. Therefore, this study proposes a hybrid segmentation combining a rough set classifier with the wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image-processing method that enhances the accuracy of brain disease classification. In the first stage, the proposed hybrid segmentation algorithms segment the brain ROI (region of interest). In the second stage, wavelet packets are used to decompose the image and calculate feature values. In the final stage, the rough set classifier identifies the degenerative brain disease. For verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare it with the TV-seg (total variation segmentation) algorithm, the discrete cosine transform, and the listed classifiers. Overall, the results indicate that the proposed method outperforms the listed methods.
Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H
2014-01-01
A wide interest has been observed in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is exploited with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively.
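For reference, the transform such hardware implements is the DCT-II. A plain, unoptimized software definition, useful as a golden model when checking a multiplier-less CSD/CSE implementation, is:

```python
import math

def dct_2(x):
    """Orthonormal DCT-II of a length-N sequence (O(N^2) reference version).

    A hardware design like the one above produces the same coefficients
    using shifts and adds in place of general multiplications."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

With the orthonormal scaling used here the transform preserves energy, which makes bit-exact and PSNR comparisons against an optimized datapath straightforward.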
Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla
2015-01-01
Liver segmentation continues to remain a major challenge, largely due to the liver's intensity similarity with surrounding anatomical structures (stomach, kidney, and heart), high noise levels and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low-contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as w, Δt, z, α, μ, α1, and α2, play imperative roles, thus their values are precisely selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method. PMID:26158101
NASA Astrophysics Data System (ADS)
Li, Minkang; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Lu, Yancong; Xiang, Changcheng; Xiang, XianSong
2016-10-01
Large-sized gratings are essential optical elements in laser fusion and space astronomy facilities. Scanning beam interference lithography is an effective method for fabricating large-sized gratings. To minimize the nonlinear phase written into the photoresist, the image grating must be measured so that the left and right beams can be adjusted to interfere at their waists. In this paper, we propose a new method of wavefront metrology based on phase-stepping interferometry. First, a transmission grating is used to combine the two beams into an interferogram, which is recorded by a charge-coupled device (CCD). Phase steps are introduced by moving the grating with a linear stage monitored by a laser interferometer, and a series of interferograms are recorded as the displacement is measured. Second, to eliminate the tilt and piston error during the phase stepping, the iterative least-squares phase shift method is implemented to obtain the wrapped phase. Third, we use the discrete cosine transform least-squares method to unwrap the phase map. Experimental results indicate that the measured wavefront has a nonlinear phase of around 0.05λ at 404.7 nm. Finally, once the image grating is acquired, we simulate the print error written into the photoresist.
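Phase-stepping interferometry recovers the wrapped phase from several intensity frames. The paper uses an iterative least-squares variant that also removes tilt and piston drift; the classical fixed four-step algorithm below shows only the core relation, with synthetic intensities standing in for CCD frames:

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four interferograms stepped by pi/2.

    With I_k = A + B*cos(phi + k*pi/2):
      I3 - I1 = 2*B*sin(phi),  I0 - I2 = 2*B*cos(phi),
    so atan2 of the two differences recovers phi regardless of A and B."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthesize four frames for a known phase and recover it.
phi = 1.2
frames = [10 + 5 * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

The recovered value is wrapped to (-pi, pi]; unwrapping it over a full phase map is what the DCT least-squares step in the paper then performs.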
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) at an approximately equal compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while ensuring encoding and image quality, which can fully meet the needs of storage and transmission of color images in daily life.
Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo
2012-10-01
We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
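The zero-crossing idea can be sketched directly: for a sinusoid, successive zero crossings are half a period apart, so the mean crossing interval gives the frequency. This toy version handles a single clean tone; the paper applies the analysis per band of a cosine-modulated filter bank and uses interval statistics for robustness:

```python
import math

def zero_crossing_freq(samples, dt):
    """Estimate a sinusoid's frequency from zero-crossing intervals.

    Each interval between successive crossings is half a period, so
    f ~ 1 / (2 * mean interval). Crossing times are refined by linear
    interpolation between the two samples that bracket the sign change."""
    crossings = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a == 0 or a * b < 0:
            t = (i - 1) * dt + dt * a / (a - b) if a != b else i * dt
            crossings.append(t)
    intervals = [t2 - t1 for t1, t2 in zip(crossings, crossings[1:])]
    mean = sum(intervals) / len(intervals)
    return 1.0 / (2.0 * mean)

# A 3 Hz tone sampled at 1 kHz for 1 s.
samples = [math.sin(2 * math.pi * 3.0 * k * 0.001) for k in range(1000)]
f_est = zero_crossing_freq(samples, 0.001)
```

In the FDOCT setting each recovered frequency corresponds to a layer depth, so consistent intervals within a band indicate a genuine reflector rather than noise.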
Siddiqui, M. F.; Reza, A. W.; Kanesan, J.; Ramiah, H.
2014-01-01
A wide interest has been observed in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is exploited with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique has significant advantages over recent well-known methods in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively. PMID:25133249
Cosine problem in EPRL/FK spinfoam model
NASA Astrophysics Data System (ADS)
Vojinović, Marko
2014-01-01
We calculate the classical limit effective action of the EPRL/FK spinfoam model of quantum gravity coupled to matter fields. By employing the standard QFT background field method adapted to the spinfoam setting, we find that the model has many different classical effective actions. Most notably, these include the ordinary Einstein-Hilbert action coupled to matter, but also an action which describes antigravity. All those multiple classical limits appear as a consequence of the fact that the EPRL/FK vertex amplitude has cosine-like large spin asymptotics. We discuss some possible ways to eliminate the unwanted classical limits.
Someswara Rao, Chinta; Viswanadha Raju, S.
2016-01-01
In this paper, we consider the correlation coefficient, rank correlation coefficient and cosine similarity measures for evaluating the similarity between Homo sapiens and monkeys. We used DNA chromosomes of genome-wide genes to determine the correlation between chromosomal content and evolutionary relationship. The similarity between H. sapiens and monkeys is measured for a total of 210 chromosomes related to 10 species. The similarity measures of these different species show the relationship between H. sapiens and monkeys. This similarity will be helpful in theft identification, maternity identification, disease identification, etc. PMID:26981409
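The cosine similarity used here is the standard vector measure; a minimal sketch follows, where applying it to numeric feature profiles of two chromosomes (e.g. subsequence counts) is our illustration, not the paper's exact feature choice:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length numeric vectors:
    dot(u, v) / (|u| * |v|), which is 1 for parallel vectors and 0 for
    orthogonal ones, independent of vector magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Because the measure ignores magnitude, two sequences with proportional feature counts score 1 even if one is much longer, which is convenient when chromosomes differ in length.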
Someswara Rao, Chinta; Viswanadha Raju, S
2016-03-01
In this paper, we consider the correlation coefficient, rank correlation coefficient and cosine similarity measures for evaluating the similarity between Homo sapiens and monkeys. We used DNA chromosomes of genome-wide genes to determine the correlation between chromosomal content and evolutionary relationship. The similarity between H. sapiens and monkeys is measured for a total of 210 chromosomes related to 10 species. The similarity measures of these different species show the relationship between H. sapiens and monkeys. This similarity will be helpful in theft identification, maternity identification, disease identification, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandford, M.T. II; Bradley, J.N.; Handel, T.G.
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for the IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy,' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Watson, Andrew B.
1994-01-01
The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
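The quantization step this model optimizes works per coefficient: each DCT coefficient is divided by its quantization-table entry and rounded, so the reconstruction error is bounded by half the table entry. A minimal one-dimensional illustration (the values are ours, not a standard table):

```python
def quantize(coeffs, qtable):
    """JPEG-style quantization: divide each DCT coefficient by its table
    entry and round; larger entries discard more detail."""
    return [round(c / q) for c, q in zip(coeffs, qtable)]

def dequantize(levels, qtable):
    """Decoder side: multiply the integer levels back by the table entries."""
    return [l * q for l, q in zip(levels, qtable)]

coeffs = [100.0, 50.0, -30.0]
qtable = [16, 11, 10]
levels = quantize(coeffs, qtable)
recon = dequantize(levels, qtable)
```

A perceptually optimized table would choose each entry so that the resulting error stays just below the model's visibility threshold for that DCT frequency.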
Automated surgical skill assessment in RMIS training.
Zia, Aneeq; Essa, Irfan
2018-05-01
Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons' schedules and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating 'task highlights' which can give surgeons more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data: sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions, achieving an average Spearman correlation coefficient of up to 0.61. Moreover, we provide an analysis of how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.
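Of the four holistic features, approximate entropy is the least standard to implement; a plain-Python version of the usual definition follows. The fixed absolute tolerance `r` is a simplification (in practice `r` is often set relative to the signal's standard deviation, and the paper's exact settings are not given here):

```python
import math
import random

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy: lower values mean a more regular signal.

    phi(m) averages the log-fraction of length-m templates lying within
    Chebyshev distance r of each template; ApEn = phi(m) - phi(m + 1)."""
    n = len(series)

    def phi(m):
        templates = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

random.seed(0)
regular = [i % 2 for i in range(100)]            # perfectly alternating
irregular = [random.random() for _ in range(100)]  # noisy sequence
```

Smooth, well-controlled instrument motion yields lower ApEn than jerky motion, which is why it serves as a skill feature on kinematic channels.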
NASA Astrophysics Data System (ADS)
Sandford, Maxwell T., II; Bradley, Jonathan N.; Handel, Theodore G.
1996-01-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C-programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft™ bitmap (BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
Luykx, Jurjen J.; Bakker, Steven C.; Lentjes, Eef; Boks, Marco P. M.; van Geloven, Nan; Eijkemans, Marinus J. C.; Janson, Esther; Strengman, Eric; de Lepper, Anne M.; Westenberg, Herman; Klopper, Kai E.; Hoorn, Hendrik J.; Gelissen, Harry P. M. M.; Jordan, Julian; Tolenaar, Noortje M.; van Dongen, Eric P. A.; Michel, Bregt; Abramovic, Lucija; Horvath, Steve; Kappen, Teus; Bruins, Peter; Keijzers, Peter; Borgdorff, Paul; Ophoff, Roel A.; Kahn, René S.
2012-01-01
Background Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. Methodology/Principal Findings Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MHPG) were measured using high-performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season-of-sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a pattern similar to the cosine in both the season-of-sampling and season-of-birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. Conclusion In subjects without mental illness, serotonin (5-HT) turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall. PMID:22312427
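A minimal cosinor-style fit conveys the idea of modeling seasonality with a cosine. The closed form below assumes 12 evenly spaced monthly means (so the regressors are orthogonal); the study's quantile cosine and LOESS models are considerably more elaborate, and the synthetic series is invented for illustration.

```python
import math

def cosinor(y_by_month):
    """Fit y(t) = M + b1*cos(wt) + b2*sin(wt) over one 12-month cycle.
    With evenly spaced months the least-squares fit has a closed form."""
    n = len(y_by_month)
    w = 2 * math.pi / n
    mesor = sum(y_by_month) / n
    b1 = 2 / n * sum(y * math.cos(w * t) for t, y in enumerate(y_by_month))
    b2 = 2 / n * sum(y * math.sin(w * t) for t, y in enumerate(y_by_month))
    amplitude = math.hypot(b1, b2)
    acrophase = math.atan2(-b2, b1)   # phase of the peak, in radians
    return mesor, amplitude, acrophase

# synthetic 5-HIAA-like monthly means peaking in month 3 (April if month 0 = January)
data = [140 + 32 * math.cos(2 * math.pi * (t - 3) / 12) for t in range(12)]
m, a, phi = cosinor(data)
peak_month = round(-phi * 12 / (2 * math.pi)) % 12
```

The fitted mesor, amplitude and acrophase recover the parameters used to generate the series.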
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
1999-08-01
Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the chromatic (λ) and the reciprocal (ν) aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λ_h1h2h3 = λ_111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ_111, the phase-velocity factor ν_λ = λ_h1h2h3/λ_111 becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In the reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low-bit-rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield, habitat monitoring, etc. is presented, where a voluminous amount of image data has to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. This algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without using any floating-point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. This algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, and it is suitable for embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
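The zonal idea, computing only a small low-frequency corner of the 8x8 DCT instead of all 64 coefficients, can be sketched as follows. This is a floating-point illustration of the zonal selection only; the paper's binary zonal DCT and Golomb-Rice coder avoid floating point entirely.

```python
import math

def zonal_dct_8x8(block, zone=3):
    """Compute only the 8x8 DCT-II coefficients with u + v < zone
    (the low-frequency zone), skipping the rest to save energy."""
    def c(k):
        return math.sqrt(0.5) if k == 0 else 1.0
    out = {}
    for u in range(zone):
        for v in range(zone - u):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[(u, v)] = 0.25 * c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]   # constant block: only the DC term survives
coeffs = zonal_dct_8x8(flat)
```

With zone = 3 only 6 of the 64 coefficients are ever computed, which is where the energy saving comes from.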
Broadband W-band Rapid Frequency Sweep Considerations for Fourier Transform EPR.
Strangeway, Robert A; Hyde, James S; Camenisch, Theodore G; Sidabras, Jason W; Mett, Richard R; Anderson, James R; Ratke, Joseph J; Subczynski, Witold K
2017-12-01
A multi-arm W-band (94 GHz) electron paramagnetic resonance spectrometer that incorporates a loop-gap resonator with high bandwidth is described. A goal of the instrumental development is detection of free induction decay following rapid sweep of the microwave frequency across the spectrum of a nitroxide radical at physiological temperature, which is expected to lead to a capability for Fourier transform electron paramagnetic resonance. Progress toward this goal is a theme of the paper. Because of the low Q-value of the loop-gap resonator, it was found necessary to develop a new type of automatic frequency control, which is described in an appendix. Path-length equalization, which is accomplished at the intermediate frequency of 59 GHz, is analyzed. A directional coupler is favored for separation of incident and reflected power between the bridge and the loop-gap resonator. Microwave leakage of this coupler is analyzed. An oversize waveguide with hyperbolic-cosine tapers couples the bridge to the loop-gap resonator, which results in reduced microwave power and signal loss. Benchmark sensitivity data are provided. The most extensive application of the instrument to date has been the measurement of T1 values using pulse saturation recovery. An overview of that work is provided.
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative to reconstruct the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior-point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
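The role of l1 regularization in producing sparse model updates can be illustrated with a small iterative soft-thresholding (ISTA) sketch. Note the paper solves the equivalent quadratic program with a primal-dual interior-point method; ISTA is a simpler stand-in for the same l1-regularized least-squares problem, and the tiny underdetermined system below is invented for illustration.

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def ista(A, b, lam=0.05, step=0.3, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(xi - step * gi, step * lam) for xi, gi in zip(x, g)]
    return x

# underdetermined toy system whose sparsest solution is x* ~ (0, 3, 0)
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
b = [0.0, 3.0]
x = ista(A, b)
```

The l1 penalty drives the superfluous components to exactly zero, which is the mechanism that preserves sharp fronts instead of smearing them.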
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transformation (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior-point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
NASA Astrophysics Data System (ADS)
Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef
2016-12-01
Homodyne detection with only a single detector represents a promising approach in interferometric applications which enables a significant reduction of the optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA - Field Programmable Gate Array - or an SoC - System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation and mixing necessary for reconstruction of the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow displacements to be measured at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited the advantage of pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe-counting interferometry closer to industrial applications due to its optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
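The atan2-plus-unwrap stage of such a processing chain can be sketched as below. For illustration the cosine quadrature is taken as ideal here, whereas the methods above reconstruct it from the single detector signal by demodulation and mixing; the synthetic phase ramp is an assumption.

```python
import math

def unwrap(phases):
    """Simple 1-D phase unwrapping: remove 2*pi jumps between
    consecutive samples so the phase grows continuously."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

# synthetic quadrature pair for a steadily moving target
true_phase = [0.05 * t for t in range(300)]
sine = [math.sin(p) for p in true_phase]
cosine = [math.cos(p) for p in true_phase]

wrapped = [math.atan2(s, c) for s, c in zip(sine, cosine)]  # phase modulo 2*pi
phase = unwrap(wrapped)                                     # continuous phase
```

The unwrapped phase, scaled by the wavelength, gives the displacement; fringe counting is implicit in the accumulated multiples of 2π.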
Dhir, L; Habib, N E; Monro, D M; Rakshit, S
2010-06-01
The purpose of this study was to investigate the effect of cataract surgery and pupil dilation on iris pattern recognition for personal authentication. Prospective non-comparative cohort study. Images of 15 subjects were captured before (enrolment), and 5, 10, and 15 min after instillation of mydriatics before routine cataract surgery. Further images were captured 2 weeks after cataract surgery. Enrolled and test images (after pupillary dilation and after cataract surgery) were segmented to extract the iris. This was then unwrapped onto a rectangular format for normalization, and a novel method using the Discrete Cosine Transform was applied to encode the image into binary bits. The numerical difference between two iris codes (the Hamming distance, HD) was calculated. The HD between identification and enrolment codes was used as a score and was compared with a confidence threshold for specific equipment, giving a match or non-match result. The Correct Recognition Rate (CRR) and Equal Error Rate (EER) were calculated to analyse overall system performance. After cataract surgery, perfect identification and verification were achieved, with zero false acceptance rate, zero false rejection rate, and zero EER. After pupillary dilation, non-elastic deformation occurs, and a CRR of 86.67% and an EER of 9.33% were obtained. Conventional circle-based localization methods are inadequate. Matching reliability decreases considerably with increasing pupillary dilation. Cataract surgery has no effect on iris pattern recognition, whereas pupil dilation may be used to defeat an iris-based authentication system.
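The fractional Hamming distance score used for matching can be written down directly. The 0.32 threshold and the short codes below are hypothetical; real iris codes run to thousands of bits and use masks to exclude occluded regions (eyelids, reflections), which this sketch omits.

```python
def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 1]   # captured after dilation, say

hd = hamming_distance(enrolled, probe)
match = hd < 0.32                      # hypothetical equipment-specific threshold
```

Lowering the threshold trades false acceptances for false rejections; the EER is the operating point where the two rates are equal.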
Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation
NASA Astrophysics Data System (ADS)
Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.
2017-11-01
In this work, the lump solution and the kink solitary wave solution of the (2 + 1)-dimensional third-order evolution equation are obtained using the Hirota bilinear method, through symbolic computation with Maple. We assume that the lump solution is centered at the origin when t = 0. By considering a positive quadratic function mixed with an exponential function, as well as a positive quadratic function mixed with a hyperbolic cosine function, interaction solutions such as lump-exponential and lump-hyperbolic-cosine are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.
Cahuantzi, Roberto; Buckley, Alastair
2017-09-01
Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors, applicable to either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluoroethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimization, near-perfect cosine dependence of the angular response of the coupler can be achieved. The PTFE designs represent a significant improvement over the state of the art, with less than 0.01% error compared with the ideal cosine response for angles of incidence up to 50°.
Coronagraphic Observations of the Lunar Sodium Exosphere Near the Lunar Surface
NASA Technical Reports Server (NTRS)
Potter, A. E.; Morgan, T. H.
1998-01-01
The sodium exosphere of the Moon was observed using a solar coronagraph to occult the illuminated surface of the Moon. Exceptionally dust-free atmospheric conditions were required to allow the faint emission from sunlight scattered by lunar sodium atoms to be distinguished from moonlight scattered from atmospheric dust. At 0300 UT on April 22, 1994, ideal conditions prevailed for a few hours, and one excellent image of the sodium exosphere was measured, with the Moon at a phase angle of 51 deg, 81% illuminated. Analysis of the image data showed that the weighted mean temperature of the exosphere was 1280 K and that the sodium column density varied approximately as cosine-cubed of the latitude. A cosine-cubed variation is an unexpected result, since the flux per unit area of solar photons and solar particles varies as the cosine of latitude. It is suggested that this can be explained by a temperature dependence for the sputtering of sodium atoms from the surface. This is a characteristic feature of chemical sputtering, which has been previously proposed to explain the sodium exosphere of Mercury. A possible interaction between chemical sputtering and solar photons is suggested.
Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas
NASA Astrophysics Data System (ADS)
Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun
2017-10-01
As used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by large size and enduring high power density. The structural flexibility and the microwave radiation pressure (MRP) lead to the phenomenon of structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of SLFRA is presented, then the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is identical to the normal of the reflector surface, and the magnitude is proportional to the power density and the squared cosine of the incidence angle. For a typical cosine-distributed electric field, the MRP follows a squared-cosine distribution across the diameter. The maximum deflections of SLFRA increase linearly with increasing microwave power density and the square of the reflector diameter, and vary inversely with the film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10^2 W/cm^2, the gain loss of the 6.3 μm-thick reflector goes beyond 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.
Automated detection and recognition of wildlife using thermal cameras.
Christiansen, Peter; Steen, Kim Arild; Jørgensen, Rasmus Nyholm; Karstoft, Henrik
2014-07-30
In agricultural mowing operations, thousands of animals are injured or killed each year, due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes heat characteristics of objects and is partly invariant to translation, rotation, scale and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and, thereby, calculate a feature vector, which is used for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and an accuracy of 75.2% in an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in the 3-10 m altitude range and 77.7% in the 10-20 m altitude range.
NASA Astrophysics Data System (ADS)
Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Muaz Alim, Muhammad; Nasir, Ahmad Fakhri Ab
2018-04-01
The present study aims at classifying and predicting high and low potential archers from a collection of psychological coping skills variables trained on different k-Nearest Neighbour (k-NN) kernels. 50 youth archers with a mean age and standard deviation of (17.0 ± .056) gathered from various archery programmes completed a one-end shooting score test. A psychological coping skills inventory which evaluates the archers' level of related coping skills was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. k-NN models, i.e. fine, medium, coarse, cosine, cubic and weighted kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the cosine k-NN model exhibited good accuracy and precision throughout the exercise, with an accuracy of 94% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the rest of the models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
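A cosine-distance k-NN classifier of the kind evaluated can be sketched as follows. The three-dimensional coping-skill vectors and the HPPA/LPPA labels below are invented for illustration; the study used MATLAB-style fine/medium/coarse/cosine/cubic/weighted kernel variants.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: small when vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (na * nb)

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k nearest neighbours under cosine distance."""
    ranked = sorted(range(len(train)),
                    key=lambda i: cosine_distance(train[i], query))
    votes = [labels[i] for i in ranked[:k]]
    return max(set(votes), key=votes.count)

# toy psychological-coping-skill vectors for two archer clusters
train = [[9, 8, 9], [8, 9, 8], [2, 3, 2], [3, 2, 3]]
labels = ["HPPA", "HPPA", "LPPA", "LPPA"]
pred = knn_predict(train, labels, [9, 9, 8], k=3)
```

Because cosine distance depends only on direction, it compares the profile of coping-skill scores rather than their absolute magnitudes.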
Development of a second generation SiLC-based Laue lens
NASA Astrophysics Data System (ADS)
Girou, David; Wade, Colin; Barrière, Nicolas; Collon, Maximilien; Günther, Ramses; Hanlon, Lorraine; Tomsick, John; Uliyanov, Alexey; Vacanti, Giuseppe; Zoglauer, Andreas
2017-09-01
For more than a decade, cosine has been developing silicon pore optics (SPO), lightweight modular X-ray optics made of stacks of bent and directly bonded silicon mirror plates. This technology, which has been selected by ESA to realize the optics of ATHENA, can also be used to fabricate soft gamma-ray Laue lenses where Bragg diffraction through the bulk silicon is exploited, rather than grazing incidence reflection. Silicon Laue Components (SiLCs) are made of stacks of curved, polished, wedged silicon plates, allowing the concentration of radiation in both radial and azimuthal directions. This greatly increases the focusing properties of a Laue lens since the size of the focal spot is no longer determined by the size of the individual single crystals, but by the accuracy of the applied curvature. After a successful proof of concept in 2013, establishing the huge potential of this technology, a new project has been launched in Spring 2017 at cosine to further develop and test this technique. Here we present the latest advances of the second generation of SiLCs made from even thinner silicon plates stacked by a robot with dedicated tools in a class-100 clean room environment.
Quantum geometry of resurgent perturbative/nonperturbative relations
NASA Astrophysics Data System (ADS)
Basar, Gökçe; Dunne, Gerald V.; Ünsal, Mithat
2017-05-01
For a wide variety of quantum potentials, including the textbook `instanton' examples of the periodic cosine and symmetric double-well potentials, the perturbative data coming from fluctuations about the vacuum saddle encodes all non-perturbative data in all higher non-perturbative sectors. Here we unify these examples in geometric terms, arguing that the all-orders quantum action determines the all-orders quantum dual action for quantum spectral problems associated with a classical genus one elliptic curve. Furthermore, for a special class of genus one potentials this relation is particularly simple: this class includes the cubic oscillator, symmetric double-well, symmetric degenerate triple-well, and periodic cosine potential. These are related to the Chebyshev potentials, which are in turn related to certain N = 2 supersymmetric quantum field theories, to mirror maps for hypersurfaces in projective spaces, and also to topological c = 3 Landau-Ginzburg models and `special geometry'. These systems inherit a natural modular structure corresponding to Ramanujan's theory of elliptic functions in alternative bases, which is especially important for the quantization. Insights from supersymmetric quantum field theory suggest similar structures for more complicated potentials, corresponding to higher genus. Our approach is very elementary, using basic classical geometry combined with all-orders WKB.
A method for the measurement and the statistical analysis of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Tavoularis, S. C.
1974-01-01
The instantaneous values of output voltages representing the wind velocity vector and the temperature at different elevations of the 250-foot meteorological tower located at NASA Wallops Flight Center are obtained with the three-dimensional split-film TSI Model 1080 anemometer system. The output voltages are sampled at a rate of one every 5 milliseconds, digitized and stored on digital magnetic tapes for a time period of approximately 40 minutes, with the use of a specially designed data acquisition system. A new calibration procedure permits the conversion of the digital voltages to the respective values of the temperature and the velocity components in a Cartesian coordinate system attached to the TSI probe with considerable accuracy. Power, cross, coincidence and quadrature spectra of the wind components and the temperature are obtained with the use of the fast Fourier transform. The cosine taper data window and ensemble and frequency smoothing techniques are used to provide smooth estimates of the spectral functions.
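A cosine taper (Tukey) data window of the kind applied before the FFT can be sketched as below; the 10% taper ratio is an assumption, not a value from the report.

```python
import math

def cosine_taper(n, ratio=0.1):
    """Cosine taper data window: raised-cosine ramps over the first and
    last `ratio` fraction of the record, flat (unity) in between."""
    m = int(ratio * n)
    w = []
    for i in range(n):
        if i < m:
            w.append(0.5 * (1 - math.cos(math.pi * i / m)))
        elif i >= n - m:
            w.append(0.5 * (1 - math.cos(math.pi * (n - 1 - i) / m)))
        else:
            w.append(1.0)
    return w

window = cosine_taper(200)
record = [1.0] * 200                       # stand-in for a velocity record
tapered = [w * v for w, v in zip(window, record)]
```

Tapering the record ends to zero reduces spectral leakage in the FFT-based spectra while leaving the central 80% of the data untouched.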
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviours. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
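The resynchronising, yet still locally error-propagating, behaviour of a prefix code can be demonstrated with a toy Huffman-style code and a single flipped channel bit:

```python
CODE = {"a": "0", "b": "10", "c": "11"}        # toy prefix (Huffman-style) code
DECODE = {v: k for k, v in CODE.items()}

def decode(bits):
    """Greedy prefix decoding: emit a symbol whenever the buffer matches."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

msg = "abcabca"
bits = "".join(CODE[ch] for ch in msg)
corrupt = "1" + bits[1:]                       # single channel bit error
clean = decode(bits)
damaged = decode(corrupt)
```

The damaged stream decodes to the wrong symbols at first but, thanks to the prefix property, falls back into step with the clean decoding after a few symbols; in an adaptive scheme the corrupted prefix would also have poisoned the model state, so resynchronisation alone would not contain the damage.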
Method, system and computer-readable media for measuring impedance of an energy storage device
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2016-01-26
A real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. The excitation current applied to the battery is a sum of equal-amplitude sine waves at frequencies that are octave harmonics spread over the range of interest. The sample frequency is also octave- and harmonically related to all frequencies in the sum. A time profile of this sampled signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with the average removed, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave- and harmonically related, a simple algorithm, FST, processes the time profile by rectifying it relative to the sine and cosine of each frequency. A further algorithm yields the real and imaginary components for each frequency.
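The sine/cosine rectification step amounts to a single-frequency correlation at each known excitation frequency. The sketch below uses synthetic octave-harmonic data with known components; it illustrates the correlation idea only and is not the patented FST algorithm, which exploits the octave relation for a faster implementation.

```python
import math

def sine_cosine_rectify(samples, freq, fs):
    """Correlate the record against cosine and sine at one excitation
    frequency to extract the real and imaginary components there."""
    n = len(samples)
    re = 2 / n * sum(v * math.cos(2 * math.pi * freq * k / fs)
                     for k, v in enumerate(samples))
    im = 2 / n * sum(v * math.sin(2 * math.pi * freq * k / fs)
                     for k, v in enumerate(samples))
    return re, im

fs = 1024.0                        # sample rate, octave-related to the excitation
freqs = [8.0, 16.0, 32.0]          # octave-harmonic excitation frequencies
truth = [(1.0, 0.0), (0.5, 0.5), (0.0, -0.25)]   # known (re, im) per frequency

# synthetic average-removed voltage response containing all three frequencies
resp = [sum(a * math.cos(2 * math.pi * f * k / fs)
            + b * math.sin(2 * math.pi * f * k / fs)
            for f, (a, b) in zip(freqs, truth))
        for k in range(1024)]

components = [sine_cosine_rectify(resp, f, fs) for f in freqs]
```

Because the record spans whole periods of every excitation frequency, the correlations are orthogonal and each frequency's components are recovered exactly.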
Generalized spherical and simplicial coordinates
NASA Astrophysics Data System (ADS)
Richter, Wolf-Dieter
2007-12-01
Elementary trigonometric quantities are defined in l_{2,p} analogously to those in l_{2,2}; the sine and cosine functions are generalized for each p > 0 as functions sin_p and cos_p such that they satisfy the basic equation |cos_p(φ)|^p + |sin_p(φ)|^p = 1. The p-generalized radius coordinate of a point ξ ∈ R^n is defined for each p > 0 as r_p(ξ) = (|ξ_1|^p + ... + |ξ_n|^p)^{1/p}. On combining these quantities, l_{n,p}-spherical coordinates are defined. It is shown that these coordinates are closely related to l_{n,p}-simplicial coordinates. The Jacobians of these generalized coordinate transformations are derived. Applications and interpretations from analysis deal especially with the definition of a generalized surface content on l_{n,p}-spheres, which is closely related to a modified co-area formula and an extension of Cavalieri's and Torricelli's method of indivisibles, and with differential equations. Applications from probability theory deal especially with a geometric interpretation of the uniform probability distribution on the l_{n,p}-sphere and with the derivation of certain generalized statistical distributions.
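The generalized sine and cosine and the p-generalized radius can be written down directly; the following is a sketch following the definitions above (the normalization by the l_{2,p} functional is the standard construction, assumed here rather than quoted from the paper).

```python
import math

def _norm_p(phi, p):
    """The l_{2,p} functional of (cos(phi), sin(phi))."""
    return (abs(math.cos(phi)) ** p + abs(math.sin(phi)) ** p) ** (1 / p)

def cos_p(phi, p):
    """p-generalized cosine: satisfies |cos_p|^p + |sin_p|^p = 1."""
    return math.cos(phi) / _norm_p(phi, p)

def sin_p(phi, p):
    """p-generalized sine."""
    return math.sin(phi) / _norm_p(phi, p)

def radius_p(xi, p):
    """p-generalized radius coordinate of a point in R^n."""
    return sum(abs(x) ** p for x in xi) ** (1 / p)

phi, p = 0.7, 3.0
identity = abs(cos_p(phi, p)) ** p + abs(sin_p(phi, p)) ** p
r = radius_p([3.0, 4.0], 2.0)      # reduces to the Euclidean radius for p = 2
```

For p = 2 the functions reduce to the ordinary sine and cosine, and radius_p to the Euclidean norm.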
Normal compression wave scattering by a permeable crack in a fluid-saturated poroelastic solid
NASA Astrophysics Data System (ADS)
Song, Yongjia; Hu, Hengshan; Rudnicki, John W.
2017-04-01
A mathematical formulation is presented for the dynamic stress intensity factor (mode I) of a finite permeable crack subjected to a time-harmonic propagating longitudinal wave in an infinite poroelastic solid. In particular, the effect of the wave-induced fluid flow due to the presence of a liquid-saturated crack on the dynamic stress intensity factor is analyzed. Fourier sine and cosine integral transforms in conjunction with Helmholtz potential theory are used to formulate the mixed boundary-value problem as dual integral equations in the frequency domain. The dual integral equations are reduced to a Fredholm integral equation of the second kind. It is found that the stress intensity factor monotonically decreases with increasing frequency, decreasing the fastest when the crack width and the slow wave wavelength are of the same order. The characteristic frequency at which the stress intensity factor decays the fastest shifts to higher frequency values when the crack width decreases.
Combinational logic for generating gate drive signals for phase control rectifiers
NASA Technical Reports Server (NTRS)
Dolland, C. R.; Trimble, D. W. (Inventor)
1982-01-01
Control signals for phase-delay rectifiers, which require a variable firing angle that ranges from 0 deg to 180 deg, are derived from line-to-line 3-phase signals and from both positive and negative firing angle control signals which are generated by comparing the current command and the actual current. Line-to-line phases are transformed into line-to-neutral phases and integrated to produce 90 deg phase-delayed signals that are inverted to produce three cosine signals, such that for each its maximum occurs at the intersection of positive half cycles of the other two phases which are inputs to other inverters. At the same time, both positive and negative (inverted) phase sync signals are generated for each phase by comparing each with the next and producing a square wave when it is greater. Ramp, sync and firing angle control signals are then used in combinational logic to generate the SCR gate drive signals which fire the SCR devices in a bridge circuit.
NASA Astrophysics Data System (ADS)
Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin
2018-02-01
The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into the high and low risk case group of having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless area of mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of ROIs were initially computed from the discrete cosine transform and spatial domain of the images. Third, a support vector machine model based machine learning classifier was used to optimally classify the selected optimal image features to build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out based cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70+/-0.04, which was significantly higher than that obtained by extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
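The DCT feature-extraction step can be sketched as below. The orthonormal 2-D DCT-II and the anti-diagonal (rough zig-zag) ordering used to pick low-frequency coefficients are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II of a square block via the separable form C @ X @ C.T
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row has a different normalization
    return C @ block @ C.T

def dct_features(roi, num=43):
    # Hypothetical feature selector: keep the `num` lowest-frequency DCT
    # coefficients, ordered by anti-diagonal index (a rough zig-zag stand-in)
    coeffs = dct2(roi)
    n = roi.shape[0]
    order = np.argsort(np.add.outer(np.arange(n), np.arange(n)), axis=None)
    return coeffs.flatten()[order][:num]
```

In a full pipeline these feature vectors would feed a classifier such as an SVM under leave-one-case-out cross-validation, as the abstract describes.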
Distributed Digital Subarray Antennas
2013-12-01
subarrays in space): linear, planar, or volumetric; periodic, aperiodic, or random; with rotation and tilt relative to a global reference. Based on the … s_m, N, and the coordinates (x_s(m), y_s(m), z_s(m)) of subarray m in the global system. The subarrays can be rotated and tilted with respect to the global origin. In the global system (θ, φ) the direction cosines are u = sin θ cos φ, v = sin θ sin φ, w = cos θ. (1) The scan …
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1976-01-01
A computer algorithm for extracting a quaternion from a direction-cosine matrix (DCM) is described. The quaternion provides a four-parameter representation of rotation, as against the nine-parameter representation afforded by a DCM. Commanded attitude in space shuttle steering is conveniently computed by DCM, while actual attitude is computed most compactly as a quaternion, as is attitude error. The unit length of the rotation quaternion, and the interchangeability of a quaternion and its negative, are used to advantage in the extraction algorithm. Protection of the algorithm against square-root failure and division overflow is considered. Necessary and sufficient conditions for handling the rotation-vector element of largest magnitude are discussed.
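A sketch of such a robust extraction: the largest of the four squared quaternion components is found first, which avoids taking the square root of a near-zero quantity and dividing by it. The scalar-first convention and the rotation-matrix layout used here are common choices, not necessarily the shuttle code's exact conventions:

```python
import math

def quat_from_dcm(R):
    # Extract a unit quaternion q = (q0, q1, q2, q3), scalar first, from a
    # 3x3 rotation matrix R.  Choosing the largest squared component first
    # protects against square-root failure and division overflow.
    tr = R[0][0] + R[1][1] + R[2][2]
    q_sq = [(1.0 + tr) / 4.0,
            (1.0 + 2.0 * R[0][0] - tr) / 4.0,
            (1.0 + 2.0 * R[1][1] - tr) / 4.0,
            (1.0 + 2.0 * R[2][2] - tr) / 4.0]
    i = q_sq.index(max(q_sq))
    q = [0.0, 0.0, 0.0, 0.0]
    q[i] = math.sqrt(q_sq[i])
    d = 4.0 * q[i]
    if i == 0:
        q[1] = (R[2][1] - R[1][2]) / d
        q[2] = (R[0][2] - R[2][0]) / d
        q[3] = (R[1][0] - R[0][1]) / d
    elif i == 1:
        q[0] = (R[2][1] - R[1][2]) / d
        q[2] = (R[1][0] + R[0][1]) / d
        q[3] = (R[0][2] + R[2][0]) / d
    elif i == 2:
        q[0] = (R[0][2] - R[2][0]) / d
        q[1] = (R[1][0] + R[0][1]) / d
        q[3] = (R[2][1] + R[1][2]) / d
    else:
        q[0] = (R[1][0] - R[0][1]) / d
        q[1] = (R[0][2] + R[2][0]) / d
        q[2] = (R[2][1] + R[1][2]) / d
    return q
```

Since q and -q represent the same rotation, the sign of the component chosen as the pivot is taken positive by convention.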
Site selection and directional models of deserts used for ERBE validation targets
NASA Technical Reports Server (NTRS)
Staylor, W. F.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus 7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara, Gibson, and Saudi Deserts. These deserts will serve as in-flight validation targets for the Earth Radiation Budget Experiment being flown on the Earth Radiation Budget Satellite and two National Oceanic and Atmospheric Administration polar satellites. The directional reflectance model derived for the deserts was a function of the sum and product of the cosines of the solar and viewing zenith angles, and thus reciprocity existed between these zenith angles. The emittance model was related by a power law of the cosine of the viewing zenith angle.
NASA Astrophysics Data System (ADS)
Abdelmonem, M. S.; Abdel-Hady, Afaf; Nasser, I.
2017-07-01
The scaling laws are given for the entropies in information theory, including Shannon's entropy, its power, the Fisher information and the Fisher-Shannon product, using the exponential-cosine screened Coulomb potential. The scaling laws are specified, in the r-space, as a function of |μ - μc, nℓ|, where μ is the screening parameter and μc, nℓ its critical value for the specific quantum numbers n and ℓ. Scaling laws for other physical quantities, such as energy eigenvalues, the moments, static polarisability, transition probabilities, etc. are also given. Some of these are reported for the first time. The outcome is compared with the results available in the literature.
Energy and Quality-Aware Multimedia Signal Processing
NASA Astrophysics Data System (ADS)
Emre, Yunus
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected.
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
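The truncation-based SAD idea can be sketched as below; modeling the hardware MSB truncation as saturation of each absolute difference at the truncated maximum is a simplifying assumption:

```python
def sad_msb_truncated(block_a, block_b, keep_bits=4):
    # Sum of absolute differences keeping only `keep_bits` low-order bits of
    # each AD; the rare large ADs saturate at the truncated maximum instead
    # of contributing their (discarded) most-significant-bit part
    ad_max = (1 << keep_bits) - 1
    return sum(min(abs(a - b), ad_max) for a, b in zip(block_a, block_b))
```

Because well-matched blocks produce mostly small ADs, the truncated SAD preserves the ranking of good candidate blocks while using narrower adders.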
NASA Astrophysics Data System (ADS)
Huang, C.-S.; Yang, S.-Y.; Yeh, H.-D.
2015-03-01
An aquifer consisting of a skin zone and a formation zone is considered as a two-zone aquifer. Existing solutions for the problem of constant-flux pumping (CFP) in a two-zone confined aquifer involve laborious calculation. This study develops a new approximate solution for the problem based on a mathematical model including two steady-state flow equations with different hydraulic parameters for the skin and formation zones. A partially penetrating well may be treated as the Neumann condition with a known flux along the screened part and zero flux along the unscreened part. The aquifer domain is finite with an outer circle boundary treated as the Dirichlet condition. The steady-state drawdown solution of the model is derived by the finite Fourier cosine transform. Then, an approximate transient solution is developed by replacing the radius of the boundary in the steady-state solution with an analytical expression for a dimensionless time-dependent radius of influence. The approximate solution is capable of predicting good temporal drawdown distributions over the whole pumping period except at the early stage. A quantitative criterion for the validity of neglecting the vertical flow component due to a partially penetrating well is also provided. Conventional models considering radial flow without the vertical component for the CFP have good accuracy if satisfying the criterion.
Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla
2015-04-01
Liver segmentation remains a major challenge, largely due to the liver's complex relationships with surrounding anatomical structures (stomach, kidney, and heart), high noise levels and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into the surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], play imperative roles, thus their values are precisely selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data reflecting the potential of the proposed method.
Dual-Layer Video Encryption using RSA Algorithm
NASA Astrophysics Data System (ADS)
Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.
2015-04-01
This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequence, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by the PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques, an efficient system, invulnerable to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation, has been put forth. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarities in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption which is accomplished by using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not sufficient for a viewer to comprehend the happenings in the video.
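A toy sketch of the two ingredients, a PN keystream XOR and textbook RSA on small integers. The LFSR taps, seed, and RSA parameters are illustrative assumptions only; real use would require proper padding and realistic key sizes:

```python
def lfsr_pn(seed, taps, nbits, count):
    # Fibonacci LFSR producing a pseudo-noise (PN) bit stream
    state = seed
    out = []
    for _ in range(count):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

def pn_xor(data, seed=0b1011, taps=(0, 1), nbits=4):
    # XOR each byte with 8 PN bits packed into a keystream byte;
    # applying it twice with the same seed recovers the original data
    bits = lfsr_pn(seed, taps, nbits, 8 * len(data))
    out = bytearray()
    for i, byte in enumerate(data):
        ks = 0
        for j in range(8):
            ks = (ks << 1) | bits[8 * i + j]
        out.append(byte ^ ks)
    return bytes(out)

def rsa_encrypt(m, e=17, n=3233):
    # Textbook RSA with toy parameters (p = 61, q = 53, d = 2753)
    return pow(m, e, n)

def rsa_decrypt(c, d=2753, n=3233):
    return pow(c, d, n)
```

XOR with the PN keystream is involutive, which is what makes the PN layer cheap to invert with the shared seed.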
Liu, Xinjie; Liu, Liangyun; Hu, Jiaochan; Du, Shanshan
2017-01-01
The measurement of solar-induced chlorophyll fluorescence (SIF) is a new tool for estimating gross primary production (GPP). Continuous tower-based spectral observations together with flux measurements are an efficient way of linking the SIF to the GPP. Compared to conical observations, hemispherical observations made with a cosine-corrected foreoptic have a much larger field of view and can better match the footprint of the tower-based flux measurements. However, estimating the equivalent radiation transfer path length (ERTPL) for hemispherical observations is more complex than for conical observations and this is a key problem that needs to be addressed before accurate retrieval of SIF can be made. In this paper, we first modeled the footprint of hemispherical spectral measurements and found that, under convective conditions with light winds, 90% of the total radiation came from an FOV of width 72°, which in turn covered 75.68% of the source area of the flux measurements. In contrast, conical spectral observations covered only 1.93% of the flux footprint. Secondly, using theoretical considerations, we modeled the ERTPL of the hemispherical spectral observations made with a cosine-corrected foreoptic and found that the ERTPL was approximately equal to twice the sensor height above the canopy. Finally, the modeled ERTPL was evaluated using a simulated dataset. The ERTPL calculated using the simulated data was about 1.89 times the sensor's height above the target surface, which was quite close to the results for the modeled ERTPL. Furthermore, the SIF retrieved from atmospherically corrected spectra using the modeled ERTPL fitted well with the reference values, giving a relative root mean square error of 18.22%. These results show that the modeled ERTPL was reasonable and that this method is applicable to tower-based hemispherical observations of SIF. PMID:28509843
Effects of Unsaturated Zones on Baseflow Recession: Analytical Solution and Application
NASA Astrophysics Data System (ADS)
Zhan, H.; Liang, X.; Zhang, Y. K.
2017-12-01
Unsaturated flow is an important process in baseflow recessions and its effect is rarely investigated. A mathematical model for coupled unsaturated-saturated flow in a horizontally unconfined aquifer with time-dependent infiltrations is presented. Semi-analytical solutions for hydraulic heads and discharges are derived using the Laplace transform and the cosine transform. The solutions are compared with solutions of the linearized Boussinesq equation (LB solution) and the linearized Laplace equation (LL solution), respectively. The result indicates that a larger dimensionless constitutive exponent κD of the unsaturated zone leads to a smaller discharge during the infiltration period and a larger discharge after the infiltration. The lateral discharge of the unsaturated zone is significant when κD≤1, and becomes negligible when κD≥100. For late times, the power index b of the recession curve, -dQ/dt = aQ^b, is 1 and independent of κD, where Q is the baseflow and a is a constant lumped aquifer parameter. For early times, b is approximately equal to 3 but it approaches infinity when t→1. The present solution is applied to synthetic and field cases. The present solution matched the synthetic data better than both the LL and LB solutions, with a minimum relative error of 16% for the estimate of hydraulic conductivity. The present solution was applied to the observed streamflow discharge in Iowa, and the estimated values of the aquifer parameters were reasonable.
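The recession exponent b in -dQ/dt = aQ^b can be estimated from a discharge series by log-log regression on finite differences; a minimal sketch (not the authors' procedure):

```python
import math

def recession_exponent(t, q):
    # Fit log(-dQ/dt) = log(a) + b*log(Q) by least squares, using finite
    # differences for dQ/dt and the interval-midpoint discharge for Q
    xs, ys = [], []
    for i in range(len(q) - 1):
        dq_dt = (q[i + 1] - q[i]) / (t[i + 1] - t[i])
        if dq_dt < 0:  # keep only recessing intervals
            qm = 0.5 * (q[i] + q[i + 1])
            xs.append(math.log(qm))
            ys.append(math.log(-dq_dt))
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

A purely exponential recession Q(t) = Q0*exp(-kt) satisfies -dQ/dt = kQ, so the fitted exponent should come out as b = 1, matching the late-time behavior in the abstract.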
Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm
NASA Astrophysics Data System (ADS)
Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan
2017-12-01
Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve the high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on-board on a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.
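A minimal sketch of a DWT stage feeding such a hybrid pipeline, using a single Haar level and zeroing the detail subbands as a stand-in for the paper's zero-padding step (both the wavelet choice and the zeroing interpretation are assumptions):

```python
import numpy as np

def haar_dwt2(x):
    # One separable Haar level on an even-sized image: (LL, LH, HL, HH)
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def hybrid_compress(img):
    # Keep the LL approximation for a subsequent DCT stage; zero (rather
    # than threshold/quantize) the detail subbands, as a zero-padding stand-in
    ll, lh, hl, hh = haar_dwt2(img)
    return ll, [np.zeros_like(lh), np.zeros_like(hl), np.zeros_like(hh)]
```

The LL band returned by `hybrid_compress` would then be passed to a DCT coder, which is where the hybrid scheme gains its extra compression.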
Resonant circuit which provides dual frequency excitation for rapid cycling of an electromagnet
Praeg, Walter F.
1984-01-01
Disclosed is a ring magnet control circuit that permits synchrotron repetition rates much higher than the frequency of the cosinusoidal guide field of the ring magnet during particle acceleration. The control circuit generates cosinusoidal excitation currents of different frequencies in the half waves. During radio frequency acceleration of the particles in the synchrotron, the control circuit operates with a lower frequency cosine wave and thereafter the electromagnets are reset with a higher frequency half cosine wave. Flat-bottom and flat-top wave shaping circuits maintain the magnetic guide field in a relatively time-invariant mode during times when the particles are being injected into the ring magnets and when the particles are being ejected from the ring magnets.
Quantum geometry of resurgent perturbative/nonperturbative relations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basar, Gokce; Dunne, Gerald V.; Unsal, Mithat
2017-05-16
For a wide variety of quantum potentials, including the textbook ‘instanton’ examples of the periodic cosine and symmetric double-well potentials, the perturbative data coming from fluctuations about the vacuum saddle encodes all non-perturbative data in all higher non-perturbative sectors. Here we unify these examples in geometric terms, arguing that the all-orders quantum action determines the all-orders quantum dual action for quantum spectral problems associated with a classical genus one elliptic curve. Furthermore, for a special class of genus one potentials this relation is particularly simple: this class includes the cubic oscillator, symmetric double-well, symmetric degenerate triple-well, and periodic cosine potential. These are related to the Chebyshev potentials, which are in turn related to certain N = 2 supersymmetric quantum field theories, to mirror maps for hypersurfaces in projective spaces, and also to topological c = 3 Landau-Ginzburg models and ‘special geometry’. These systems inherit a natural modular structure corresponding to Ramanujan’s theory of elliptic functions in alternative bases, which is especially important for the quantization. Insights from supersymmetric quantum field theory suggest similar structures for more complicated potentials, corresponding to higher genus. Lastly, our approach is very elementary, using basic classical geometry combined with all-orders WKB.
Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.
1999-01-01
A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Algorithm analysis of maneuvers indicates that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers provide. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; Starnes, James H., Jr.; Prasad, Chunchu B.
1993-01-01
An analytical procedure is presented for determining the transient response of simply supported, rectangular laminated composite plates subjected to impact loads from airgun-propelled or dropped-weight impactors. A first-order shear-deformation theory is included in the analysis to represent properly any local short-wave-length transient bending response. The impact force is modeled as a locally distributed load with a cosine-cosine distribution. A double Fourier series expansion and the Timoshenko small-increment method are used to determine the contact force, out-of-plane deflections, and in-plane strains and stresses at any plate location due to an impact force at any plate location. The results of experimental and analytical studies are compared for quasi-isotropic laminates. The results indicate that using the appropriate local force distribution for the locally loaded area and including transverse-shear-deformation effects in the laminated plate response analysis are important. The applicability of the present analytical procedure based on small deformation theory is investigated by comparing analytical and experimental results for combinations of quasi-isotropic laminate thicknesses and impact energy levels. The results of this study indicate that large-deformation effects influence the response of both 24- and 32-ply laminated plates, and that a geometrically nonlinear analysis is required for predicting the response accurately.
A drop penetration method to measure powder blend wettability.
Wang, Yifan; Liu, Zhanjie; Muzzio, Fernando; Drazer, German; Callegari, Gerardo
2018-03-01
Water wettability of pharmaceutical blends affects important quality attributes of final products. We investigate the wetting properties of a pharmaceutical blend lubricated with Magnesium Stearate (MgSt) as a function of the mechanical shear strain applied to the blend. We measure the penetration dynamics of sessile drops deposited on slightly compressed powder beds. We consider a blend composed of 9% Acetaminophen, 90% Lactose and 1% MgSt by weight. Comparing the penetration times of water and of a reference liquid, Polydimethylsiloxane (silicone oil), we obtain an effective cosine of the contact angle with water, based on a recently developed drop penetration method. We repeat the experiments for blends exposed to increasing levels of shear strain and demonstrate a significant decrease in water wettability (a decrease in the cosine of the contact angle). The results are consistent with the development of a hydrophobic film coating the powder particles as a result of the increased shear strain. Finally, we show that, as expected, dissolution times increase with the level of shear strain. Therefore, the proposed drop penetration method could be used to directly assess the state of lubrication of a pharmaceutical blend and act as a quality control on powder blend attributes before the blend is tableted. Copyright © 2017 Elsevier B.V. All rights reserved.
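Under a Washburn-type scaling, penetration time for a fixed powder bed goes as t ∝ μ/(γ cos θ), so comparing water against a reference liquid assumed to wet perfectly (cos θ_ref = 1) yields an effective cosine. This scaling and the perfect-wetting assumption are illustrative, not necessarily the authors' exact formula:

```python
def effective_cos_theta(t_water, t_ref, mu_water, mu_ref,
                        gamma_water, gamma_ref):
    # Washburn-type scaling t ~ mu / (gamma * cos(theta)) for one bed
    # geometry, with the reference liquid assumed perfectly wetting:
    # cos(theta_water) = (t_ref/t_water) * (mu_water/mu_ref) * (gamma_ref/gamma_water)
    return (t_ref / t_water) * (mu_water / mu_ref) * (gamma_ref / gamma_water)
```

A longer water penetration time relative to the reference, all else equal, maps directly to a smaller effective cosine, i.e. reduced wettability.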
Comparison of methods for measurement and retrieval of SIF with tower based sensors
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Berry, J. A.
2017-12-01
As the popularity of solar induced fluorescence (SIF) measurement increases, the number of ways to measure and process the data has also increased, leaving a bewildering array of choices for the practitioner. To help clarify the advantages and disadvantages of several methods, we modified our foreoptic, Rotaprism, to measure spectra using either bi-hemispheric (cosine correcting diffusers on both upward and downward views) or hemispherical-conical views (only the upward view is cosine corrected). To test spatial sensitivity of each optic, we recorded data after moving the device relatively short distances - 1-2x the sensor's height above the canopy. When using conical measurements, measured SIF varied by as much as 100% across locations, whereas bi-hemispherical measurements were nearly unaffected by the moves. Reflectance indexes such as NDVI, PRI, NIRv were also spatially sensitive for the conical measurements. We also compared retrievals using either the O2A band or the adjacent Fraunhofer band to examine the relative advantages of each retrieval band for full-day retrievals. Finally, we investigated how choice of retrieval algorithm (SVD, FLD, SFM) affects the computed results. The primary site for this experiment was a California bunchgrass/tallgrass field. Additional data from the Brazilian Amazon will also be used, where appropriate, to support our conclusions.
Atlas, Glen; Li, John K-J; Amin, Shawn; Hahn, Robert G
2017-01-01
A closed-form integro-differential equation (IDE) model of plasma dilution (PD) has been derived which represents both the intravenous (IV) infusion of crystalloid and the postinfusion period. Specifically, PD is mathematically represented using a combination of constant ratio, differential, and integral components. Furthermore, this model has successfully been applied to preexisting data, from a prior human study, in which crystalloid was infused for a period of 30 minutes at the beginning of thyroid surgery. Using Euler's formula and a Laplace transform solution to the IDE, patients could be divided into two distinct groups based on their response to PD during the infusion period. Explicitly, Group 1 patients had an infusion-based PD response which was modeled using an exponentially decaying hyperbolic sine function, whereas Group 2 patients had an infusion-based PD response which was modeled using an exponentially decaying trigonometric sine function. Both Group 1 and Group 2 patients had postinfusion PD responses which were modeled using the same combination of hyperbolic sine and hyperbolic cosine functions. Statistically significant differences, between Groups 1 and 2, were noted with respect to the area under their PD curves during both the infusion and postinfusion periods. Specifically, Group 2 patients exhibited a response to PD which was most likely consistent with a preoperative hypovolemia. Overall, this IDE model of PD appears to be highly "adaptable" and successfully fits clinically-obtained human data on a patient-specific basis, during both the infusion and postinfusion periods. In addition, patient-specific IDE modeling of PD may be a useful adjunct in perioperative fluid management and in assessing clinical volume kinetics, of crystalloid solutions, in real time. PMID:29123436
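The two infusion-period response shapes and the area-under-curve comparison can be sketched as below; the decay rate, frequency, and amplitude parameters are hypothetical placeholders, not fitted patient values:

```python
import math

def pd_group1(t, a=0.05, b=0.02, c=1.0):
    # Group 1 infusion-period response: exponentially decaying hyperbolic sine
    return c * math.exp(-a * t) * math.sinh(b * t)

def pd_group2(t, a=0.05, b=0.02, c=1.0):
    # Group 2 infusion-period response: exponentially decaying trigonometric sine
    return c * math.exp(-a * t) * math.sin(b * t)

def auc(f, t0, t1, n=1000):
    # Trapezoidal area under a response curve, e.g. over a 30-minute infusion
    h = (t1 - t0) / n
    total = 0.5 * (f(t0) + f(t1))
    total += sum(f(t0 + i * h) for i in range(1, n))
    return total * h
```

Because sinh grows while sin stays bounded, the two forms diverge over the infusion period, which is what makes the area-under-curve comparison between the groups meaningful.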
Lump and lump-soliton solutions to the (2+1)-dimensional Ito equation
NASA Astrophysics Data System (ADS)
Yang, Jin-Yun; Ma, Wen-Xiu; Qin, Zhenyun
2017-06-01
Based on the Hirota bilinear form of the (2+1)-dimensional Ito equation, one class of lump solutions and two classes of interaction solutions between lumps and line solitons are generated through analysis and symbolic computation with Maple. Analyticity is naturally guaranteed for the presented lump and interaction solutions, and the interaction solutions reduce to lumps (or line solitons) when the hyperbolic cosine (or the quadratic function) disappears. Three-dimensional plots and contour plots are made for two specific examples of the resulting interaction solutions.
2015-11-20
between tweets and profiles as follows: • TFIDF Score, which calculates the cosine similarity between a tweet and a profile in a vector space model with...TFIDF weights of terms. The vector space model represents a document as a vector; tweets and profiles can thus be expressed as vectors, ~T = (t...gain(Tr_i) (13) where Tr is the returned tweet set and gain() is the score function for a tweet. Uninteresting and spam/junk tweets receive a gain of 0
Kiedron, Peter
2008-01-15
Once every minute between sunrise and sunset, the Rotating Shadowband Spectroradiometer (RSS) simultaneously measures three irradiances: total horizontal, diffuse horizontal, and direct normal in the near-ultraviolet, visible, and near-infrared range (approximately 370 nm-1050 nm) at 512 (RSS103) or 1024 (RSS102 and RSS105) adjacent spectral resolving elements (pixels). The resolution is pixel (wavelength) dependent and differs from instrument to instrument. The reported irradiances are cosine-response corrected, and their radiometric calibration is based on incandescent lamp calibrators traceable to the NIST irradiance scale. The units are W/m^2/nm.
NASA Astrophysics Data System (ADS)
Boyraz, Uǧur; Melek Kazezyılmaz-Alhan, Cevza
2017-04-01
Groundwater is a vital element of the hydrologic cycle, and the analytical and numerical solutions of different forms of the groundwater flow equation play an important role in understanding the hydrological behavior of subsurface water. The interaction between groundwater and surface water bodies can be determined using these solutions. In this study, new hypothetical approaches are implemented in a groundwater flow system in order to contribute to the studies on surface water/groundwater interactions. A time-dependent problem is considered in a 2-dimensional stream-wetland-aquifer system. A sloped stream boundary is used to represent the interaction between stream and aquifer. The rest of the aquifer boundaries are assumed to be no-flux boundaries. In addition, a wetland is considered as a surface water body which lies over the whole aquifer. The effect of the interaction between the wetland and the aquifer is taken into account with a source/sink term in the groundwater flow equation, and the interaction flow is calculated using Darcy's approach. A semi-analytical solution is developed for the 2-dimensional groundwater flow equation in five steps. First, Laplace and Fourier cosine transforms are employed to obtain the general solution in the Fourier and Laplace domains. Then, the initial and boundary conditions are applied to obtain the particular solution. Finally, the inverse Fourier transform is carried out analytically and the inverse Laplace transform is carried out numerically to obtain the final solution in the space and time domains, respectively. In order to verify the semi-analytical solution, an explicit finite difference algorithm is developed, and the analytical and numerical solutions are compared for synthetic examples. The comparison shows that the analytical solution gives accurate results.
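The abstract's final step, a numerical inverse Laplace transform, leaves the inversion scheme unspecified. As an illustration only (the authors' actual algorithm is not named here), the widely used Gaver-Stehfest method can be sketched as:

```python
import math

def stehfest_coeffs(n):
    # Gaver-Stehfest weights V_k for an even number of terms n
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)) / (
                math.factorial(n // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + n // 2) * s)
    return V

def invert_laplace(F, t, n=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)
    ln2 = math.log(2.0)
    V = stehfest_coeffs(n)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, n + 1))

# sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

The method evaluates the transform only at real s and works well for smooth, non-oscillatory time behavior such as drawdown curves; it is one common choice, not necessarily the one used in the study.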
Boyack, Kevin W.; Newman, David; Duhon, Russell J.; Klavans, Richard; Patek, Michael; Biberstine, Joseph R.; Schijvenaars, Bob; Skupin, André; Ma, Nianli; Börner, Katy
2011-01-01
Background We investigate the accuracy of different similarity approaches for clustering over two million biomedical documents. Clustering large sets of text documents is important for a variety of information needs and applications, such as collection management and navigation, summarization, and analysis. The few comparisons of clustering results from different similarity approaches have focused on small literature sets and have given conflicting results. Our study was designed to seek a robust answer to the question of which similarity approach would generate the most coherent clusters of a biomedical literature set of over two million documents. Methodology We used a corpus of 2.15 million recent (2004-2008) records from MEDLINE, and generated nine different document-document similarity matrices from information extracted from their bibliographic records, including titles, abstracts and subject headings. The nine approaches comprised five different analytical techniques with two data sources. The five analytical techniques are cosine similarity using term frequency-inverse document frequency vectors (tf-idf cosine), latent semantic analysis (LSA), topic modeling, and two Poisson-based language models – BM25 and PMRA (PubMed Related Articles). The two data sources were a) MeSH subject headings, and b) words from titles and abstracts. Each similarity matrix was filtered to keep the top-n highest similarities per document and then clustered using a combination of graph layout and average-link clustering. Cluster results from the nine similarity approaches were compared using (1) within-cluster textual coherence based on the Jensen-Shannon divergence, and (2) two concentration measures based on grant-to-article linkages indexed in MEDLINE.
Conclusions PubMed's own related article approach (PMRA) generated the most coherent and most concentrated cluster solution of the nine text-based similarity approaches tested, followed closely by the BM25 approach using titles and abstracts. Approaches using only MeSH subject headings were not competitive with those based on titles and abstracts. PMID:21437291
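The tf-idf cosine technique listed among the five can be illustrated with a toy example; the tokenized documents and the smoothed idf variant below are illustrative choices, not the study's MEDLINE pipeline:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build tf-idf vectors (idf = ln(N/df) + 1) for tokenized documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps ubiquitous terms
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    # cosine of the angle between two sparse term-weight vectors
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["gene expression in tumor cells".split(),
        "tumor gene expression profiling".split(),
        "groundwater flow equation solution".split()]
vecs = tfidf_vectors(docs)
sim01 = cosine(vecs[0], vecs[1])   # related abstracts: high similarity
sim02 = cosine(vecs[0], vecs[2])   # no shared terms: similarity 0
```

Keeping only the top-n values of each row of the resulting similarity matrix, as the study does, turns it into the sparse graph that is then clustered.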
Safdari, Reza; Maserat, Elham; Asadzadeh Aghdaei, Hamid; Javan Amoli, Amir Hossein; Mohaghegh Shalmani, Hamid
2017-01-01
To assess person-centered survival rates in a population-based screening program using an intelligent clinical decision support system. Colorectal cancer is the most common malignancy and a major cause of morbidity and mortality throughout the world; it is the sixth leading cause of cancer death in Iran. In this survey, we used cosine similarity as a data mining technique in an intelligent system for estimating the survival of at-risk groups in the screening plan. In the first step, we determined a minimum data set (MDS). The MDS was approved by experts and a review of the literature. In the second step, the MDS was coded in Python and matched with the cosine similarity formula. Finally, the survival rate, in percent, was displayed in the user interface of the national intelligent system. The national intelligent system was designed in the PyCharm environment. The main data elements of the intelligent system consist of demographic information, age, referral type, risk group, recommendation, and survival rate. The minimum data set related to survival comprises clinical status, past medical history, and socio-demographic information. Information on the covered population, as a comprehensive database, was connected to the intelligent system, and a survival rate was estimated for each patient. The mean survival of HNPCC patients and FAP patients was 77.7% and 75.1%, respectively. Also, the mean survival rate and other calculations change in real time as new patients enter the CRC registry. The national intelligent system monitors the entire risk group and reports survival rates using electronic guidelines and data mining techniques, and also operates according to the clinical process. This web-based software has a critical role in estimating survival rates for health care planning.
The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sembiring, C. A.; Budiman, M. A.
2018-03-01
The rapidly increasing number of web pages and documents calls for topic-specific filtering in order to find web pages or documents efficiently. This is a preliminary study that uses cosine similarity to compute text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text preprocessing: the punctuation in a document is removed, the document is converted to lower case, stop words are removed, and root words are extracted using the Porter stemming algorithm. The second part is keyword weighting, which is used by the third part, the text relevance calculation. The text relevance calculation yields a value between 0 and 1; the closer the value is to 1, the more related the two documents are, and vice versa.
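A minimal sketch of the pipeline the abstract describes — punctuation removal, lower-casing, stop-word removal, then a cosine-based relevance in [0, 1] — is shown below; the stop-word list is a toy stand-in and Porter stemming is omitted for brevity:

```python
import math
import string
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "and", "in", "on"}  # toy list

def preprocess(text):
    # remove punctuation, lower-case, drop stop words (stemming omitted here)
    text = text.translate(str.maketrans("", "", string.punctuation)).lower()
    return [w for w in text.split() if w not in STOP_WORDS]

def relevance(a, b):
    """Cosine similarity of term-count vectors; 1.0 means identical term profiles."""
    u, v = Counter(preprocess(a)), Counter(preprocess(b))
    dot = sum(c * v.get(t, 0) for t, c in u.items())
    denom = (math.sqrt(sum(c * c for c in u.values()))
             * math.sqrt(sum(c * c for c in v.values())))
    return dot / denom if denom else 0.0

r = relevance("The cat sat on the mat.", "A cat is on a mat!")
```

With the toy sentences above the two documents share two of three content terms, so the score lands well above 0.5 but below 1.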
NASA Astrophysics Data System (ADS)
Asten, Michael
2017-04-01
We study sea level variations over the past 300 yr in order to quantify what fraction of the variations may be considered cyclic, and what phase relations exist with respect to those cycles. The 64 yr cycle detected by Chambers et al (2012) is found in the 1960-2000 data set which Hamlington et al (2013) interpreted as an expression of the PDO; we show that a fitted 64 yr cycle gives a better fit, accounting for 92% of the variance. In a 300 yr GMSL tide gauge record, Jevrejeva et al (2008) identified a 60-65 yr cycle superimposed on an upward trend from 1800 CE. Using break points and removal of centennial trends identified by Kemp et al (2015), we produce a detrended GMSL record for 1700-2000 CE which emphasizes the 60-65 yr oscillations. A least-squares fit using a 64 yr period cosine yields an amplitude of 12 mm and an origin at year 1958.6, which accounts for 30% of the variance. A plot of the cosine against the entire length of the 300 yr detrended GMSL record shows a clear phase lock for the interval 1740 to 2000 CE, denoting either a very consistent timing of an internally generated natural variation, or adding to evidence for an external forcing of astronomical origin (Scafetta 2012, 2013). Barcikowska et al (2016) have identified a 65 yr cyclic variation in sea surface temperature in the first multidecadal component of Multi-Channel Singular Spectrum Analysis (MSSA) on the Hadley SST data set (RC60). A plot of RC60 versus our fitted cosine shows the phase shift to be 16 yr, close to a 90 degree phase lag of GMSL relative to RC60. This is the relation to be expected for a simple low-pass or integrating filter, which suggests that cyclic natural variations in sea-surface temperature drive similar variations in GMSL. We compare the extent of Arctic sea ice using the time interval 1979-2016 (the window of satellite imagery). The decrease in summer ice cover has been the subject of many predictions as to when summer ice will reach zero.
The plot of measured ice area can be fitted with many speculative curves, and we show three such best-fit curves: a parabola (zero ice cover by 2028), a linear fit (zero by 2060), and a 64 yr period cosine, where the cosine is a shape chosen as a hypothesis, given the relation we observe between SST natural variations and 260 years of detrended sea level data. The cosine best fit shows a maximum ice coverage in 1985.6 and a predicted minimum in 2017.6, which compares with the detrended sea level cyclic component minimum at 1990.6 and predicted maximum at 2023.6 CE. Thus the sea-ice retreat lags RC60 by about 10 yr, or 60 degrees in phase. The consistent phase of sea-level change over 260 yr, and the phase lags of sea-ice retreat and sea-level change relative to the natural 65 yr cyclic component of SST, have implications in the debate over internal versus external drivers of the cyclic components of change, and in hypotheses on the cause and effect of the non-anthropogenic components of change.
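A fixed-period least-squares cosine fit of the kind described (64 yr period, fitted mean, amplitude, and phase) can be sketched as follows; the data here are synthetic, not the GMSL record:

```python
import math

def fit_fixed_period_cosine(t, y, period):
    """Least-squares fit y ~ mean + A*cos(2*pi*(t - t0)/period).
    Relies on cos/sin orthogonality over an integer number of evenly
    sampled periods, so no matrix solve is needed."""
    n = len(t)
    w = 2.0 * math.pi / period
    mean = sum(y) / n
    a = 2.0 / n * sum((yi - mean) * math.cos(w * ti) for ti, yi in zip(t, y))
    b = 2.0 / n * sum((yi - mean) * math.sin(w * ti) for ti, yi in zip(t, y))
    amp = math.hypot(a, b)
    t0 = math.atan2(b, a) / w  # a time of cosine maximum (mod period)
    return mean, amp, t0

# synthetic "detrended sea level": 12 mm amplitude, 64 yr period, peak at 1958.6
ts = [1700 + i for i in range(320)]  # 320 yr = exactly 5 full periods
ys = [3.0 + 12.0 * math.cos(2 * math.pi * (ti - 1958.6) / 64.0) for ti in ts]
mean, amp, t0 = fit_fixed_period_cosine(ts, ys, 64.0)
```

For unevenly sampled or gappy records the orthogonality shortcut no longer holds and a general linear least-squares solve on the regressors [1, cos, sin] would be needed instead.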
Significant Figure Rules for General Arithmetic Functions.
ERIC Educational Resources Information Center
Graham, D. M.
1989-01-01
Provides some significant figure rules used in chemistry including the general theoretical basis; logarithms and antilogarithms; exponentiation (with exactly known exponents); sines and cosines; and the extreme value rule. (YP)
De Paëpe, Gaël; Lewandowski, Józef R; Griffin, Robert G
2008-03-28
We introduce a family of solid-state NMR pulse sequences that generalizes the concept of second averaging in the modulation frame and therefore provides a new approach to performing magic angle spinning dipolar recoupling experiments. Here, we focus on two particular recoupling mechanisms: cosine modulated rotary resonance (CMpRR) and cosine modulated recoupling with isotropic chemical shift reintroduction (COMICS). The first technique, CMpRR, is based on a cosine modulation of the rf phase and yields broadband double-quantum (DQ) ¹³C recoupling using a >70 kHz ω(1,C)/2π rf field for spinning frequencies ω(r)/2π = 10-30 kHz and ¹H Larmor frequencies ω(0,H)/2π up to 900 MHz. Importantly, for p ≥ 5, CMpRR recouples efficiently in the absence of ¹H decoupling. Extension to lower p values (3.5
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
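The "adjusted-cosine vector" is not defined in the abstract; one common reading — cosine similarity computed after mean-centring each descriptor, which discounts a constant intensity offset between spectral bands — can be sketched as:

```python
import math

def adjusted_cosine(u, v):
    """Cosine similarity after subtracting each vector's own mean,
    which makes the score insensitive to a constant offset."""
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    dot = sum(a * b for a, b in zip(du, dv))
    nu = math.sqrt(sum(a * a for a in du))
    nv = math.sqrt(sum(b * b for b in dv))
    return dot / (nu * nv) if nu and nv else 0.0

# a brightness offset between two descriptors barely changes the adjusted score
d1 = [0.2, 0.8, 0.5, 0.1]
d2 = [x + 0.3 for x in d1]  # same shape, shifted intensity
s = adjusted_cosine(d1, d2)
```

Here the two descriptors differ only by a constant shift, so the adjusted score is essentially 1, whereas a plain cosine would be pulled below 1 by the offset.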
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, becomes a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods applied to real-life problems. © 2016 The Author(s).
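The adaptive cosine estimator (ACE) mentioned above scores each pixel by the squared cosine between the pixel spectrum and the target signature in the background-whitened metric. A two-band sketch (the covariance and spectra are made-up numbers, not real plume data) is:

```python
def inv2(m):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def quad_form(u, sigma_inv, v):
    # u^T * Sigma^{-1} * v for 2-vectors
    return sum(u[i] * sigma_inv[i][j] * v[j] for i in range(2) for j in range(2))

def ace_score(x, s, sigma):
    """Adaptive cosine estimator: squared cosine between pixel x and target
    signature s in the metric of the inverse background covariance."""
    si = inv2(sigma)
    num = quad_form(s, si, x) ** 2
    den = quad_form(s, si, s) * quad_form(x, si, x)
    return num / den

sigma = [[0.5, 0.1], [0.1, 0.3]]     # background covariance (assumed known)
target = [1.0, 2.0]                  # plume spectral signature (illustrative)
hit = [2.0, 4.0]                     # scaled copy of the signature
miss = [2.0, -1.0]                   # spectrally unrelated pixel
score_hit = ace_score(hit, target, sigma)
score_miss = ace_score(miss, target, sigma)
```

By the Cauchy-Schwarz inequality the score lies in [0, 1], reaching 1 only when the pixel is a scalar multiple of the signature, which is why a threshold on the score acts as the classifier.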
Mapped grid methods for long-range molecules and cold collisions
NASA Astrophysics Data System (ADS)
Willner, K.; Dulieu, O.; Masnou-Seeuws, F.
2004-01-01
The paper discusses ways of improving the accuracy of numerical calculations for vibrational levels of diatomic molecules close to the dissociation limit or for ultracold collisions, in the framework of a grid representation. In order to avoid the implementation of very large grids, Kokoouline et al. [J. Chem. Phys. 110, 9865 (1999)] have proposed a mapping procedure through introduction of an adaptive coordinate x subjected to the variation of the local de Broglie wavelength as a function of the internuclear distance R. Some unphysical levels ("ghosts") then appear in the vibrational series computed via a mapped Fourier grid representation. In the present work the choice of the basis set is reexamined, and two alternative expansions are discussed: Sine functions and Hardy functions. It is shown that use of a basis set with fixed nodes at both grid ends is efficient to eliminate "ghost" solutions. It is further shown that the Hamiltonian matrix in the sine basis can be calculated very accurately by using an auxiliary basis of cosine functions, overcoming the problems arising from numerical calculation of the Jacobian J(x) of the R→x coordinate transformation.
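The fixed-node property that suppresses ghost solutions can be seen in a simple sine-series projection: every basis function sin(nπx/L) vanishes at both grid ends, so any truncated expansion does too. A quadrature-based sketch (illustrative only, not the paper's mapped-grid scheme):

```python
import math

def sine_coeffs(f, L, nmax, nquad=2000):
    """Project f onto sin(n*pi*x/L), n = 1..nmax; these basis functions
    vanish at x = 0 and x = L. Simple summation quadrature suffices here
    because the integrand vanishes at both endpoints."""
    h = L / nquad
    coeffs = []
    for n in range(1, nmax + 1):
        s = sum(f(i * h) * math.sin(n * math.pi * i * h / L)
                for i in range(nquad + 1))
        coeffs.append(2.0 / L * s * h)
    return coeffs

def eval_series(coeffs, x, L):
    # coeffs[0] corresponds to n = 1
    return sum(c * math.sin((n + 1) * math.pi * x / L)
               for n, c in enumerate(coeffs))

L = 1.0
f = lambda x: x * (L - x)          # smooth test function, zero at both ends
c = sine_coeffs(f, L, 15)
err = abs(eval_series(c, 0.3, L) - f(0.3))
```

Whatever the truncation order, the reconstructed function is pinned to zero at both grid ends, which is the behavior the paper exploits to eliminate "ghost" levels.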
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
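The transform underlying the full-frame bit-allocation technique is the standard 2-D DCT-II applied per block; a slow reference implementation (illustrative only, not the full-frame coder itself) is:

```python
import math

def dct2d(block):
    """Reference (O(N^4), orthonormal) 2-D DCT-II of an N x N block,
    the transform used in JPEG-style and full-frame DCT coding."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = []
    for u in range(n):
        row = []
        for v in range(n):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                    for i in range(n) for j in range(n))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out

flat = [[100.0] * 8 for _ in range(8)]   # a featureless 8x8 image block
coef = dct2d(flat)                       # all energy collapses into coef[0][0]
```

The energy compaction shown here (a flat block maps to a single DC coefficient) is what makes bit allocation in the DCT domain effective: most coefficients of smooth medical images are near zero and need few bits.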
NASA Astrophysics Data System (ADS)
Khaleghi, Morteza; Furlong, Cosme; Cheng, Jeffrey Tao; Rosowski, John J.
2014-07-01
The eardrum, or Tympanic Membrane (TM), transfers acoustic energy from the ear canal (at the external ear) into mechanical motions of the ossicles (at the middle ear). The acousto-mechanical-transformer behavior of the TM is determined by its shape and mechanical properties. For a better understanding of hearing mysteries, full-field-of-view techniques are required to quantify the shape, nanometer-scale sound-induced displacement, and mechanical properties of the TM in 3D. In this paper, full-field-of-view, three-dimensional shape and sound-induced displacement of the surface of the TM are obtained by the methods of multiple wavelengths and multiple sensitivity vectors with lensless digital holography. Using our digital holographic systems, unique 3D information such as shape (with micrometer resolution), 3D acoustically-induced displacement (with nanometer resolution), the full strain tensor (with nano-strain resolution), the 3D phase of motion, and the 3D directional cosines of the displacement vectors can be obtained in full field of view with a spatial resolution of about 3 million points on the surface of the TM and a temporal resolution of 15 Hz.
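The 3D directional cosines mentioned above are simply the components of the unit displacement vector, i.e. the cosines of the angles the displacement makes with the coordinate axes. A small sketch (the displacement components are made-up values):

```python
import math

def direction_cosines(v):
    """Direction cosines (l, m, n) of a 3-D displacement vector:
    the cosines of its angles with the x, y, and z axes."""
    mag = math.sqrt(sum(c * c for c in v))
    if mag == 0.0:
        raise ValueError("zero-length displacement has no direction")
    return tuple(c / mag for c in v)

# a ~200 nm displacement resolved along three axes (illustrative numbers)
l, m, n = direction_cosines((120e-9, 90e-9, 133.4e-9))
check = l * l + m * m + n * n   # identity: l^2 + m^2 + n^2 = 1
```

The identity l² + m² + n² = 1 holds for any nonzero vector, which makes it a convenient sanity check on reconstructed displacement fields.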
NASA Astrophysics Data System (ADS)
Huang, C.-S.; Yang, S.-Y.; Yeh, H.-D.
2015-06-01
An aquifer consisting of a skin zone and a formation zone is considered as a two-zone aquifer. Existing solutions for the problem of constant-flux pumping in a two-zone confined aquifer involve laborious calculation. This study develops a new approximate solution for the problem based on a mathematical model describing steady-state radial and vertical flows in a two-zone aquifer. Hydraulic parameters in these two zones can be different but are assumed homogeneous in each zone. A partially penetrating well may be treated as the Neumann condition with a known flux along the screened part and zero flux along the unscreened part. The aquifer domain is finite with an outer circle boundary treated as the Dirichlet condition. The steady-state drawdown solution of the model is derived by the finite Fourier cosine transform. Then, an approximate transient solution is developed by replacing the radius of the aquifer domain in the steady-state solution with an analytical expression for a dimensionless time-dependent radius of influence. The approximate solution is capable of predicting good temporal drawdown distributions over the whole pumping period except at the early stage. A quantitative criterion for the validity of neglecting the vertical flow due to a partially penetrating well is also provided. Conventional models considering radial flow without the vertical component for the constant-flux pumping have good accuracy if satisfying the criterion.
Absence of diurnal variation of C-reactive protein concentrations in healthy human subjects
NASA Technical Reports Server (NTRS)
Meier-Ewert, H. K.; Ridker, P. M.; Rifai, N.; Price, N.; Dinges, D. F.; Mullington, J. M.
2001-01-01
BACKGROUND: The concentration of C-reactive protein (CRP) in otherwise healthy subjects has been shown to predict future risk of myocardial infarction and stroke. CRP is synthesized by the liver in response to interleukin-6, the serum concentration of which is subject to diurnal variation. METHODS: To examine the existence of a time-of-day effect for baseline CRP values, we determined CRP concentrations in hourly blood samples drawn from healthy subjects (10 males, 3 females; age range, 21-35 years) during a baseline day in a controlled environment (8 h of nighttime sleep). RESULTS: Overall CRP concentrations were low, with only three subjects having CRP concentrations >2 mg/L. Comparison of raw data showed stability of CRP concentrations throughout the 24 h studied. When compared with cutoff values of CRP quintile derived from population-based studies, misclassification of greater than one quintile did not occur as a result of diurnal variation in any of the subjects studied. Nonparametric ANOVA comparing different time points showed no significant differences for both raw and z-transformed data. Analysis for rhythmic diurnal variation using a method fitting a cosine curve to the group data was negative. CONCLUSIONS: Our data show that baseline CRP concentrations are not subject to time-of-day variation and thus help to explain why CRP concentrations are a better predictor of vascular risk than interleukin-6. Determination of CRP for cardiovascular risk prediction may be performed without concern for diurnal variation.
Improved detection of DNA-binding proteins via compression technology on PSSM information.
Wang, Yubo; Ding, Yijie; Guo, Fei; Wei, Leyi; Tang, Jijun
2017-01-01
Since the importance of DNA-binding proteins in multiple biomolecular functions was recognized, an increasing number of researchers have attempted to identify DNA-binding proteins. In recent years, as protein sequence data have soared, machine learning methods have become more and more compelling because of their favorable speed and accuracy. In this paper, we extract three features from the protein sequence, namely NMBAC (Normalized Moreau-Broto Autocorrelation), PSSM-DWT (Position-Specific Scoring Matrix-Discrete Wavelet Transform), and PSSM-DCT (Position-Specific Scoring Matrix-Discrete Cosine Transform). We also employ a feature selection algorithm on these feature vectors. Then, these features are fed into an SVM (support vector machine) classifier trained to predict DNA-binding proteins. Our method uses three datasets, namely PDB1075, PDB594 and PDB186, to evaluate the performance of our approach. The PDB1075 and PDB594 datasets are employed for the jackknife test and the PDB186 dataset is used for the independent test. Our method achieves the best accuracy in the jackknife test, from 79.20% to 86.23% and 80.5% to 86.20% on the PDB1075 and PDB594 datasets, respectively. In the independent test, the accuracy of our method comes to 76.3%. The performance on the independent test also shows that our method can be effectively used for DNA-binding protein prediction. The data and source code are at https://doi.org/10.6084/m9.figshare.5104084.
M-estimator for the 3D symmetric Helmert coordinate transformation
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2018-01-01
The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameters are rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with the simulated data slightly deviating from the true model are used to show the developed method's statistical efficiency under the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
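The iteratively reweighted least-squares (IRLS) scheme can be illustrated on the simplest possible M-estimation problem, a robust location estimate with Huber weights; this is a generic sketch, not the paper's 3D transformation estimator:

```python
def huber_weight(r, k=1.345):
    """Huber weight: 1 for small scaled residuals, k/|r| beyond the cutoff."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def irls_location(data, iters=50):
    """Robust location M-estimate via IRLS: reweight each observation by
    its previous residual, then solve the weighted least-squares problem."""
    mu = sum(data) / len(data)          # ordinary least-squares start
    for _ in range(iters):
        res = [x - mu for x in data]
        # robust scale (median absolute deviation) before weighting
        mad = sorted(abs(r) for r in res)[len(res) // 2] or 1.0
        w = [huber_weight(r / (1.4826 * mad)) for r in res]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

clean = [9.9, 10.1, 10.0, 9.8, 10.2, 10.05, 9.95]
contaminated = clean + [55.0]           # one gross outlier
mu_ls = sum(contaminated) / len(contaminated)   # dragged toward the outlier
mu_robust = irls_location(contaminated)         # stays near 10
```

In the paper's setting, the residuals come from the linearized measurement equation and the weights fill the reweighting matrix, but the reweight-then-solve loop is the same.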
Non-Darcian flow to a partially penetrating well in a confined aquifer with a finite-thickness skin
NASA Astrophysics Data System (ADS)
Feng, Qinggao; Wen, Zhang
2016-08-01
Non-Darcian flow to a partially penetrating well in a confined aquifer with a finite-thickness skin was investigated. The Izbash equation is used to describe the non-Darcian flow in the horizontal direction, and the vertical flow is described as Darcian. The solution for the newly developed non-Darcian flow model can be obtained by applying the linearization procedure in conjunction with the Laplace transform and the finite Fourier cosine transform. The flow model combines the effects of the non-Darcian flow, partial penetration of the well, and the finite thickness of the well skin. The results show that the depression cone spread is larger for the Darcian flow than for the non-Darcian flow. The drawdowns within the skin zone for a fully penetrating well are smaller than those for the partially penetrating well. The skin type and skin thickness have great impact on the drawdown in the skin zone, while they have little influence on drawdown in the formation zone. The sensitivity analysis indicates that the drawdown in the formation zone is sensitive to the power index (n), the length of well screen (w), the apparent radial hydraulic conductivity of the formation zone (K_r2), and the specific storage of the formation zone (S_s2) at early times, and it is very sensitive to the parameters n, w and K_r2 at late times, especially to n, while it is not sensitive to the skin thickness (r_s).
Convective flows of generalized time-nonlocal nanofluids through a vertical rectangular channel
NASA Astrophysics Data System (ADS)
Ahmed, Najma; Vieru, Dumitru; Fetecau, Constantin; Shah, Nehad Ali
2018-05-01
A time-nonlocal generalized model of natural convection heat transfer and nanofluid flow through a rectangular vertical channel with wall conditions of the Robin type is studied. The generalized mathematical model with time-nonlocality is developed by considering fractional constitutive equations for the shear stress and thermal flux, defined with the time-fractional Caputo derivative. The Caputo power-law nonlocal kernel provides damping of the velocity and temperature gradients; therefore, transport processes are influenced by the histories at all past and present times. Analytical solutions for the dimensionless velocity and temperature fields are obtained by using the Laplace transform coupled with the finite sine-cosine Fourier transform, which is suitable for problems with boundary conditions of the Robin type. By particularizing the fractional thermal and velocity parameters, solutions for three simplified models are obtained (classical linear momentum equation with damped thermal flux; fractional shear stress constitutive equation with classical Fourier's law for thermal flux; classical shear stress and thermal flux constitutive equations). It is found that the thermal histories strongly influence the thermal transport for small values of time t. Also, the thermal transport can be enhanced if the thermal fractional parameter decreases or by increasing the nanoparticles' volume fraction. The velocity field is influenced on the one hand by the temperature of the fluid and on the other by the damping of the velocity gradient introduced by the fractional derivative. Also, the transport motions of the channel walls influence the motion of the fluid layers located near them.
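For reference, the time-fractional Caputo derivative used in the constitutive equations is, in its standard form for order 0 < α < 1:

```latex
{}^{C}\!D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha}\, f'(\tau)\, d\tau,
  \qquad 0 < \alpha < 1 .
```

The power-law kernel (t − τ)^(−α) weights the entire history of f′, which is the "time-nonlocality" the abstract refers to; as α → 1 the classical first derivative is recovered.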
Alternating-gradient canted cosine theta superconducting magnets for future compact proton gantries
Wan, Weishi; Brouwer, Lucas; Caspi, Shlomo; ...
2015-10-23
We present a design of superconducting magnets, optimized for application in a gantry for proton therapy. We have introduced a new magnet design concept, called an alternating-gradient canted cosine theta (AG-CCT) concept, which is compatible with an achromatic layout. This layout allows a large momentum acceptance. The 15 cm radius of the bore aperture enables the application of pencil beam scanning in front of the SC-magnet. The optical and dynamic performance of a gantry based on these magnets has been analyzed using the fields derived (via Biot-Savart law) from the actual windings of the AG-CCT combined with the full equations of motion. The results show that with appropriate higher order correction, a large 3D volume can be rapidly scanned with little beam shape distortion. A very big advantage is that all this can be done while keeping the AG-CCT fields fixed. This reduces the need for fast field ramping of the superconducting magnets between the successive beam energies used for the scanning in depth and it is important for medical application since this reduces the technical risk (e.g., a quench) associated with fast field changes in superconducting magnets. For proton gantries the corresponding superconducting magnet system holds promise of dramatic reduction in weight. For heavier ion gantries there may furthermore be a significant reduction in size.
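The "cosine theta" name refers to the idealized winding in which the axial surface current around the bore varies as cos θ; in the textbook limit of an infinite cylindrical current sheet, this distribution produces a perfectly uniform transverse dipole field inside the bore:

```latex
K_z(\theta) = K_0 \cos\theta
  \quad\Longrightarrow\quad
  \bigl|\mathbf{B}_{\mathrm{inside}}\bigr| = \frac{\mu_0 K_0}{2}
  \quad \text{(uniform, transverse to the bore axis)} .
```

The actual AG-CCT windings are tilted (canted) and alternate in gradient, so their field must be computed from the real conductor paths via the Biot-Savart law as the abstract states; the relation above is only the idealized limit that motivates the name.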
Improvement of Risk Assessment from Space Radiation Exposure for Future Space Exploration Missions
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Atwell, Bill; Ponomarev, Artem L.; Nounu, Hatem; Hussein, Hesham; Cucinotta, Francis A.
2007-01-01
Protecting astronauts from space radiation exposure is an important challenge for mission design and operations for future exploration-class and long-duration missions. Crew members are exposed to sporadic solar particle events (SPEs) as well as to the continuous galactic cosmic radiation (GCR). If sufficient protection is not provided, the radiation risk to crew members from SPEs could be significant. To improve exposure risk estimates and radiation protection from SPEs, detailed variations of radiation shielding properties are required. A model using the modern CAD tool Pro/E (TM), which is the leading engineering design platform at NASA, has been developed for this purpose. For the calculation of radiation exposure at a specific site, a cosine distribution was implemented to replicate the omnidirectional characteristic of the 4π particle flux on a surface. Previously, estimates of doses to the blood-forming organs (BFO) from SPEs were made using an average body-shielding distribution for the bone marrow based on the computerized anatomical man (CAM) model. The development of an 82-point body-shielding distribution at the BFO made it possible to estimate the mean and variance of SPE doses in the major active marrow regions. Using the detailed distribution of bone marrow sites and the implementation of a cosine distribution of particle flux is shown to provide improved estimates of acute and cancer risks from SPEs.
Alternating-gradient canted cosine theta superconducting magnets for future compact proton gantries
NASA Astrophysics Data System (ADS)
Wan, Weishi; Brouwer, Lucas; Caspi, Shlomo; Prestemon, Soren; Gerbershagen, Alexander; Schippers, Jacobus Maarten; Robin, David
2015-10-01
We present a design of superconducting magnets optimized for application in a gantry for proton therapy. We have introduced a new magnet design concept, called the alternating-gradient canted cosine theta (AG-CCT) concept, which is compatible with an achromatic layout. This layout allows a large momentum acceptance. The 15 cm radius of the bore aperture enables the application of pencil beam scanning in front of the SC magnet. The optical and dynamic performance of a gantry based on these magnets has been analyzed using the fields derived (via the Biot-Savart law) from the actual windings of the AG-CCT combined with the full equations of motion. The results show that with appropriate higher-order correction, a large 3D volume can be rapidly scanned with little beam shape distortion. A major advantage is that all of this can be done while keeping the AG-CCT fields fixed. This reduces the need for fast field ramping of the superconducting magnets between the successive beam energies used for scanning in depth, which is important for medical applications since it lowers the technical risk (e.g., a quench) associated with fast field changes in superconducting magnets. For proton gantries the corresponding superconducting magnet system holds promise of a dramatic reduction in weight. For heavier-ion gantries there may furthermore be a significant reduction in size.
Method and apparatus for frequency spectrum analysis
NASA Technical Reports Server (NTRS)
Cole, Steven W. (Inventor)
1992-01-01
A method for real-time frequency spectrum analysis of an unknown signal is discussed. The method is based upon integration of 1-bit samples of signal voltage amplitude corresponding to sine or cosine phases of a controlled center-frequency clock, which is stepped after each integration interval to sweep the frequency range of interest. Integration of samples during each interval is carried out over a number of cycles of the center-frequency clock spanning a number of cycles of the input signal to be analyzed. The invention may be used to detect the frequencies of at least two signals simultaneously. To do so, a reference signal of known frequency and voltage amplitude is added to the two signals and processed in parallel in the same way, but in a different channel sampled at the known frequency and phases of the reference signal. Squaring the sine and cosine integrals of each channel and summing the squares yields relative power measurements in all three channels; the absolute voltage amplitude of each of the other two signals is then obtained by multiplying the known voltage of the reference signal by the ratio of that signal's relative power to the relative power of the reference signal.
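The integrate-and-sweep idea can be sketched in Python (a simplified software sketch, not the patented hardware: the sample rate, frequency grid, and test tone are hypothetical, and the reference-channel calibration step is omitted):

```python
import math

def one_bit_quadrature_power(samples, fs, f):
    """Correlate 1-bit samples of the signal against the signs of the sine and
    cosine phases of a candidate center frequency f, then square and sum the
    two integrals to obtain a relative power measurement."""
    i_sum = q_sum = 0
    for n, x in enumerate(samples):
        t = n / fs
        bit = 1 if x >= 0 else -1  # 1-bit sample of the signal voltage
        i_sum += bit * (1 if math.sin(2 * math.pi * f * t) >= 0 else -1)
        q_sum += bit * (1 if math.cos(2 * math.pi * f * t) >= 0 else -1)
    return i_sum ** 2 + q_sum ** 2

# Hypothetical 60 Hz tone, swept over a coarse grid of candidate frequencies.
fs, f0 = 1000, 60
samples = [math.sin(2 * math.pi * f0 * n / fs + 0.7) for n in range(fs)]
powers = {f: one_bit_quadrature_power(samples, fs, f) for f in (40, 50, 60, 70, 80)}
detected = max(powers, key=powers.get)  # candidate with the largest relative power
```

Because both the signal and the reference phases are reduced to signs, each integration step is a single add or subtract, which is what makes the hardware implementation cheap.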
Diffraction of cosine-Gaussian-correlated Schell-model beams.
Pan, Liuzhan; Ding, Chaoliang; Wang, Haixia
2014-05-19
The expression for the spectral density of cosine-Gaussian-correlated Schell-model (CGSM) beams diffracted by an aperture is derived and used to study the changes in the spectral density distribution of CGSM beams upon propagation, with emphasis on the effect of aperture diffraction. It is shown that, compared with that of GSM beams, the spectral density distribution of CGSM beams diffracted by an aperture has a dip and shows a dark-hollow intensity distribution when the order parameter n is sufficiently large. The central intensity increases with increasing truncation parameter of the aperture. A comparative study of the spectral density distributions of CGSM beams with and without an aperture is performed. Furthermore, the effects of the order parameter n and the spatial coherence of CGSM beams on the spectral density distribution are discussed in detail. The results obtained may be useful in optical particle manipulation.
Butler, Samuel D; Nauyoks, Stephen E; Marciniak, Michael A
2015-06-01
Of the many classes of bidirectional reflectance distribution function (BRDF) models, two popular classes are the microfacet model and the linear systems diffraction model. The microfacet model has the benefit of speed and simplicity, as it uses geometric optics approximations, while linear systems theory uses a diffraction approach to compute the BRDF, at the expense of greater computational complexity. In this Letter, nongrazing BRDF measurements of rough and polished surface-reflecting materials at multiple incident angles are scaled by the microfacet cross-section conversion term, but in the linear systems direction cosine space, resulting in close alignment of the BRDF data at various incident angles in this space. This yields a predictive BRDF model for surface-reflecting materials at nongrazing angles, while avoiding some of the computational complexities of the linear systems diffraction model.
Arduino-based experiment demonstrating Malus’s law
NASA Astrophysics Data System (ADS)
Freitas, W. P. S.; Cena, C. R.; Alves, D. C. B.; Goncalves, A. M. B.
2018-05-01
Malus’s law states that the intensity of light after passing through two polarizers is proportional to the square of the cosine of the angle between the polarizers. We present a simple setup demonstrating this law. The novelty of our work is that we use a multi-turn potentiometer mechanically linked to one of the polarizers to measure the polarizer’s rotation angle while keeping the other polarizer fixed. Both the potentiometer and light sensor used to measure the transmitted light intensity are connected to an Arduino board so that the intensity of light is measured as a function of the rotation angle.
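Malus's law itself is a one-line computation; a minimal sketch (the intensities and angles below are hypothetical, and the sensor calibration of the Arduino setup is not modeled):

```python
import math

def malus_intensity(i0, theta_deg):
    """Transmitted intensity after the second polarizer: I = I0 * cos^2(theta)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Parallel polarizers pass everything, crossed polarizers block everything,
# and 60 degrees passes a quarter of the light (cos 60 = 0.5, squared = 0.25).
assert malus_intensity(100.0, 0) == 100.0
assert abs(malus_intensity(100.0, 90)) < 1e-9
assert abs(malus_intensity(100.0, 60) - 25.0) < 1e-9
```

In the experiment described above, the potentiometer reading plays the role of `theta_deg` and the light-sensor reading plays the role of the measured intensity to be compared against this curve.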
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Transmission intensity disturbance in a rotating polarizer
NASA Astrophysics Data System (ADS)
Fan, J. Y.; Li, H. X.; Wu, F. Q.
2008-01-01
Random disturbances were observed in the transmission intensity of various rotating prism polarizers used in optical systems. As a result, the transmitted intensity exhibited significant cyclic deviation from the Malus cosine-squared law as the prisms rotated. The disturbance spoils the quality of the light transmitted through the polarizer and thus dramatically depresses measurement accuracy when prism polarizers are used in the light path. A rigorous model based on multi-beam interference is presented; theoretical results show good agreement with measured values and indicate an effective method for reducing the disturbance.
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
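The core geometric quantity, the cosine of the angle between the true motion and the image gradient, can be sketched pointwise as follows (an illustration only; the paper's integral over a subregion and the resulting correction factor are not reproduced, and the vectors are hypothetical):

```python
import math

def cos_angle(motion, gradient):
    """Cosine of the angle between a 2D motion vector and an image-gradient
    vector. Near zero, the motion lies along an intensity isoline and is
    nearly invisible to matching (the ill-posed direction); near one, the
    gradient fully constrains the motion."""
    dot = sum(m * g for m, g in zip(motion, gradient))
    norm = math.hypot(*motion) * math.hypot(*gradient)
    return dot / norm

assert abs(cos_angle((1.0, 0.0), (3.0, 0.0)) - 1.0) < 1e-12  # motion along gradient
assert abs(cos_angle((1.0, 0.0), (0.0, 2.0))) < 1e-12        # aperture-problem direction
```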
NASA Astrophysics Data System (ADS)
Li, Dong-xia; Ye, Qian-wen
An out-of-band radiation suppression algorithm must be applied efficiently in a broadband aeronautical communication system so as not to interfere with the operation of existing systems in the aviation L-band. After a brief introduction of the broadband aeronautical multi-carrier communication (B-AMC) system model, several sidelobe suppression techniques for orthogonal frequency-division multiplexing (OFDM) systems are presented and analyzed in this paper in order to find a suitable algorithm for the B-AMC system. Simulation results show that raised-cosine windowing can effectively suppress the out-of-band radiation of the B-AMC system.
On a PCA-based lung motion model
NASA Astrophysics Data System (ADS)
Li, Ruijiang; Lewis, John H.; Jia, Xun; Zhao, Tianyu; Liu, Weifeng; Wuenschel, Sara; Lamb, James; Yang, Deshan; Low, Daniel A.; Jiang, Steve B.
2011-09-01
Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772-81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and that a sparse subset of the entire lung, such as an implanted marker, can then be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and to find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address these important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors, respectively, will completely represent the lung motion. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921-9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model.
The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis and two PCA coefficients seem to be sufficient for accurate modeling of the lung motion for most patients. In conclusion, we have presented thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion using a single marker has also been demonstrated on clinical data using a simulation approach.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors, namely a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches, to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) leads to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
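The hard-versus-soft decoding distinction can be sketched with a single DCT coefficient (the quantization step and coefficient value are hypothetical, and none of the paper's priors are reproduced):

```python
def quantize(coeff, q):
    """JPEG-style quantization: only the bin index is stored."""
    return round(coeff / q)

def hard_decode(index, q):
    """Hard decoding reconstructs at the bin center."""
    return index * q

def bin_interval(index, q):
    """Soft decoding may pick any value inside the indexed bin,
    guided by signal priors."""
    return ((index - 0.5) * q, (index + 0.5) * q)

q, coeff = 16, 53.0
idx = quantize(coeff, q)      # bin index stored in the bitstream
center = hard_decode(idx, q)  # the only choice available to hard decoding
lo, hi = bin_interval(idx, q) # the search interval available to soft decoding
assert lo <= coeff <= hi      # the true coefficient always lies in its bin
```

The point of the example is that quantization loses only the position within the bin; soft decoding uses priors to pick a better position than the center.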
Wide-band doubler and sine wave quadrature generator
NASA Technical Reports Server (NTRS)
Crow, R. B.
1969-01-01
Phase-locked loop with photoresistive control, which provides both sine and cosine outputs for subcarrier demodulation, serves as a telemetry demodulator signal conditioner with a second harmonic signal for synchronization with the locally generated code.
Ju, Chunhua; Xu, Chonghuan
2013-01-01
Although there are many good collaborative recommendation methods, it is still a challenge to increase their accuracy and diversity so as to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on the K-means clustering algorithm. In the clustering process, we use the artificial bee colony (ABC) algorithm to overcome the local-optimum problem caused by K-means. After that, we adopt a modified cosine similarity to compute the similarity between users in the same cluster. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on the benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on user clustering outperforms many other recommendation methods.
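The similarity step can be sketched as follows. The abstract does not spell out the authors' exact "modified cosine" formula, so this assumes the common mean-centered (adjusted cosine) variant, with hypothetical rating vectors:

```python
import math

def adjusted_cosine(u, v):
    """Cosine similarity of two rating vectors after subtracting each user's
    mean rating (one common 'modified cosine' variant), which removes
    per-user rating-scale bias."""
    cu = [x - sum(u) / len(u) for x in u]
    cv = [x - sum(v) / len(v) for x in v]
    dot = sum(a * b for a, b in zip(cu, cv))
    norm = math.sqrt(sum(a * a for a in cu)) * math.sqrt(sum(b * b for b in cv))
    return dot / norm

# Two users with the same rating trend are maximally similar, even though
# one rates everything a point lower; opposite trends give a negative score.
assert abs(adjusted_cosine([5, 3, 4], [4, 2, 3]) - 1.0) < 1e-12
assert adjusted_cosine([5, 3, 4], [1, 5, 2]) < 0
```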
NASA Technical Reports Server (NTRS)
Holland, A. C.; Thomas, R. W. L.; Pearce, W. A.
1978-01-01
The paper presents the results of a Monte Carlo simulation study of the brightness and polarization at right angles to the solar direction both for ground-based observations (looking up) and for satellite-based systems (looking down). Calculations have been made for a solar zenith angle whose cosine was 0.6 and wavelengths ranging from 3500 A to 9500 A. A sensitivity of signatures to total aerosol loading, aerosol particle size distribution and refractive index, and the surface reflectance albedo has been demonstrated. For Lambertian-type surface reflection the albedo effects enter solely through the intensity sensitivity, and very high correlations have been found between the polarization term signatures for the ground-based and satellite-based systems. Potential applications of these results for local albedo predictions and satellite imaging systems recalibrations are discussed.
A new and inexpensive pyranometer for the visible spectral range.
Martínez, Miguel A; Andújar, José M; Enrique, Juan M
2009-01-01
This paper presents the design, construction and testing of a new photodiode-based pyranometer for the visible spectral range (approx. 400 to 750 nm), whose principal characteristics are: accuracy, ease of connection, immunity to noise, remote programming and operation, interior temperature regulation, cosine error minimisation and all this at a very low cost, tens of times lower than that of commercial thermopile-based devices. This new photodiode-based pyranometer overcomes traditional problems in this type of device and offers similar characteristics to those of thermopile-based pyranometers and, therefore, can be used in any installation where reliable measurement of solar irradiance is necessary, especially in those where cost is a deciding factor in the choice of a meter. This new pyranometer has been registered in the Spanish Patent and Trademark Office under the number P200703162.
Fast-response cup anemometer features cosine response
NASA Technical Reports Server (NTRS)
Frenzen, P.
1968-01-01
Six-cup, low-inertia anemometer combines high resolution and fast response with a unique ability to sense only the horizontal component of the winds fluctuating rapidly in three dimensions. Cup assemblies are fabricated of expanded polystyrene plastic.
NASA Astrophysics Data System (ADS)
Chebaane, Saleh; Fathallah, Habib; Seleem, Hussein; Machhout, Mohsen
2018-02-01
Dispersion management in few-mode fiber (FMF) technology is crucial to support the upcoming standards that reach 400 Gbps and Terabit/s per wavelength. Recently, in Chebaane et al. (2016), we defined two potential differential mode delay (DMD) management strategies, namely sawtooth and triangular. Moreover, we proposed a novel parametric refractive index profile for FMF, referred to as the raised cosine (RC) profile. In this article, we improve and optimize the RC profile design by including additional shaping parameters, in order to obtain much more attractive dispersion characteristics. Our improved design enabled us to obtain zero-DMD (z-DMD), strongly positive DMD (p-DMD), and near-zero DMD (nz-DMD) designs for six-mode fiber, all appropriate for dispersion management in FMF systems. In addition, we propose positive-DMD (p-DMD) fiber designs for both four-mode fiber (4-FMF) and six-mode fiber (6-FMF), each having particularly attractive dispersion characteristics.
Leaf movement in Calathea lutea (Marantaceae).
Herbert, Thomas J; Larsen, Parry B
1985-09-01
Calathea lutea is a broad-leaved, secondary successional plant which shows complex leaf movements involving both elevation and folding of the leaf surface about the pulvinus. In the plants studied, mean leaf elevation increased from approximately 34 degrees in the early morning to 70 degrees at noon while the angle of leaf folding increased from 13 degrees to 50 degrees over the same time period. During the period from early morning to noon, these movements resulted in a significant decrease in the cosine of the angle of incidence, a measure of the direct solar radiation intercepted. The observed changes in elevational angle significantly reduce the cosine of angle of incidence while folding does not significantly reduce the fraction of direct solar radiation intercepted during the period of direct exposure of the leaf surface to the solar beam. Since elevational changes seem to account for the reduction in exposure to direct solar radiation, the role of folding remains unclear.
Exploring methods to expedite the recording of CEST datasets using selective pulse excitation
NASA Astrophysics Data System (ADS)
Yuwen, Tairan; Bouvignies, Guillaume; Kay, Lewis E.
2018-07-01
Chemical Exchange Saturation Transfer (CEST) has emerged as a powerful tool for studies of biomolecular conformational exchange involving the interconversion between a major, visible conformer and one or more minor, invisible states. Applications typically entail recording a large number of 2D datasets, each of which differs in the position of a weak radio frequency field, so as to generate a CEST profile for each nucleus from which the chemical shifts of spins in the invisible state(s) are obtained. Here we compare a number of band-selective CEST schemes for speeding up the process using either DANTE or cosine-modulated excitation approaches. We show that while both are essentially identical for applications such as 15N CEST, in cases where the probed spins are dipolar or scalar coupled to other like spins there can be advantages for the cosine-excitation scheme.
NASA Astrophysics Data System (ADS)
De Paëpe, Gaël; Eléna, Bénédicte; Emsley, Lyndon
2004-08-01
The work presented here aims at understanding the performance of phase-modulated heteronuclear decoupling sequences such as Cosine Modulation or Two Pulse Phase Modulation. To that end, we provide an analytical description of the intrinsic behavior of Cosine Modulation decoupling with respect to radio-frequency inhomogeneity and the proton-proton dipolar coupling network. We discover through a Modulation Frame average Hamiltonian analysis that the best decoupling is obtained under conditions where the heteronuclear interactions are removed but, notably, where homonuclear couplings are recoupled at a homonuclear rotary resonance (HORROR) condition in the Modulation Frame. These conclusions are supported by extensive experimental investigations, notably through the introduction of proton nutation experiments to characterize spin dynamics in solids under decoupling conditions. The theoretical framework presented in this paper allows the prediction of the optimum parameters for a given set of experimental conditions.
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
As technology has developed, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query and their positions obviously affect the relevance of the documents produced; as a result, the accuracy of the information obtained is affected. Based on the problem outlined, the purpose of this research was to analyze the implementation of the common phrase index method in information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term-weighting calculation, and cosine similarity calculation. The system then displays the document search results in a sequence based on cosine similarity. Furthermore, system testing was conducted using 100 documents and 20 queries, and the results were used for the evaluation stage. First, the relevant documents were determined using the kappa statistic. Second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments are suitable for system evaluation. The calculation of precision, recall, and F-measure then produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results it can be said that the success rate of the system in producing relevant documents is low.
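The evaluation arithmetic can be sketched directly (the document collection is not reproduced; the result sets below are hypothetical, and the final assertion simply checks that the abstract's reported precision and recall are consistent with its F-measure under the standard formula):

```python
def precision_recall_f(retrieved, relevant):
    """Standard set-based IR metrics over a retrieved set and a
    ground-truth relevant set."""
    hits = len(retrieved & relevant)
    p = hits / len(retrieved)
    r = hits / len(relevant)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

retrieved = {"d1", "d2", "d3", "d4"}   # hypothetical top-ranked documents
relevant = {"d1", "d2", "d5", "d6"}    # hypothetical relevance judgments
p, r, f = precision_recall_f(retrieved, relevant)
assert (p, r) == (0.5, 0.5) and abs(f - 0.5) < 1e-12

# The abstract's numbers are mutually consistent:
# P = 0.37 and R = 0.50 give F = 2PR/(P+R), which rounds to 0.43.
assert round(2 * 0.37 * 0.50 / (0.37 + 0.50), 2) == 0.43
```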
A Mathematical Model for the Height of a Satellite.
ERIC Educational Resources Information Center
Thoemke, Sharon S.; And Others
1993-01-01
Emphasizes a real-world problem situation using the law of sines and the law of cosines. Angles of elevation from two tracking stations located in the plane of the equator determine the height of a satellite. Calculators or computers can be used. (LDR)
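A flat-baseline version of the two-station computation can be sketched as follows (the classroom problem also accounts for Earth's curvature via the law of cosines; this straight-baseline simplification and all numbers are hypothetical):

```python
import math

def height_from_two_stations(baseline, alpha_deg, beta_deg):
    """Height of an object observed at elevation angles alpha and beta from
    the two ends of a straight baseline, with the object between the
    stations: h = d / (cot(alpha) + cot(beta)), which follows from solving
    the triangle with the law of sines."""
    cot = lambda a: 1.0 / math.tan(math.radians(a))
    return baseline / (cot(alpha_deg) + cot(beta_deg))

# Symmetric 45-degree sightings over a 2 km baseline put the object 1 km up.
assert abs(height_from_two_stations(2.0, 45, 45) - 1.0) < 1e-12
```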
Comparison of the GHSSmooth and the Rayleigh-Rice surface scatter theories
NASA Astrophysics Data System (ADS)
Harvey, James E.; Pfisterer, Richard N.
2016-09-01
The scalar-based GHSSmooth surface scatter theory results in an expression for the BRDF in terms of the surface PSD that is very similar to that provided by the rigorous Rayleigh-Rice (RR) vector perturbation theory. However, it contains correction factors for two extreme situations not shared by the RR theory: (i) large incident or scattered angles that result in some portion of the scattered radiance distribution falling outside of the unit circle in direction cosine space, and (ii) the situation where the relevant rms surface roughness, σrel, is less than the total intrinsic rms roughness of the scattering surface. Also, the RR obliquity factor has been discovered to be an approximation of the more general GHSSmooth obliquity factor, due to a little-known (or long-forgotten) implicit assumption in the RR theory that the surface autocovariance length is longer than the wavelength of the scattered radiation. This assumption allowed retaining only quadratic and lower terms in the series expansion for the cosine function, and it reduces the validity of RR predictions for scattering angles greater than 60°. This inaccurate obliquity factor in the RR theory is also the cause of a complementary unrealistic "hook" at the high-spatial-frequency end of the predicted surface PSD when solving the inverse scattering problem. Furthermore, if we empirically substitute the polarization reflectance, Q, from the RR expression for the scalar reflectance, R, in the GHSSmooth expression, the latter inherits all of the polarization capabilities of the rigorous RR vector perturbation theory.
NASA Technical Reports Server (NTRS)
Romanofsky, Robert R.
1989-01-01
In this report, a thorough analytical procedure is developed for evaluating the frequency-dependent loss characteristics and effective permittivity of microstrip lines. The technique is based on the measured reflection coefficient of microstrip resonator pairs. Experimental data, including quality factor Q, effective relative permittivity, and fringing for 50-Ω lines on gallium arsenide (GaAs) from 26.5 to 40.0 GHz, are presented. The effects of an imperfect open circuit, coupling losses, and loading of the resonant frequency are considered. A cosine-tapered ridge-guide test fixture is described. It was found to be well suited to the device characterization.
Distribution of curvature of 3D nonrotational surfaces approximating the corneal topography
NASA Astrophysics Data System (ADS)
Kasprzak, Henryk T.
1998-10-01
The first part of the paper presents the analytical curves used to approximate the corneal profile. Next, some definitions of 3D surface curvature, such as principal normal sections, principal radii of curvature and their orientations, are given. Examples of four nonrotational 3D surfaces approximating the corneal topography are proposed: ellipsoidal, a surface based on the hyperbolic cosine function, sphero-cylindrical, and toroidal. The 3D surfaces and the contour plots of the principal radii of curvature and their orientations for the four nonrotational approximations of the cornea are shown. The results of the calculations are discussed from the point of view of videokeratometric images.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter for a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
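The RMSE/PSNR relationship used to grade decompressed images can be sketched as follows (8-bit pixels assumed, so the peak value is 255; the sample pixel sequences are hypothetical):

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length pixel sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: PSNR = 20 * log10(peak / RMSE)."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

original = [52, 55, 61, 66]
decoded = [53, 54, 62, 65]  # every pixel off by exactly 1, so RMSE = 1
assert rmse(original, decoded) == 1.0
assert abs(psnr(original, decoded) - 20 * math.log10(255)) < 1e-12  # about 48.13 dB
```

Lower RMSE means higher PSNR, which is why the paper can regress either metric against the compression parameter and invert the fit to hit a target IQ.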
NASA Astrophysics Data System (ADS)
Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain
2009-09-01
Some recognition applications (those requiring multiple images, such as facial identification or sign language) demand that many images be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. However, in order to use optical correlation, encryption and compression techniques cannot simply be deployed independently and in a cascaded manner; otherwise, the system suffers from two major problems. First, these techniques cannot be cascaded without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists in multiplexing the spectra of different images transformed by a discrete cosine transform (DCT). To this end, the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using this multiplexing together with specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations were conducted, and the results obtained corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.
Alternative Proofs for Inequalities of Some Trigonometric Functions
ERIC Educational Resources Information Center
Guo, Bai-Ni; Qi, Feng
2008-01-01
By using an identity relating to Bernoulli's numbers and power series expansions of cotangent function and logarithms of functions involving sine function, cosine function and tangent function, four inequalities involving cotangent function, sine function, secant function and tangent function are established.
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew
2015-06-01
The surface detector (SD) array of the Pierre Auger Observatory needs an upgrade that allows for more complex triggers with higher bandwidth and greater dynamic range. To this end, this paper presents a front-end board (FEB) built around the largest Cyclone V E FPGA, the 5CEFA9F31I7N. It supports eight channels sampled at up to 250 MSps with 14-bit resolution. The sampling rate considered for the SD is 120 MSps; however, the FEB has been developed with external anti-aliasing filters to retain maximal flexibility. Six channels are targeted at the SD; two are reserved for other experiments such as the Auger Engineering Radio Array and additional muon counters. The FEB is an intermediate design plugged into a unified board communicating with a micro-controller at 40 MHz; however, it provides 250 MSps sampling with an 18-bit dynamic range, is equipped with a virtual NIOS processor, supports 256 MB of SDRAM, and implements a spectral trigger based on the discrete cosine transform for the detection of very inclined “old” showers. The FEB can also support neural-network development for the detection of “young” showers, potentially generated by neutrinos. A single FEB was already tested in the Auger surface detector in Malargüe (Argentina) at 120 and 160 MSps. Preliminary tests showed perfect stability of data acquisition at sampling frequencies three or four times greater than the current one. They allowed optimization of the design before the deployment of seven or eight FEBs for several months of continuous tests in the engineering array.
In vivo repeatability of the pulse wave inverse problem in human carotid arteries.
McGarry, Matthew; Nauleau, Pierre; Apostolakis, Iason; Konofagou, Elisa
2017-11-07
Accurate arterial stiffness measurement would improve diagnosis and monitoring for many diseases. Atherosclerotic plaques and aneurysms are expected to involve focal changes in vessel wall properties; therefore, a method to image the stiffness variation would be a valuable clinical tool. The pulse wave inverse problem (PWIP) fits unknown parameters from a computational model of arterial pulse wave propagation to ultrasound-based measurements of vessel wall displacements by minimizing the difference between the model and measured displacements. The PWIP has been validated in phantoms, and this study presents the first in vivo demonstration. The common carotid arteries of five healthy volunteers were imaged five times in a single session with repositioning of the probe and subject between each scan. The 1D finite difference computational model used in the PWIP spanned from the start of the transducer to the carotid bifurcation, where a resistance outlet boundary condition was applied to approximately model the downstream reflection of the pulse wave. Unknown parameters that were estimated by the PWIP included a 10-segment linear piecewise compliance distribution and 16 discrete cosine transformation coefficients for each of the inlet boundary conditions. Input data was selected to include pulse waves resulting from the primary pulse and dicrotic notch. The recovered compliance maps indicate that the compliance increases close to the bifurcation, and the variability of the average pulse wave velocity estimated through the PWIP is on the order of 11%, which is similar to that of the conventional processing technique which tracks the wavefront arrival time (13%). Copyright © 2017 Elsevier Ltd. All rights reserved.
Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery
Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng
2016-01-01
Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564
InSAR tropospheric delay mitigation by GPS observations: A case study in Tokyo area
NASA Astrophysics Data System (ADS)
Xu, Caijun; Wang, Hua; Ge, Linlin; Yonezawa, Chinatsu; Cheng, Pu
2006-03-01
Like other space-geodetic techniques, interferometric synthetic aperture radar (InSAR) is limited by variations in tropospheric delay noise. In this paper, we analyze the double-difference (DD) character of tropospheric delay noise in a SAR interferogram. By processing an ERS-2 radar pair, we find tropospheric delay fringes whose patterns are similar to those in GMS-5 visible-channel images acquired at almost the same epoch. Thirty-five continuous GPS (CGPS) stations are distributed within the radar scene. We analyze the GPS data with the GIPSY-OASIS II software and extract the wet zenith delay (WZD) at each station at the same epochs as the master and slave images, respectively. A cosine mapping function is applied to transform the WZD to the wet slant delay (WSD) in the line-of-sight (LOS) direction. Based on the DD WSD parameters, we establish a two-dimensional (2D) semi-variogram model with parameters 35.2, 3.6, and 0.88. We then predict the DD WSD by the kriging algorithm for each pixel of the interferogram and subtract it from the unwrapped phase. Comparisons between CGPS and InSAR range changes in the LOS direction show that the root mean square (RMS) error decreased from 1.33 cm before correction to 0.87 cm after correction. From this result, we conclude that GPS WZD parameters can be effectively used to identify and mitigate large-scale InSAR tropospheric delay noise if the spatial density of GPS stations is high enough.
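A minimal sketch of the cosine mapping step, assuming a horizontally stratified troposphere and a single incidence angle (the 23° figure is a typical ERS-2 mid-swath value, not taken from the paper):

```python
import math

def wet_slant_delay(wzd_cm, incidence_deg):
    """Map the wet zenith delay (WZD) to the wet slant delay (WSD) along
    the radar line of sight with a simple 1/cos mapping function, which
    assumes a horizontally stratified troposphere."""
    return wzd_cm / math.cos(math.radians(incidence_deg))

# ERS-2 looks roughly 23 degrees off-nadir at mid-swath, so the slant
# delay is slightly larger than the zenith delay.
print(wet_slant_delay(10.0, 23.0))
```

More refined analyses replace the cosine with a dedicated tropospheric mapping function at low elevations, but for the steep SAR viewing geometry the cosine approximation is standard.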
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply the DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG 2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG 2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG 2000.
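The first two steps of the pipeline can be sketched as follows. The orthonormal DCT implementation and the "keep the first third of the AC coefficients" rule are simplifying assumptions for illustration, not the authors' exact minimization algorithm:

```python
import numpy as np

def dct2(block):
    # Orthonormal 2D DCT-II of an NxN block (step 1 of the pipeline).
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# One illustrative 8x8 block of pixel values.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

coef = dct2(img)
dc = coef[0, 0]           # DC component, later delta-coded (step 4)
ac = coef.flatten()[1:]   # 63 AC coefficients
# Step 2 (high-frequency minimization, simplified): discard 2/3 of the
# AC coefficients, keeping the first third of the scan.
minimized = ac[: len(ac) // 3]
print(len(minimized), "of", len(ac), "AC coefficients kept")
```

In the actual method the discarded high frequencies are not simply dropped: the look-up table built in step (3) lets the decoder recover them via the concurrent binary search.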
Development of a residual acceleration data reduction and dissemination plan
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.
1992-01-01
A major obstacle in evaluating the residual acceleration environment in an orbiting space laboratory is the amount of data collected during a given mission: gigabytes of data will be available as SAMS units begin to fly regularly. Investigators taking advantage of the reduced-gravity conditions of space should not be overwhelmed by the accelerometer data which describe these conditions. We are therefore developing a data reduction and analysis plan that will allow principal investigators of low-g experiments to create experiment-specific residual acceleration databases for post-flight analysis. The basic aspects of the plan can also be used to characterize the acceleration environment of Earth-orbiting laboratories. Our development of the reduction plan is based on the following program of research: the identification of experiment sensitivities by order-of-magnitude estimates and numerical modelling; the evaluation of various signal processing techniques appropriate for the reduction, supplementation, and dissemination of residual acceleration data; and the testing and implementation of the plan on existing acceleration databases. The orientation of the residual acceleration vector with respect to some set of coordinate axes is important for experiments with known directional sensitivity. Orientation information can be obtained from the evaluation of direction cosines. Fourier analysis is commonly used to transform time history data into the frequency domain. Common spectral representations are the amplitude spectrum, which gives the average of the components of the time series at each frequency, and the power spectral density, which indicates the power or energy present in the series per unit frequency interval.
The data reduction and analysis scheme developed involves a two-tiered structure to: (1) identify experiment characteristics and mission events that can be used to limit the amount of accelerometer data an investigator should be interested in; and (2) process the data in a way that will be meaningful to the experiment objectives. A general outline of the plan is given.
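The two spectral representations mentioned above can be sketched on a synthetic record (an illustrative signal, not SAMS data; the one-sided scaling below skips the special handling of the DC bin, which is harmless here because the signal has no mean offset):

```python
import numpy as np

# Hypothetical residual-acceleration record: a 0.1 Hz oscillation of
# amplitude 1e-4 (in units of g), sampled at 10 Hz for 100 s.
fs = 10.0
t = np.arange(0, 100, 1 / fs)
accel = 1e-4 * np.cos(2 * np.pi * 0.1 * t)

n = len(accel)
X = np.fft.rfft(accel)
freqs = np.fft.rfftfreq(n, 1 / fs)

# One-sided amplitude spectrum: average component at each frequency.
spec = 2 * np.abs(X) / n

# Power spectral density (periodogram): power per unit frequency.
psd = np.abs(X) ** 2 / (fs * n)

peak = freqs[np.argmax(spec)]
print(peak, spec[np.argmax(spec)])  # peak at 0.1 Hz, amplitude ~1e-4
```

The amplitude spectrum recovers the oscillation amplitude directly, while the PSD spreads the same energy per unit frequency interval, which is the representation usually compared against experiment sensitivity curves.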
Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading.
Sahran, Shahnorbanun; Albashish, Dheeb; Abdullah, Azizi; Shukor, Nordashima Abd; Hayati Md Pauzi, Suria
2018-04-18
Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method to prevent redundancy in the features selected by the SVM-RFE and an unoptimised classifier in the AC. We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (the latter using the product rule as its merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM- and SVM-RFE-based methods. We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs.
grade 4) and its performance is far superior to other reported FS methods. Copyright © 2018 Elsevier B.V. All rights reserved.
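A minimal sketch of an absolute-cosine redundancy filter, assuming a greedy keep/drop rule and a hypothetical threshold of 0.9 (the paper combines the AC filter with SVM-RFE ranking, which is omitted here):

```python
import numpy as np

def absolute_cosine_filter(X, threshold=0.9):
    """Keep a feature (column of X) only if its absolute cosine
    similarity to every already-kept feature is below the threshold,
    so near-duplicate (or sign-flipped) features are discarded."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    kept = []
    for j in range(X.shape[1]):
        if all(abs(Xn[:, j] @ Xn[:, k]) < threshold for k in kept):
            kept.append(j)
    return kept

# Toy data: feature 1 duplicates feature 0 up to sign; feature 2 is new.
rng = np.random.default_rng(1)
f0 = rng.normal(size=50)
X = np.column_stack([f0, -f0, rng.normal(size=50)])
print(absolute_cosine_filter(X))  # -> [0, 2]
```

Taking the absolute value matters: a plain cosine filter would miss the anticorrelated duplicate, which carries no extra information for a linear classifier.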
Blind prediction of natural video quality.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2014-03-01
We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
Gottschlich, Carsten
2016-01-01
We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544
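A toy sketch of the CCP construction described above; the specific coefficient pairs, the noise patches and the orthonormal DCT are illustrative assumptions, not the authors' configuration (which uses rotation-invariant fingerprint patches):

```python
import numpy as np

def dct2(patch):
    # Orthonormal 2D DCT-II of a square patch.
    n = patch.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ patch @ c.T

def ccp_code(patch, pairs):
    # Compare pairs of DCT coefficients; each comparison yields one bit,
    # and the bits are packed into a single pattern code.
    coef = dct2(patch).flatten()
    bits = [1 if coef[a] > coef[b] else 0 for a, b in pairs]
    return sum(b << i for i, b in enumerate(bits))

rng = np.random.default_rng(0)
patches = [rng.normal(size=(8, 8)) for _ in range(100)]
pairs = [(1, 2), (3, 4), (5, 6), (7, 8)]  # 4 pairs -> 16 possible codes
hist = np.bincount([ccp_code(p, pairs) for p in patches], minlength=16)
rel = hist / hist.sum()  # relative frequencies, the image feature vector
print(rel.sum())
```

One such histogram (or a concatenation of several) is what would be fed to the image classifier.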
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (correlation coefficient, ), while achieving compression ratio, CR, values as high as ~ 5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity ( ), leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
Mpeg2 codec HD improvements with medical and robotic imaging benefits
NASA Astrophysics Data System (ADS)
Picard, Wayne F. J.
2010-02-01
In this report, we propose an efficient scheme to use a High Definition Television (HDTV) in a console or notebook format as a computer terminal in addition to its role as a TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E->Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream and displays the raster. The terminal is fitted with a mouse and keyboard, through which interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high-resolution computer and multimedia device. We consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements in compressed image quality, with 2D-to-3D conversion (Mpeg3) and the use of the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control of medical robotic surgeons.
A Geometric View of Complex Trigonometric Functions
ERIC Educational Resources Information Center
Hammack, Richard
2007-01-01
Given that the sine and cosine functions of a real variable can be interpreted as the coordinates of points on the unit circle, the author of this article asks whether there is something similar for complex variables, and shows that indeed there is.
On a PCA-based lung motion model
Li, Ruijiang; Lewis, John H; Jia, Xun; Zhao, Tianyu; Liu, Weifeng; Wuenschel, Sara; Lamb, James; Yang, Deshan; Low, Daniel A; Jiang, Steve B
2014-01-01
Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772–81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and that a sparse subset of the entire lung, such as an implanted marker, can then be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and to find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address these important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors will completely represent the lung motion, respectively. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921–9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model.
The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis and two PCA coefficients seem to be sufficient for accurate modeling of the lung motion for most patients. In conclusion, we have presented thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion using a single marker has also been demonstrated on clinical data using a simulation approach. PMID:21865624
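The cosine-phantom result (two PCA coefficients represent the motion exactly) can be checked numerically. The phantom below is our own construction, assuming each point p moves as A_p cos(ωt + φ_p); the identity cos(ωt + φ) = cos φ cos ωt − sin φ sin ωt means every trajectory lies in the span of two temporal basis functions:

```python
import numpy as np

# Cosine-motion phantom: 200 points, each with its own amplitude and
# phase, sampled over one full breathing period.
rng = np.random.default_rng(0)
n_points, n_times = 200, 50
t = np.linspace(0, 2 * np.pi, n_times, endpoint=False)
A = rng.uniform(1, 5, n_points)
phi = rng.uniform(0, 2 * np.pi, n_points)
U = A[:, None] * np.cos(t[None, :] + phi[:, None])  # points x times

# PCA via SVD of the time-centered data matrix.
Uc = U - U.mean(axis=1, keepdims=True)
s = np.linalg.svd(Uc, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)  # -> 2: two PCA components capture the motion exactly
```

Clinical motion is not a pure cosine, which is why the paper finds the optimal number of coefficients patient by patient, with two often sufficing.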
Low-Symmetry Gap Functions of Organic Superconductors
NASA Astrophysics Data System (ADS)
Mori, Takehiko
2018-04-01
Superconducting gap functions of various low-symmetry organic superconductors are investigated starting from the tight-binding energy band and the random phase approximation by numerically solving Eliashberg's equation. The obtained singlet gap function is approximately represented by an asymmetrical dx2 - y2 form, where two cosine functions are mixed in an appropriate ratio. This is usually called a d + s wave, where the ratio of the two cosine functions varies from 1:1 in the two-dimensional limit to 1:0 in the one-dimensional limit. A single cosine function does not make a superconducting gap in an ideal one-dimensional conductor, but works as a relevant gap function in quasi-one-dimensional conductors with slight interchain transfer integrals. Even when the Fermi surface is composed of small pockets, the gap function is obtained by supposing a globally connected elliptical Fermi surface. In such a case, we have to connect the second energy band in the second Brillouin zone. The periodicity of the resulting gap function is larger than the first Brillouin zone. This is because the susceptibility has peaks at 2kF, where the periodicity has to be twice the size of the global Fermi surface. In general, the periodicity of the gap function corresponds to one electron, or two molecules, in real space. In the κ-phase, the two axes are nonequivalent, but the exact dx2 - y2 symmetry is maintained because the diagonal transfer integral introduced into the square lattice is oriented along the node direction of the dx2 - y2 wave. By contrast, the θ-phase gap function shows considerable anisotropy because a quarter-filled square lattice has a different, dxy, symmetry.
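A minimal sketch of the asymmetrical d + s gap described above, with an assumed mixing ratio r between the two cosines (our illustration of the functional form, not the numerically solved Eliashberg gap):

```python
import numpy as np

# Asymmetrical d_{x^2-y^2} ("d + s") gap: two cosines mixed in ratio r,
# interpolating between the 2D limit (r = 1) and the 1D limit (r = 0).
def gap(kx, ky, r):
    return np.cos(kx) - r * np.cos(ky)

# In the 2D limit (r = 1) the gap vanishes on the usual d-wave node
# lines kx = +/- ky; in the 1D limit it reduces to a single cosine.
print(np.isclose(gap(np.pi / 3, np.pi / 3, 1.0), 0.0))
```

For intermediate r the node lines shift away from the diagonals, which is the anisotropy the paper characterizes.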
Contact angle hysteresis on superhydrophobic stripes.
Dubov, Alexander L; Mourran, Ahmed; Möller, Martin; Vinogradova, Olga I
2014-08-21
We study experimentally and discuss quantitatively the contact angle hysteresis on striped superhydrophobic surfaces as a function of the solid fraction, ϕS. It is shown that the receding regime is determined by a longitudinal sliding motion of the deformed contact line. Despite the anisotropy of the texture, the receding contact angle remains isotropic, i.e., it is practically the same in the longitudinal and transverse directions. The cosine of the receding angle grows nonlinearly with ϕS. To interpret this we develop a theoretical model, which shows that the value of the receding angle depends both on weak defects at smooth solid areas and on strong defects due to the elastic energy of the deformed contact line, which scales as ϕS^2 ln ϕS. The advancing contact angle was found to be anisotropic, except in a dilute regime, and its value is shown to be determined by the rolling motion of the drop. The cosine of the longitudinal advancing angle depends linearly on ϕS, but a satisfactory fit to the data can only be provided if we generalize the Cassie equation to account for weak defects. The cosine of the transverse advancing angle is much smaller and is maximized at ϕS ≃ 0.5. An explanation of its value can be obtained if we invoke an additional energy due to strong defects in this direction, which is shown to be caused by the adhesion of the drop on solid sectors and is proportional to ϕS^2. Finally, the contact angle hysteresis is found to be quite large and generally anisotropic, but it becomes isotropic when ϕS ≤ 0.2.
Partitioning Pythagorean Triangles Using Pythagorean Angles
ERIC Educational Resources Information Center
Swenson, Carl E.; Yandl, Andre L.
2012-01-01
Inside any Pythagorean right triangle, it is possible to find a point M so that drawing segments from M to each vertex of the triangle yields angles whose sines and cosines are all rational. This article describes an algorithm that generates an infinite number of such points.
A Puzzle in Elementary Ballistics.
ERIC Educational Resources Information Center
Haugland, Ole Anton
1983-01-01
Provides an answer to the question of why it is easy to miss when shooting uphill or downhill. Experimental results indicate that when shooting uphill or downhill, sight should not be adjusted to actual distance but to distance multiplied by the cosine of the inclination angle. (JN)
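The stated rule is simple enough to write as a one-line function (our sketch of the "rifleman's rule" the article describes):

```python
import math

# When shooting uphill or downhill, the sight should be set not for the
# slant distance to the target but for that distance multiplied by the
# cosine of the inclination angle (its horizontal equivalent).
def sight_distance(slant_m, incline_deg):
    return slant_m * math.cos(math.radians(incline_deg))

print(round(sight_distance(300.0, 30.0), 1))  # -> 259.8
```

So a target 300 m away up a 30-degree slope should be shot as if it were about 260 m away; this is why sighting for the full slant distance makes the shot go high, both uphill and downhill.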
Trigonometric Integration without Trigonometric Functions
ERIC Educational Resources Information Center
Quinlan, James; Kolibal, Joseph
2016-01-01
Teaching techniques of integration can be tedious and often uninspired. We present an obvious but underutilized approach for finding antiderivatives of various trigonometric functions using the complex exponential representation of the sine and cosine. The purpose goes beyond providing students an alternative approach to trigonometric integrals.
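As one worked instance of the approach (our example, not taken from the article), write the cosine in exponential form and integrate term by term:

```latex
\cos x=\frac{e^{ix}+e^{-ix}}{2}
\quad\Longrightarrow\quad
\cos^{2}x=\frac{e^{2ix}+2+e^{-2ix}}{4},
```

```latex
\int\cos^{2}x\,dx
=\frac{1}{4}\int\left(e^{2ix}+2+e^{-2ix}\right)dx
=\frac{1}{4}\left(\frac{e^{2ix}-e^{-2ix}}{2i}+2x\right)+C
=\frac{x}{2}+\frac{\sin 2x}{4}+C,
```

recovering the standard result without the half-angle identity.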
Hermite-cosine-Gaussian laser beam and its propagation characteristics in turbulent atmosphere.
Eyyuboğlu, Halil Tanyer
2005-08-01
Hermite-cosine-Gaussian (HcosG) laser beams are studied. The source plane intensity of the HcosG beam is introduced and its dependence on the source parameters is examined. By application of the Fresnel diffraction integral, the average receiver intensity of HcosG beam is formulated for the case of propagation in turbulent atmosphere. The average receiver intensity is seen to reduce appropriately to various special cases. When traveling in turbulence, the HcosG beam initially experiences the merging of neighboring beam lobes, and then a TEM-type cosh-Gaussian beam is formed, temporarily leading to a plain cosh-Gaussian beam. Eventually a pure Gaussian beam results. The numerical evaluation of the normalized beam size along the propagation axis at selected mode indices indicates that relative spreading of higher-order HcosG beam modes is less than that of the lower-order counterparts. Consequently, it is possible at some propagation distances to capture more power by using higher-mode-indexed HcosG beams.
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.
Kalathil, Shaeen; Elias, Elizabeth
2015-11-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs admit an easy and efficient design approach: non-uniform decomposition can be obtained by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance is improved using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm. These result in filter banks with less implementation complexity, power consumption and area when compared with those of the conventional continuous-coefficient non-uniform CMFB.
Ardura, J; Andrés, J; Aldana, J; Revilla, M A; Cornélissen, G; Halberg, F
1997-09-01
Lighting, noise and temperature were monitored in two perinatal nurseries. Rhythms of several frequencies were found, including prominent 24-hour rhythms with acrophases around 13:00 (light intensity) and 16:00 (noise). For light and noise, the ratio formed by dividing the amplitude of a 1-week (circaseptan) or half-week (circasemiseptan) fitted cosine curve by the amplitude of a 24-hour fitted cosine curve is smaller than unity, since the 24-hour rhythms are prominent for these variables. The amplitude ratios are larger than unity for temperature in the newborns' unit but not in the infants' unit. Earlier work demonstrated that the about-7-day rhythms of neonatal physiologic variables have a major endogenous component and a minor exogenous one. Hence, the possibility of optimizing maturation by manipulating environmental changes can be considered, using previously mapped chronomes (time structures of biologic multifrequency rhythms, trends and noise) as gauges of development.
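The amplitude of a fitted cosine curve (a cosinor fit) can be obtained by ordinary least squares on cosine and sine regressors; the sketch below uses synthetic hourly data, and the sampling scheme and parameter values are illustrative assumptions, not the study's data:

```python
import math

def cosinor_amplitude(times_h, values, period_h):
    """Least-squares fit of M + A*cos(2*pi*t/period + phi); returns A.
    Solves the equivalent linear model M + a*cos(wt) + b*sin(wt)."""
    w = 2 * math.pi / period_h
    n = len(times_h)
    X = [[1.0, math.cos(w * t), math.sin(w * t)] for t in times_h]
    # Normal equations for [M, a, b], solved by Gauss-Jordan elimination.
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)] for r in range(3)]
    y = [sum(X[i][r] * values[i] for i in range(n)) for r in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [A[r][c] - f * A[col][c] for c in range(3)]
                y[r] -= f * y[col]
    mesor, a, b = (y[i] / A[i][i] for i in range(3))
    return math.hypot(a, b)  # amplitude of the fitted cosine

# Synthetic 24-hour rhythm sampled hourly over one week:
t = [float(h) for h in range(168)]
v = [10 + 3 * math.cos(2 * math.pi * h / 24 - 1.0) for h in t]
amp24 = cosinor_amplitude(t, v, 24)
print(amp24)  # recovers the amplitude, 3
```

The amplitude ratios discussed in the abstract would follow by fitting the same data with a 168-hour (circaseptan) period and dividing the two amplitudes.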
Low frequency ac waveform generator
Bilharz, O.W.
1983-11-22
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangle waveform, raising this absolute value to a predetermined power, multiplying it by the triangle wave itself, scaling the result, and subtracting it from the triangle waveform. The cosine waveform is synthesized by squaring the triangle waveform, raising the squared waveform to the predetermined power, adding it to a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
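The general idea of shaping a sine from a triangle wave by polynomial waveshaping can be sketched as follows; the cubic form and its coefficients are illustrative assumptions, not the patented circuit:

```python
import math

def sine_from_triangle(t):
    """Approximate a quarter-wave sine from a normalized triangle value
    t in [-1, 1] by subtracting a scaled odd power of the triangle from
    itself: y = t - 0.5*t^3 + 0.5*t = t*(1.5 - 0.5*t*t).
    Coefficients chosen so that y(1) = 1; purely illustrative."""
    return t * (1.5 - 0.5 * t * t)

# Compare against the true quarter sine over [0, 1]:
err = max(abs(sine_from_triangle(x / 100) - math.sin(math.pi * (x / 100) / 2))
          for x in range(101))
print(err)  # worst-case error of about two percent
```

A hardware version would realize the multiply and scale stages with analog multipliers, as the patent abstract suggests.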
NASA Astrophysics Data System (ADS)
Uchiyama, H.; Watanabe, M.; Shaw, D. M.; Bahia, J. E.; Collins, G. J.
1999-10-01
Accurate measurement of plasma source impedance is important for verification of plasma circuit models, as well as for plasma process characterization and endpoint detection. Most impedance measurement techniques depend in some manner on the cosine of the phase angle to determine the impedance of the plasma load. Inductively coupled plasmas are generally highly inductive, with the phase angle between the applied rf voltage and the rf current in the range of 88 to near 90 degrees. A small measurement error in this phase angle range results in a large error in the calculated cosine of the angle, introducing large impedance measurement variations. In this work, we have compared the measured impedance of a planar inductively coupled plasma using three commercial plasma impedance monitors (ENI V/I probe, Advanced Energy RFZ60 and Advanced Energy Z-Scan). The plasma impedance is independently verified using a specially designed match network and a calibrated load, representing the plasma, to provide a measurement standard.
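The sensitivity described above is easy to quantify: near 90 degrees the derivative of cos θ has magnitude close to 1, so a fixed phase-measurement error produces a very large relative error in cos θ. A quick numeric illustration (the 0.5-degree error figure is an assumption for the example, not a value from the paper):

```python
import math

def relative_cos_error(theta_deg, err_deg):
    """Relative error in cos(theta) caused by a phase measurement error."""
    true = math.cos(math.radians(theta_deg))
    meas = math.cos(math.radians(theta_deg + err_deg))
    return abs(meas - true) / abs(true)

# The same 0.5-degree error matters far more at 89 degrees than at 45:
err_45 = relative_cos_error(45.0, 0.5)  # below 1 percent
err_89 = relative_cos_error(89.0, 0.5)  # tens of percent
print(err_45, err_89)
```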
Generating coherent broadband continuum soft-x-ray radiation by attosecond ionization gating.
Pfeifer, Thomas; Jullien, Aurélie; Abel, Mark J; Nagel, Phillip M; Gallmann, Lukas; Neumark, Daniel M; Leone, Stephen R
2007-12-10
The current paradigm of isolated attosecond pulse production requires a few-cycle pulse as the driver for high-harmonic generation that has a cosine-like electric field stabilized with respect to the peak of the pulse envelope. Here, we present simulations and experimental evidence that the production of high-harmonic light can be restricted to one or a few cycles on the leading edge of a laser pulse by a gating mechanism that employs time-dependent ionization of the conversion medium. This scheme enables the generation of broadband and tunable attosecond pulses. Instead of fixing the carrier-envelope phase to produce a cosine driver pulse, the phase becomes a control parameter for the center frequency of the attosecond pulse. A method to assess the multiplicity of attosecond pulses in the pulse train is also presented. The results of our study suggest an avenue towards relaxing the requirement of few-cycle pulses for isolated attosecond pulse generation.
From scores to face templates: a model-based approach.
Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar
2007-12-01
Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology Database (FERET). With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational setup, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA based face recognition systems, respectively.
With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has 47 percent more probability of breaking in as a randomly chosen target subject for the commercial system as compared to a hill climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill climbing kind of attack where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
Progress in standoff surface contaminant detector platform
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Giblin, Jay; Dixon, John; Hensley, Joel; Mansur, David; Marinelli, William J.
2017-05-01
Progress towards the development of a longwave infrared quantum cascade laser (QCL) based standoff surface contaminant detection platform is presented. The detection platform utilizes reflectance spectroscopy with application to optically thick and thin materials including solid and liquid phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. The platform employs an ensemble of broadband QCLs with a spectrally selective detector to interrogate target surfaces at standoff distances of tens of meters. A version of the Adaptive Cosine Estimator (ACE) featuring class-based screening is used for detection and discrimination in high clutter environments. Detection limits approaching 0.1 μg/cm2 are projected through speckle reduction methods enabling detector-noise-limited performance. The design, build, and validation of a breadboard version of the QCL-based surface contaminant detector are discussed. Functional test results specific to the QCL illuminator are presented with specific emphasis on speckle reduction.
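The Adaptive Cosine Estimator mentioned above has a standard closed form: the squared cosine of the angle between the background-whitened test spectrum and target signature. A minimal sketch with synthetic background statistics and a synthetic target signature (none of these numbers come from the paper):

```python
import numpy as np

def ace_score(x, s, bg_mean, bg_cov_inv):
    """Adaptive Cosine Estimator: squared cosine of the angle between the
    whitened, background-subtracted spectrum x and target signature s."""
    xc = x - bg_mean
    num = float(s @ bg_cov_inv @ xc) ** 2
    den = float(s @ bg_cov_inv @ s) * float(xc @ bg_cov_inv @ xc)
    return num / den

rng = np.random.default_rng(0)
bg = rng.normal(size=(500, 8))            # synthetic background: 500 pixels, 8 bands
mu = bg.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(bg.T))
s = rng.normal(size=8)                    # synthetic target signature
on_target = ace_score(mu + 3 * s, s, mu, cov_inv)  # pixel = background + target
off_target = ace_score(bg[0], s, mu, cov_inv)      # a plain background pixel
print(on_target, off_target)              # on-target score is exactly 1
```

Scores lie in [0, 1] by the Cauchy-Schwarz inequality, which is what makes ACE robust to the overall brightness clutter the abstract alludes to.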
Mollweide's Formula in Teaching Trigonometry
ERIC Educational Resources Information Center
Karjanto, Natanael
2011-01-01
Trigonometry is one of the topics in mathematics that the students in both high school and pre-undergraduate levels need to learn. Generally, the topic covers trigonometric functions, trigonometric equations, trigonometric identities and solving oblique triangles using the Laws of Sines and Cosines. However, when solving the oblique triangles,…
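Mollweide's formula relates all six parts of a triangle and is handy for checking solutions of oblique triangles; a quick numerical verification (the 3-4-5 triangle is an illustrative choice):

```python
import math

def mollweide_holds(a, b, c, A, B, C, tol=1e-9):
    """Check Mollweide's formula (a+b)/c = cos((A-B)/2)/sin(C/2),
    with angles A, B, C (in radians) opposite sides a, b, c."""
    lhs = (a + b) / c
    rhs = math.cos((A - B) / 2) / math.sin(C / 2)
    return abs(lhs - rhs) < tol

# A 3-4-5 right triangle: the right angle C is opposite the hypotenuse.
A, B, C = math.asin(3 / 5), math.asin(4 / 5), math.pi / 2
print(mollweide_holds(3, 4, 5, A, B, C))  # True
```

Because the formula involves all three sides and all three angles at once, it catches errors that a Law of Sines check on a single side/angle pair can miss.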
Wang, Lu; Xu, Lisheng; Zhao, Dazhe; Yao, Yang; Song, Dan
2015-04-01
Because arterial pulse waves contain vital information related to the condition of the cardiovascular system, considerable attention has been devoted to the study of pulse waves in recent years. Accurate acquisition is essential to investigate arterial pulse waves. However, at the stage of developing equipment for acquiring and analyzing arterial pulse waves, specific pulse signals may be unavailable for debugging and evaluating the system under development. To produce test signals that reflect specific physiological conditions, in this paper, an arterial pulse wave generator has been designed and implemented using a field programmable gate array (FPGA), which can produce the desired pulse waves according to the feature points set by users. To reconstruct a periodic pulse wave from the given feature points, a method known as piecewise Gaussian-cosine fitting is also proposed in this paper. Using a test database that contains four types of typical pulse waves with each type containing 25 pulse wave signals, the maximum residual error of each sampling point of the fitted pulse wave in comparison with the real pulse wave is within 8%. In addition, the function for adding baseline drift and three types of noises is integrated into the developed system because the baseline occasionally wanders, and noise needs to be added for testing the performance of the designed circuits and the analysis algorithms. The proposed arterial pulse wave generator can be considered as a special signal generator with a simple structure, low cost and compact size, which can also provide flexible solutions for many other related research purposes.
Probing molecular potentials with an optical centrifuge.
Milner, A A; Korobenko, A; Hepburn, J W; Milner, V
2017-09-28
We use an optical centrifuge to excite coherent rotational wave packets in N2O, OCS, and CS2 molecules with rotational quantum numbers reaching up to J≈465, 690, and 1186, respectively. Time-resolved rotational spectroscopy at such ultra-high levels of rotational excitation can be used as a sensitive tool to probe the molecular potential energy surface at internuclear distances far from their equilibrium values. Significant bond stretching in the centrifuged molecules results in the growing period of the rotational revivals, which are experimentally detected using coherent Raman scattering. We measure the revival period as a function of the centrifuge-induced rotational frequency and compare it with the numerical calculations based on the known Morse-cosine potentials.
NASA Astrophysics Data System (ADS)
Vacanti, Giuseppe; Barrière, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Daniëlle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Krumrey, Michael; Landgraf, Boris; Müller, Peter; Schreiber, Swenja; Vervest, Mark; Wille, Eric
2017-09-01
While predictions based on the metrology (local slope errors and detailed geometry) play an essential role in controlling the development of the manufacturing processes, X-ray characterization remains the ultimate indication of the actual performance of Silicon Pore Optics (SPO). For this reason SPO stacks and mirror modules are routinely characterized at PTB's X-ray Pencil Beam Facility at BESSY II. Obtaining standard X-ray results quickly, right after the production of X-ray optics, is essential to ensuring that X-ray results can inform decisions taken in the lab. We describe the data analysis pipeline in operation at cosine, and how it allows us to go from stack production to full X-ray characterization in 24 hours.
Synthetic aperture radar range - Azimuth ambiguity design and constraints
NASA Technical Reports Server (NTRS)
Mehlis, J. G.
1980-01-01
Problems concerning the design of a system for mapping a planetary surface with a synthetic aperture radar (SAR) are considered. Given an ambiguity level, resolution, and swath width, the problems are related to the determination of optimum antenna apertures and the most suitable pulse repetition frequency (PRF). From the set of normalized azimuth ambiguity ratio curves, the designer can arrive at the azimuth antenna length, and from the sets of normalized range ambiguity ratio curves, at the range aperture length or pulse repetition frequency. A procedure based on this design method is shown in an example. The normalized curves provide results for a SAR using a uniformly or cosine weighted rectangular antenna aperture.
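The choice between uniform and cosine aperture weighting trades mainlobe width against sidelobe level, which drives the ambiguity ratios discussed above. A small sketch using an FFT of the two aperture tapers (the aperture sample count and FFT size are arbitrary):

```python
import numpy as np

def peak_sidelobe_db(weights, nfft=8192):
    """Peak sidelobe level (dB relative to the mainlobe) of an aperture taper."""
    pattern = np.abs(np.fft.fft(weights, nfft)) ** 2
    pattern /= pattern.max()
    # Walk down the mainlobe to the first local minimum (first null) ...
    i = 1
    while pattern[i + 1] < pattern[i]:
        i += 1
    # ... then the highest remaining lobe on this side is the peak sidelobe.
    return 10 * np.log10(pattern[i:nfft // 2].max())

n = np.arange(256)
uniform = np.ones(256)
cosine = np.sin(np.pi * (n + 0.5) / 256)  # cosine taper across the aperture
u_psl = peak_sidelobe_db(uniform)
c_psl = peak_sidelobe_db(cosine)
print(u_psl, c_psl)  # about -13 dB uniform, about -23 dB cosine
```

The roughly 10 dB sidelobe improvement of the cosine taper comes at the cost of a wider mainlobe, i.e. coarser resolution for the same aperture length.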
Vision Aided Inertial Navigation System Augmented with a Coded Aperture
2011-03-24
as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras...images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach except that a cosine mask
Quantum Assisted Learning for Registration of MODIS Images
NASA Astrophysics Data System (ADS)
Pelissier, C.; Le Moigne, J.; Fekete, G.; Halem, M.
2017-12-01
The advent of the first large scale quantum annealer by D-Wave has led to an increased interest in quantum computing. However, the quantum annealing computer of the D-Wave is limited to either solving Quadratic Unconstrained Binary Optimization problems (QUBOs) or using the ground state sampling of an Ising system that can be produced by the D-Wave. These restrictions make it challenging to find algorithms to accelerate the computation of typical Earth Science applications. A major difficulty is that most applications have continuous real-valued parameters rather than binary ones. Here we present an exploratory study using the ground state sampling to train artificial neural networks (ANNs) to carry out image registration of MODIS images. The key idea to using the D-Wave to train networks is that the quantum chip behaves thermally like Boltzmann machines (BMs), and BMs are known to be successful at recognizing patterns in images. The ground state sampling of the D-Wave also depends on the dynamics of the adiabatic evolution and is subject to other non-thermal fluctuations, but the statistics are thought to be similar and ANNs tend to be robust under fluctuations. In light of this, the D-Wave ground state sampling is used to define a Boltzmann-like generative model and is investigated to register MODIS images. Image intensities of MODIS images are transformed using a Discrete Cosine Transform and used to train a several-layer network to learn how to align images to a reference image. The network consists of an initial sigmoid layer acting as a binary filter of the input, followed by a strict binarization using Bernoulli sampling, whose output is then fed into a Boltzmann machine. The output is then classified using a soft-max layer. Results are presented and discussed.
Optimization of Trade-offs in Error-free Image Transmission
NASA Astrophysics Data System (ADS)
Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.
1989-05-01
The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
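The progressive scheme described above — a fast lossy approximation followed by transmission of the residual error image — can be sketched independently of the DCT details; the coarse quantization step below is an arbitrary stand-in for the lossy DCT stage:

```python
import numpy as np

STEP = 8  # illustrative quantization step for the lossy first pass

def lossy_pass(image):
    """Lossy approximation: coarse quantization (a stand-in for DCT coding)."""
    return (np.round(image / STEP) * STEP).astype(np.int32)

def error_free_transmit(image):
    approx = lossy_pass(image)   # sent first: a quick approximate image
    residual = image - approx    # sent second: the error image
    return approx, residual

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.int32)
approx, residual = error_free_transmit(img)
restored = approx + residual     # the receiver recovers the exact image
print(np.array_equal(restored, img))  # True: the overall transfer is lossless
```

The viewer sees `approx` almost immediately; once `residual` arrives, intensity and resampling manipulations can be applied to `restored` without fear of compression artifacts.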
An image assessment study of image acceptability of the Galileo low gain antenna mission
NASA Technical Reports Server (NTRS)
Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming
1994-01-01
This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study is to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each observer viewed two versions of the same image side by side on a high resolution monitor, each compressed using a different quantization stepsize. Observers were asked to select which image had the higher overall quality, to support their visual evaluations of image content, and then rated both images on a scale from one to five for judged degree of usefulness. Up to four pre-selected types of images were presented with and without noise to each subject, based upon results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to allow compression ratios 4 to 5 times those of some clear surface satellite images.
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDEs outperformed the other methods (average RMSE 8.7 ± 6.1 μVrms and 7.5 ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic Spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a Vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
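PDE-based inpainting of a few bad channels can be sketched with a Laplace (harmonic) fill: iterate until each missing sample equals the average of its neighbors. This is a simplified stand-in for the paper's Poisson and wave-PDE methods, and the 8×8 grid and outlier positions are illustrative:

```python
import numpy as np

def laplace_inpaint(grid, mask, iters=2000):
    """Fill entries where mask is True by Jacobi relaxation of the discrete
    Laplace equation; known entries stay fixed throughout."""
    g = grid.astype(float).copy()
    g[mask] = g[~mask].mean()  # crude initial guess for the missing channels
    for _ in range(iters):
        avg = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
               np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 4
        g[mask] = avg[mask]    # update only the missing entries
    return g

# A linear activity map is harmonic, so the fill recovers it exactly.
rows, cols = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
truth = 2.0 * rows + 3.0 * cols
mask = np.zeros((8, 8), bool)
mask[3, 3] = mask[4, 2] = True          # two interior "outlier" channels
filled = laplace_inpaint(np.where(mask, 0.0, truth), mask)
print(np.abs(filled - truth).max())     # negligible residual error
```

Real maps are not harmonic, so the fill is an interpolation rather than an exact recovery; the paper's RMSE figures quantify that gap for actual HDsEMG data.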
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, J; Udayakumar, T; Wang, Z
Purpose: CT is not able to differentiate tumors from surrounding soft tissue. This study is to develop a bioluminescence tomography (BLT) system that is integrated onto our previously developed CT guided small animal arc radiation treatment system (iSMAART) to guide radiation, monitor tumor growth and evaluate therapeutic response. Methods: The BLT system employs a CCD camera coupled with a high speed lens, and is aligned orthogonally to the x-ray beam central axis. The two imaging modalities, CT and BLT, are physically registered through geometrical calibration. The CT anatomy provides an accurate contour of the animal surface, which is used to construct a 3D mesh for BLT reconstruction. Bioluminescence projections are captured from multiple angles, once every 45 degrees of rotation. The diffusion equation based on the analytical Kirchhoff approximation is adopted to model photon propagation in tissues. A discrete cosine transform based reweighted L1-norm regularization (DCT-re-L1) algorithm is used for BLT reconstruction. Experiments are conducted on a mouse orthotopic prostate tumor model (n=12) to evaluate the BLT performance in terms of its robustness and accuracy in locating and quantifying the bioluminescent tumor cells. Iodinated contrast agent was injected intravenously to delineate the tumor in CT. The tumor location and volume obtained from CT also serve as a benchmark against BLT. Results: With our cutting-edge reconstruction algorithm, BLT is able to accurately reconstruct the orthotopic prostate tumors. The tumor center of mass in BLT is within 0.5 mm radial distance of that in CT. The tumor volume in BLT is significantly correlated with that in CT (R2 = 0.81). Conclusion: BLT can differentiate, localize and quantify tumors. Together with CT, BLT will provide precision radiation guidance and reliable treatment assessment in preclinical cancer research.
Gürkan Figen, Ziya; Aytür, Orhan; Arıkan, Orhan
2016-03-20
In this paper, we design aperiodic gratings based on orientation-patterned gallium arsenide (OP-GaAs) for converting 2.1 μm pump laser radiation into long-wave infrared (8-12 μm) in an idler-efficiency-enhanced scheme. These single OP-GaAs gratings placed in an optical parametric oscillator (OPO) or an optical parametric generator (OPG) can simultaneously phase match two optical parametric amplification (OPA) processes, OPA 1 and OPA 2. We use two design methods that allow simultaneous phase matching of two arbitrary χ(2) processes and also free adjustment of their relative strength. The first aperiodic grating design method (Method 1) relies on generating a grating structure that has domain walls located at the zeros of the summation of two cosine functions, each of which has a spatial frequency that equals one of the phase-mismatch terms of the two processes. Some of the domain walls are discarded considering the minimum domain length that is achievable in the production process. In this paper, we propose a second design method (Method 2) that relies on discretizing the crystal length with sample lengths that are much smaller than the minimum domain length and testing each sample's contribution in such a way that the sign of the nonlinearity maximizes the magnitude sum of the real and imaginary parts of the Fourier transform of the grating function at the relevant phase mismatches. Method 2 produces performance similar to that of Method 1 in terms of maximizing the height of either Fourier peak located at the relevant phase mismatch, while allowing an adjustable relative height for the two peaks. To our knowledge, this is the first time that aperiodic OP-GaAs gratings have been proposed for efficient long-wave infrared beam generation based on simultaneous phase matching.
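Method 1 as described — domain walls at the zeros of a sum of two cosines, with walls that would create too-short domains discarded — can be sketched numerically; the spatial frequencies, crystal length, and minimum domain length below are illustrative, not the paper's design values:

```python
import numpy as np

def domain_walls(k1, k2, length, min_domain, n=200000):
    """Domain-wall positions: zeros of cos(k1*x) + cos(k2*x) on [0, length],
    dropping any wall that would create a domain shorter than min_domain."""
    x = np.linspace(0, length, n)
    f = np.cos(k1 * x) + np.cos(k2 * x)
    sign_change = np.nonzero(np.diff(np.sign(f)) != 0)[0]
    zeros = x[sign_change]  # sign-change positions approximate the zeros
    walls = []
    for z in zeros:
        if not walls or z - walls[-1] >= min_domain:
            walls.append(z)
    return walls

# Illustrative phase mismatches (rad/mm) for the two OPA processes,
# a 40 mm crystal, and a 0.5 mm minimum fabricable domain:
walls = domain_walls(k1=2.0, k2=2.6, length=40.0, min_domain=0.5)
print(len(walls), walls[:3])
```

The Fourier transform of the resulting two-level grating function then shows peaks at both k1 and k2, which is what makes the two processes simultaneously phase matched.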
Wrinkle-free design of thin membrane structures using stress-based topology optimization
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-05-01
Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
2013-01-01
Peak alignment is a critical procedure in mass spectrometry-based biomarker discovery in metabolomics. One peak alignment approach for comprehensive two-dimensional gas chromatography mass spectrometry (GC×GC-MS) data is peak matching-based alignment. A key to peak matching-based alignment is the calculation of mass spectral similarity scores. Various mass spectral similarity measures have been developed, mainly for compound identification, but the effect of these spectral similarity measures on the performance of peak matching-based alignment remains unknown. Therefore, we selected five mass spectral similarity measures, cosine correlation, Pearson's correlation, Spearman's correlation, partial correlation, and part correlation, and examined their effects on peak alignment using two sets of experimental GC×GC-MS data. The results show that the spectral similarity measure does not significantly affect the alignment accuracy in analysis of data from less complex samples, while the partial correlation performs much better than the other spectral similarity measures when analyzing experimental data acquired from complex biological samples. PMID:24151524
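Cosine correlation between two mass spectra (intensity vectors over a common m/z grid) is the first of the similarity measures listed above; a minimal sketch with made-up intensity vectors:

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Cosine correlation between two aligned intensity vectors."""
    dot = sum(a * b for a, b in zip(spec_a, spec_b))
    na = math.sqrt(sum(a * a for a in spec_a))
    nb = math.sqrt(sum(b * b for b in spec_b))
    return dot / (na * nb)

# Identical spectra score 1; spectra with disjoint fragment peaks score 0.
s1 = [0.0, 10.0, 5.0, 0.0, 2.0]
s2 = [0.0, 10.0, 5.0, 0.0, 2.0]
s3 = [7.0, 0.0, 0.0, 3.0, 0.0]
print(cosine_similarity(s1, s2))  # 1.0
print(cosine_similarity(s1, s3))  # 0.0
```

Because it is scale invariant, the score ignores overall intensity differences between the two peaks being matched and responds only to the fragmentation pattern.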
Automated speech analysis applied to laryngeal disease categorization.
Gelzinis, A; Verikas, A; Bacauskiene, M
2008-07-01
The long-term goal of the work is a decision support system for diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening of laryngeal diseases. The effectiveness of 11 different feature sets in classifying voice recordings of the sustained phonation of the vowel sound /a/ into one healthy and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, an SVM, and a committee built using various aggregation options are used for the classification. The study used a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved using an SVM committee of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features, and linear prediction cosine transform coefficients were amongst the feature sets providing the best performance. In the two-class case, using recordings from 79 subjects representing the pathological class and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again the pitch and amplitude perturbation measures provided the best performance.
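The committee aggregation can be illustrated with a minimal majority-vote sketch; this is one of several possible aggregation options, and the paper does not state that this exact rule was used:

```python
from collections import Counter

def committee_vote(predictions):
    """Majority vote over committee members' class predictions for one
    recording. `predictions` is a list of class labels, one per member;
    ties resolve to the label counted first."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Hypothetical five-member committee voting on one recording.
decision = committee_vote(['healthy', 'nodular', 'healthy', 'diffuse', 'healthy'])
```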
Uyghur face recognition method combining 2DDCT with POEM
NASA Astrophysics Data System (ADS)
Yi, Lihamu; Ya, Ermaimaiti
2017-11-01
In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination and partial occlusion, a method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 blocks, and the blocked images were converted into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the non-sensitive medium-frequency and non-high-frequency parts, reducing the feature dimensions needed for the images and the amount of computation; thirdly, the corresponding POEM histograms were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were concatenated as the texture histogram of the central feature point to obtain the texture features of the face feature points; finally, the training samples were classified using a deep learning algorithm. The simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
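The 2DDCT step can be illustrated with a naive orthonormal 2D DCT-II over one 8×8 block; this is the generic textbook formulation, not the authors' code:

```python
import math

def dct2(block):
    """Naive orthonormal 2D DCT-II of an N x N block (O(N^4); fine for
    8x8 illustration, too slow for production use)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

# For a constant 8x8 block all energy lands in the DC coefficient:
# DC = (1/8) * sum = (1/8) * 64 * 10 = 80 for the orthonormal transform.
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
```

Frequency-domain compression then amounts to keeping only a subset of `coeffs` before feature extraction.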
Cross-talk free selective reconstruction of individual objects from multiplexed optical field data
NASA Astrophysics Data System (ADS)
Zea, Alejandro Velez; Barrera, John Fredy; Torroba, Roberto
2018-01-01
In this paper we present a data multiplexing method for the simultaneous storage of several optical fields of three-dimensional (3D) objects in a single package, and their individual cross-talk-free retrieval. Optical field data are extracted from off-axis Fourier holograms and then sampled by multiplying them with random binary masks. The resulting sampled optical fields can be used to reconstruct the original objects. Sampling causes a loss of quality that can be controlled by the number of white pixels in the binary masks and by applying a padding procedure to the optical field data. This process can be performed using a different binary mask for each optical field, and the results added to form a multiplexed package. With an adequate choice of sampling and padding, we achieve a volume reduction in the multiplexed package compared with the sum of all individual optical fields. Moreover, the package can be multiplied by a binary mask to select a specific optical field, and after the reconstruction procedure the corresponding 3D object is recovered without any cross-talk. We demonstrate the effectiveness of our proposal for data compression with a comparison against discrete cosine transform filtering. Experimental results confirm the validity of our proposal.
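The mask-sampling and multiplexing steps can be sketched on plain 2D arrays; this is a toy illustration under the assumption of disjoint (here, complementary) binary masks, not the authors' optical-field code:

```python
def sample_field(field, mask):
    """Element-wise sampling of a field array by a 0/1 binary mask."""
    return [[f * m for f, m in zip(frow, mrow)]
            for frow, mrow in zip(field, mask)]

def multiplex(fields, masks):
    """Sum of mask-sampled fields. With disjoint masks, re-applying a
    field's own mask to the package recovers that field's samples
    without cross-talk from the others."""
    h, w = len(fields[0]), len(fields[0][0])
    package = [[0.0] * w for _ in range(h)]
    for field, mask in zip(fields, masks):
        sampled = sample_field(field, mask)
        for i in range(h):
            for j in range(w):
                package[i][j] += sampled[i][j]
    return package

# Two toy 2x2 "fields" with complementary masks.
fields = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]
masks = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
package = multiplex(fields, masks)
recovered_1 = sample_field(package, masks[0])  # field 1's samples only
```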
Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon
2018-01-01
We aimed to develop a gap-filling algorithm, in particular the filter-mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only in the gap-filling iteration but also in mask generation, to identify the object-dedicated low-frequency area in the DCT domain that is to be preserved. We redefine the low-frequency-preserving region of the filter mask at every gap-filling iteration, and the region converges toward the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and its results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning objects and shows results comparable to those of the manually optimized DCT2 algorithm without full information of the imaging object.
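A mask that preserves an object-dependent low-frequency region of the DCT domain can be sketched as follows; the `keep_fraction` heuristic here is a hypothetical stand-in for the paper's iterative mask refinement, not its actual criterion:

```python
def low_frequency_mask(coeffs, keep_fraction=0.1):
    """Build a 0/1 mask over a 2D array of DCT coefficients, keeping
    the largest-magnitude coefficients (a crude proxy for the object's
    dominant low-frequency content) and zeroing the rest."""
    n = len(coeffs)
    magnitudes = sorted((abs(c) for row in coeffs for c in row),
                        reverse=True)
    k = max(1, int(keep_fraction * n * n))
    threshold = magnitudes[k - 1]
    return [[1 if abs(c) >= threshold else 0 for c in row]
            for row in coeffs]
```

In an iterative scheme, the mask would be rebuilt from the current image estimate at each gap-filling iteration, so the preserved region adapts to the object.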
A secure online image trading system for untrusted cloud environments.
Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi
2015-01-01
In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant features. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
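The DC-component matching can be sketched with block means (for an orthonormal 8×8 DCT, the DC coefficient is the block mean scaled by 8, so means carry the same information); `dc_signature` and `match_score` are illustrative names, not the paper's API:

```python
def dc_signature(image, block=8):
    """Per-block DC information (block means) over non-overlapping
    blocks of a 2D image, as a flat list scanned row-major."""
    h, w = len(image), len(image[0])
    sig = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            total = sum(image[i + x][j + y]
                        for x in range(block) for y in range(block))
            sig.append(total / (block * block))
    return sig

def match_score(sig_a, sig_b):
    """Mean absolute difference of two DC signatures; lower is a
    better match between a thumbnail and a protected image."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
```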
Array Design: Literature Survey For A High-Resolution Imaging Sonar System. Part 1
1993-12-01
radioastronomy or microwave arrays, or it may be elastic or acoustic (same as elastic but restricted to fields specified by a scalar quantity). There is... these are direction cosines (or sines) (see Ziomek 1985, or a radioastronomy book such as Perley et al. 1989). This is because the Fourier
An Undergraduate Course on Operating Systems Principles.
ERIC Educational Resources Information Center
National Academy of Engineering, Washington, DC. Commission on Education.
This report is from Task Force VIII of the COSINE Committee of the Commission on Education of the National Academy of Engineering. The task force was established to formulate subject matter for an elective undergraduate subject on computer operating systems principles for students whose major interest is in the engineering of computer systems and…
Radar Transponder Antenna Systems Evaluation Handbook
2006-07-01
Poincare Sphere Usage. The plane geometry... Poincare sphere: any coupling factor is numerically equal to the cosine of half the distance between states on the spherical surface. Then in... Erhcp, Elhcp) on the Poincare sphere (Paragraph 5.14.1). As such, any antenna whose polarization lies on this plane receives the same
On the Delusiveness of Adopting a Common Space for Modeling IR Objects: Are Queries Documents?
ERIC Educational Resources Information Center
Bollmann-Sdorra, Peter; Raghavan, Vijay V.
1993-01-01
Proposes that document space and query space have different structures in information retrieval and discusses similarity measures, term independence, and linear structure. Examples are given using the retrieval functions of dot-product, the cosine measure, the coefficient of Jaccard, and the overlap function. (Contains 28 references.) (LRW)
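The four retrieval functions named above can be sketched on term vectors (with binary vectors for the set-based measures); a generic illustration of the standard definitions:

```python
import math

def dot(q, d):
    """Dot-product retrieval function over term-weight vectors."""
    return sum(a * b for a, b in zip(q, d))

def cosine(q, d):
    """Cosine measure: dot product normalized by vector lengths."""
    nq, nd = math.sqrt(dot(q, q)), math.sqrt(dot(d, d))
    return dot(q, d) / (nq * nd) if nq and nd else 0.0

def jaccard(q, d):
    """Coefficient of Jaccard on binary term vectors:
    |intersection| / |union| of the term sets."""
    inter = sum(1 for a, b in zip(q, d) if a and b)
    union = sum(1 for a, b in zip(q, d) if a or b)
    return inter / union if union else 0.0

def overlap(q, d):
    """Overlap function on binary term vectors:
    |intersection| / min(|q|, |d|)."""
    inter = sum(1 for a, b in zip(q, d) if a and b)
    denom = min(sum(1 for a in q if a), sum(1 for b in d if b))
    return inter / denom if denom else 0.0
```

The article's point can be seen directly in the code: the cosine and overlap functions normalize by properties of the query vector itself, so queries and documents are not interchangeable objects in a common space.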
Learning binary code via PCA of angle projection for image retrieval
NASA Astrophysics Data System (ADS)
Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong
2018-01-01
With the benefits of low storage cost and high query speed, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing, learning a hashing function that embeds high-dimensional features into Hamming space is the key step for accurate retrieval. Principal component analysis (PCA) is widely used in compact hashing methods: most adopt PCA projection functions to project the original data onto several real-valued dimensions, and each projected dimension is then quantized into one bit by thresholding. The variances of the projected dimensions differ, and the real-valued projection introduces substantial quantization error. To avoid this, we propose a cosine-similarity projection for each dimension; the angle projection preserves the original structure and yields a more compact cosine-valued representation. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
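The projection-then-threshold quantization step that the paper argues loses precision can be sketched in a few lines; this is a generic illustration of that baseline step, not the proposed angle-projection method itself:

```python
def binarize(projections):
    """Quantize real-valued projected dimensions to one bit each by
    thresholding at zero -- the step where large-magnitude and
    near-zero projections collapse to the same bit, causing
    quantization error."""
    return [1 if p > 0 else 0 for p in projections]

def hamming(code_a, code_b):
    """Hamming distance between two binary codes."""
    return sum(x != y for x, y in zip(code_a, code_b))

# Two hypothetical projected feature vectors and their codes.
code_1 = binarize([0.3, -1.2, 0.0, 2.0])
code_2 = binarize([0.1, -0.9, 0.4, 1.5])
dist = hamming(code_1, code_2)
```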
Tire-rim interface pressure of a commercial vehicle wheel under radial loads: theory and experiment
NASA Astrophysics Data System (ADS)
Wan, Xiaofei; Shan, Yingchun; Liu, Xiandong; He, Tian; Wang, Jiegong
2017-11-01
The simulation of the radial fatigue test of a wheel has been a necessary tool to improve the design of the wheel and calculate its fatigue life. The simulation model, including the strong nonlinearity of the tire structure and material, may produce accurate results, but often leads to a divergence in calculation. Thus, a simplified simulation model in which the complicated tire model is replaced with a tire-wheel contact pressure model is used extensively in the industry. In this paper, a simplified tire-rim interface pressure model of a wheel under a radial load is established, and the pressure of the wheel under different radial loads is tested. The tire-rim contact behavior affected by the radial load is studied and analyzed according to the test result, and the tire-rim interface pressure extracted from the test result is used to evaluate the simplified pressure model and the traditional cosine function model. The results show that the proposed model may provide a more accurate prediction of the wheel radial fatigue life than the traditional cosine function model.
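The traditional cosine function model referenced above is commonly written as a half-cosine pressure distribution over the tire-rim contact region; the exact form used in the paper may differ, so treat this as an assumed textbook form:

```python
import math

def cosine_pressure(theta, p_max, theta0):
    """Half-cosine contact pressure model (assumed textbook form):
    pressure peaks at p_max under the load (theta = 0), falls off as
    cos(pi * theta / (2 * theta0)) across the contact patch
    [-theta0, theta0], and is zero outside it."""
    if abs(theta) > theta0:
        return 0.0
    return p_max * math.cos(math.pi * theta / (2.0 * theta0))
```

In a simplified radial-fatigue simulation, such a distribution replaces the full nonlinear tire model as the pressure boundary condition on the rim.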
A Universal Formula for Extracting the Euler Angles
NASA Technical Reports Server (NTRS)
Shuster, Malcolm D.; Markley, F. Landis
2004-01-01
Recently, the authors completed a study of the Davenport angles, which are a generalization of the Euler angles for which the initial and final Euler axes need not be either mutually parallel or mutually perpendicular or even along the coordinate axes. During the conduct of that study, those authors discovered a relationship which can be used to compute straightforwardly the Euler angles characterizing a proper-orthogonal direction-cosine matrix for an arbitrary Euler-axis set satisfying n1 x n2 = 0 and n3 x n1 = 0, which is also satisfied by the more usual Euler angles we encounter commonly in the practice of Astronautics. Rather than leave that relationship hidden in an article with a very different focus from the present Engineering note, we present it here, together with the universal algorithm derived from it for extracting the Euler angles from the direction-cosine matrix. We also offer literal "code" for performing the operations, numerical examples, and general considerations about the extraction of Euler angles which are not universally known, particularly the treatment of statistical error.
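For the common 3-2-1 (yaw-pitch-roll) axis set, extraction from a direction-cosine matrix looks like the following; this is the standard special case, not the paper's universal formula for arbitrary axis sets:

```python
import math

def euler_321_to_dcm(yaw, pitch, roll):
    """Direction-cosine matrix for the 3-2-1 rotation sequence."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cp * cy,                cp * sy,                -sp],
        [sr * sp * cy - cr * sy, sr * sp * sy + cr * cy, sr * cp],
        [cr * sp * cy + sr * sy, cr * sp * sy - sr * cy, cr * cp],
    ]

def dcm_to_euler_321(C):
    """Extract 3-2-1 Euler angles from a proper-orthogonal DCM.
    Degenerate near pitch = +/- 90 deg, where yaw and roll are not
    separately observable (the familiar gimbal-lock case)."""
    yaw = math.atan2(C[0][1], C[0][0])
    pitch = -math.asin(max(-1.0, min(1.0, C[0][2])))
    roll = math.atan2(C[1][2], C[2][2])
    return yaw, pitch, roll
```

A round trip (angles to DCM and back) recovers the original angles away from the degenerate pitch values.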
NASA Astrophysics Data System (ADS)
Sedelnikov, A. V.
2018-05-01
Assessment of the parameters of rotary motion of the small spacecraft around its center of mass, and of microaccelerations, is carried out using measurements of current from silicon photocells. At the same time there is a problem of interpreting ambiguous telemetric data, since the current from two opposite sides of the small spacecraft is significant. A means of removing such uncertainty, based on a fuzzy set, is considered. The normality condition of the direction cosines is proposed as the membership function. An example of uncertainty removal for a prototype of the Aist small spacecraft is given. The offered approach can significantly increase the accuracy of microacceleration estimates when using measurements of current from silicon photocells.
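A membership function built on the normality condition of the direction cosines could look like the following; the triangular shape and the `width` parameter are assumptions for illustration, not the authors' definition:

```python
def normality_membership(l, m, n, width=0.1):
    """Fuzzy membership based on the direction-cosine normality
    condition l^2 + m^2 + n^2 = 1: membership is 1 when the condition
    holds exactly and decays linearly to 0 as the residual grows to
    `width`. Candidate attitude interpretations violating normality
    get low membership and can be discarded."""
    residual = abs(l * l + m * m + n * n - 1.0)
    return max(0.0, 1.0 - residual / width)
```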
NASA Astrophysics Data System (ADS)
Brouwer, Lucas Nathan
Advances in superconducting magnet technology have historically enabled the construction of new, higher energy hadron colliders. Looking forward to the needs of a potential future collider, a significant increase in magnet field and performance is required. Such a task requires an open mind to the investigation of new design concepts for high field magnets. Part I of this thesis will present an investigation of the Canted-Cosine-Theta (CCT) design for high field Nb3Sn magnets. New analytic and finite element methods for analysis of CCT magnets will be given, along with a discussion on optimization of the design for high field. The design, fabrication, and successful test of the 2.5 T NbTi dipole CCT1 will be presented as a proof-of-principle step towards a high field Nb3Sn magnet. Finally, the design and initial steps in the fabrication of the 16 T Nb3Sn dipole CCT2 will be described. Part II of this thesis will investigate the CCT concept extended to a curved magnet for use in an ion beam therapy gantry. The introduction of superconducting technology in this field shows promise to reduce the weight and cost of gantries, as well as open the door to new beam optics solutions with high energy acceptance. An analytic approach developed for modeling curved CCT magnets will be presented, followed by a design study of a superconducting magnet for a proton therapy gantry. Finally, a new magnet concept called the "Alternating Gradient CCT" (AG-CCT) will be introduced. This concept will be shown to be a practical magnet solution for achieving the alternating quadrupole fields desired for an achromatic gantry, allowing for the consideration of treatment with minimal field changes in the superconducting magnets. The primary motivation of this thesis is to share new developments for Canted-Cosine-Theta superconducting magnets, with the hope this design will improve technology for high energy physics and ion beam cancer therapy.
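A CCT dipole winding path is often parameterized as a tilted helix on a cylinder; the formula below is one common form from the CCT literature, assumed here rather than taken from the thesis:

```python
import math

def cct_dipole_path(theta, radius, tilt, pitch):
    """Point on a canted-cosine-theta dipole winding path.

    The winding is a helix on a cylinder of `radius`, canted by `tilt`
    so that the axial current component varies as cos(theta) around
    the bore (producing a dipole field); `pitch` is the axial advance
    per turn. Returns the (x, y, z) coordinates at azimuth `theta`.
    """
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    z = (radius / math.tan(tilt)) * math.sin(theta) \
        + pitch * theta / (2.0 * math.pi)
    return x, y, z

# Sample two points on a hypothetical 50 mm radius, 30 deg tilt layer.
p0 = cct_dipole_path(0.0, 0.05, math.pi / 6, 0.01)
p_half = cct_dipole_path(math.pi, 0.05, math.pi / 6, 0.01)
```

In practice, pairs of layers with opposite tilt are combined so the unwanted solenoidal field components cancel while the dipole components add.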
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, T.K.; Anderson, J.L.; Condie, K.G.
Experiments designed to investigate surface dryout in a heated, ribbed annulus test section simulating one of the annular coolant channels of a Savannah River Plant production reactor Mark 22 fuel assembly have been conducted at the Idaho National Engineering Laboratory. The inner surface of the annulus was constructed of aluminum and was electrically heated to provide an axial cosine power profile and a flat azimuthal power shape. Data presented in this report are from the ECS-2, WSR, and ECS-2cE series of tests. These experiments were conducted to examine the onset of wall thermal excursion for a range of flow, inlet fluid temperature, and annulus outlet pressure. Hydraulic boundary conditions on the test section represent flowrates (0.1--1.4 l/s), inlet fluid temperatures (293--345 K), and outlet pressures (-18--139.7 cm of water relative to the bottom of the heated length; 61--200 cm of water relative to the bottom of the lower plenum) expected to occur during the Emergency Coolant System (ECS) phase of a postulated Loss-of-Coolant Accident in a production reactor. The onset of thermal excursion based on the present data is consistent with data gathered in test rigs with flat axial power profiles. The data indicate that wall dryout is primarily a function of liquid superficial velocity. The air entrainment rate was observed to be a strong function of the boundary conditions (primarily flowrate and liquid temperature), but had a minor effect on the power at the onset of thermal excursion for the range of conditions examined. 14 refs., 33 figs., 13 tabs.