Sample records for cosine transform coefficients

  1. The optimal digital filters of sine and cosine transforms for geophysical transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo

    2018-03-01

    The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain via the Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 when the digital filter coefficients of the sine and cosine transforms are obtained. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the choice of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal choice of the parameter, it is found that an optimal sampling interval also exists that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of the sine and cosine transforms and promote its application.
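
    As a numerical illustration (using SciPy rather than the paper's filter tables), the identity behind the derivation, namely that the cosine and sine kernels are rescaled ±1/2-order Bessel functions of the first kind, can be checked directly:

```python
# Numerical check of the identities linking the cosine/sine kernels to the
# Bessel functions of the first kind of order -1/2 and +1/2:
#   cos(x) = sqrt(pi*x/2) * J_{-1/2}(x),   sin(x) = sqrt(pi*x/2) * J_{+1/2}(x)
# This relationship is what lets sine/cosine-transform filter coefficients be
# derived from a Hankel-transform digital filter.
import numpy as np
from scipy.special import jv

x = np.linspace(0.1, 20.0, 200)          # avoid x = 0, where J_{-1/2} diverges
cos_from_bessel = np.sqrt(np.pi * x / 2.0) * jv(-0.5, x)
sin_from_bessel = np.sqrt(np.pi * x / 2.0) * jv(+0.5, x)

max_err_cos = np.max(np.abs(cos_from_bessel - np.cos(x)))
max_err_sin = np.max(np.abs(sin_from_bessel - np.sin(x)))
```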

  2. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is transmitted over a memoryless channel; the method centers on the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for an error-free channel.

  3. Removing tidal-period variations from time-series data using low-pass digital filters

    USGS Publications Warehouse

    Walters, Roy A.; Heston, Cynthia

    1982-01-01

    Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water surface elevation for San Francisco Bay. The most efficient filter is one applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increases in the following order: 1) cosine-Lanczos filter; 2) cosine-Lanczos squared filter; 3) Godin filter; and 4) transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of the 2–3 day variations in surface elevation resulting from weather events.
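
    The transform-filter approach can be sketched as follows: a hypothetical hourly elevation series with a semidiurnal (12.42 h) tide riding on a 3-day weather signal is Fourier transformed, coefficients above a cutoff frequency are zeroed, and the series is recovered by the inverse transform (all parameters here are illustrative, not those of the San Francisco Bay study):

```python
# Sketch of a "transform filter": FFT the series, zero the Fourier
# coefficients above a cutoff, then inverse-FFT to recover filtered data.
import numpy as np

t = np.arange(0, 30 * 24, 1.0)                      # 30 days, hourly samples
tide = 1.0 * np.cos(2 * np.pi * t / 12.42)          # M2 tidal constituent
weather = 0.5 * np.cos(2 * np.pi * t / (3 * 24))    # 3-day weather variation
eta = tide + weather

F = np.fft.rfft(eta)
freq = np.fft.rfftfreq(eta.size, d=1.0)             # cycles per hour
F[freq > 1.0 / 30.0] = 0.0                          # cut periods shorter than 30 h
filtered = np.fft.irfft(F, n=eta.size)              # weather signal survives
```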

  4. [Similarity system theory to evaluate similarity of chromatographic fingerprints of traditional Chinese medicine].

    PubMed

    Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu

    2005-03-01

    Similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in the chromatograms of TCM are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent-similarity method, using simulated and experimental data. The correlation coefficient and the cosine of the angle are not sensitive to differences in the data sets, even after normalization. Based on similarity system theory, an improved extent-similarity method is proposed, which is more sensitive to differences among the data sets than the correlation coefficient or the cosine of the angle. Moreover, unlike log-transformation, it does not require the character of the data sets to be changed. The improved extent similarity can be used to evaluate the similarity of the chromatographic fingerprints of TCM.
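
    The reported insensitivity can be reproduced on toy data: two peak-area vectors whose small peaks differ threefold still score near 1 under both the correlation coefficient and the cosine of the angle (the vectors are invented for illustration; the improved extent-similarity measure itself is not reproduced here):

```python
# Illustration of why correlation and cosine similarity are insensitive to
# differences in small chromatographic peaks: large peaks dominate both scores.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

ref    = np.array([100.0, 80.0, 60.0, 5.0, 3.0, 2.0])
sample = np.array([100.0, 80.0, 60.0, 15.0, 9.0, 6.0])   # small peaks tripled

cos_sim = cosine(ref, sample)     # still very close to 1
corr = correlation(ref, sample)   # likewise
```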

  5. Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes

    DTIC Science & Technology

    2018-01-01

    an example where the signal is non-sparse in the standard basis, but sparse in the discrete cosine basis. The top plot shows the signal from the…previous example, now used as sparse discrete cosine transform (DCT) coefficients. The next plot shows the non-sparse signal in the standard…Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207–1223. 3. Donoho DL

  6. A simplified Integer Cosine Transform and its application in image compression

    NASA Technical Reports Server (NTRS)

    Costa, M.; Tong, K.

    1994-01-01

    A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
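
    The core trick, replacing the per-coefficient integer division by a binary shift when the combined normalization/quantization factor is restricted to a power of two, can be sketched as follows (the coefficient values and shift amount are illustrative):

```python
# Sketch of the simplified-ICT idea: integer division by 2**shift is an
# arithmetic right shift, avoiding a hardware divider per coefficient.
def quantize_by_shift(coeff, shift):
    """'Divide' by 2**shift via an arithmetic right shift (floors like //)."""
    return coeff >> shift

def quantize_by_division(coeff, factor):
    return coeff // factor

coeffs = [1023, -512, 300, 17, -1]
shifted = [quantize_by_shift(c, 4) for c in coeffs]     # divide by 16
divided = [quantize_by_division(c, 16) for c in coeffs] # identical result
```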

  7. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.

  8. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  9. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
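
    Mandela ordering itself is a simple regrouping; a minimal sketch for a cluster of 8x8 coefficient blocks (toy data, not the paper's trellis coder):

```python
# Sketch of Mandela ordering: gather the identically indexed coefficient
# from every block in a cluster into one 1-D sequence per coefficient index.
import numpy as np

def mandela_order(blocks):
    """blocks: (n_blocks, 8, 8) -> 64 sequences, each of length n_blocks."""
    n = blocks.shape[0]
    # (n, 8, 8) -> (64, n): sequence k holds coefficient k from every block
    return blocks.reshape(n, 64).T

cluster = np.arange(3 * 64).reshape(3, 8, 8)   # 3 toy "DCT blocks"
seqs = mandela_order(cluster)
```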

  10. A robust color image watermarking algorithm against rotation attacks

    NASA Astrophysics Data System (ADS)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  11. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.

  12. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1993-01-01

    The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression, and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.

  13. Recognition of Activities of Daily Living Based on Environmental Analyses Using Audio Fingerprinting Techniques: A Systematic Review

    PubMed Central

    Santos, Rui; Pombo, Nuno; Flórez-Revuelta, Francisco

    2018-01-01

    An increase in the accuracy of identification of Activities of Daily Living (ADL) is very important for different goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although this is usually used to identify the location, ADL recognition can be improved with the identification of the sound in that particular environment. This paper reviews audio fingerprinting techniques that can be used with the acoustic data acquired from mobile devices. A comprehensive literature search was conducted in order to identify relevant English language works aimed at the identification of the environment of ADLs using data acquired with mobile devices, published between 2002 and 2017. In total, 40 studies were analyzed and selected from 115 citations. The results highlight several audio fingerprinting techniques, including modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), principal component analysis (PCA), fast Fourier transform (FFT), Gaussian mixture models (GMM), likelihood estimation, logarithmic modulated complex lapped transform (LMCLT), support vector machine (SVM), constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and discrete cosine transform (DCT). PMID:29315232

  14. Integer cosine transform for image compression

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Pollara, F.; Shahshahani, M.

    1991-01-01

    This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
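
    The flavor of an ICT can be illustrated by rounding a scaled floating-point DCT kernel to small integers and checking how nearly orthogonal the result remains; note this rounding construction is only illustrative and is not the specific ICT kernel of the article:

```python
# Sketch of the integer-cosine-transform idea: approximate the 8-point DCT-II
# kernel by small integers and measure the departure from orthogonality.
import numpy as np

N = 8
n = np.arange(N)
# Row k, column j: cos(pi * (2j + 1) * k / (2N)), the DCT-II kernel
C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
T = np.round(10 * C).astype(int)          # small-integer approximation

# Renormalize the integer rows and compare T against an orthogonal matrix:
# the Gram matrix of unit rows should be close to the identity.
Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
G = Tn @ Tn.T
off_diag = np.max(np.abs(G - np.diag(np.diag(G))))
```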

  15. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).

  16. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
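
    Quantization by a matrix of per-coefficient step sizes, the structure such methods optimize, can be sketched as follows (the matrix here is an arbitrary frequency-weighted example, not a perceptually derived one):

```python
# Sketch of quantizing 8x8 DCT coefficients with a quantization matrix:
# coarser steps for higher frequencies, finer steps near DC.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128

# Illustrative matrix: step grows linearly with frequency index i + j
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
Q = 8.0 + 4.0 * (i + j)

coeffs = dctn(block, norm='ortho')
quantized = np.round(coeffs / Q)            # integers the encoder would store
decoded = idctn(quantized * Q, norm='ortho')
mse = np.mean((decoded - block) ** 2)       # distortion set by Q's step sizes
```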

  17. Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.

    2005-01-01

    A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
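
    Clenshaw's algorithm, the comparison method, evaluates a cosine series at a point with a three-term recurrence instead of explicit cosines, which is the kernel sum behind a pointwise DCT-I evaluation; a minimal sketch (Bakhvalov's algorithm is not reproduced here):

```python
# Clenshaw's recurrence for S(theta) = sum_{k=0}^{n} a[k] * cos(k*theta):
# run b_k = a_k + 2*cos(theta)*b_{k+1} - b_{k+2} downward, then
# S = b_0 - cos(theta)*b_1. Only one cosine evaluation is needed.
import numpy as np

def clenshaw_cosine(a, theta):
    x = np.cos(theta)
    b1 = b2 = 0.0                           # b_{n+1} = b_{n+2} = 0
    for k in range(len(a) - 1, 0, -1):      # k = n .. 1
        b1, b2 = a[k] + 2 * x * b1 - b2, b1
    b0 = a[0] + 2 * x * b1 - b2
    return b0 - x * b1

a = [0.5, 1.0, -0.25, 0.125, 2.0]
theta = 0.7
direct = sum(ak * np.cos(k * theta) for k, ak in enumerate(a))
via_clenshaw = clenshaw_cosine(a, theta)    # matches the direct sum
```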

  18. The Karhunen-Loeve, discrete cosine, and related transforms obtained via the Hadamard transform. [for data compression

    NASA Technical Reports Server (NTRS)

    Jones, H. W.; Hein, D. N.; Knauer, S. C.

    1978-01-01

    A general class of even/odd transforms is presented that includes the Karhunen-Loeve transform, the discrete cosine transform, the Walsh-Hadamard transform, and other familiar transforms. The more complex even/odd transforms can be computed by combining a simpler even/odd transform with a sparse matrix multiplication. A theoretical performance measure is computed for some even/odd transforms, and two image compression experiments are reported.

  19. Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.

    PubMed

    Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M

    2016-05-01

    The different discrete transform techniques such as discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and MFCC techniques. Linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using MFCC technique for AD recognition has a great improvement for the system performance with small number of significant extracted features, as compared with the CAD system based on DCT, DST, DWT, and the hybrid combination methods of the different transform techniques. © The Author(s) 2015.

  20. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between target and background are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform those of other background modeling methods in the spatial domain.
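
    The modeling idea, one Gaussian per DCT frequency point, can be sketched on synthetic data (the frame sizes, target amplitude, and threshold below are invented for illustration, not the paper's settings):

```python
# Sketch: fit a single Gaussian (mean, std) per DCT frequency point over a
# stack of background frames, then flag coefficients of a new frame that lie
# far from the background model as target energy.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
frames = rng.normal(0.0, 1.0, size=(50, 16, 16))       # fluctuating background
dct_frames = np.stack([dctn(f, norm='ortho') for f in frames])

mu = dct_frames.mean(axis=0)            # per-frequency Gaussian mean
sigma = dct_frames.std(axis=0) + 1e-6   # per-frequency Gaussian std

test_frame = frames[-1].copy()
test_frame[4:8, 4:8] += 16.0            # inject a bright "target" patch
z = np.abs(dctn(test_frame, norm='ortho') - mu) / sigma
target_score = z[0, 0]                  # DC picks up the injected energy
detected = bool(np.any(z > 4.0))
```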

  1. ASIC implementation of recursive scaled discrete cosine transform algorithm

    NASA Astrophysics Data System (ADS)

    On, Bill N.; Narasimhan, Sam; Huang, Victor K.

    1994-05-01

    A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm as proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. Implementation of the design was done using a top-down design methodology with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture of the design consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices, and the design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration (VLSI) implementation.

  2. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
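
    The measurement principle can be sketched as follows: each projected pattern is a DCT basis image, each single-pixel reading is one inner product with the scene, and an inverse DCT recovers the image (a fully sampled toy example; the sub-Nyquist and colour aspects of the paper are omitted):

```python
# Sketch of single-pixel imaging with 2-D DCT structured illumination:
# measuring the scene against every orthonormal DCT basis pattern yields the
# scene's DCT spectrum, so an inverse DCT reconstructs the scene exactly.
import numpy as np
from scipy.fft import idctn

N = 8
scene = np.outer(np.hanning(N), np.hanning(N))       # toy scene

def dct_pattern(u, v, size):
    """Basis image whose inner product with the scene is DCT coeff (u, v)."""
    delta = np.zeros((size, size))
    delta[u, v] = 1.0
    return idctn(delta, norm='ortho')

# One "single-pixel" measurement (a summed intensity) per projected pattern
measurements = np.array([[np.sum(scene * dct_pattern(u, v, N))
                          for v in range(N)] for u in range(N)])
recovered = idctn(measurements, norm='ortho')
max_err = np.max(np.abs(recovered - scene))
```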

  3. Wavelets

    NASA Astrophysics Data System (ADS)

    Strang, Gilbert

    1994-06-01

    Several methods used to analyze and synthesize a signal are compared. Three ways are described to transform a symphony: into cosine waves (Fourier transform), into pieces of cosines (short-time Fourier transform), and into wavelets (little waves that start and stop). Choosing the best basis, higher dimensions, the fast wavelet transform, and Daubechies wavelets are discussed. High-definition television is described, and the prospective use of wavelets in fingerprint identification is outlined.

  4. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be delivered by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. An inverse DCT is applied to obtain the initial fused image, and an enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.

  5. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.

  6. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.

  7. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    PubMed

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.

  8. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  9. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.

  10. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used to evaluate the visual quality of processed digital video sequences and to adaptively control their bit rate without compromising visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original, non-compressed reference sequence (R) and a processed sequence (T). Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a temporal filtering operation that models human sensitivity to different temporal frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.

  11. A 16X16 Discrete Cosine Transform Chip

    NASA Astrophysics Data System (ADS)

    Sun, M. T.; Chen, T. C.; Gottlieb, A.; Wu, L.; Liou, M. L.

    1987-10-01

    Among the various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered the most effective method and has been widely used in the laboratory as well as in the marketplace. DCT is computationally intensive. For video applications at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single-chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture is presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of the required memory. The resulting circuit uses only memory, shift registers, and adders; no multipliers are required. It achieves high-speed performance with a very regular and efficient integrated-circuit realization. The chip accepts 9-bit input and produces 14-bit DCT coefficients; 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out in a 2-μm CMOS technology with the symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0 mm².
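
    The row-column decomposition the chip exploits can be sketched in software (a plain Python reference using the orthonormal DCT-II, not the chip's distributed-arithmetic implementation; the 4x4 size is for illustration only):

    ```python
    import math

    def dct_1d(x):
        """Orthonormal DCT-II of a 1-D sequence."""
        N = len(x)
        out = []
        for k in range(N):
            s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                    for n in range(N))
            scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
            out.append(scale * s)
        return out

    def dct_2d(block):
        """2-D DCT by row-column decomposition: 1-D DCT on each row,
        then a 1-D DCT on each column of the result."""
        rows = [dct_1d(r) for r in block]
        cols = [dct_1d(list(c)) for c in zip(*rows)]
        return [list(r) for r in zip(*cols)]

    block = [[1.0] * 4 for _ in range(4)]  # constant block: energy in DC only
    out = dct_2d(block)
    ```

    The decomposition replaces one N²-point 2-D transform with 2N N-point 1-D transforms, which is what makes a concurrent hardware realization tractable.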

  12. Similarity analysis between chromosomes of Homo sapiens and monkeys with correlation coefficient, rank correlation coefficient and cosine similarity measures

    PubMed Central

    Someswara Rao, Chinta; Viswanadha Raju, S.

    2016-01-01

    In this paper, we consider correlation coefficient, rank correlation coefficient, and cosine similarity measures for evaluating similarity between Homo sapiens and monkeys. We used DNA chromosomes of genome-wide genes to determine the correlation between chromosomal content and evolutionary relationship. The similarity between H. sapiens and monkeys is measured for a total of 210 chromosomes related to 10 species. The similarity measures for these different species show the relationship between H. sapiens and the monkeys. This similarity will be helpful in theft identification, maternity identification, disease identification, etc. PMID:26981409
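
    The three measures compared in the paper can be sketched on plain numeric vectors (the paper's encoding of chromosome data into vectors is not reproduced here; the tie-free ranking helper is a simplification):

    ```python
    import math

    def cosine_sim(a, b):
        """Cosine of the angle between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def pearson(a, b):
        """Pearson correlation = cosine similarity of mean-centered vectors."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        return cosine_sim([x - ma for x in a], [y - mb for y in b])

    def spearman(a, b):
        """Rank correlation = Pearson correlation of the ranks (no ties here)."""
        rank = lambda v: [sorted(v).index(x) + 1 for x in v]
        return pearson(rank(a), rank(b))
    ```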

  13. Similarity analysis between chromosomes of Homo sapiens and monkeys with correlation coefficient, rank correlation coefficient and cosine similarity measures.

    PubMed

    Someswara Rao, Chinta; Viswanadha Raju, S

    2016-03-01

    In this paper, we consider correlation coefficient, rank correlation coefficient, and cosine similarity measures for evaluating similarity between Homo sapiens and monkeys. We used DNA chromosomes of genome-wide genes to determine the correlation between chromosomal content and evolutionary relationship. The similarity between H. sapiens and monkeys is measured for a total of 210 chromosomes related to 10 species. The similarity measures for these different species show the relationship between H. sapiens and the monkeys. This similarity will be helpful in theft identification, maternity identification, disease identification, etc.

  14. Optimization of Darrieus turbines with an upwind and downwind momentum model

    NASA Astrophysics Data System (ADS)

    Loth, J. L.; McCoy, H.

    1983-08-01

    This paper presents a theoretical aerodynamic performance optimization for two-dimensional vertical-axis wind turbines. A momentum-type wake model is introduced with separate cosine-type interference coefficients for the upwind and downwind halves of the rotor. The cosine-type loading permits the rotor blades to become unloaded near the junction of the upwind and downwind rotor halves. Both the optimum and the off-design magnitudes of the interference coefficients are obtained by equating the drag on each of the rotor halves to that on each of two cosine-loaded actuator discs in series. The values for the optimum rotor efficiency, solidity, and corresponding interference coefficients have been obtained in a closed-form analytic solution by maximizing the power extracted from the downwind rotor half as well as from the entire rotor. A numerical solution was required when viscous effects were incorporated in the rotor optimization.

  15. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where the regularizer enforces image smoothness. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed with the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF), and the mean square error (MSE) was also smaller than with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction with the PAPA method.
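
    The core of any ℓ1 penalty on transform coefficients is the soft-thresholding proximal step, sketched here on plain numbers (an illustration of the general principle, not the full PAPA iteration from the paper):

    ```python
    def soft_threshold(c, t):
        """Proximal operator of t*|c|: shrink toward zero by t, zeroing
        coefficients smaller than the threshold. This is how an l1 penalty
        suppresses small (noise-dominated) transform coefficients."""
        if c > t:
            return c - t
        if c < -t:
            return c + t
        return 0.0

    coeffs = [5.0, -0.3, 1.2, -2.5, 0.1]            # hypothetical coefficients
    shrunk = [soft_threshold(c, 0.5) for c in coeffs]
    ```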

  16. Fast discrete cosine transform structure suitable for implementation with integer computation

    NASA Astrophysics Data System (ADS)

    Jeong, Yeonsik; Lee, Imgeun

    2000-10-01

    The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with the property of reduced multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that could occur in the integer computation.

  17. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image that depicts the scene more accurately than any individual source image, with the best possible quality and without distortion or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) each RGB color source image is split into its R, G, and B channels. (2) The DCT is applied to each channel. (3) Variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Corresponding blocks of the source images are compared by variance, and the block with the maximum variance is selected for the fused image; this process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can mitigate unwanted side effects such as blurring or blocking artifacts that reduce the quality of the resulting image in the fusion process. The approach is evaluated using three measures: the average Q(abf), standard deviation, and peak signal-to-noise ratio. Experimental results for the proposed technique show improvements over earlier techniques. © 2016 Wiley Periodicals, Inc.
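
    Step (4), the per-block selection by variance, can be sketched as follows (a simplified illustration with invented 2x2 blocks; the paper computes variances on 8 × 8 DCT blocks per channel):

    ```python
    def variance(block):
        """Population variance of all pixels in a rectangular block."""
        flat = [p for row in block for p in row]
        m = sum(flat) / len(flat)
        return sum((p - m) ** 2 for p in flat) / len(flat)

    def fuse_blocks(block_a, block_b):
        """Keep whichever source block has the larger variance: an in-focus
        region has more high-frequency content, hence higher variance."""
        return block_a if variance(block_a) >= variance(block_b) else block_b

    sharp = [[10, 200], [190, 20]]      # high-contrast (in-focus) block
    blurry = [[100, 110], [105, 108]]   # low-contrast (out-of-focus) block
    fused = fuse_blocks(sharp, blurry)
    ```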

  18. Detection of shifted double JPEG compression by an adaptive DCT coefficient model

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua

    2014-12-01

    In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.

  19. Transformation between surface spherical harmonic expansion of arbitrary high degree and order and double Fourier series on sphere

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficients of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of the surface spherical harmonic expansion to those of the double Fourier series so as to be applicable to arbitrarily high degree and order. Next, we created a new method to transform a given double Fourier series inversely to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, and fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order, fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are prepared analytically. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable to extremely high degree, order, and wavenumber, as high as 2^{30} ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.

  20. Discrete cosine and sine transforms generalized to honeycomb lattice

    NASA Astrophysics Data System (ADS)

    Hrivnák, Jiří; Motlochová, Lenka

    2018-06-01

    The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.

  1. The hyperbolic chemical bond: Fourier analysis of ground and first excited state potential energy curves of HX (X = H-Ne).

    PubMed

    Harrison, John A

    2008-09-04

    RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X ¹Σg⁺ are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.

  2. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.

  3. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
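
    The DPCM pre-transform mentioned above can be sketched with a simple previous-neighbor predictor (an illustration of the idea, not the specific predictor used in the article):

    ```python
    # Replace each sample by its difference from the previous one. The
    # differences cluster near zero, so entropy coders such as Huffman or
    # LZW compress the differential image better than the raw pixels.

    def dpcm_encode(row):
        prev, out = 0, []
        for p in row:
            out.append(p - prev)
            prev = p
        return out

    def dpcm_decode(diffs):
        prev, out = 0, []
        for d in diffs:
            prev += d
            out.append(prev)
        return out

    row = [100, 102, 101, 105, 105]   # hypothetical pixel row
    diffs = dpcm_encode(row)          # small, repetitive values
    ```

    The transform is exactly invertible, which is why it can feed a lossless coder.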

  4. Qualitative and semiquantitative Fourier transformation using a noncoherent system.

    PubMed

    Rogers, G L

    1979-09-15

    A number of authors have pointed out that a system of zone plates combined with a diffuse source, transparent input, lens, and focusing screen will display on the output screen the Fourier transform of the input. Strictly speaking, the transform normally displayed is the cosine transform, and the bipolar output is superimposed on a dc gray level to give a positive-only intensity variation. By phase-shifting one zone plate the sine transform is obtained. Temporal modulation is possible. It is also possible to redesign the system to accept a diffusely reflecting input at the cost of introducing a phase gradient in the output. Results are given of the sine and cosine transforms of a small circular aperture. As expected, the sine transform is a uniform gray. Both transforms show unwanted artifacts beyond 0.1 rad off-axis. An analysis shows this is due to unwanted circularly symmetrical moire patterns between the zone plates.

  5. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  6. On the application of under-decimated filter banks

    NASA Technical Reports Server (NTRS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-01-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.

  7. On the application of under-decimated filter banks

    NASA Astrophysics Data System (ADS)

    Lin, Y.-P.; Vaidyanathan, P. P.

    1994-11-01

    Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.

  8. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is challenging when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats reference and non-reference video frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms achieve high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational cost.

  9. A Fourier transform with speed improvements for microprocessor applications

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.; Rochelle, R.

    1980-01-01

    A fast Fourier transform algorithm for the RCA 1802 microprocessor was developed for spacecraft instrument applications. The computations were tailored to the restrictions an eight-bit machine imposes. The algorithm incorporates some aspects of Walsh-function sequency to improve operational speed. This method uses a register that adds a value proportional to the period of the band being processed before each computation is considered. If the result overflows into the DF register, the data sample is used in the computation; otherwise the computation is skipped. This operation is repeated for each of the 64 data samples, for both the sine and cosine portions of the computation. The processing uses eight-bit data, but because the many computations can increase the size of the coefficients, floating-point form is used. A method to reduce the alias problem in the lower bands is also described.

  10. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
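
    The Zigzag scanning used for splicing the spectra can be sketched as follows (a generic zigzag traversal along anti-diagonals, shown on an invented 3x3 block; low-frequency entries come first):

    ```python
    def zigzag(block):
        """Traverse a square block along anti-diagonals, alternating
        direction, so coefficients are ordered from low to high frequency."""
        n = len(block)
        coords = [(i, j) for i in range(n) for j in range(n)]
        # Sort by diagonal index i+j; within a diagonal, alternate direction.
        coords.sort(key=lambda ij: (ij[0] + ij[1],
                                    ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]))
        return [block[i][j] for i, j in coords]

    scan = zigzag([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
    ```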

  11. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    PubMed

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper proposes a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is an important information hiding (audio, video, color image, gray image) technique that has become common for digital objects as technology has developed in recent years. One of the common methods for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.

  12. Average Cosine Meter and High Spectral Resolution Measurements at the Marine Light Mixed Layer Site.

    DTIC Science & Technology

    1994-09-30

    ... electromechanical release; this novel and inexpensive method eliminated shadowing from the ship. The ACM measured irradiance at 490nm using cosine and ... three times more accurately than using traditional methods. A mathematical simulation of the absorption coefficient of phytoplankton derived from ...

  13. Empirical comparison of local structural similarity indices for collaborative-filtering-based recommender systems

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan

    2010-08-01

    Collaborative filtering is one of the most successful recommendation techniques; it can effectively predict users' possible future likes based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach uses the correlation between the ratings that two users give to a set of objects, such as the Cosine index or the Pearson correlation coefficient. However, the cost of computing this kind of index is relatively high, making it impractical for very large systems. To address this problem, in this paper we introduce six local-structure-based similarity indices and compare their performance with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data are dense, the structure-based indices perform competitively with the Cosine index, with lower computational complexity. Furthermore, when the data are sparse, the structure-based indices give even better results than the Cosine index.
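
    On binary user-object data, the Cosine index reduces to the Salton form, while a local structural index can be as simple as counting shared selections. A small sketch (item names are invented):

    ```python
    import math

    def common_neighbors(a, b):
        """Local structural index: number of objects both users selected."""
        return len(a & b)

    def salton_cosine(a, b):
        """Cosine (Salton) index for binary selection sets: shared objects
        normalized by the geometric mean of the users' degrees."""
        return len(a & b) / math.sqrt(len(a) * len(b))

    u1 = {"item1", "item2", "item3"}
    u2 = {"item2", "item3", "item4", "item5"}
    cn = common_neighbors(u1, u2)
    sim = salton_cosine(u1, u2)
    ```

    The local index needs only a set intersection per user pair, which is the source of the lower computational cost discussed above.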

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation in place of the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
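A minimal sketch of the truncated-cosine-transform idea (the grid size, particle counts, and orthonormal DCT construction below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: rows are cosine basis vectors.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (x + 0.5) * k / n) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def truncated_fct(density, keep):
    # Forward 2-D DCT, zero all but the `keep` largest-magnitude
    # coefficients, then invert (the DCT matrix is orthogonal).
    n = density.shape[0]
    C = dct_matrix(n)
    coeffs = C @ density @ C.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return C.T @ coeffs @ C

# Bin a synthetic 2-D particle cloud onto a grid, then smooth it.
rng = np.random.default_rng(0)
grid, _, _ = np.histogram2d(rng.normal(size=10000), rng.normal(size=10000),
                            bins=32, range=[[-4, 4], [-4, 4]])
smooth = truncated_fct(grid, keep=64)
```

Discarding small coefficients removes much of the binning noise while keeping the smooth large-scale structure of the density.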

  15. Performance of customized DCT quantization tables on scientific data

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh; Livny, Miron

    1994-01-01

    We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
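The role of a quantization table can be sketched as follows (the 4×4 table and its linear frequency weighting are invented for illustration; JPEG uses 8×8 tables):

```python
import numpy as np

def quantize(coeffs, table):
    # Each DCT coefficient is divided by its table entry and rounded;
    # larger entries mean coarser steps and more compression.
    return np.round(coeffs / table).astype(int)

def dequantize(q, table):
    return q * table

rng = np.random.default_rng(1)
coeffs = rng.normal(0.0, 50.0, size=(4, 4))   # stand-in DCT coefficients

i, j = np.indices((4, 4))
table = 8.0 + 4.0 * (i + j)      # coarser quantization at high frequencies

q = quantize(coeffs, table)
error = np.abs(dequantize(q, table) - coeffs)
# Reconstruction error is bounded by half a quantization step per entry.
```

Customizing `table` to the data's actual coefficient statistics is exactly what trades distortion against bit rate per frequency.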

  16. Determination of Fourier Transforms on an Instructional Analog Computer

    ERIC Educational Resources Information Center

    Anderson, Owen T.; Greenwood, Stephen R.

    1974-01-01

    An analog computer program to find and display the Fourier transform of some real, even functions is described. Oscilloscope traces are shown for Fourier transforms of a rectangular pulse, a Gaussian, a cosine wave, and a delayed narrow pulse. Instructional uses of the program are discussed briefly. (DT)

  17. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach outperforms many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
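The JPEG-style zigzag selection mentioned above can be sketched like this (block size and the number of retained coefficients are illustrative):

```python
import numpy as np

def zigzag_indices(n):
    # JPEG-style zigzag ordering: traverse anti-diagonals, alternating
    # direction, so low-frequency coefficients come first.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda t: (t[0] + t[1],
                                 t[0] if (t[0] + t[1]) % 2 else t[1]))

def block_features(dct_block, n_coeffs=9):
    # Keep the first n_coeffs low-to-medium-frequency coefficients.
    order = zigzag_indices(dct_block.shape[0])[:n_coeffs]
    return np.array([dct_block[i, j] for i, j in order])

block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in DCT block
features = block_features(block)
```

Concatenating `block_features` over all blocks of the palm region yields the compact feature vector used for verification.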

  18. Down-Regulation of Olfactory Receptors in Response to Traumatic Brain Injury Promotes Risk for Alzheimer's Disease

    DTIC Science & Technology

    2015-12-01

    group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on ... log2-transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using Cosine correlation as the similarity metric. For ... differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as

  19. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically, so the quality of image reconstruction suffers. Firstly, block-based compressed sensing (BCS) with a conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method estimates the sparsity of an image effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected from the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.
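The sparsity estimate described above might look like this (the threshold value and the synthetic coefficient block are illustrative):

```python
import numpy as np

def sparsity_from_dct(coeffs, energy_threshold=0.99):
    # Normalise squared DCT coefficients to total energy, sort them in
    # descending order, and count how many are needed to reach the
    # threshold; the proportion of dominant coefficients is the estimate.
    energy = np.sort((coeffs ** 2).ravel())[::-1]
    energy = energy / energy.sum()
    k = int(np.searchsorted(np.cumsum(energy), energy_threshold)) + 1
    return k / energy.size

# Synthetic 8x8 coefficient block dominated by two low frequencies.
coeffs = np.zeros((8, 8))
coeffs[0, 0], coeffs[0, 1] = 8.0, 4.0
print(sparsity_from_dct(coeffs))  # 2 dominant coefficients of 64 -> 0.03125
```

A smaller returned value means a sparser image, and hence fewer compressive measurements are needed.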

  20. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion-difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, and the temporal-domain features are transformed into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system shows promising success rates.
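The motion-accumulation step can be sketched as follows (the frame contents are synthetic and the normalisation choice is an assumption):

```python
import numpy as np

def accumulated_motion_image(frames):
    # Sum absolute differences of successive frames; pixels that never
    # change stay at zero. Normalised to [0, 1] before DCT feature
    # extraction.
    acc = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur.astype(float) - prev.astype(float))
    return acc / acc.max() if acc.max() > 0 else acc

# Three synthetic 4x4 frames with a "gesture" moving along the diagonal.
frames = [np.zeros((4, 4)) for _ in range(3)]
frames[1][1, 1] = 1.0
frames[2][2, 2] = 1.0
acc = accumulated_motion_image(frames)
```

The 2D DCT of `acc` then turns this temporal summary into a spatial-frequency feature vector.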

  1. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
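A rough sketch of the scale-and-pool step for one block (the pooling exponent and all matrices below are illustrative assumptions, not the authors' calibrated model):

```python
import numpy as np

def pooled_perceptual_error(coeffs, qmatrix, thresholds, beta=4.0):
    # Quantisation error per DCT coefficient, expressed in units of its
    # visual threshold, then pooled nonlinearly (Minkowski sum).
    err = coeffs - np.round(coeffs / qmatrix) * qmatrix
    jnd = np.abs(err) / thresholds     # error in "just noticeable" units
    return float((jnd ** beta).sum() ** (1.0 / beta))

# Toy 2x2 example: coefficients, quantizer steps, visual thresholds.
coeffs = np.array([[100.0, 21.0], [13.0, 6.0]])
qmatrix = np.array([[16.0, 11.0], [12.0, 10.0]])
thresholds = np.array([[2.0, 1.0], [1.0, 0.5]])
e = pooled_perceptual_error(coeffs, qmatrix, thresholds)
```

With a large exponent the pooled value is dominated by the most visible error, which is the intent of nonlinear pooling.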

  2. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

    In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual standard and the H.264/AVC standard. The unified inverse quantised discrete cosine and integer transform performs both the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for the MPEG-4 Visual standard and in the H.264/AVC reference software JM 16.1, and the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.

  3. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB admits an easy and efficient design approach: non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized to CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank's performance, so the performance is improved using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm. These result in filter banks with lower implementation complexity, power consumption and area requirements than those of the conventional continuous-coefficient non-uniform CMFB.

  4. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB admits an easy and efficient design approach: non-uniform decomposition can be obtained simply by merging the appropriate filters of a uniform filter bank, and only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least squares approximation. The coefficients are quantized to CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank's performance, so the performance is improved using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm. These result in filter banks with lower implementation complexity, power consumption and area requirements than those of the conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  5. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  6. The influence of finite cavities on the sound insulation of double-plate structures.

    PubMed

    Brunskog, Jonas

    2005-06-01

    Lightweight walls are often designed as frameworks of studs with plates on each side--a double-plate structure. The studs constitute boundaries for the cavities, thereby both affecting the sound transmission directly by short-circuiting the plates, and indirectly by disturbing the sound field between the plates. The paper presents a deterministic prediction model for airborne sound insulation including both effects of the studs. A spatial transform technique is used, taking advantage of the periodicity. The acoustic field inside the cavities is expanded by means of cosine series. The transmission coefficient (angle-dependent and diffuse) and transmission loss are studied. Numerical examples are presented and comparisons with measurements are performed. The result indicates that a reasonably good agreement between theory and measurement can be achieved.

  7. Use Correlation Coefficients in Gaussian Process to Train Stable ELM Models

    DTIC Science & Technology

    2015-05-22

    confidence interval of prediction y′. There are two parameters that need to be determined in BELM: σ²_N and α > 0. BELM effectively controls the over... similarity between h(u) and h(v) with vectorial angle cosine rather than the distance between them. The increase of vector dimension will not cause the ... vectorial angle cosine approaches 0. Then, we can know that Q I with the increase of L. This reduces the chance of over-fitting.

  8. Vesicle sizing by static light scattering: a Fourier cosine transform approach

    NASA Astrophysics Data System (ADS)

    Wang, Jianhong; Hallett, F. Ross

    1995-08-01

    A Fourier cosine transform method, based on the Rayleigh-Gans-Debye thin-shell approximation, was developed to retrieve vesicle size distributions directly from the angular dependence of scattered light intensity. Its feasibility for real vesicles was partially tested on scattering data generated by the exact Mie solutions for isotropic vesicles. The noise tolerance of the method in recovering unimodal and bimodal distributions was studied with the simulated data. Applicability of this approach to vesicles with weak anisotropy was examined using Mie theory for anisotropic hollow spheres. A primitive theory about the first four moments of the radius distribution about the origin, excluding the mean radius, was obtained as an alternative to the direct retrieval of size distributions.

  9. Testing Fixture For Microwave Integrated Circuits

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert; Shalkhauser, Kurt

    1989-01-01

    Testing fixture facilitates radio-frequency characterization of microwave and millimeter-wave integrated circuits. Includes base onto which two cosine-tapered ridge waveguide-to-microstrip transitions are fastened. Length and profile of taper determined analytically to provide maximum bandwidth and minimum insertion loss. Each cosine taper provides transformation from high impedance of waveguide to characteristic impedance of microstrip. Used in conjunction with automatic network analyzer to provide user with deembedded scattering parameters of device under test. Operates from 26.5 to 40.0 GHz, but operation extends to much higher frequencies.

  10. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications are different from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction of the new transforms is compared with that of the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.

  11. Second rank direction cosine spherical tensor operators and the nuclear electric quadrupole hyperfine structure Hamiltonian of rotating molecules

    NASA Astrophysics Data System (ADS)

    di Lauro, C.

    2018-03-01

    Transformations of vector or tensor properties from a space-fixed to a molecule-fixed axis system are often required in the study of rotating molecules. Spherical components λ_{μ,ν} of a first rank irreducible tensor can be obtained from the direction cosines between the two axis systems, and a second rank tensor with spherical components λ^{(2)}_{μ,ν} can be built from the direct product λ × λ. It is shown that the treatment of the interaction between molecular rotation and the electric quadrupole of a nucleus is greatly simplified if the coefficients in the axis-system transformation of the gradient of the electric field of the outer charges at the coupled nucleus are arranged as spherical components λ^{(2)}_{μ,ν}. Then the reduced matrix elements of the field gradient operators in a symmetric top eigenfunction basis, including their dependence on the molecule-fixed z-angular momentum component k, can be determined from the knowledge of those of λ^{(2)}. The hyperfine structure Hamiltonian H_q is expressed as the sum of terms characterized each by a value of the molecule-fixed index ν, whose matrix elements obey the rule Δk = ν. Some of these terms may vanish because of molecular symmetry, and the specific cases of linear and symmetric top molecules, orthorhombic molecules, and molecules with symmetry lower than orthorhombic are considered. Each ν-term consists of a contraction of the rotational tensor λ^{(2)} and the nuclear quadrupole tensor in the space-fixed frame, and its matrix elements in the rotation-nuclear spin coupled representation can be determined by standard spherical tensor methods.

  12. Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.

    PubMed

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan

    2004-06-05

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality, which is attributed to the fact that a change in the LSB of a pixel changes its brightness by only 1 part in 256. Spatial- and DFT-domain interleaving gave much lower %NRMSE than the DCT and DWT domains: for spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency-domain interleaving methods, DFT was found to be the most efficient.
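The 1-part-in-256 observation about spatial-domain (LSB) interleaving can be sketched directly (the image contents and payload bits are illustrative):

```python
import numpy as np

def embed_lsb(image, bits):
    # Overwrite the least significant bit of the first len(bits) pixels,
    # changing each affected pixel's intensity by at most 1 in 256.
    flat = image.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
payload = [1, 0, 1, 1]                 # e.g. encrypted patient-data bits
stego = embed_lsb(img, payload)
recovered = (stego.ravel()[:4] & 1).tolist()
```

Extraction is simply reading the LSB plane back, so the interleaving is lossless for the hidden data and visually negligible for the image.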

  13. Optimization of Trade-offs in Error-free Image Transmission

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.

    1989-05-01

    The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p, while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.

  14. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform are obtained for a wide class of images corrupted by additive noise. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising efficiency is then fitted against these statistics, and efficient approximations are obtained. It is shown that the obtained approximations provide highly accurate prediction of denoising efficiency.
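A minimal sketch of a hard-thresholding DCT filter (the 2.7σ threshold factor is a commonly used setting, not necessarily the paper's; block size and noise level are illustrative):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.cos(np.pi * (x + 0.5) * k / n) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def dct_hard_threshold(block, sigma, beta=2.7):
    # 2-D DCT, zero coefficients below beta*sigma (keeping DC), inverse.
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T
    mask = np.abs(coeffs) >= beta * sigma
    mask[0, 0] = True                  # always keep the block mean
    return C.T @ (coeffs * mask) @ C

rng = np.random.default_rng(2)
clean = np.full((8, 8), 50.0)
noisy = clean + rng.normal(0.0, 5.0, size=(8, 8))   # AWGN, sigma = 5
denoised = dct_hard_threshold(noisy, sigma=5.0)
```

How many coefficients survive the threshold depends on the image's DCT-spectrum statistics, which is exactly why those statistics predict denoising efficiency.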

  15. Application of Fourier Transform Infrared Spectra (FTIR) Fingerprint in the Quality Control of Mineral Chinese Medicine Limonitum.

    PubMed

    Liu, Sheng-jin; Yang, Huan; Wu, De-kang; Xu, Chun-xiang; Lin, Rui-chao; Tian, Jin-gai; Fang, Fang

    2015-04-01

    In the present paper, the FTIR fingerprint of Limonitum (a mineral Chinese medicine) was established, and the spectrograms of crude samples, processed samples and an adulterant sample were compared. Eighteen batches of Limonitum samples from different production areas were analyzed, and the angle cosine value of the transmittance (%) of common peaks was calculated to obtain the similarity of the FTIR fingerprints. The result showed that the similarity coefficients of the samples were all above 0.90. The processed samples revealed significant differences compared with the crude ones. This study analyzed the composition characteristics of Limonitum in the FTIR fingerprint, and it was simple and fast to distinguish the crude, processed and counterfeit samples. The FTIR fingerprints provide a new method for evaluating the quality of Limonitum.

  16. CD-Based Indices for Link Prediction in Complex Network.

    PubMed

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Lots of similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. Firstly, a node coordinate matrix is obtained from node distances (as distinct from the distance matrix), and the row vectors of the matrix are regarded as node coordinates. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, is then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to an analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks.

  17. Numerical methods for comparing fresh and weathered oils by their FTIR spectra.

    PubMed

    Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen

    2007-08-01

    Four comparison statistics ('similarity indices') for identifying the source of a petroleum oil spill, based on the ASTM standard test method D3414, were investigated: (1) first difference correlation coefficient squared, (2) correlation coefficient squared, (3) first difference Euclidean cosine squared and (4) Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions, described as fingerprint (900-700 cm(-1)), generic (1350-900 cm(-1)) and supplementary (1770-1685 cm(-1)), which are the same as the three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first-difference-based indices worked better than their parent indices. To provide samples to reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. Euclidean cosine and correlation coefficients both worked well to maintain identification of a match in the fingerprint region, and the two first difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for decisions on matching using the fingerprint region showed that two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
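The four indices can be sketched as follows (the toy spectra are illustrative; ASTM D3414's exact preprocessing is not reproduced):

```python
import numpy as np

def euclidean_cosine_sq(a, b):
    # Squared cosine of the angle between two spectra.
    return float((a @ b) ** 2 / ((a @ a) * (b @ b)))

def correlation_sq(a, b):
    # Squared Pearson correlation: cosine of the mean-centred spectra.
    return euclidean_cosine_sq(a - a.mean(), b - b.mean())

def fd_euclidean_cosine_sq(a, b):
    # First-difference variant: compare point-to-point slopes instead,
    # which suppresses constant baseline offsets.
    return euclidean_cosine_sq(np.diff(a), np.diff(b))

def fd_correlation_sq(a, b):
    return correlation_sq(np.diff(a), np.diff(b))

x = np.linspace(0.0, 1.0, 50)
spectrum = np.exp(-((x - 0.5) ** 2) / 0.01)   # toy absorbance band
shifted = spectrum + 0.2                      # same band, offset baseline
```

A baseline offset leaves the first-difference and correlation indices at 1.0 but lowers the plain Euclidean cosine, illustrating why the derivative-based indices are more robust.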

  18. CD-Based Indices for Link Prediction in Complex Network

    PubMed Central

    Wang, Tao; Wang, Hongjue; Wang, Xiaoxia

    2016-01-01

    Lots of similarity-based algorithms have been designed to deal with the problem of link prediction in the past decade. In order to improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. Firstly, a node coordinate matrix is obtained from node distances (as distinct from the distance matrix), and the row vectors of the matrix are regarded as node coordinates. Then, the cosine value between node coordinates is used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, is then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of network clustering coefficient and assortative coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortative coefficient is negative or positive. According to an analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. CD and CD-k indices perform better on positive assortative networks than on negative assortative networks. For negative assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index and the evolutionary mechanism of the network model BA. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negative assortative networks. PMID:26752405

  19. Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping

    2015-05-01

    This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
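
    The first stage, estimating a noise level sample from the high-frequency DCT coefficients of a low-variation patch, can be sketched as below. This is a minimal illustration assuming an orthonormal 2-D DCT-II and a synthetic flat patch; the function names and the high-frequency mask are illustrative, not the paper's code:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def noise_level_from_patch(patch):
    """Estimate the noise standard deviation of a low-variation patch from its
    high-frequency 2-D DCT coefficients, where image content carries the least energy."""
    n = patch.shape[0]
    C = dct_matrix(n)
    coeffs = C @ patch @ C.T                 # 2-D DCT-II of the patch
    u, v = np.meshgrid(np.arange(n), np.arange(n))
    return float(np.std(coeffs[u + v >= n]))  # high-frequency half only

# Synthetic low-variation patch: flat signal plus Gaussian noise of sigma = 5.
rng = np.random.default_rng(0)
noisy = np.full((16, 16), 100.0) + rng.normal(0.0, 5.0, size=(16, 16))
sigma_hat = noise_level_from_patch(noisy)
```

    Because the DCT is orthonormal, white noise keeps the same standard deviation in the coefficient domain, so `sigma_hat` recovers roughly 5 here; the paper then fits the NLF to many such (intensity, sigma) samples via sparse recovery.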

  20. Soliton and periodic solutions for time-dependent coefficient non-linear equation

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan

    2016-01-01

    In this article, we establish exact solutions for the generalized (3+1)-dimensional variable-coefficient Kadomtsev-Petviashvili (GVCKP) equation. Using a solitary wave ansatz in terms of sech functions and the modified sine-cosine method, we find exact analytical bright soliton solutions and exact periodic solutions for the considered model. The physical parameters in the soliton solutions are obtained as functions of the dependent model coefficients. The effectiveness and reliability of the method are shown by its application to the GVCKP equation.

  1. Information Hiding In Digital Video Using DCT, DWT and CvT

    NASA Astrophysics Data System (ADS)

    Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb

    2018-05-01

    The type of video used in the proposed secret-information hiding technique is .AVI; the technique embeds secret information into video frames using the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB); the secret information is embedded in the Red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extraction, the robustness of the proposed technique is measured by computing the degradation of the extracted secret information, comparing it with the original secret information via the Normalized cross Correlation (NC). The experiments show that the error ratio of the proposed technique is 8% and the accuracy ratio is 92% when the Curvelet Transform (CvT) is used, compared with error rates of 11% and 14% (accuracy ratios of 89% and 86%) for the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), respectively. The experiments also show that Poisson noise gives better results than other types of noise, while speckle noise gives the worst results. The proposed technique has been implemented using the MATLAB R2016a programming language.
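
    The abstract does not give the embedding rule; as an illustrative sketch only, one common way to hide a bit in a red-channel block is to quantize a mid-frequency DCT coefficient to an even or odd multiple of a step. All names and parameter values below are hypothetical, not the paper's:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def embed_bit(block, bit, pos=(3, 2), step=16.0):
    """Embed one bit in an 8x8 red-channel block by forcing a mid-frequency DCT
    coefficient onto an even (bit 0) or odd (bit 1) multiple of `step`."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    q = np.round(coeffs[pos] / step)
    if int(q) % 2 != bit:
        q += 1.0
    coeffs[pos] = q * step
    return C.T @ coeffs @ C          # inverse DCT back to the pixel domain

def extract_bit(block, pos=(3, 2), step=16.0):
    """Recover the bit from the parity of the quantized coefficient."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    return int(np.round(coeffs[pos] / step)) % 2
```

    DWT- or curvelet-domain embedding follows the same pattern with the transform swapped out.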

  2. High-speed real-time image compression based on all-optical discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2017-02-01

    In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) that is three orders of magnitude higher than the switching rate of the digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on the DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which suggests broad applications in industrial quality control and biomedical imaging.

  3. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    PubMed

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed that uses a pseudo-entropy code to obtain accurate total rate estimates.
    The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
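
    A length-4 Walsh-Hadamard transform illustrates why WHT-based estimation is hardware-friendly: a two-stage butterfly needs only additions and subtractions, with no multipliers. This is a generic sketch, not the paper's 4×4/8×8 designs:

```python
import numpy as np

def wht4(x):
    """Order-4 Walsh-Hadamard transform (natural order) via a two-stage butterfly:
    8 additions/subtractions and no multiplications."""
    a0, a1 = x[0] + x[2], x[1] + x[3]   # stage 1
    b0, b1 = x[0] - x[2], x[1] - x[3]
    return np.array([a0 + a1, a0 - a1, b0 + b1, b0 - b1])  # stage 2

# Reference Hadamard matrix, to verify the butterfly computes H4 @ x.
H4 = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
])
```

    Distortion can then be estimated in this transform domain directly (Parseval), avoiding the inverse transform and de-quantization.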

  4. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation

    PubMed Central

    Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines, were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm. PMID:28886048
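
    Inverse transform sampling, as used here to draw particle positions and energies from the tabulated PDFs, can be sketched as follows. The PDF below is a toy decaying exponential, not one of the paper's PSF-derived distributions:

```python
import numpy as np

def inverse_transform_sample(x, pdf, n, rng):
    """Inverse transform sampling from a tabulated 1-D PDF: build the discrete
    CDF, then map uniform random numbers through its interpolated inverse."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(n), cdf, x)

# Toy tabulated PDF on [0, 6]; the paper tabulates PDFs for particle positions
# and energies from the phase space file instead.
x = np.linspace(0.0, 6.0, 601)
pdf = np.exp(-x)
rng = np.random.default_rng(0)
samples = inverse_transform_sample(x, pdf, 100000, rng)
```

    The sample mean approaches that of the truncated exponential (about 0.99), confirming the draw follows the tabulated density.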

  5. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    PubMed

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines, were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  6. Image segmentation-based robust feature extraction for color image watermarking

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed. Our method can adaptively extract feature regions from the blocks segmented by SLIC, selecting the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency domain and a high-frequency domain by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
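
    DC-DM quantization for a single coefficient can be sketched as below; the step size, compensation factor and dither value are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def dcdm_embed(coeff, bit, delta=8.0, alpha=0.7, dither=1.3):
    """Distortion-compensated dither modulation (DC-DM): quantize the coefficient
    on a lattice shifted by bit * delta/2 + dither, then add back a (1 - alpha)
    fraction of the quantization error as distortion compensation."""
    d = bit * delta / 2.0 + dither
    q = delta * np.round((coeff - d) / delta) + d
    return q + (1.0 - alpha) * (coeff - q)

def dm_extract(coeff, delta=8.0, dither=1.3):
    """Decode by minimum distance to the two shifted quantization lattices."""
    errs = []
    for bit in (0, 1):
        d = bit * delta / 2.0 + dither
        q = delta * np.round((coeff - d) / delta) + d
        errs.append(abs(coeff - q))
    return int(np.argmin(errs))
```

    The compensation term reduces embedding distortion while keeping the watermarked value closer to its own lattice than to the other bit's lattice, so decoding stays correct in the attack-free case.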

  7. Artificial intelligence systems based on texture descriptors for vaccine development.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2011-02-01

    The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine these descriptors with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.

  8. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is the most popular audio format nowadays in our daily life; for example, music downloaded from the Internet and files saved in digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Also, audio recordings made on digital recorders can easily be doctored with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression. The methods are essential for detecting fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
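
    The first-digit feature vector can be sketched as follows: collect the leading decimal digits of the nonzero quantized MDCT coefficients into a normalized 9-bin histogram, which is then fed to the SVM. The coefficients below are a toy array, not decoded from an MP3 bitstream:

```python
import numpy as np

def first_digit_histogram(coeffs):
    """Normalized histogram of the leading decimal digits (1-9) of the nonzero
    quantized MDCT coefficients; the 9 bin frequencies form the feature vector."""
    mags = np.abs(coeffs[coeffs != 0]).astype(float)
    leading = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    hist = np.bincount(leading, minlength=10)[1:10].astype(float)
    return hist / hist.sum()

# Toy quantized coefficients standing in for one granule's MDCT output.
coeffs = np.array([123, -45, 7, 0, 981, 10])
features = first_digit_histogram(coeffs)

# Benford's law distribution, a common reference in first-digit forensics.
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
```

    Re-compression perturbs this distribution, which is what the classifier exploits.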

  9. A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"

    NASA Astrophysics Data System (ADS)

    Das, Rig; Tuithung, Themrichon

    2013-03-01

    This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding," International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010 [3], and shows that extraction of the secret image is not possible for the algorithm proposed in [3]. An 8-bit cover image is divided into disjoint blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover-image blocks. The Huffman-encoded bit stream and Huffman table

  10. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Zuo, Chao; Idir, Mourad

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which enables the TIE technique with a hard aperture to become a more flexible phase retrieval tool in practical measurements.

  11. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE PAGES

    Huang, Lei; Zuo, Chao; Idir, Mourad; ...

    2015-04-21

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which enables the TIE technique with a hard aperture to become a more flexible phase retrieval tool in practical measurements.

  12. Cosine beamforming

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Wapenaar, Kees

    2014-05-01

    In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, for untangling the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data, but to crosscorrelated data, the sampling is effectively increased. We show that CBF, due to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the flip side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
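
    Conventional beamforming, the baseline CBF is compared against, can be sketched in the frequency domain: steer each sensor's spectrum by the delay predicted for a trial slowness and stack. The 5-element array geometry and plane-wave parameters below are a toy setup:

```python
import numpy as np

def beam_power(signals, positions, f, slowness_vectors, dt):
    """Conventional frequency-domain beamforming: phase-align each sensor's
    spectrum for a trial slowness vector and stack; the beam power peaks when
    the trial slowness matches the incoming plane wave."""
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(signals.shape[1], dt)
    k = int(np.argmin(np.abs(freqs - f)))     # analysis frequency bin
    X = spectra[:, k]
    powers = []
    for s in slowness_vectors:
        delays = positions @ s                # predicted arrival delay per sensor
        steer = np.exp(2j * np.pi * freqs[k] * delays)
        powers.append(np.abs(np.sum(X * steer)) ** 2)
    return np.array(powers)

# Synthetic 2 Hz plane wave crossing the array with slowness (0.2, 0.0) s/km.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
t = np.arange(0.0, 4.0, 0.01)
s_true = np.array([0.2, 0.0])
signals = np.cos(2 * np.pi * 2.0 * (t[None, :] - (positions @ s_true)[:, None]))

candidates = [s_true, np.array([-0.2, 0.0]), np.array([0.0, 0.2]), np.array([0.0, 0.0])]
powers = beam_power(signals, positions, 2.0, candidates, 0.01)
```

    CBF applies the same idea to crosscorrelations of the sensor pairs, which effectively densifies the spatial sampling.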

  13. Image secure transmission for optical orthogonal frequency-division multiplexing visible light communication systems using chaotic discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Zhang, Shaozhong; Chen, Fangni; Wu, Ming-Wei; Qiu, Weiwei

    2017-11-01

    A physical encryption scheme for orthogonal frequency-division multiplexing (OFDM) visible light communication (VLC) systems using a chaotic discrete cosine transform (DCT) is proposed. In the scheme, the rows of the DCT matrix are permuted by a scrambling sequence generated by a three-dimensional (3-D) Arnold chaos map. Furthermore, two scrambling sequences, which are also generated from a 3-D Arnold map, are employed to encrypt the real and imaginary parts of the transmitted OFDM signal before the chaotic DCT operation. The proposed scheme enhances the physical layer security and improves the bit error rate (BER) performance for OFDM-based VLC. The simulation results prove the efficiency of the proposed encryption method. The experimental results show that the proposed security scheme not only protects image data from eavesdroppers but also keeps the good BER and peak-to-average power ratio performances for image-based OFDM-VLC systems.
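
    The paper's scrambling uses a 3-D Arnold map; as a simplified illustration, the classic 2-D Arnold cat map can generate a chaotic sequence whose ranking yields a key-dependent permutation of the DCT matrix rows. The seeds and the argsort construction here are hypothetical, not the paper's:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def scrambling_sequence(n, x0=0.31416, y0=0.27183, burn=100):
    """Iterate the 2-D Arnold cat map on the unit torus and rank the x-values
    to obtain a key-dependent permutation of 0..n-1."""
    x, y = x0, y0
    for _ in range(burn):                      # discard the transient
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
    seq = []
    for _ in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        seq.append(x)
    return np.argsort(seq)

C = dct_matrix(8)
perm = scrambling_sequence(8)
C_scrambled = C[perm]                          # row-permuted DCT acts as the key
C_restored = C_scrambled[np.argsort(perm)]     # receiver with the same key undoes it
```

    Because the scrambled matrix is still orthonormal, the legitimate receiver loses nothing, while an eavesdropper without the chaotic key sees a mismatched transform.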

  14. A Method to Compute the Force Signature of a Body Impacting on a Linear Elastic Structure Using Fourier Analysis

    DTIC Science & Technology

    1982-09-17

    The convolution of two transforms in the time domain is the inverse transform of the product in the frequency domain. The response is recovered by its inverse transform, Rp(τ) = (1/2π) ∫ Rp(ω) e^(iωτ) dω. A very accurate numerical method is used to compute the Fourier sine and cosine transforms; when the inverse transform is taken, the cosine transform is used because it converges faster than the sine transform.

  15. New density estimation methods for charged particle beams with applications to microbunching instability

    NASA Astrophysics Data System (ADS)

    Terzić, Balša; Bassi, Gabriele

    2011-07-01

    In this paper we discuss representations of charged particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for charged particle distributions which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
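
    The truncated fast cosine transform idea can be sketched in 1-D: transform the binned density, keep only the largest-magnitude coefficients, and invert. The grid size, density shape and noise level below are toy values:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def truncated_dct_density(binned, keep):
    """Denoise a binned particle density by keeping the `keep` largest-magnitude
    DCT coefficients and zeroing the rest (truncated cosine transform)."""
    C = dct_matrix(binned.size)
    coeffs = C @ binned
    coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0
    return C.T @ coeffs

x = np.linspace(0.0, 1.0, 128)
clean = np.exp(-((x - 0.5) / 0.1) ** 2)          # smooth toy beam density
rng = np.random.default_rng(2)
noisy = clean + rng.normal(0.0, 0.05, x.size)    # binned density with sampling noise
denoised = truncated_dct_density(noisy, keep=10)

err_noisy = float(np.linalg.norm(noisy - clean))
err_denoised = float(np.linalg.norm(denoised - clean))
```

    A smooth density concentrates its energy in a few low-order cosine coefficients, so truncation discards mostly noise; the TWT variant replaces the global truncation with wavelet-domain thresholding.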

  16. Simultaneous storage of medical images in the spatial and frequency domain: A comparative study

    PubMed Central

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan

    2004-01-01

    Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial- and DFT-domain interleaving gave a much lower %NRMSE than the DCT and DWT domains. Conclusion The results show that with spatial-domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency-domain interleaving methods, DFT was found to be very efficient. PMID:15180899
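
    The LSB observation in the Results can be shown directly: replacing a pixel's least significant bit perturbs its 8-bit intensity by at most 1 part in 256. The function names below are illustrative:

```python
import numpy as np

def interleave_lsb(pixels, bits):
    """Hide a bit stream in the least significant bits of pixel intensities."""
    out = pixels.copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out

def extract_lsb(pixels, n):
    """Recover the first n hidden bits."""
    return pixels[:n] & 1

pixels = np.array([100, 101, 102, 103, 104], dtype=np.uint8)
bits = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = interleave_lsb(pixels, bits)

# Each altered pixel differs from the original by at most 1 of 256 levels.
max_change = int(np.max(np.abs(stego.astype(int) - pixels.astype(int))))
```

    Frequency-domain interleaving applies the same replacement to transform coefficients instead of raw intensities, which is why the %NRMSE differs across the DFT, DCT and DWT domains.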

  17. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.

  18. Natural convection heat transfer in an oscillating vertical cylinder

    PubMed Central

    Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang

    2018-01-01

    This paper studies the heat transfer caused by free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for the temperature distribution and velocity field corresponding to cosine and sine oscillations are obtained. The solutions obtained for the velocity are presented in the form of transient and post-transient parts. Moreover, these solutions satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used to study the effects of the Prandtl and Grashof numbers on velocity and temperature at various times. The transient solutions for both cosine and sine oscillations are also tabulated. It is found that the transient solutions are of considerable interest up to times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments, the transient solutions can be neglected and the fluid moves in accordance with the post-transient solutions. PMID:29304161

  19. Natural convection heat transfer in an oscillating vertical cylinder.

    PubMed

    Khan, Ilyas; Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang

    2018-01-01

    This paper studies the heat transfer caused by free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for the temperature distribution and velocity field corresponding to cosine and sine oscillations are obtained. The solutions obtained for the velocity are presented in the form of transient and post-transient parts. Moreover, these solutions satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used to study the effects of the Prandtl and Grashof numbers on velocity and temperature at various times. The transient solutions for both cosine and sine oscillations are also tabulated. It is found that the transient solutions are of considerable interest up to times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments, the transient solutions can be neglected and the fluid moves in accordance with the post-transient solutions.

  20. Area and power efficient DCT architecture for image compression

    NASA Astrophysics Data System (ADS)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan

    2014-12-01

    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves image-compression performance comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
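
    The paper's 8 × 8 matrix is not reproduced in the abstract; a hypothetical 4-point analogue shows the idea of an addition-only approximate transform whose entries are restricted to 0 and ±1:

```python
import numpy as np

# Hypothetical 4-point analogue of an addition-only approximate DCT: every entry
# of T is 0, 1 or -1, so T @ x needs no multiplications, and the rows are
# mutually orthogonal.
T = np.array([
    [1,  1,  1,  1],
    [1,  0,  0, -1],
    [1, -1, -1,  1],
    [0,  1, -1,  0],
])

def approx_dct(x):
    """Butterfly evaluation of T @ x using 6 additions/subtractions in total."""
    s03, d03 = x[0] + x[3], x[0] - x[3]   # stage 1: 4 operations
    s12, d12 = x[1] + x[2], x[1] - x[2]
    return np.array([s03 + s12, d03, s03 - s12, d12])  # stage 2: 2 operations
```

    Sharing the stage-1 sums across output rows is exactly what keeps the operation count low in such designs; the paper's 8-point matrix reportedly needs only 12 additions.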

  1. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2017-10-01

    The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to be transferred, saved and disseminated. Lossy compression is increasingly popular in such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so as not to lose valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, and this is problematic for many coders. In this paper, we analyze possibilities of predicting the mean square error or, equivalently, PSNR for coders based on the discrete cosine transform (DCT), applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. One more innovation deals with the possibility of employing a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified on real-life RS data.
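
    The direct-dependence idea can be sketched as follows: for an orthonormal DCT, the image-domain MSE introduced by quantization equals the mean squared quantization error of the coefficients themselves (Parseval), from which PSNR follows. The coefficient statistics below are a toy stand-in, not RS data:

```python
import numpy as np

def predicted_mse(dct_coeffs, qstep):
    """Mean squared quantization error of the DCT coefficients; for an
    orthonormal DCT this equals the image-domain MSE after compression."""
    q = qstep * np.round(dct_coeffs / qstep)
    return float(np.mean((dct_coeffs - q) ** 2))

def psnr_from_mse(mse, peak=255.0):
    """PSNR in dB corresponding to a predicted MSE for 8-bit data."""
    return float(10.0 * np.log10(peak ** 2 / mse))

# Toy stand-in for the DCT coefficients of a sampled subset of blocks.
rng = np.random.default_rng(3)
coeffs = rng.uniform(-100.0, 100.0, 10000)
mse = predicted_mse(coeffs, qstep=10.0)   # expect roughly qstep**2 / 12
psnr = psnr_from_mse(mse)
```

    Evaluating `predicted_mse` on only a percentage of blocks is what makes the prediction much faster than actually compressing the image.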

  2. VizieR Online Data Catalog: Absolute Reflectivity of Jupiter and Saturn (Mendikoa+ 2017)

    NASA Astrophysics Data System (ADS)

    Mendikoa, I.; Sanchez-Lavega, A.; Perez-Hoyos, S.; Hueso, R.; Rojas, J. F.; Lopez-Santiago, J.

    2017-08-01

    Overall mean absolute reflectivity I/F of Jupiter and Saturn. Scans at the central meridian are given versus latitude, from observations at the Calar Alto observatory between 2012 and 2016. In addition, the Minnaert coefficients (I/F)0 and k are given, which determine the I/F variation with the cosines of the incidence and emission angles: (I/F)0 represents the absolute reflectivity in the absence of darkening effects at nadir viewing, and k is the limb-darkening coefficient. (12 data files).
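    The Minnaert relation referred to above is the standard limb-darkening law; a small sketch with made-up coefficient values (the catalog's actual values are not reproduced here):

```python
import numpy as np

# The Minnaert limb-darkening law described above:
#   I/F = (I/F)_0 * mu0**k * mu**(k - 1),
# with mu0, mu the cosines of the incidence and emission angles. The
# coefficient values below are made up for illustration, not taken from
# the catalog.
def minnaert(if0, k, mu0, mu):
    return if0 * mu0**k * mu**(k - 1)

if0, k = 0.6, 0.95                       # hypothetical Minnaert coefficients
# At nadir viewing (mu0 = mu = 1) the law reduces to (I/F)_0, the absolute
# reflectivity in the absence of darkening effects.
assert np.isclose(minnaert(if0, k, 1.0, 1.0), if0)
# For k = 1 the emission-angle dependence drops out entirely.
assert np.isclose(minnaert(0.6, 1.0, 0.5, 0.7), 0.3)
```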

  3. Subjective evaluations of integer cosine transform compressed Galileo solid state imagery

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry

    1994-01-01

    This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression, using specially formulated quantization (q) tables and compression ratios, on the acceptability of 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different q level. First, the evaluators selected the image with the higher overall quality to support their visual evaluations of image content. Next, they rated each image on a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images, with and without noise, were presented to each evaluator.

  4. A trace map comparison algorithm for the discrete fracture network models of rock masses

    NASA Astrophysics Data System (ADS)

    Han, Shuai; Wang, Gang; Li, Mingchao

    2018-06-01

    Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is set up. Four main indicators, namely total gray, gray-grade curve, characteristic direction, and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real one.
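    The plain cosine similarity underlying the comparison indicators can be sketched as follows; the paper's loop cosine similarity, which accounts for the periodicity of the characteristic-direction curves, is a variant of this and is not reproduced here.

```python
import numpy as np

# Plain cosine similarity between two indicator curves (e.g. gray-grade
# curves of the real and simulated trace maps). The example curves are
# made up for illustration.
def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

real_curve = [3, 5, 8, 6, 4]           # hypothetical curve of the real map
simulated = [2.8, 5.2, 7.5, 6.1, 4.3]  # hypothetical curve of a simulated map
s = cosine_similarity(real_curve, simulated)
assert 0.99 < s <= 1.0                 # near-identical curves score close to 1
```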

  5. On the Delusiveness of Adopting a Common Space for Modeling IR Objects: Are Queries Documents?

    ERIC Educational Resources Information Center

    Bollmann-Sdorra, Peter; Raghavan, Vjay V.

    1993-01-01

    Proposes that document space and query space have different structures in information retrieval and discusses similarity measures, term independence, and linear structure. Examples are given using the retrieval functions of the dot product, the cosine measure, the Jaccard coefficient, and the overlap function. (Contains 28 references.) (LRW)
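    The four retrieval functions named above can be sketched in their usual vector-space forms, with the query q and document d as term-weight vectors:

```python
import numpy as np

# Standard vector-space forms of the four retrieval functions mentioned
# in the record (query q and document d as term-weight vectors).
def dot_product(q, d):
    return float(np.dot(q, d))

def cosine(q, d):
    return dot_product(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))

def jaccard(q, d):
    qd = dot_product(q, d)
    return qd / (np.dot(q, q) + np.dot(d, d) - qd)

def overlap(q, d):
    return dot_product(q, d) / min(np.dot(q, q), np.dot(d, d))

q = np.array([1.0, 1.0, 0.0])   # query with terms 1 and 2
d = np.array([1.0, 0.0, 1.0])   # document with terms 1 and 3
# All four agree that q and d share exactly one term, but scale differently:
assert dot_product(q, d) == 1.0
assert np.isclose(cosine(q, d), 0.5)
assert np.isclose(jaccard(q, d), 1.0 / 3.0)
assert np.isclose(overlap(q, d), 0.5)
```

    The differing normalizations are exactly what makes the measures rank documents differently, which is the kind of structural distinction the article examines.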

  6. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T^2 statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  7. Genetics algorithm optimization of DWT-DCT based image Watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Data hiding in image content is mandatory for establishing the ownership of an image. A two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space; the layer in which the watermark is embedded can also be selected. Next, the 2D-DWT transforms the selected layer, yielding four subbands, of which only one is selected. A block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based pixel selection. A delta parameter replacing pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters optimized by the genetic algorithm (GA) are the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that the GA is able to determine the exact parameters that yield optimum imperceptibility and robustness for any watermarked-image condition, whether attacked or not. The DWT stage in DCT-based image watermarking optimized by the GA improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), robustness in the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality in terms of PSNR is about 5 dB higher than that of the previous method.

  8. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    To reduce the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Window effects on these errors are then analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further eliminated by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window, whose non-zero discrete Fourier transform bins lie at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression capability and better transient-error suppression capability when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (by about 0.4 dB). One direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation shows that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and short data requirements; the FRF calculated from actual balance data is consistent with the simulation result. The new dual-cosine window is thus effective and practical for FRF estimation.
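    A window whose DFT is non-zero only at bins -3, -1, 0, 1, and 3 is necessarily a constant plus cosines at one and three cycles per record. The sketch below uses illustrative coefficients, not the paper's optimized values, and checks both the bin pattern and the small front-end value:

```python
import numpy as np

# Any w[n] = a0 - a1*cos(2*pi*n/N) - a3*cos(2*pi*3*n/N) has non-zero DFT
# bins only at {0, +/-1, +/-3}. The coefficients below are illustrative
# (chosen so that a0 = a1 + a3, which forces w[0] = 0); the paper's
# optimized values are not reproduced here.
N = 256
n = np.arange(N)
a0, a1, a3 = 0.5, 0.44, 0.06
w = a0 - a1 * np.cos(2 * np.pi * n / N) - a3 * np.cos(2 * np.pi * 3 * n / N)

W = np.fft.fft(w)
nonzero_bins = {0, 1, 3, N - 3, N - 1}     # bins -1, -3 alias to N-1, N-3
for k in range(N):
    if k not in nonzero_bins:
        assert abs(W[k]) < 1e-9            # all other bins vanish
assert abs(w[0]) < 1e-12                   # small front-end value, as the
                                           # transient-error analysis requires
```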

  9. First results from the spectral DCT trigger implemented in the Cyclone V Front-End Board used for a detection of very inclined showers in the Pierre Auger surface detector Engineering Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szadkowski, Zbigniew

    2015-07-01

    The paper presents the first results from the trigger based on the Discrete Cosine Transform (DCT) operating in the new Front-End Boards with Cyclone V FPGAs deployed in 8 test surface detectors in the Pierre Auger Engineering Array. The patterns of the ADC traces generated by very inclined showers were obtained from the Auger database and from the CORSIKA simulation package, supported by the Auger Offline reconstruction platform, which gives predicted digitized signal profiles. Simulations for many variants of the initial shower angle, the initialization depth in the atmosphere, and the type and initial energy of the primary particle gave bounds on the DCT coefficients, used next for on-line pattern recognition in the FPGA. Preliminary results have validated the approach: we registered several showers triggered by the DCT at 120 MSps and 160 MSps. (authors)

  10. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10^-4 on the available data sets.
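    The distance metric underlying the verification step is the fractional Hamming distance between binary iris codes. A minimal sketch follows; the paper's product-of-sum refinement over feature bits and patch positions is not reproduced, and the threshold below is a hypothetical operating point:

```python
import numpy as np

# Fractional Hamming distance: the share of code bits on which two binary
# iris codes disagree. Real systems also mask unreliable bits and search
# over rotations; this sketch omits both.
def hamming_distance(code_a, code_b):
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    return np.count_nonzero(code_a != code_b) / code_a.size

a = np.array([0, 1, 1, 0, 1, 0, 0, 1])   # toy 8-bit codes
b = np.array([0, 1, 0, 0, 1, 0, 1, 1])
d = hamming_distance(a, b)               # 2 of 8 bits disagree
assert d == 0.25

threshold = 0.32                         # hypothetical verification threshold
accept = d < threshold                   # variable-threshold decision
assert accept
```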

  11. DCT Trigger in a High-Resolution Test Platform for the Detection of Very Inclined Showers in Pierre Auger Surface Detectors

    NASA Astrophysics Data System (ADS)

    Szadkowski, Zbigniew; Wiedeński, Michał

    2017-06-01

    We present first results from a trigger based on the discrete cosine transform (DCT) operating in new front-end boards with a Cyclone V E field-programmable gate array (FPGA) deployed in seven test surface detectors in the Pierre Auger Test Array. The patterns of the ADC traces generated by very inclined showers (arriving at 70° to 90° from the vertical) were obtained from the Auger database and from the CORSIKA simulation package supported by the Auger OffLine event reconstruction platform that gives predicted digitized signal profiles. Simulations for many values of the initial cosmic ray angle of arrival, the shower initialization depth in the atmosphere, the type of particle, and its initial energy gave a boundary on the DCT coefficients used for the online pattern recognition in the FPGA. Preliminary results validated the approach used. We recorded several showers triggered by the DCT for 120 Msamples/s and 160 Msamples/s.

  12. Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach

    ERIC Educational Resources Information Center

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…

  13. Stability of strongly nonlinear normal modes

    NASA Astrophysics Data System (ADS)

    Recktenwald, Geoffrey; Rand, Richard

    2007-10-01

    It is shown that a transformation of time can allow the periodic solution of a strongly nonlinear oscillator to be written as a simple cosine function. This enables the stability of strongly nonlinear normal modes in multidegree of freedom systems to be investigated by standard procedures such as harmonic balance.

  14. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  15. Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion

    PubMed Central

    Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.

    2013-01-01

    Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937

  17. A new JPEG-based steganographic algorithm for mobile devices

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which give these devices excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images that is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Simulation results show that the proposed method demonstrates improved retention of first-order statistics compared with existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
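    The common core of such schemes, hiding message bits in the quantized DCT coefficients of the cover JPEG, can be sketched generically. The article's switching technique and statistical rearrangement are more elaborate than this plain LSB substitution on larger-magnitude AC coefficients:

```python
import numpy as np

# A generic sketch of hiding message bits in quantized DCT coefficients
# by LSB substitution on AC coefficients with magnitude > 1 (so that no
# coefficient is driven to zero and the bits stay recoverable). This is
# NOT the article's switching embedding technique, only the common idea
# underlying JPEG-domain steganography.
def embed(coeffs, bits):
    out = coeffs.copy()
    it = iter(bits)
    for i, c in enumerate(out):
        if i == 0 or abs(c) <= 1:            # skip DC, zeros, and +/-1 values
            continue
        b = next(it, None)
        if b is None:
            break
        sign = 1 if c > 0 else -1
        out[i] = sign * ((abs(c) & ~1) | b)  # force LSB of the magnitude to b
    return out

def extract(coeffs, n):
    bits = [int(abs(c)) & 1 for i, c in enumerate(coeffs) if i != 0 and abs(c) > 1]
    return bits[:n]

coeffs = np.array([105, -12, 7, 0, 3, -2, 0, 1])  # toy quantized block, zigzag order
msg = [1, 0, 1, 1]
stego = embed(coeffs, msg)
assert extract(stego, len(msg)) == msg
assert stego[0] == coeffs[0]                      # DC coefficient untouched
```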

  18. Investigation of a novel common subexpression elimination method for low power and area efficient DCT architecture.

    PubMed

    Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H

    2014-01-01

    There has been wide interest in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This work proposes a novel Common Subexpression Elimination (CSE)-based pipelined architecture for the DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines Canonical Signed Digit (CSD) representation with CSE to implement multiplier-less fixed-constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is exploited with CSE to further decrease the number of arithmetic operations. The architecture needs a single-port memory to feed the inputs instead of a multiport memory, which reduces hardware cost and area. Analysis of the experimental results and performance comparisons shows that the proposed scheme uses minimal logic, occupying a mere 340 slices and 22 adders. Moreover, the design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. The proposed technique also has significant advantages over recent well-known methods, along with accuracy, in terms of power reduction, silicon area usage, and maximum operating frequency, by 41%, 15%, and 15%, respectively.

  20. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the discrete cosine transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.

  1. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings, but also because there are many potential applications, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT- and DWT-domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory requirements, high computational load, and retraining problems of previous methods. The proposed system is tested on several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometric normalization. Furthermore, it has low computational cost, requires little memory, and can overcome the retraining problem.

  2. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low-bit-rate, energy-efficient image compression algorithm, explicitly designed for resource-constrained visual sensor networks used for surveillance, battlefield monitoring, habitat monitoring, etc., is presented; in such networks a voluminous amount of image data has to be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast, since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed using the Atmel Atmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT, and only 6% of the energy needed by the Independent JPEG Group (fast) version, making it suitable for embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without the distributed processing traditionally required by existing algorithms.
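    The zonal idea, computing only the few significant low-frequency coefficients instead of the full 8×8 block, can be sketched as follows; the paper's binary arithmetic and Golomb-Rice coding stages are omitted, and the zone size is illustrative:

```python
import numpy as np

# Zonal DCT sketch: compute only the coefficients in a small triangular
# low-frequency zone (k + l < r) of the orthonormal 2-D DCT-II, so the
# cost scales with the zone, not with the full 8x8 block. The paper's
# energy-aware binary variant and entropy coder are not reproduced.
def zonal_dct(block, r=3):
    n = block.shape[0]
    m = np.arange(n)
    coeffs = {}
    for k in range(r):
        for l in range(r - k):                 # keep only k + l < r
            ck = np.sqrt((1 if k == 0 else 2) / n)
            cl = np.sqrt((1 if l == 0 else 2) / n)
            basis = np.outer(np.cos(np.pi * (2 * m + 1) * k / (2 * n)),
                             np.cos(np.pi * (2 * m + 1) * l / (2 * n)))
            coeffs[(k, l)] = ck * cl * np.sum(block * basis)
    return coeffs

block = np.full((8, 8), 100.0)                 # flat block: all energy in DC
c = zonal_dct(block)
assert len(c) == 6                             # r=3 keeps 6 of 64 coefficients
assert np.isclose(c[(0, 0)], 800.0)            # DC = 8 * mean for orthonormal DCT
assert all(abs(v) < 1e-9 for kl, v in c.items() if kl != (0, 0))
```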

  3. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity-constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous backgrounds are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of the L1 regularization.
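    The two correction steps applied after each Tikhonov iteration, DCT-domain filtering followed by an L1 sparsity constraint, can be sketched in 1-D: the proximal operator of the L1 term is plain soft thresholding. All parameter values below are illustrative:

```python
import numpy as np

# 1-D sketch (for brevity) of the two correction steps applied to an
# intermediate reconstruction: low-pass filtering in the DCT domain, then
# soft thresholding (the L1 proximal operator). The number of retained
# coefficients and the threshold are illustrative choices.
def dct_matrix(n):
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def correct(x, keep=8, lam=0.1):
    C = dct_matrix(x.size)
    coeffs = C @ x
    coeffs[keep:] = 0.0                      # DCT-domain low-pass filtering
    filtered = C.T @ coeffs                  # back to object space
    return np.sign(filtered) * np.maximum(np.abs(filtered) - lam, 0.0)

x = np.zeros(32)
x[5] = 1.0                                   # a sparse "fluorescent target"
x += 0.01 * np.sin(np.arange(32))            # weak smooth background
y = correct(x)
assert y[5] > 0                              # the target survives correction
assert np.count_nonzero(y) < x.size          # small background values are zeroed
```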

  4. Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM

    NASA Astrophysics Data System (ADS)

    Shima, Yoshihiro

    2018-04-01

    Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor. Alex-Net is pre-trained on the large-scale object-image dataset ImageNet and is used without further training. An SVM is used as the trainable classifier, fed with the feature vectors from Alex-Net. The STL-10 dataset is used for the object images; the number of classes is ten, and training and test samples are clearly split. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing, and elastic distortion. Using the cosine function, the original patterns are left-justified, right-justified, top-justified, or bottom-justified, as well as center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation, whereas the other augmentation methods (rotation, skewing, and elastic distortion) increase the error rate compared with no augmentation. The number of augmented samples is 30 times that of the original 5K STL-10 training samples. The experimental test error rate on the 8K STL-10 test images was 15.620%, which shows that image augmentation is effective for image category classification.

  5. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measures are important mathematical tools. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposes improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Weighted cosine similarity measures of SNSs are then introduced to take into account the importance of each element. Numerical examples comparing the improved measures with existing cosine similarity measures of SNSs demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. Further, a medical diagnosis method using the improved cosine similarity measures is proposed for problems with simplified neutrosophic information: a proper diagnosis is found from the cosine similarity between the symptoms and the considered diseases, both represented by SNSs.
    The method was applied to two medical diagnosis problems to show its applicability and effectiveness. In both examples, the diagnoses obtained using various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method. The improved cosine measures of SNSs based on the cosine function overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, making the proposed method well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
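The improved measures replace the vector-space cosine with a cosine of component-wise differences of the truth, indeterminacy, and falsity memberships. A minimal sketch in Python, assuming the two single-valued variants take the forms cos(π/2 · max(|ΔT|, |ΔI|, |ΔF|)) and cos(π/6 · (|ΔT| + |ΔI| + |ΔF|)); these constants reflect our reading of Ye's definitions and should be checked against the paper, and all numbers below are hypothetical:

```python
import math

def improved_cosine_similarity(A, B, variant="sum"):
    """Improved cosine similarity between two single-valued neutrosophic
    sets A and B, each given as a list of (T, I, F) triples in [0, 1].

    variant "max": average of cos(pi/2 * max(|dT|, |dI|, |dF|))
    variant "sum": average of cos(pi/6 * (|dT| + |dI| + |dF|))
    """
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        dt, di, df = abs(ta - tb), abs(ia - ib), abs(fa - fb)
        if variant == "max":
            total += math.cos(math.pi / 2.0 * max(dt, di, df))
        else:
            total += math.cos(math.pi / 6.0 * (dt + di + df))
    return total / len(A)

# Symptom profile of a patient vs. a disease prototype (hypothetical numbers).
patient = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.3)]
disease = [(0.7, 0.2, 0.2), (0.6, 0.2, 0.2)]
similarity = improved_cosine_similarity(patient, disease)
```

The measure equals 1 exactly when the two sets coincide, which is one of the properties the paper uses to argue the improved measures behave better than the vector-space cosine.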

  6. New fast DCT algorithms based on Loeffler's factorization

    NASA Astrophysics Data System (ADS)

    Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon

    2012-10-01

    This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of the 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3-4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
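Of the three implementation forms, direct matrix multiplication is the baseline against which the factorizations are counted: it needs N² multiplications per N-point transform (1024 for N = 32). A sketch of the generic orthonormal DCT-II matrix, not the paper's integer approximation:

```python
import numpy as np

def dct2_matrix(N):
    # Orthonormal DCT-II: C[k, n] = s_k * sqrt(2/N) * cos(pi*(2n+1)*k / (2N)),
    # with s_0 = 1/sqrt(2) and s_k = 1 otherwise, so that C @ C.T = I.
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

C32 = dct2_matrix(32)
# Direct evaluation: 32 * 32 = 1024 multiplications per transform; Loeffler-style
# factorizations reduce this by roughly an order of magnitude.
x = np.cos(np.linspace(0.0, 3.0, 32))
X = C32 @ x
```

Because the matrix is orthonormal, the transform preserves energy, which is the property the fast factorizations must also maintain.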

  7. Decomposition of ECG by linear filtering.

    PubMed

    Murthy, I S; Niranjan, U C

    1992-01-01

    A simple method is developed for the delineation of a given electrocardiogram (ECG) signal into its component waves. The properties of the discrete cosine transform (DCT) are exploited for this purpose. The transformed signal is convolved with appropriate filters, and the component waves are obtained by computing the inverse transform (IDCT) of the filtered signals. The filters are derived from the time signal itself. Analysis of continuous strips of ECG signals with various arrhythmias showed that the performance of the method is satisfactory both qualitatively and quantitatively. The small-amplitude P wave usually had a higher percentage rms difference (PRD) than the other, larger component waves.
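The pipeline (DCT, apply a filter in the transform domain, inverse DCT) can be sketched on a synthetic trace. The fixed cutoff below is an illustrative stand-in for the signal-derived filters of the paper, and the "ECG" here is a toy signal:

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix; its transpose is the inverse transform (IDCT).
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

# Synthetic trace: a slow wave (P/T-like) plus a sharp spike (QRS-like).
N = 256
t = np.linspace(0.0, 1.0, N)
slow = np.sin(2.0 * np.pi * 2.0 * t)
spike = np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)
x = slow + spike

C = dct_matrix(N)
X = C @ x
cut = 20                                             # illustrative cutoff only
low = C.T @ np.where(np.arange(N) < cut, X, 0.0)     # slow component
high = x - low                                       # residual fast component
```

The low-index DCT coefficients capture the slow wave, while the spike survives mostly in the residual, mimicking the qualitative separation the paper reports.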

  8. Two-body potential model based on cosine series expansion for ionic materials

    DOE PAGES

    Oda, Takuji; Weber, William J.; Tanigawa, Hisashi

    2015-09-23

    A method for constructing a two-body potential model for ionic materials from a Fourier (cosine) series basis is examined. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force, and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential provides virtually the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
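The coefficient determination reduces to linear least squares on a cosine design matrix. A sketch with a synthetic stand-in for the first-principles reference energies; the basis cos(kπr/r_cut), the cutoff, and all numbers are assumptions for illustration (the paper also fits forces and stresses, omitted here):

```python
import numpy as np

# Hypothetical reference data: pair energies at sampled separations r, standing
# in for first-principles values (a Buckingham-like curve, purely illustrative).
r_cut = 6.0
r = np.linspace(1.0, r_cut, 80)
E_ref = 500.0 * np.exp(-r / 0.4) - 30.0 / r**6

# Design matrix of cosine basis functions cos(k * pi * r / r_cut), k = 0..M-1.
M = 20
k = np.arange(M)
A = np.cos(np.pi * np.outer(r, k) / r_cut)

# Coefficients that minimize the mean square error in energy, via least squares
# (equivalent to solving the normal equations of the text).
c, *_ = np.linalg.lstsq(A, E_ref, rcond=None)
rmse = np.sqrt(np.mean((A @ c - E_ref) ** 2))
```

Increasing M corresponds to relaxing the truncation of the cosine series; the residual shrinks toward the best achievable two-body fit.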

  9. The theory of the gravitational potential applied to orbit prediction

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, J. C.

    1976-01-01

    A complete derivation of the geopotential function and its gradient is presented. Also included is the transformation of Laplace's equation from Cartesian to spherical coordinates. The analytic solution to Laplace's equation is obtained from the transformed version, in the classical manner of separating the variables. A cursory introduction to the method devised by Pines, using direction cosines to express the orientation of a point in space, is presented together with sample computer program listings for computing the geopotential function and the components of its gradient. The use of the geopotential function is illustrated.

  10. Improved method of step length estimation based on inverted pendulum model.

    PubMed

    Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui

    2017-04-01

    Step length estimation is an important issue in areas such as gait analysis, sports training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device record body movement from the trunk instead of the extremities. Two signal-processing techniques are applied in our algorithm design. The direction cosine matrix transforms vertical acceleration from the device coordinates to the topocentric coordinates. Empirical mode decomposition is used to remove the zero- and first-order skew effects resulting from the integration process. Our experimental results show that the algorithm performs well in step length estimation. The estimation error of the direction cosine matrix algorithm increased from 1.69% to 3.56% as the walking speed increased.
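The role of the direction cosine matrix can be sketched as a frame rotation of the measured acceleration. The Z-Y-X Euler-angle convention and the numbers below are assumptions for illustration, since conventions differ across papers:

```python
import numpy as np

def dcm_from_euler(roll, pitch, yaw):
    """Direction cosine matrix mapping device-frame vectors into the
    topocentric (world) frame, using Z-Y-X Euler angles in radians.
    (The angle convention is an assumption, not the paper's.)"""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# A device pitched 30 degrees measures gravity partly on its x axis; the DCM
# restores a purely vertical acceleration in world coordinates.
g = 9.81
a_device = np.array([-g * np.sin(np.radians(30)), 0.0, g * np.cos(np.radians(30))])
a_world = dcm_from_euler(0.0, np.radians(30), 0.0) @ a_device
```

Once the vertical component is isolated in world coordinates, it can be double-integrated (the step where the skew effects the paper removes with empirical mode decomposition arise).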

  11. Video Transmission for Third Generation Wireless Communication Systems

    PubMed Central

    Gharavi, H.; Alamouti, S. M.

    2001-01-01

    This paper presents a twin-class video transmission system with unequal error protection over wireless channels. Video partitioning based on a separation of the variable-length-coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant-bitrate (CBR) transmission. In the splitting process, the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Partitioning is then applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third-generation cellular radio standards, known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system, where the video data, after being broken into two streams, are unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033

  12. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).

  13. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems mostly acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with differing characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
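A minimal sketch of 3D DCT filtering on a toy multichannel array, using hard thresholding of transform coefficients. The thresholding rule and all data are generic choices for illustration, not necessarily those analyzed in the paper:

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix (transpose gives the inverse transform).
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def dct3(x, inverse=False):
    # Separable orthonormal 3D DCT applied along each axis in turn.
    for axis in range(3):
        C = dct_matrix(x.shape[axis])
        if inverse:
            C = C.T
        x = np.moveaxis(np.tensordot(C, np.moveaxis(x, axis, 0), axes=1), 0, axis)
    return x

rng = np.random.default_rng(0)
# Toy multichannel "image": 4 strongly correlated channels of a smooth scene.
base = np.add.outer(np.sin(np.linspace(0, np.pi, 32)), np.cos(np.linspace(0, np.pi, 32)))
clean = np.stack([(1.0 + 0.1 * c) * base for c in range(4)])
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(clean.shape)

X = dct3(noisy)
X_thr = np.where(np.abs(X) > 2.7 * sigma, X, 0.0)   # hard threshold ~2.7*sigma
denoised = dct3(X_thr, inverse=True)
```

Because the channels are correlated, the 3D transform compacts the signal into few coefficients, so thresholding removes mostly noise, which is the effect whose statistics the paper studies.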

  14. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data are transported from a high-speed First-In First-Out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). The JPEG algorithm is adopted for image coding, and the compressed data are stored from the DSP to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the DSP and reduce the executable code. At the same time, a proper address is assigned to each memory, each of which operates at a different speed; the memory structure is also optimized. In addition, the system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable, high performance.

  15. Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns

    DTIC Science & Technology

    2015-03-01

    ...method for base-station antenna radiation patterns. IEEE Antennas and Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD... algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine... patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another

  16. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method.

    PubMed

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-05-16

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphological information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of the DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational cost. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications.

  17. Structural-functional lung imaging using a combined CT-EIT and a Discrete Cosine Transformation reconstruction method

    PubMed Central

    Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-01-01

    Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphological information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of the DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational cost. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695

  18. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  19. Sines and Cosines. Part 1 of 3

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1992-01-01

    Applying the concept of similarities, the mathematical principles of circular motion and sine and cosine waves are presented utilizing both film footage and computer animation in this 'Project Mathematics' series video. Concepts presented include: the symmetry of sine waves; the cosine (complementary sine) and cosine waves; the use of sines and cosines on coordinate systems; the relationship they have to each other; the definitions and uses of periodic waves, square waves, sawtooth waves; the Gibbs phenomena; the use of sines and cosines as ratios; and the terminology related to sines and cosines (frequency, overtone, octave, intensity, and amplitude).

  20. On E-discretization of tori of compact simple Lie groups. II

    NASA Astrophysics Data System (ADS)

    Hrivnák, Jiří; Juránek, Michal

    2017-10-01

    Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.

  1. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique that exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected.
Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
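The SAD truncation idea can be sketched as follows: keep only the upper bits of each 8-bit sample before the absolute-difference sum. The bit widths here are assumptions for illustration, not the dissertation's exact design:

```python
import numpy as np

def sad(a, b):
    # Full-precision sum of absolute differences between two blocks.
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def sad_msb(a, b, keep_bits=4, total_bits=8):
    # Keep only the `keep_bits` most significant bits of each 8-bit sample
    # before the absolute-difference sum (a rough sketch of MSB-truncated SAD).
    shift = total_bits - keep_bits
    return sad(a >> shift, b >> shift) << shift

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy 8x8 blocks
b = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
```

Dropping the low bits shrinks the adder width; the truncated SAD stays within a per-pixel bound of the full SAD, which is why block selection in motion estimation is usually unaffected.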

  2. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  3. Internal Performance of Several Divergent-Shroud Ejector Nozzles with High Divergence Angles

    NASA Technical Reports Server (NTRS)

    Trout, Arthur M.; Papell, S. Stephen; Povolny, John H.

    1957-01-01

    Nine divergent-shroud ejector configurations were investigated to determine the effect of shroud divergence angle on ejector internal performance. Unheated dry air was used for both the primary and secondary flows. The decrease in the design-point thrust coefficient with increasing flow divergence angle (measured from the primary exit to the shroud exit) followed very closely a simple relation involving the cosine of the angle. This indicates that design-point thrust performance for divergent-shroud ejectors can be predicted with reasonable accuracy within the range investigated. The decrease in design-point thrust coefficient due to increasing the flow divergence angle from 12 deg to 30 deg (half-angles) was approximately 6 percent. Ejector air-handling characteristics and the primary-nozzle flow coefficient were not significantly affected by changes in shroud divergence angle.
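The quoted ~6 percent drop is consistent with the classical conical-nozzle divergence correction λ = (1 + cos α)/2. Whether the report uses exactly this relation is an assumption, but the numbers line up:

```python
import math

def divergence_factor(half_angle_deg):
    """Classical conical-nozzle divergence correction lambda = (1 + cos a) / 2.
    (Assumed form of the report's "simple relation involving the cosine".)"""
    a = math.radians(half_angle_deg)
    return (1.0 + math.cos(a)) / 2.0

drop = divergence_factor(12.0) - divergence_factor(30.0)
print(f"{100 * drop:.1f} percent")  # prints "5.6 percent", close to the ~6 quoted
```

The factor averages the axial component of a conically diverging exhaust over the exit, hence the cosine dependence.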

  4. A Fourier transform method for Vsin i estimations under nonlinear Limb-Darkening laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levenhagen, R. S., E-mail: ronaldo.levenhagen@gmail.com

    Star rotation offers us a large horizon for the study of many important physical issues pertaining to stellar evolution. Currently, four methods are widely used to infer rotation velocities, namely those based on line-width calibrations, on the fitting of synthetic spectra, on interferometry, and on Fourier transforms (FTs) of line profiles. Almost all estimations of stellar projected rotation velocities using the Fourier method in the literature have been addressed with linear limb-darkening (LD) approximations during the evaluation of rotation profiles and their cosine FTs, which in certain cases lead to discrepant velocity estimates. In this work, we introduce new mathematical expressions for rotation profiles and their Fourier cosine transforms assuming three nonlinear LD laws (quadratic, square-root, and logarithmic) and study their applications with and without gravity-darkening (GD) and geometrical-flattening (GF) effects. Through an analysis of He I models in the visible range accounting for both limb and gravity darkening, we find that, for classical models without rotationally driven effects, all the Vsin i values are very close to each other. On the other hand, taking into account GD and GF, the linear law results in Vsin i values that are systematically smaller than those obtained with the other laws. As a rule of thumb, we apply these expressions to the FT method to evaluate the projected rotation velocity of the emission B-type star Achernar (α Eri).

  5. Statistical Characterization of MP3 Encoders for Steganalysis: ’CHAMP3’

    DTIC Science & Technology

    2004-04-27

    compression exceeds those of typical steganographic tools (e.g., LSB image embedding), the availability of commented source codes for MP3 encoders... developed by testing the approach on known and unknown reference data. 15. SUBJECT TERMS EOARD, Steganography, Digital Watermarking... Pages kbps Kilobits per Second LGPL Lesser General Public License LSB Least Significant Bit MB Megabyte MDCT Modified Discrete Cosine Transformation MP3

  6. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in the literature in terms of additive complexity. We could not verify these results, and we offer corrections for their work.

  7. Discrete cosine transform and hash functions toward implementing a (robust-fragile) watermarking scheme

    NASA Astrophysics Data System (ADS)

    Al-Mansoori, Saeed; Kunhu, Alavi

    2013-10-01

    This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the 'SHA-1' hash function. The purpose behind embedding a fragile watermark is to prove the authenticity of the image (i.e. tamper-proofing). The proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership, which led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. To evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation, and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
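A generic sketch of DCT-domain embedding via quantization index modulation on one mid-band coefficient of an 8x8 block. This is a stand-in for the robust encoder only; the paper's actual embedding rule, step size, and coefficient positions are not specified here:

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix; 2D DCT of a block B is C @ B @ C.T.
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

C8 = dct_matrix(8)
STEP, POS = 24.0, (3, 4)   # quantization step and mid-band position: assumptions

def embed_bit(block, bit):
    # Force the parity of the quantized mid-band coefficient to encode one bit.
    D = C8 @ block @ C8.T
    q = np.round(D[POS] / STEP)
    if int(q) % 2 != bit:
        q += 1.0
    D[POS] = q * STEP
    return C8.T @ D @ C8          # inverse 2D DCT back to the pixel domain

def extract_bit(block):
    # Blind extraction: re-quantize the same coefficient and read its parity.
    D = C8 @ block @ C8.T
    return int(np.round(D[POS] / STEP)) % 2

rng = np.random.default_rng(0)
cover = rng.uniform(0.0, 255.0, size=(8, 8))
marked = embed_bit(cover, 1)
```

Quantizing a mid-frequency coefficient trades imperceptibility against robustness to attacks such as the JPEG compression tested in the paper.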

  8. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of Joint Photographic Experts Group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to the normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
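The random walk graph Laplacian and its left eigenvectors can be sketched on a toy graph. A useful sanity check is that the degree vector is a left eigenvector with eigenvalue zero, which is what ties this operator to normalized cuts (the graph and weights below are arbitrary, not the paper's patch graphs):

```python
import numpy as np

# Small weighted path graph, a toy stand-in for a patch similarity graph.
W = np.array([[0, 1, 0, 0],
              [1, 0, 2, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = W.sum(axis=1)                     # node degrees
L_rw = np.eye(4) - W / d[:, None]     # random walk Laplacian: I - D^{-1} W

# Left eigenvectors of L_rw are the (right) eigenvectors of L_rw.T; low
# graph frequencies correspond to the smallest eigenvalues.
evals, evecs = np.linalg.eig(L_rw.T)
```

Because L_rw is similar to the symmetric normalized Laplacian, its spectrum is real and nonnegative, and its low-frequency eigenvectors vary smoothly across strongly connected nodes, which is the smoothness the prior rewards.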

  9. Algebraic signal processing theory: 2-D spatial hexagonal lattice.

    PubMed

    Püschel, Markus; Rötteler, Martin

    2007-06-01

    We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called the discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform, a fact that will be made rigorous in this paper.

  10. Flow to a well in a water-table aquifer: An improved laplace transform solution

    USGS Publications Warehouse

    Moench, A.F.

    1996-01-01

    An alternative Laplace transform solution was obtained for the problem, originally solved by Neuman, of constant discharge from a partially penetrating well in a water-table aquifer. The solution differs from existing solutions in that it is simpler in form and can be numerically inverted without the need for time-consuming numerical integration. The derivation involves the use of the Laplace transform and a finite Fourier cosine series and avoids the Hankel transform used in prior derivations. The solution allows water in the overlying unsaturated zone to be released either instantaneously in response to a declining water table, as assumed by Neuman, or gradually, as approximated by Boulton's convolution integral. Numerical evaluation yields results identical to those obtained by previously published methods with the advantage, under most well-aquifer configurations, of much reduced computation time.
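Laplace-domain well-hydraulics solutions of this kind are typically inverted numerically; the Gaver-Stehfest algorithm is a common choice for such problems, though whether it is the inversion used in this paper is an assumption. A sketch on a transform with a known inverse:

```python
import math

def stehfest_coeffs(N=12):
    """Gaver-Stehfest weights V_i for even N (standard textbook formula)."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert_laplace(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return (ln2 / t) * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, N + 1))

# Check on a transform with a known inverse: F(s) = 1/(s + 1)  <->  exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The method needs only real-valued evaluations of the transform, which is what makes it fast compared with inversion by numerical integration.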

  11. A low-power and high-quality implementation of the discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Heyne, B.; Götze, J.

    2007-06-01

    In this paper a computationally efficient and high-quality preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT based on the Cordic algorithm. The computational complexity is reduced from 11 multiply and 29 add operations (Loeffler DCT) to 38 add and 16 shift operations (which is similar to the complexity of the binDCT). The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. Therefore, the proposed Cordic based Loeffler DCT is especially suited for low-power and high-quality CODECs in battery-based systems.
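    The CORDIC idea behind this architecture replaces each plane rotation inside the Loeffler flow graph with shift-and-add iterations. A minimal floating-point sketch, where the multiplications by powers of two stand in for the hardware shifts; the angle 3π/8 is one of the rotations that appears in 8-point DCTs:

```python
import math

def cordic_rotate(x, y, angle, iters=16):
    """Plane rotation via CORDIC: each iteration uses only additions and
    (in hardware) arithmetic shifts; the constant gain K is folded in once."""
    K = 1.0
    for i in range(iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iters):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K

# Rotating the unit vector (1, 0) by 3*pi/8 should give (cos, sin) of
# that angle, accurate to roughly 2^-iters.
xr, yr = cordic_rotate(1.0, 0.0, 3.0 * math.pi / 8.0)
print(xr, yr)  # approximately cos(3*pi/8), sin(3*pi/8)
```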

  12. Detection of biomarkers for Hepatocellular Carcinoma using a hybrid univariate gene selection methods

    PubMed Central

    2012-01-01

    Background Discovering new biomarkers has an important role in improving early diagnosis of Hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money. This motivates the use of in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes in microarrays of HCC. Results In this work, we provide a method for extracting the differentially expressed genes, the up-regulated ones, that can be considered candidate biomarkers in high throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson’s correlation coefficient, Cosine coefficient, Euclidean distance, Mutual information and Entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes is done using KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson’s correlation coefficient and Cosine coefficient contained six genes that have been implicated in cancer (often multiple cancers) genesis in previous studies. Fewer genes were obtained by the other methods (4 genes using Mutual information, 3 genes using Euclidean distance and only one gene using Entropy). A better result was obtained by the utilization of a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the list of intersected genes. Conclusions To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner. This approach surpasses all of the univariate selection methods used individually, according to biological interpretation and the examination of gene expression signal profiles. PMID:22867264
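    The cascaded-intersection idea lends itself to a short sketch. The toy data, the two scoring functions, and the list length below are invented for illustration; only the intersect-the-top-k step mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: 100 genes x 20 samples (10 HCC, 10 normal);
# the first 5 genes are made artificially "up-regulated" in HCC.
X = rng.normal(size=(100, 20))
labels = np.array([1] * 10 + [0] * 10)
X[:5, labels == 1] += 3.0

def rank_by_pearson(X, y):
    # Correlation of each gene's profile with the class labels.
    scores = np.array([abs(np.corrcoef(g, y)[0, 1]) for g in X])
    return np.argsort(-scores)

def rank_by_cosine(X, y):
    # Cosine coefficient between each gene's profile and the label vector.
    scores = np.array([abs(np.dot(g, y)) /
                       (np.linalg.norm(g) * np.linalg.norm(y)) for g in X])
    return np.argsort(-scores)

def hybrid_top_genes(X, y, k=10):
    """Cascaded hybrid: intersect the top-k lists of each univariate method."""
    tops = [set(rank(X, y)[:k]) for rank in (rank_by_pearson, rank_by_cosine)]
    return set.intersection(*tops)

candidates = hybrid_top_genes(X, labels)
print(sorted(int(g) for g in candidates))
```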

  13. Convolution Comparison Pattern: An Efficient Local Image Descriptor for Fingerprint Liveness Detection

    PubMed Central

    Gottschlich, Carsten

    2016-01-01

    We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544
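    A rough sketch of the descriptor's core loop follows, assuming unrotated patches and an invented set of coefficient pairs; the paper's rotation-normalization step and its actual pair selection are omitted:

```python
import numpy as np
from scipy.fft import dctn

# Hypothetical DCT-coefficient pairs to compare; the paper's exact pair
# selection is not reproduced here.
PAIRS = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (2, 2))]

def ccp_histogram(img, patch=8):
    """Binary patterns from pairwise DCT-coefficient comparisons, pooled
    into one histogram of relative pattern frequencies per image."""
    h, w = img.shape
    hist = np.zeros(2 ** len(PAIRS))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            c = dctn(img[i:i + patch, j:j + patch], norm="ortho")
            code = 0
            for b, (p, q) in enumerate(PAIRS):
                if c[p] > c[q]:
                    code |= 1 << b          # one bit per comparison
            hist[code] += 1
    return hist / hist.sum()                # relative frequencies

rng = np.random.default_rng(1)
h = ccp_histogram(rng.normal(size=(64, 64)))
print(h.shape)  # (8,)
```

    With 3 pairs each patch yields a 3-bit code, so the per-image feature is an 8-bin histogram; concatenating histograms from several regions gives the longer feature vectors used for classification.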

  14. C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.

    PubMed

    Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram

    2018-01-01

    This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity, while achieving compression ratio (CR) values as high as ~5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity, leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
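    One step the abstract highlights, background averaging and subtraction performed directly on compressed measurements, works because those operations are linear and therefore commute with the sampling matrix. A toy demonstration, in which the sampling matrix, the scan model, and the bump standing in for the faradaic current are all invented:

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 128, 32
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # compressive sampling matrix

# Ten FSCV-like scans: a common smooth background plus noise; the last
# scan also carries a compactly supported "faradaic" bump.
scans = rng.normal(0.0, 0.01, size=(10, n)) + np.sin(np.linspace(0, 3, n))
faradaic = np.zeros(n)
faradaic[40:60] = np.hanning(20)
scans[-1] += faradaic

# Compress every scan, then do background averaging and subtraction
# entirely in the compressed domain.
y = scans @ Phi.T
y_sub = y[-1] - y[:-1].mean(axis=0)

# Because the steps are linear, this equals compressing the
# background-subtracted signal directly.
x_sub = scans[-1] - scans[:-1].mean(axis=0)
print(np.allclose(y_sub, Phi @ x_sub))  # True
```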

  15. Mpeg2 codec HD improvements with medical and robotic imaging benefits

    NASA Astrophysics Data System (ADS)

    Picard, Wayne F. J.

    2010-02-01

    In this report, we propose an efficient scheme to use High Definition Television (HDTV) in a console or notebook format as a computer terminal in addition to their role as TV display unit. In the proposed scheme, we assume that the main computer is situated at a remote location. The computer raster in the remote server is compressed using an HD E- >Mpeg2 encoder and transmitted to the terminal at home. The built-in E->Mpeg2 decoder in the terminal decompresses the compressed bit stream, and displays the raster. The terminal will be fitted with a mouse and keyboard, through which the interaction with the remote computer server can be performed via a communications back channel. The terminal in a notebook format can thus be used as a high resolution computer and multimedia device. We will consider developments such as the required HD enhanced Mpeg2 resolution (E->Mpeg2) and its medical ramifications due to improvements on compressed image quality with 2D to 3D conversion (Mpeg3) and using the compressed Discrete Cosine Transform coefficients in the reality compression of vision and control of medical robotic surgeons.

  16. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    PubMed

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on the spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
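    The feature-extraction stage can be caricatured as follows; the block size and the three summary statistics are placeholders, not the paper's actual feature set:

```python
import numpy as np
from scipy.fft import dctn

def spatiotemporal_features(video, block=(4, 8, 8)):
    """Toy NVS-style features: summary statistics of 3D-DCT coefficient
    magnitudes pooled over spatiotemporal blocks of the video volume."""
    t, h, w = video.shape
    bt, bh, bw = block
    mags = []
    for k in range(0, t - bt + 1, bt):
        for i in range(0, h - bh + 1, bh):
            for j in range(0, w - bw + 1, bw):
                c = dctn(video[k:k + bt, i:i + bh, j:j + bw], norm="ortho")
                mags.append(np.abs(c).ravel())
    mags = np.array(mags)
    # A fixed-length feature vector that an SVR could regress on.
    return np.array([mags.mean(), mags.std(), np.median(mags)])

rng = np.random.default_rng(2)
feats = spatiotemporal_features(rng.normal(size=(8, 16, 16)))
print(feats.shape)  # (3,)
```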

  17. On a PCA-based lung motion model

    NASA Astrophysics Data System (ADS)

    Li, Ruijiang; Lewis, John H.; Jia, Xun; Zhao, Tianyu; Liu, Weifeng; Wuenschel, Sara; Lamb, James; Yang, Deshan; Low, Daniel A.; Jiang, Steve B.

    2011-09-01

    Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772-81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and then a sparse subset of the entire lung, such as an implanted marker, can be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address these important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors, respectively, will completely represent the lung motion. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921-9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis, and two PCA coefficients seem to be sufficient for accurate modeling of the lung motion for most patients. In conclusion, we have presented thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion using a single marker has also been demonstrated on clinical data using a simulation approach.
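    The rank argument for the cosine phantom can be checked numerically: cos(t + φ_v) decomposes into cos t and sin t components, so exactly two PCA modes capture the motion. A sketch with invented per-voxel amplitudes and phases:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy cosine phantom: 500 voxels, each moving as amp_v * cos(t + phi_v),
# sampled at 40 breathing phases.
phases = np.linspace(0.0, 2.0 * np.pi, 40)
amp = rng.uniform(0.5, 2.0, size=500)
phi = rng.uniform(0.0, 2.0 * np.pi, size=500)
D = amp * np.cos(phases[:, None] + phi[None, :])   # 40 x 500 displacements

# PCA of the mean-centered displacement matrix via its singular values.
s = np.linalg.svd(D - D.mean(axis=0), compute_uv=False)

# cos(t + phi) = cos(t)cos(phi) - sin(t)sin(phi): two modes carry all the
# motion, matching the "2 PCA coefficients" result for the cosine phantom.
print(s[2] / s[0] < 1e-10)  # True: third singular value is numerically zero
```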

  18. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the motion picture experts group's (MPEG's) motion compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.

  19. Exact solutions for an oscillator with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Beléndez, A.; Martínez, F. J.; Beléndez, T.; Pascual, C.; Alvarez, M. L.; Gimeno, E.; Arribas, E.

    2018-04-01

    Closed-form exact solutions for an oscillator with anti-symmetric quadratic nonlinearity are derived from the first integral of the nonlinear differential equation governing the behaviour of this oscillator. The mathematical model is an ordinary second order differential equation in which the sign of the quadratic nonlinear term changes. Two parameters characterize this oscillator: the coefficient of the linear term and the coefficient of the quadratic term. Not only the common case in which both coefficients are positive but also all possible combinations of positive and negative signs of these coefficients which provide periodic motions are considered, giving rise to four different cases. Three different periods and solutions are obtained, since the same result is valid in two of these cases. An interesting feature is that oscillatory motions whose equilibrium points are not at x = 0 are also considered. The periods are given in terms of an incomplete or complete elliptic integral of the first kind, and the exact solutions are expressed as functions including Jacobi elliptic cosine or sine functions.

  20. Down-Regulation of Olfactory Receptors in Response to Traumatic Brain Injury Promotes Risk for Alzheimer’s Disease

    DTIC Science & Technology

    2013-10-01

    correct group assignment of samples in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on...centering of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using Cosine correlation as the similarity met...(A) The 108 differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with

  1. Fetal heart rate deceleration detection using a discrete cosine transform implementation of singular spectrum analysis.

    PubMed

    Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E

    2007-01-01

    To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
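    A much-simplified sketch of the base-hold idea follows, using a k-term DCT approximation of the reference window in place of full SSA; the window length, threshold, and test signal are invented:

```python
import numpy as np
from scipy.fft import dct, idct

def dct_model(window, k=3):
    """SSA-like model of a window: keep its k largest-magnitude DCT terms."""
    c = dct(window, norm="ortho")
    cm = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-k:]
    cm[keep] = c[keep]
    return idct(cm, norm="ortho")

def base_hold_detect(signal, win=32, k=3, thresh=0.5):
    """Flag windows that differ from the reference model; hold the reference
    fixed ("base-hold") while a change persists, advance it otherwise."""
    ref = signal[:win]
    changes = []
    for i in range(win, len(signal) - win + 1, win):
        test = signal[i:i + win]
        score = np.linalg.norm(test - dct_model(ref, k)) / np.linalg.norm(test)
        if score > thresh:
            changes.append(i)   # change continues: keep the reference fixed
        else:
            ref = test          # no change: update the reference model
    return changes

rng = np.random.default_rng(5)
sig = 1.0 + rng.normal(0.0, 0.05, size=256)
sig[128:192] -= 2.0             # a deceleration-like dip spanning two windows
print(base_hold_detect(sig))    # -> [128, 160]
```

    Because the reference is held during the dip, the second dip window is still compared against the pre-event model and flagged, which is the behavior the base-hold modification is after.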

  2. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: a DC-Matrix and an AC-Matrix, or low and high frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the outputs of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients into one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
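    Step (1) can be sketched with a hand-rolled Haar transform standing in for the wavelet filter, which the abstract does not name:

```python
import numpy as np
from scipy.fft import dctn

def haar2d(x):
    """One level of a 2-D Haar DWT: returns (LL, (LH, HL, HH)) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2      # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2      # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (lh, hl, hh)

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))

# Two-level DWT, then a DCT on the coarse (LL) band to obtain the
# "DC-Matrix"; the detail subbands play the role of the AC data.
ll1, _ = haar2d(img)
ll2, _ = haar2d(ll1)
dc_matrix = dctn(ll2, norm="ortho")
print(dc_matrix.shape)  # (16, 16)
```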

  3. A Classroom Note on Generating Examples for the Laws of Sines and Cosines from Pythagorean Triangles

    ERIC Educational Resources Information Center

    Sher, Lawrence; Sher, David

    2007-01-01

    By selecting certain special triangles, students can learn about the laws of sines and cosines without wrestling with long decimal representations or irrational numbers. Since the law of cosines requires only one of the three angles of a triangle, there are many examples of triangles with integral sides and a cosine that can be represented exactly…

  4. A Thin Lens Model for Charged-Particle RF Accelerating Gaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Christopher K.

    Presented is a thin-lens model for an RF accelerating gap that considers general axial fields without energy dependence or other a priori assumptions. Both the cosine and sine transit-time factors (i.e., Fourier transforms) are required, plus two additional functions: the Hilbert transforms of the transit-time factors. The combination yields a complex-valued Hamiltonian rotating in the complex plane with synchronous phase. Using Hamiltonians, the phase and energy gains are computed independently in the pre-gap and post-gap regions, then aligned using the asymptotic values of the wave number. Derivations of these results are outlined, examples are shown, and simulations with the model are presented.

  5. Steganographic embedding in containers-images

    NASA Astrophysics Data System (ADS)

    Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.

    2018-05-01

    Steganography is one of the approaches to ensuring the protection of information transmitted over the network. But a steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of data embedding into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which discrete wavelet transform is used instead of discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen to obtain a quality assessment of data embedding. Experiments confirm that satisfying the requirements allows achieving a high quality assessment of data embedding.

  6. Latent semantic analysis cosines as a cognitive similarity measure: Evidence from priming studies.

    PubMed

    Günther, Fritz; Dudschig, Carolin; Kaup, Barbara

    2016-01-01

    In distributional semantics models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
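    The similarity measure itself is just the normalized dot product; a toy sketch in an invented 3-dimensional space (real LSA spaces have hundreds of dimensions, and the vectors below are made up):

```python
import numpy as np

def cosine(u, v):
    """Cosine of the angle between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Invented "semantic" vectors for illustration only.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.2]),
    "nurse":  np.array([0.8, 0.2, 0.1]),
    "guitar": np.array([0.1, 0.9, 0.3]),
}
print(cosine(vectors["doctor"], vectors["nurse"]))   # high: related prime
print(cosine(vectors["doctor"], vectors["guitar"]))  # low: unrelated prime
```

    The priming prediction tested in the study is that higher cosines, as in the first pair, go with shorter lexical-decision times.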

  7. New architecture for dynamic frame-skipping transcoder.

    PubMed

    Fung, Kai-Tat; Chan, Yui-Lam; Siu, Wan-Chi

    2002-01-01

    Transcoding is a key technique for reducing the bit rate of a previously compressed video signal. A high transcoding ratio may result in an unacceptable picture quality when the full frame rate of the incoming video bitstream is used. Frame skipping is often used as an efficient scheme to allocate more bits to the representative frames, so that an acceptable quality for each frame can be maintained. However, a skipped frame must still be decompressed completely, since it might act as a reference frame for the reconstruction of nonskipped frames. The newly quantized discrete cosine transform (DCT) coefficients of the prediction errors need to be re-computed for the nonskipped frame with reference to the previous nonskipped frame; this can create undesirable complexity as well as introduce re-encoding errors. In this paper, we propose new algorithms and a novel architecture for frame-rate reduction to improve picture quality and to reduce complexity. The proposed architecture operates mainly in the DCT domain to achieve a transcoder with low complexity. With the direct addition of DCT coefficients and an error compensation feedback loop, re-encoding errors are reduced significantly. Furthermore, we propose a frame-rate control scheme which can dynamically adjust the number of skipped frames according to the incoming motion vectors and the re-encoding errors due to transcoding, such that the decoded sequence can have smooth motion as well as better transcoded pictures. Experimental results show that, compared to the conventional transcoder, the new architecture for the frame-skipping transcoder is more robust, produces fewer requantization errors, and has reduced computational complexity.
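    The "direct addition of DCT coefficients" is licensed by the linearity of the transform, which a few lines verify:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(5)
a = rng.normal(size=(8, 8))   # e.g., prediction error of a skipped frame
b = rng.normal(size=(8, 8))   # e.g., error accumulated by a later frame

# Linearity of the DCT: coefficients can be added directly in the
# transform domain, so no inverse transform is needed to combine errors.
lhs = dctn(a + b, norm="ortho")
rhs = dctn(a, norm="ortho") + dctn(b, norm="ortho")
print(np.allclose(lhs, rhs))  # True
```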

  8. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
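    Steps (1) and (4) of the pipeline, block DCT followed by a delta operator on the DC components, can be sketched as follows; the delta step is trivially invertible, which is what makes it safe ahead of the arithmetic coder:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(6)
img = rng.normal(size=(32, 32))

# Step (1): block DCT; collect the DC component of each 8x8 block.
dcs = np.array([dctn(img[i:i + 8, j:j + 8], norm="ortho")[0, 0]
                for i in range(0, 32, 8) for j in range(0, 32, 8)])

# Step (4): the differential (delta) operator on the DC list.
deltas = np.diff(dcs, prepend=0.0)

# Lossless: a cumulative sum recovers the DC list at the decoder.
print(np.allclose(np.cumsum(deltas), dcs))  # True
```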

  9. On the Use of Quartic Force Fields in Variational Calculations

    NASA Technical Reports Server (NTRS)

    Fortenberry, Ryan C.; Huang, Xinchuan; Yachmenev, Andrey; Thiel, Walter; Lee, Timothy J.

    2013-01-01

    The use of quartic force fields (QFFs) has been shown to be one of the most effective ways to efficiently compute vibrational frequencies for small molecules. In this paper we outline and discuss how the simple-internal or bond-length bond-angle (BLBA) coordinates can be transformed into Morse-cosine(-sine) coordinates, which produce potential energy surfaces from QFFs that possess proper limiting behavior and can effectively describe the vibrational (or rovibrational) energy levels of an arbitrary molecular system. We investigate parameter scaling in the Morse coordinate, symmetry considerations, and examples of transformed QFFs making use of the MULTIMODE, TROVE, and VTET variational vibrational methods. Cases are referenced where variational computations coupled with transformed QFFs produce accuracies for fundamental frequencies, compared to experiment, on the order of 5 cm^-1 and often as good as 1 cm^-1.
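    The key property of the Morse coordinate is its bounded limiting behavior, which keeps a quartic expansion from diverging at dissociation. A sketch with invented parameter values:

```python
import numpy as np

a, r_e = 1.5, 1.0   # hypothetical Morse parameter and equilibrium bond length

def morse_coord(r):
    """Morse coordinate y = 1 - exp(-a (r - r_e)): zero at equilibrium and
    bounded as the bond stretches, unlike the raw displacement r - r_e."""
    return 1.0 - np.exp(-a * (r - r_e))

r = np.array([r_e, 2.0, 5.0, 50.0])
print(morse_coord(r))  # 0 at equilibrium, approaching 1 at dissociation
```

    A quartic polynomial in this coordinate therefore stays finite as r grows, which is the "proper limiting behavior" the abstract refers to.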

  10. Coordinate transformation by minimizing correlations between parameters

    NASA Technical Reports Server (NTRS)

    Kumar, M.

    1972-01-01

    This investigation was to determine the transformation parameters (three rotations, three translations and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. The above objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all the seven parameters from a solution in which the rotations and the scale factor are entered as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
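    The preliminary step exploits the fact that chord distances are translation-invariant, so the scale factor can be estimated before the translations are known. A toy sketch with the rotation omitted and invented coordinates:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
A = rng.normal(size=(6, 3))                   # coordinates in system 1
true_scale = 1.0001
t = np.array([10.0, -5.0, 3.0])               # unknown translation
B = true_scale * A + t                        # same points in system 2

# Chord distances cancel the translation, so their ratios isolate the
# scale factor independently of the translation parameters.
pairs = combinations(range(len(A)), 2)
ratios = [np.linalg.norm(B[i] - B[j]) / np.linalg.norm(A[i] - A[j])
          for i, j in pairs]
scale = float(np.mean(ratios))
print(scale)  # ~ 1.0001
```

    In the full procedure this estimate (and the rotations from direction cosines) would then enter the seven-parameter adjustment as weighted constraints.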

  11. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
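    Block-DCT coding with uniform quantization works as below; the matrix here is an invented linear ramp, not one of the perceptually optimized matrices the paper derives:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 quantization matrix: step sizes grow with frequency,
# where the DCT basis functions are less visible.
Q = 16.0 + 4.0 * (np.arange(8)[:, None] + np.arange(8)[None, :])

rng = np.random.default_rng(7)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs = dctn(block - 128.0, norm="ortho")
levels = np.round(coeffs / Q)                    # uniform quantization
recon = idctn(levels * Q, norm="ortho") + 128.0  # dequantize + inverse DCT

print(float(np.abs(block - recon).mean()))       # mean reconstruction error
```

    Perceptual optimization amounts to choosing the entries of Q so that, for a given bit rate, this reconstruction error is least visible rather than merely smallest.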

  12. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  13. On a PCA-based lung motion model

    PubMed Central

    Li, Ruijiang; Lewis, John H; Jia, Xun; Zhao, Tianyu; Liu, Weifeng; Wuenschel, Sara; Lamb, James; Yang, Deshan; Low, Daniel A; Jiang, Steve B

    2014-01-01

    Respiration-induced organ motion is one of the major uncertainties in lung cancer radiotherapy, and it is crucial to be able to accurately model the lung motion. Most work so far has focused on the study of the motion of a single point (usually the tumor center of mass), and much less work has been done to model the motion of the entire lung. Inspired by the work of Zhang et al (2007 Med. Phys. 34 4772–81), we believe that the spatiotemporal relationship of the entire lung motion can be accurately modeled based on principal component analysis (PCA) and then a sparse subset of the entire lung, such as an implanted marker, can be used to drive the motion of the entire lung (including the tumor). The goal of this work is twofold. First, we aim to understand the underlying reason why PCA is effective for modeling lung motion and find the optimal number of PCA coefficients for accurate lung motion modeling. We attempt to address these important problems both in a theoretical framework and in the context of real clinical data. Second, we propose a new method to derive the entire lung motion using a single internal marker based on the PCA model. The main results of this work are as follows. We derived an important property which reveals the implicit regularization imposed by the PCA model. We then studied the model using two mathematical respiratory phantoms and 11 clinical 4DCT scans for eight lung cancer patients. For the mathematical phantoms with cosine and an even power (2n) of cosine motion, we proved that 2 and 2n PCA coefficients and eigenvectors, respectively, will completely represent the lung motion. Moreover, for the cosine phantom, we derived the equivalence conditions for the PCA motion model and the physiological 5D lung motion model (Low et al 2005 Int. J. Radiat. Oncol. Biol. Phys. 63 921–9). For the clinical 4DCT data, we demonstrated the modeling power and generalization performance of the PCA model. The average 3D modeling error using PCA was within 1 mm (0.7 ± 0.1 mm). When a single artificial internal marker was used to derive the lung motion, the average 3D error was found to be within 2 mm (1.8 ± 0.3 mm) through comprehensive statistical analysis. The optimal number of PCA coefficients needs to be determined on a patient-by-patient basis, and two PCA coefficients seem to be sufficient for accurate modeling of the lung motion for most patients. In conclusion, we have presented thorough theoretical analysis and clinical validation of the PCA lung motion model. The feasibility of deriving the entire lung motion using a single marker has also been demonstrated on clinical data using a simulation approach. PMID:21865624

  14. Automated speech analysis applied to laryngeal disease categorization.

    PubMed

    Gelzinis, A; Verikas, A; Bacauskiene, M

    2008-07-01

    The long-term goal of this work is a decision support system for the diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening for laryngeal diseases. The effectiveness of 11 different feature sets in classifying voice recordings of the sustained phonation of the vowel sound /a/ into a healthy class and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, an SVM, and a committee built using various aggregation options are used for the classification. The study was conducted on a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, and autocorrelation features, as well as linear prediction cosine transform coefficients, were amongst the feature sets providing the best performance. In the case of two-class classification, using recordings from 79 subjects representing the pathological class and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again the pitch and amplitude perturbation measures provided the best performance.

  15. Personal recognition using hand shape and texture.

    PubMed

    Kumar, Ajay; Zhang, David

    2006-08-01

    This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is significant because both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve performance are investigated. A new approach to palmprint recognition using discrete cosine transform coefficients, which can be obtained directly from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has paid attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting a subject's identity, only a small subset of these features is necessary in practice for building an accurate identification model. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.

  16. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    To achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then applied to each sub-block, and three quantization matrices, built by incorporating the contrast sensitivity characteristics of the HVS, are used to quantize the frequency spectrum coefficients of the images. The Huffman algorithm is used to encode the quantized data, and the inverse process reconstructs the decompressed color image. Simulations are carried out for two color images. The results show that, at an approximately equal compression ratio, the average structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR) are increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. These results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while maintaining encoding and image quality, fully meeting the storage and transmission needs of color images in daily life.
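    The block-DCT-plus-quantization pipeline described above can be sketched as follows. The quantization matrix here is a simple frequency-weighted stand-in for the paper's HVS contrast-sensitivity-derived matrices, and the 8x8 gradient block is illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

# an 8x8 image sub-block with a smooth gradient (typical natural-image content)
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = (128 + 10 * i + 5 * j).astype(float)

# illustrative quantization matrix: coarser steps at higher spatial frequency,
# standing in for the HVS contrast-sensitivity-derived matrices of the paper
Q = 8.0 + 4.0 * (i + j)

coeffs = dctn(block - 128.0, norm="ortho")     # forward 2-D DCT (level-shifted)
quant = np.round(coeffs / Q)                   # quantize: most entries become 0
decoded = idctn(quant * Q, norm="ortho") + 128.0

nonzero = np.count_nonzero(quant)
psnr = 10 * np.log10(255.0**2 / np.mean((decoded - block) ** 2))
print(f"nonzero coefficients: {nonzero}/64, PSNR: {psnr:.1f} dB")
```

The surviving nonzero coefficients are what an entropy coder such as Huffman coding would then compress.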

  17. Spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data.

    PubMed

    Sayago, Ana; Asuero, Agustin G

    2006-09-14

    A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation for linearization. The method may be considered a useful alternative to methods in which one variable appears on both sides of the basic equation (i.e. Heller and Schwarzenbach, Likussar and Adsul, and Ramanathan); classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are that it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log β11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error at a minimum in both cases.
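    The need for weighting after a linearizing transformation, which the abstract emphasizes, can be illustrated on a simpler exponential model (not the authors' hyperbolic cosine equation): taking logarithms turns constant absolute noise σ on y into noise σ/y on ln y, so the least-squares weights should grow with y. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A, b = 2.0, 0.5                      # "true" parameters (illustrative)
x = np.linspace(0.0, 4.0, 50)
y = A * np.exp(b * x) + rng.normal(0.0, 0.1, x.size)  # constant absolute noise

# linearize: ln y = ln A + b x.  The noise on ln y is sigma/y, so unweighted
# least squares over-trusts small-y points; weighting by y (~1/sigma_i on the
# transformed scale) undoes the bias introduced by the transformation.
b_w, lnA_w = np.polyfit(x, np.log(y), 1, w=y)
A_w = np.exp(lnA_w)
print(f"weighted fit: A = {A_w:.3f}, b = {b_w:.3f}")
```

The same propagation-of-error argument is what motivates the weighting scheme in the bilogarithmic hyperbolic cosine method.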

  18. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    NASA Astrophysics Data System (ADS)

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

    Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance to shifted, rotated, and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the graph construction incurs an expensive algorithmic cost. In this article we discuss ways to reduce the computing time. An alternative solution based on image compression concepts is provided and evaluated: the model no longer operates in the image space but in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with an acceptable computing time.
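    A minimal sketch of a compact discrete-cosine-space shape descriptor of the kind discussed above (the block size and the shapes are illustrative, not the paper's exact construction): because the DCT is orthonormal, the distance between truncated descriptors is a lower bound on the full image-space distance, while the descriptor is 64 numbers instead of 4096 pixels.

```python
import numpy as np
from scipy.fft import dctn

def dct_descriptor(shape_img, k=8):
    """Compact shape descriptor: top-left k x k block of the 2-D DCT."""
    return dctn(shape_img.astype(float), norm="ortho")[:k, :k].ravel()

def disk(n, r):
    y, x = np.ogrid[:n, :n]
    return ((x - n // 2) ** 2 + (y - n // 2) ** 2 <= r * r).astype(float)

n = 64
a, a2 = disk(n, 10), disk(n, 11)          # two similar shapes
b = np.zeros((n, n)); b[:, 30:34] = 1.0   # a very different shape (bar)

d_sim = np.linalg.norm(dct_descriptor(a) - dct_descriptor(a2))
d_diff = np.linalg.norm(dct_descriptor(a) - dct_descriptor(b))
print(f"similar pair: {d_sim:.2f}, different pair: {d_diff:.2f}")
```

Matching in this 64-dimensional space rather than on full images is the source of the computing-time reduction the article evaluates.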

  19. Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.

    2000-01-01

    A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and to support quantizations from 2 to 16 bits.

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  1. A closed form solution for constant flux pumping in a well under partial penetration condition

    NASA Astrophysics Data System (ADS)

    Yang, Shaw-Yang; Yeh, Hund-Der; Chiu, Pin-Yuan

    2006-05-01

    An analytical model for the constant flux pumping test is developed in a radial confined aquifer system with a partially penetrating well. The Laplace domain solution is derived by the application of the Laplace transforms with respect to time and the finite Fourier cosine transforms with respect to the vertical coordinates. A time domain solution is obtained using the inverse Laplace transforms, convolution theorem, and Bromwich integral method. The effect of partial penetration is apparent if the test well is completed with a short screen. An aquifer thickness 100 times larger than the screen length of the well can be considered as infinite. This solution can be used to investigate the effects of screen length and location on the drawdown distribution in a radial confined aquifer system and to produce type curves for the estimation of aquifer parameters with field pumping drawdown data.

  2. Establishing an efficient way to utilize the drought resistance germplasm population in wheat.

    PubMed

    Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin

    2013-01-01

    Drought resistance breeding provides a promising way to improve the yield and quality of wheat in arid and semiarid regions, and constructing a core collection is an efficient way to evaluate and utilize drought-resistant germplasm resources in wheat. In the present research, 1,683 wheat varieties were divided into five germplasm groups (high resistant, HR; resistant, R; moderate resistant, MR; susceptible, S; and high susceptible, HS). The least distance stepwise sampling (LDSS) method was adopted to select core accessions. Six commonly used genetic distances (Euclidean distance, Euclid; standardized Euclidean distance, Seuclid; Mahalanobis distance, Mahal; Manhattan distance, Manhat; cosine distance, Cosine; and correlation distance, Correlation) were used to assess genetic distances among accessions, and the unweighted pair-group average (UPGMA) method was used to perform hierarchical cluster analysis. The coincidence rate of range (CR) and the variable rate of the coefficient of variation (VR) were adopted to evaluate the representativeness of the core collection. A method for selecting the ideal construction strategy was suggested, and a wheat core collection for drought resistance breeding programs was constructed using the strategy selected in the present research. Principal component analysis showed that genetic diversity was well preserved in the core collection.
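    The six distances and UPGMA clustering map directly onto standard library calls; a hedged sketch with an illustrative random trait matrix (note that scipy names the Manhattan distance "cityblock", and UPGMA is average linkage):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# illustrative trait matrix: 12 accessions x 5 standardized drought-related traits
traits = rng.normal(size=(12, 5))

# the six distance measures compared in the study, under their scipy names
metrics = ["euclidean", "seuclidean", "mahalanobis", "cityblock",
           "cosine", "correlation"]
for m in metrics:
    d = pdist(traits, metric=m)              # pairwise accession distances
    tree = linkage(d, method="average")      # UPGMA = average linkage
    groups = fcluster(tree, t=3, criterion="maxclust")
    print(m, groups)
```

Comparing the group assignments produced by each metric is the kind of strategy-selection step the study formalizes with its CR and VR criteria.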

  3. Shape Optimization of Cylindrical Shell for Interior Noise

    NASA Technical Reports Server (NTRS)

    Robinson, Jay H.

    1999-01-01

    In this paper an analytic method is used to solve for the cross spectral density of the interior acoustic response of a cylinder with nonuniform thickness subjected to turbulent boundary layer excitation. The cylinder is of honeycomb core construction with the thickness of the core material expressed as a cosine series in the circumferential direction. The coefficients of this series are used as the design variable in the optimization study. The objective function is the space and frequency averaged acoustic response. Results confirm the presence of multiple local minima as previously reported and demonstrate the potential for modest noise reduction.

  4. CONSTRUCTING AND DERIVING RECIPROCAL TRIGONOMETRIC RELATIONS: A FUNCTIONAL ANALYTIC APPROACH

    PubMed Central

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509

  5. Constructing and deriving reciprocal trigonometric relations: a functional analytic approach.

    PubMed

    Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer

    2009-01-01

    Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires.

  6. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy, providing a fast routine for high-performance calculation of space charge effects in accelerators.
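    A minimal sketch of why the DCT is natural in such solvers (this is not the authors' integrated Green's function routine): with reflective boundaries, the DCT-II diagonalizes the discrete Laplacian, so a Poisson-type equation is solved by one forward transform, a pointwise division, and one inverse transform, with no explicit zero padding. Grid size and source term are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_dct(f, h=1.0):
    """Solve the discrete Poisson equation lap(u) = f with reflective (Neumann)
    boundaries; the DCT-II diagonalizes this Laplacian exactly."""
    ny, nx = f.shape
    F = dctn(f, norm="ortho")
    lam_y = 2.0 * np.cos(np.pi * np.arange(ny) / ny) - 2.0  # 1-D eigenvalues
    lam_x = 2.0 * np.cos(np.pi * np.arange(nx) / nx) - 2.0
    denom = (lam_y[:, None] + lam_x[None, :]) / h**2
    denom[0, 0] = 1.0                    # zero mode: fix the free constant
    U = F / denom
    U[0, 0] = 0.0
    return idctn(U, norm="ortho")

# zero-mean, charge-like source term (a +/- pair)
n = 32
f = np.zeros((n, n)); f[8, 8], f[24, 24] = 1.0, -1.0
u = poisson_dct(f)

# residual check: apply the reflective 5-point Laplacian to the solution
up = np.pad(u, 1, mode="edge")
lap = up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:] + up[1:-1, :-2] - 4 * u
print("max residual:", np.abs(lap - f).max())
```

The same diagonalization idea is what makes a DCT-based Green's function treatment cheaper than a zero-padded DFT convolution of twice the size.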

  7. Comparison Of The Performance Of Hybrid Coders Under Different Configurations

    NASA Astrophysics Data System (ADS)

    Gunasekaran, S.; Raina J., P.

    1983-10-01

    Picture bandwidth reduction employing DPCM and orthogonal transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics, and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive by retaining the special advantages of both. Recently, interest has shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders under different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine, and Haar transforms have been considered for the present work.

  8. Initial performance of the COSINE-100 experiment

    NASA Astrophysics Data System (ADS)

    Adhikari, G.; Adhikari, P.; de Souza, E. Barbosa; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W. G.; Kang, W.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, M. C.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Prihtiadi, H.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.

    2018-02-01

    COSINE is a dark matter search experiment based on an array of low background NaI(Tl) crystals located at the Yangyang underground laboratory. The assembly of COSINE-100 was completed in the summer of 2016 and the detector is currently collecting physics quality data aimed at reproducing the DAMA/LIBRA experiment that reported an annual modulation signal. Stable operation has been achieved and will continue for at least 2 years. Here, we describe the design of COSINE-100, including the shielding arrangement, the configuration of the NaI(Tl) crystal detection elements, the veto systems, and the associated operational systems, and we show the current performance of the experiment.

  9. Automated surgical skill assessment in RMIS training.

    PubMed

    Zia, Aneeq; Essa, Irfan

    2018-05-01

    Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons' schedules and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating 'task highlights' which can give surgeons more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data: sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions, achieving an average Spearman correlation coefficient of up to 0.61. Moreover, we provide an analysis of how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.
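    A hedged sketch of a DCT holistic feature of the kind evaluated above: the first few low-frequency DCT coefficients of each kinematic channel summarize the global shape of the motion in a fixed-length vector. The channel layout and coefficient count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dct

def dct_features(kinematics, k=10):
    """Holistic DCT feature: first k low-frequency DCT coefficients per
    channel, concatenated into one fixed-length vector."""
    coeffs = dct(kinematics, axis=0, norm="ortho")[:k]   # (k, channels)
    return coeffs.T.ravel()

# illustrative kinematic trace: 3 channels (e.g. tool-tip x, y, z) over 200 samples
t = np.linspace(0, 1, 200)[:, None]
kin = np.hstack([np.sin(2 * np.pi * t), np.cos(3 * np.pi * t), t**2])
feat = dct_features(kin)
print(feat.shape)
```

A vector like this can then be fed to any classifier or regressor for skill-score prediction.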

  10. Self-consistent approach to the solution of the light transfer problem for irradiances in marine waters with arbitrary turbidity, depth, and surface illumination. I. Case of absorption and elastic scattering.

    PubMed

    Haltrin, V I

    1998-06-20

    A self-consistent variant of the two-flow approximation that takes into account strong anisotropy of light scattering in seawater of finite depth and arbitrary turbidity is presented. To achieve an appropriate accuracy, this approach uses experimental dependencies between downward and total mean cosines. It calculates irradiances, diffuse attenuation coefficients, and diffuse reflectances in waters with arbitrary values of scattering, backscattering, and attenuation coefficients. It also takes into account arbitrary conditions of illumination and reflection from the bottom with the Lambertian albedo. This theory can be used for the calculation of apparent optical properties in both open and coastal oceanic waters, lakes, and rivers. It can also be applied to other types of absorbing and scattering medium such as paints, photographic emulsions, and biological tissues.

  11. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and finding the optimal quality/volume ratio for video encoding is a pressing problem given the need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the bandwidth required for transmission and storage. It is therefore important to account for the uncertainties introduced by video compression when television measuring systems are used. Many digital compression methods exist; the aim of the present work is to study the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. Both the optical system and the method used to process the received video signal are sources of error in television system measurements. With compression at a constant data stream rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other; entropy coding can then be applied to these uncorrelated coefficients to reduce the digital stream. 
    Moreover, a transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero, and excluding these zero coefficients reduces the digital stream further. The discrete cosine transform is the most widely used of the possible orthogonal transformations. This paper analyzes the errors of television measuring systems and of data compression protocols: the main characteristics of measuring systems are given, the sources of their errors are identified, the most effective video compression methods are determined, and the influence of compression error on television measuring systems is studied. The results obtained will increase the accuracy of measuring systems. In a television image-quality measuring system, the distortions comprise those identical to distortions in analog systems plus the specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. Distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is nonlinear in space and time, because the playback quality at the receiver depends on the random pre- and post-history of the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.
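    The decorrelation and energy-compaction property invoked above is easy to demonstrate: for a first-order autoregressive signal, a common model of the correlation along an image line, the DCT concentrates nearly all energy in a few coefficients. The sequence length and correlation value are illustrative.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
# strongly correlated "image line" samples: first-order autoregressive model
n, rho = 64, 0.95
x = np.empty(n); x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

c = dct(x, norm="ortho")              # orthogonal transform: energy preserved
energy = np.sort(c**2)[::-1]
top8 = energy[:8].sum() / energy.sum()
print(f"energy in the 8 largest of {n} coefficients: {top8:.1%}")
```

The many near-zero remaining coefficients are exactly what entropy coding and zero-run elimination exploit to shrink the digital stream.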

  12. One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.

    PubMed

    Biswas, Sujoy Kumar; Milanfar, Peyman

    2016-03-01

    One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components, which preserve the global structure of the feature space, we seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of the Fourier transform combined with the integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Moreover, the low computation cost of the proposed (codebook-free) object detector facilitates straightforward query detection in large data sets, including movie videos.
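    Matrix cosine similarity can be sketched as the cosine of the angle between two feature matrices under the Frobenius inner product; this is a minimal version without the paper's Fourier/integral-image acceleration, and the descriptor dimensions are illustrative.

```python
import numpy as np

def matrix_cosine_similarity(A, B):
    """Cosine of the angle between two feature matrices under the
    Frobenius inner product: trace(A^T B) / (||A||_F ||B||_F)."""
    return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

rng = np.random.default_rng(4)
Q = rng.normal(size=(16, 8))          # query-region descriptor matrix
print(matrix_cosine_similarity(Q, Q))          # identical patches
print(matrix_cosine_similarity(Q, 2.5 * Q))    # invariant to global scaling
```

Because the score is scale-invariant and bounded by 1, it can serve directly as a detection decision rule once thresholded.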

  13. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2013-05-01

    Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile... clustering was performed by the UPGMA method using Cosine correlation as the similarity metric. For comparative purposes, clustered heat maps included... non-mTBI cases were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity

  14. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2014-07-01

    9 control cases are subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity... in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row... of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using Cosine correlation as the similarity

  15. A fast estimation of shock wave pressure based on trend identification

    NASA Astrophysics Data System (ADS)

    Yao, Zhenjian; Wang, Zhongyu; Wang, Chenchen; Lv, Jing

    2018-04-01

    In this paper, a fast method based on trend identification is proposed to accurately estimate the shock wave pressure in a dynamic measurement. Firstly, the collected output signal of the pressure sensor is reconstructed by discrete cosine transform (DCT) to reduce the computational complexity for the subsequent steps. Secondly, the empirical mode decomposition (EMD) is applied to decompose the reconstructed signal into several components with different frequency-bands, and the last few low-frequency components are chosen to recover the trend of the reconstructed signal. In the meantime, the optimal component number is determined based on the correlation coefficient and the normalized Euclidean distance between the trend and the reconstructed signal. Thirdly, with the areas under the gradient curve of the trend signal, the stable interval that produces the minimum can be easily identified. As a result, the stable value of the output signal is achieved in this interval. Finally, the shock wave pressure can be estimated according to the stable value of the output signal and the sensitivity of the sensor in the dynamic measurement. A series of shock wave pressure measurements are carried out with a shock tube system to validate the performance of this method. The experimental results show that the proposed method works well in shock wave pressure estimation. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing approaches in both estimation accuracy and computational efficiency.
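    The first step above, DCT reconstruction to cut computational complexity, can be sketched as low-frequency coefficient truncation. The test signal below (a step-like trend plus a fast oscillation) is illustrative, and the paper's actual trend extraction additionally uses EMD.

```python
import numpy as np
from scipy.fft import dct, idct

# illustrative sensor record: a slow step-like trend plus a fast oscillation
n = 1024
t = np.linspace(0, 1, n)
trend = 1.0 / (1.0 + np.exp(-30 * (t - 0.3)))   # step response, stable value 1.0
signal = trend + 0.05 * np.sin(2 * np.pi * 120 * t)

k = 64                                 # keep only low-frequency coefficients
c = dct(signal, norm="ortho")
c[k:] = 0.0
recon = idct(c, norm="ortho")

err = np.max(np.abs(recon - trend))
print(f"kept {k}/{n} coefficients, max deviation from trend: {err:.3f}")
```

Working with a small coefficient set like this is what makes the subsequent EMD and trend-identification steps cheap.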

  16. Multichannel High Resolution Wide Swath SAR Imaging for Hypersonic Air Vehicle with Curved Trajectory.

    PubMed

    Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong

    2018-01-31

    Synthetic aperture radar (SAR) equipped on the hypersonic air vehicle in near space has many advantages over the conventional airborne SAR. However, its high-speed maneuvering characteristics with curved trajectory result in serious range migration, and exacerbate the contradiction between the high resolution and wide swath. To solve this problem, this paper establishes the imaging geometrical model matched with the flight trajectory of the hypersonic platform and the multichannel azimuth sampling model based on the displaced phase center antenna (DPCA) technology. Furthermore, based on the multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using discrete Fourier transform is proposed to obtain the azimuth uniform sampling data. Due to the high complexity of the slant range model, it is difficult to deduce the processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximate coefficients of cosine function are obtained using the two-population coevolutionary algorithm. On this basis, aiming at the problem that the traditional Omega-K algorithm cannot compensate the residual phase with the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging, and presents a method of range division to achieve wide swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.
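    The idea of an optimal second-order cosine approximation under the minimax criterion can be illustrated with Chebyshev interpolation, a classical near-minimax construction standing in here for the paper's two-population coevolutionary search; the interval is illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# near-minimax second-order approximation of cos on [-1, 1] via Chebyshev
# interpolation (a stand-in for the coevolutionary minimax search)
approx = C.Chebyshev.interpolate(np.cos, 2, domain=[-1.0, 1.0])

xs = np.linspace(-1.0, 1.0, 2001)
err_cheb = np.max(np.abs(np.cos(xs) - approx(xs)))
err_taylor = np.max(np.abs(np.cos(xs) - (1.0 - xs**2 / 2.0)))
print(f"max error: Chebyshev {err_cheb:.4f} vs Taylor {err_taylor:.4f}")
```

Minimizing the worst-case error over the interval, rather than the error at a single expansion point, is what makes such coefficients suitable for range-model approximation across a whole swath.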

  17. Multichannel High Resolution Wide Swath SAR Imaging for Hypersonic Air Vehicle with Curved Trajectory

    PubMed Central

    Zhou, Rui; Hu, Yuxin; Qi, Yaolong

    2018-01-01

    Synthetic aperture radar (SAR) mounted on a hypersonic air vehicle in near space has many advantages over conventional airborne SAR. However, its high-speed maneuvering with a curved trajectory results in serious range migration and exacerbates the contradiction between high resolution and wide swath. To solve this problem, this paper establishes an imaging geometrical model matched to the flight trajectory of the hypersonic platform and a multichannel azimuth sampling model based on displaced phase center antenna (DPCA) technology. Furthermore, based on multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using the discrete Fourier transform is proposed to obtain uniformly sampled azimuth data. Due to the high complexity of the slant range model, it is difficult to derive a processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximation coefficients of the cosine function are obtained using a two-population coevolutionary algorithm. On this basis, since the traditional Omega-K algorithm cannot compensate for the residual phase because of the difficulty of Stolt mapping along the range-frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging and presents a method of range division to achieve wide-swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm. PMID:29385059

  18. Analytical and experimental procedures for determining propagation characteristics of millimeter-wave gallium arsenide microstrip lines

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.

    1989-01-01

    In this report, a thorough analytical procedure is developed for evaluating the frequency-dependent loss characteristics and effective permittivity of microstrip lines. The technique is based on the measured reflection coefficient of microstrip resonator pairs. Experimental data, including quality factor Q, effective relative permittivity, and fringing for 50-ohm lines on gallium arsenide (GaAs) from 26.5 to 40.0 GHz, are presented. The effects of an imperfect open circuit, coupling losses, and loading of the resonant frequency are considered. A cosine-tapered ridge-guide test fixture is described; it was found to be well suited to device characterization.

  19. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  20. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    NASA Astrophysics Data System (ADS)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification methods for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide a compound image into non-overlapping fixed-size blocks. A frequency transform, the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is then applied to each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify blocks into text/graphics and picture/background. The classification accuracy of the block-classification-based segmentation techniques is measured by precision and recall rate. Compound images with smooth and complex backgrounds, containing text of varying size, colour, and orientation, are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth and complex background images.
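
    A minimal sketch of the block-classification idea: compute a 2-D DCT per block and use simple coefficient statistics to separate busy (text-like) from smooth (picture-like) blocks. The AC-coefficient statistics and the synthetic smooth/checkerboard test image are assumptions for illustration, not the paper's data:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def block_features(img, bs=8):
    # Mean and standard deviation of the AC DCT coefficients of each
    # non-overlapping bs x bs block (hypothetical stand-in feature set)
    c = dct_matrix(bs)
    feats = []
    for r in range(0, img.shape[0] - bs + 1, bs):
        for s in range(0, img.shape[1] - bs + 1, bs):
            d = c @ img[r:r + bs, s:s + bs] @ c.T  # 2-D DCT of the block
            ac = d.copy()
            ac[0, 0] = 0.0                         # ignore the DC term
            feats.append((ac.mean(), ac.std()))
    return np.array(feats)

# A smooth (picture-like) block next to a checkerboard (text-like) block
smooth = np.tile(np.linspace(0.3, 0.5, 8), (8, 1))
busy = np.indices((8, 8)).sum(axis=0) % 2 * 1.0
img = np.hstack([smooth, busy])
feats = block_features(img)
```

    The text-like block shows far higher AC activity, which is the kind of separation a threshold classifier would exploit.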

  1. Cloud Tracking from Satellite Pictures.

    DTIC Science & Technology

    1981-07-01

    For sufficiently smooth contours, this information can be obtained from very few low-order coefficients. The inverse transform of the two lowest-order coefficients is an ellipse approximating the original contour.

  2. A relation between landsat digital numbers, surface reflectance, and the cosine of the solar zenith angle

    USGS Publications Warehouse

    Kowalik, William S.; Marsh, Stuart E.; Lyon, Ronald J. P.

    1982-01-01

    A method for estimating the reflectance of ground sites from satellite radiance data is proposed and tested. The method uses the known ground reflectance from several sites and satellite data gathered over a wide range of solar zenith angles. The method was tested on each of 10 different Landsat images using 10 small sites in the Walker Lake, Nevada area. Plots of raw Landsat digital numbers (DNs) versus the cosine of the solar zenith angle (cos Z) for the test areas are linear, and the average correlation coefficients of the data for Landsat bands 4, 5, 6, and 7 are 0.94, 0.93, 0.94, and 0.94, respectively. Ground reflectance values for the 10 sites are proportional to the slope of the DN versus cos Z relation at each site. The slopes of the DN versus cos Z relation for seven additional sites in Nevada and California were used to estimate the ground reflectances of those sites. The estimates for nearby sites are in error by an average of 1.2%, and those for more distant sites by 5.1%. The method can successfully estimate the reflectance of sites outside the original scene, but extrapolation of the reflectance estimation equations to other areas may violate assumptions of atmospheric homogeneity.
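
    The DN versus cos Z calibration can be sketched as follows, under an assumed linear model DN = gain × reflectance × cos Z + offset (noise-free and purely illustrative; the gain, offset, and reflectance values are not from the paper):

```python
import numpy as np

def dn_slope(cos_z, dn):
    # Least-squares slope (with intercept) of DN versus cos(Z)
    a = np.vstack([cos_z, np.ones_like(cos_z)]).T
    slope, _ = np.linalg.lstsq(a, dn, rcond=None)[0]
    return slope

rng = np.random.default_rng(1)
cos_z = rng.uniform(0.4, 0.9, 10)       # a range of solar zenith angles
gain, offset = 200.0, 12.0              # assumed sensor gain and path offset
known_refl = 0.30
dn_known = gain * known_refl * cos_z + offset
k = known_refl / dn_slope(cos_z, dn_known)   # calibration from a known site

dn_new = gain * 0.18 * cos_z + offset        # site of unknown reflectance
est = k * dn_slope(cos_z, dn_new)            # estimated reflectance
```

    Because reflectance is proportional to the fitted slope, one known site calibrates the proportionality constant for the rest of the scene.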

  3. Linear Transforms for Fourier Data on the Sphere: Application to High Angular Resolution Diffusion MRI of the Brain

    PubMed Central

    Haldar, Justin P.; Leahy, Richard M.

    2013-01-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603

  4. Fourier modeling of the BOLD response to a breath-hold task: Optimization and reproducibility.

    PubMed

    Pinto, Joana; Jorge, João; Sousa, Inês; Vilela, Pedro; Figueiredo, Patrícia

    2016-07-15

    Cerebrovascular reactivity (CVR) reflects the capacity of blood vessels to adjust their caliber in order to maintain a steady supply of brain perfusion, and it may provide a sensitive disease biomarker. Measurement of the blood oxygen level dependent (BOLD) response to a hypercapnia-inducing breath-hold (BH) task has been frequently used to map CVR noninvasively using functional magnetic resonance imaging (fMRI). However, the best modeling approach for the accurate quantification of CVR maps remains an open issue. Here, we compare and optimize Fourier models of the BOLD response to a BH task with a preparatory inspiration, and assess the test-retest reproducibility of the associated CVR measurements, in a group of 10 healthy volunteers studied over two fMRI sessions. Linear combinations of sine-cosine pairs at the BH task frequency and its successive harmonics were added sequentially in a nested models approach, and were compared in terms of the adjusted coefficient of determination and corresponding variance explained (VE) of the BOLD signal, as well as the number of voxels exhibiting significant BOLD responses, the estimated CVR values, and their test-retest reproducibility. The brain average VE increased significantly with the Fourier model order, up to the 3rd order. However, the number of responsive voxels increased significantly only up to the 2nd order, and started to decrease from the 3rd order onwards. Moreover, no significant relative underestimation of CVR values was observed beyond the 2nd order. Hence, the 2nd order model was concluded to be the optimal choice for the studied paradigm. This model also yielded the best test-retest reproducibility results, with intra-subject coefficients of variation of 12 and 16% and an intra-class correlation coefficient of 0.74. 
In conclusion, our results indicate that a Fourier series set consisting of a sine-cosine pair at the BH task frequency and its two harmonics is a suitable model for BOLD-fMRI CVR measurements based on a BH task with preparatory inspiration, yielding robust estimates of this important physiological parameter.
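
    The nested Fourier design can be sketched as follows; the task frequency, duration, noise level, and simulated response are illustrative assumptions, not the study's acquisition parameters:

```python
import numpy as np

def fourier_design(t, f0, order):
    # Sine-cosine pairs at the task frequency and its harmonics, plus intercept
    cols = [np.ones_like(t)]
    for h in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * h * f0 * t))
        cols.append(np.cos(2 * np.pi * h * f0 * t))
    return np.column_stack(cols)

def adjusted_r2(y, X):
    # Ordinary least squares, then the adjusted coefficient of determination
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

t = np.arange(0.0, 300.0, 1.0)   # 300 s of data, 1 s sampling (illustrative)
f0 = 1.0 / 60.0                  # one breath-hold cycle per minute (assumed)
rng = np.random.default_rng(2)
y = (np.sin(2 * np.pi * f0 * t) + 0.4 * np.cos(2 * np.pi * 2 * f0 * t)
     + 0.1 * rng.standard_normal(t.size))
r2_1 = adjusted_r2(y, fourier_design(t, f0, 1))
r2_2 = adjusted_r2(y, fourier_design(t, f0, 2))
```

    Adding the second-harmonic pair captures the residual structure that the first-order model misses, mirroring the nested-model comparison described above.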

  5. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem

    PubMed Central

    Zamli, Kamal Z.; Din, Fakhrud; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. PMID:29771918
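
    A minimal sketch of the baseline sine-cosine position update that the QLSCA modifies (the commonly cited SCA form with a fixed 0.5 switch probability; population size, iteration count, and the sphere test function are illustrative assumptions):

```python
import numpy as np

def sca_minimize(f, dim, n_agents=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    # Basic sine-cosine algorithm (assumed standard form):
    # x += r1*sin(r2)*|r3*best - x|, or the cosine variant when r4 >= 0.5
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        r1 = 2.0 - 2.0 * t / iters            # amplitude shrinks over time
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.random(dim)              # fixed switch probability of 0.5
            step = np.abs(r3 * best - X[i])
            X[i] = np.where(r4 < 0.5,
                            X[i] + r1 * np.sin(r2) * step,
                            X[i] + r1 * np.cos(r2) * step)
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

best, val = sca_minimize(lambda x: float((x ** 2).sum()), dim=3)
```

    The bounded sine/cosine magnitudes and the fixed switch probability visible here are exactly the two features the abstract identifies as vulnerabilities.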

  6. A hybrid Q-learning sine-cosine-based strategy for addressing the combinatorial test suite minimization problem.

    PubMed

    Zamli, Kamal Z; Din, Fakhrud; Ahmed, Bestoun S; Bures, Miroslav

    2018-01-01

    The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.

  7. Proposed data compression schemes for the Galileo S-band contingency mission

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Tong, Kevin

    1993-01-01

    The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. If the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been, and will continue to be, devoted to attempts to open the HGA. Various options for improving Galileo's telemetry downlink performance are also being evaluated in the event that the HGA does not open by Jupiter arrival. Among all viable options, the most promising and powerful is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight reprogramming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
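
    The article's LZW variant is not specified here, but the textbook LZW scheme it builds on can be sketched as follows (plain dictionary lookups rather than the article's hashing-based string search):

```python
def lzw_compress(data):
    # Dictionary starts with all single bytes; new codes grow from 256 upward
    table = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = len(table)      # add the new string to the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    # Rebuild the dictionary on the fly, mirroring the compressor
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
```

    Repeated substrings collapse into single dictionary codes, which is why the scheme is effective on telemetry with recurring patterns.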

  8. Geometric comparison of popular mixture-model distances.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Scott A.

    2010-09-01

    Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or by the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in Hellinger distance is briefly compared to standard normalization for Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
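
    The near-equivalence of these distances for nearby mixture points can be checked numerically; a small sketch with two example probability vectors (the particular vectors and the ≈ 5% tolerance are illustrative):

```python
import numpy as np

def chi2(p, q):
    # Symmetric chi-squared distance between probability vectors
    return np.sum((p - q) ** 2 / (p + q))

def hellinger_sq(p, q):
    # Square of the Hellinger distance
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def js_divergence(p, q):
    # Jensen-Shannon divergence via the mixture midpoint
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.20, 0.30, 0.50])
q = np.array([0.25, 0.35, 0.40])
h2 = hellinger_sq(p, q)
```

    For close p and q, a second-order expansion gives JS ≈ H² ≈ χ²/4, which the assertions below confirm for this pair.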

  9. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    NASA Astrophysics Data System (ADS)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

    The protection system for transformers plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One of the methodologies widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore presents a study and analysis of wavelet coefficient characteristics when faults occur in a transformer, covering both the high- and low-frequency components of the discrete wavelet transform. The effects of internal and external faults on the wavelet coefficients of both the faulted and normal phases are taken into consideration. The fault signals were simulated using a laboratory-scale experimental setup, with a transmission line connected to a transformer, modelled after an actual system. The resulting wavelet coefficients show a clear difference between the characteristics of the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
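
    A minimal sketch of how a wavelet detail coefficient reacts to a fault-like transient, using a one-level Haar DWT and a synthetic 50 Hz signal with an assumed decaying offset (not the paper's laboratory data):

```python
import numpy as np

def haar_dwt(signal):
    # One-level Haar DWT: approximation (low-pass) and detail (high-pass)
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

t = np.linspace(0.0, 0.2, 2000)              # 10 cycles of a 50 Hz waveform
normal = np.sin(2 * np.pi * 50 * t)
fault = normal.copy()
# Hypothetical fault transient: a sudden decaying offset near t = 0.1 s
fault[1001:] += 1.5 * np.exp(-40.0 * (t[1001:] - t[1001]))
_, d_normal = haar_dwt(normal)
_, d_fault = haar_dwt(fault)
```

    The abrupt onset produces a detail coefficient an order of magnitude larger than anything in the healthy signal, which is the discriminating behaviour such protection algorithms rely on.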

  10. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operations of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
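
    The claim that the optimal linear decoder is an optimal linear estimator can be illustrated on a toy source: fitting the inverse map by least squares never does worse than algebraically inverting the analysis transform. The Gaussian source, the transform, and the quantizer step below are assumptions for illustration, not the paper's design procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
# Correlated 2-D Gaussian source and an illustrative non-orthogonal transform
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.9], [0.9, 1.0]], size=n)
T = np.array([[1.0, 0.5],
              [0.0, 1.0]])
c = x @ T.T                      # analysis: coefficients c = T x
cq = np.round(c * 2.0) / 2.0     # scalar quantization with step 0.5

# Naive decoder: algebraic inverse of the analysis transform
x_naive = cq @ np.linalg.inv(T).T
# Linear-estimator decoder: least-squares map from quantized coefficients to x
A = np.linalg.lstsq(cq, x, rcond=None)[0]
x_opt = cq @ A

mse_naive = np.mean((x - x_naive) ** 2)
mse_opt = np.mean((x - x_opt) ** 2)
```

    By construction the least-squares decoder minimizes the reconstruction MSE over all linear maps, so it can only match or beat the plain inverse, echoing the paper's argument.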

  11. A Robust Zero-Watermarking Algorithm for Audio

    NASA Astrophysics Data System (ADS)

    Chen, Ning; Zhu, Jie

    2007-12-01

    In traditional watermarking algorithms, the insertion of the watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. Zero-watermarking techniques can solve these problems successfully. Instead of embedding a watermark, a zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images, and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulants are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
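
    A minimal sketch of the feature-extraction idea: a Haar approximation stands in for the DWT stage, and the signs of low-order DCT coefficients form the binary pattern. The higher-order-cumulant stage of the paper is omitted, and all parameters are illustrative:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def zero_watermark(audio, n_bits=32, levels=3):
    # Repeated Haar low-pass (multiresolution), then DCT (energy compaction);
    # the signs of the low-order AC coefficients give a binary feature pattern
    x = np.asarray(audio, dtype=float)
    for _ in range(levels):
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    c = dct_matrix(x.size) @ x
    return (c[1:n_bits + 1] >= 0.0).astype(int)

t = np.arange(1024) / 8000.0
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1250 * t)
wm = zero_watermark(audio)
```

    Nothing is embedded in the host signal; the extracted bit pattern is registered and later compared against the pattern extracted from a suspect copy.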

  12. Analysis of Real Ship Rolling Dynamics under Wave Excitement Force Composed of Sums of Cosine Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y. S.; Cai, F.; Xu, W. M.

    2011-09-28

    The ship motion equation with a cosine wave excitement force describes the ship's rolling motion in regular waves. A new kind of wave excitement force model, in the form of sums of cosine functions, was proposed to describe ship rolling in irregular waves. Ship rolling time series were obtained by solving the ship motion equation with the fourth-order Runge-Kutta method. These rolling time series were analyzed with phase-space tracks, power spectra, principal component analysis, and the largest Lyapunov exponent. Simulation results show that ship rolling presents some chaotic characteristics when the wave excitement force is composed of sums of cosine functions. The result explains the mechanism of chaotic ship rolling and is useful for ship hydrodynamic studies.
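
    The simulation setup can be sketched with an assumed damped nonlinear rolling equation driven by a sum of cosines and integrated by fourth-order Runge-Kutta (the coefficients below are illustrative, not the paper's ship parameters):

```python
import numpy as np

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Assumed rolling model: phi'' + 2*mu*phi' + phi + alpha*phi^3 = sum A_i cos(w_i t + e_i)
amps = np.array([0.08, 0.05, 0.03])
freqs = np.array([0.90, 1.10, 1.45])
phases = np.array([0.0, 1.2, 2.5])

def deriv(t, y):
    phi, v = y
    force = np.sum(amps * np.cos(freqs * t + phases))  # sum-of-cosines excitation
    return np.array([v, force - 2 * 0.05 * v - phi - 0.3 * phi ** 3])

h = 0.01
y = np.array([0.1, 0.0])
series = []
for n in range(5000):                # 50 s of simulated rolling
    y = rk4_step(deriv, n * h, y, h)
    series.append(y[0])
series = np.array(series)
```

    The resulting roll-angle time series is the kind of signal the paper analyzes with phase-space tracks, power spectra, and the largest Lyapunov exponent.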

  13. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.

  14. Ultrafast Dephasing and Incoherent Light Photon Echoes in Organic Amorphous Systems

    NASA Astrophysics Data System (ADS)

    Yano, Ryuzi; Matsumoto, Yoshinori; Tani, Toshiro; Nakatsuka, Hiroki

    1989-10-01

    Incoherent light photon echoes were observed in organic amorphous systems (cresyl violet in polyvinyl alcohol and 1,4-dihydroxyanthraquinone in polymethacrylic acid) by using temporally-incoherent nanosecond laser pulses. It was found that an echo decay curve of an organic amorphous system is composed of a sharp peak which decays very rapidly and a slowly decaying wing at the tail. We show that the persistent hole burning (PHB) spectra were reproduced by the Fourier-cosine transforms of the echo decay curves. We claim that in general, we must take into account the multi-level feature of the system in order to explain ultrafast dephasing at very low temperatures.

  15. Experimental Observation and Theoretical Description of Multisoliton Fission in Shallow Water

    NASA Astrophysics Data System (ADS)

    Trillo, S.; Deng, G.; Biondini, G.; Klein, M.; Clauss, G. F.; Chabchoub, A.; Onorato, M.

    2016-09-01

    We observe the dispersive breaking of cosine-type long waves [Phys. Rev. Lett. 15, 240 (1965)] in shallow water, characterizing the highly nonlinear "multisoliton" fission over variable conditions. We provide new insight into the interpretation of the results by analyzing the data in terms of the periodic inverse scattering transform for the Korteweg-de Vries equation. In a wide range of dispersion and nonlinearity, the data compare favorably with our analytical estimate, based on a rigorous WKB approach, of the number of emerging solitons. We are also able to observe experimentally the universal Fermi-Pasta-Ulam recurrence in the regime of moderately weak dispersion.

  16. PET attenuation coefficients from CT images: experimental evaluation of the transformation of CT into PET 511-keV attenuation coefficients.

    PubMed

    Burger, C; Goerres, G; Schoenes, S; Buck, A; Lonn, A H R; Von Schulthess, G K

    2002-07-01

    The CT data acquired in combined PET/CT studies provide a fast and essentially noiseless source for the correction of photon attenuation in PET emission data. To this end, the CT values relating to attenuation of photons in the range of 40-140 keV must be transformed into linear attenuation coefficients at the PET energy of 511 keV. As attenuation depends on photon energy and the absorbing material, an accurate theoretical relation cannot be devised. The transformation implemented in the Discovery LS PET/CT scanner (GE Medical Systems, Milwaukee, Wis.) uses a bilinear function based on the attenuation of water and cortical bone at the CT and PET energies. The purpose of this study was to compare this transformation with experimental CT values and corresponding PET attenuation coefficients. In 14 patients, quantitative PET attenuation maps were calculated from germanium-68 transmission scans, and resolution-matched CT images were generated. A total of 114 volumes of interest were defined and the average PET attenuation coefficients and CT values measured. From the CT values the predicted PET attenuation coefficients were calculated using the bilinear transformation. When the transformation was based on the narrow-beam attenuation coefficient of water at 511 keV (0.096 cm(-1)), the predicted attenuation coefficients were higher in soft tissue than the measured values. This bias was reduced by replacing 0.096 cm(-1) in the transformation by the linear attenuation coefficient of 0.093 cm(-1) obtained from germanium-68 transmission scans. An analysis of the corrected emission activities shows that the resulting transformation is essentially equivalent to the transmission-based attenuation correction for human tissue. For non-human material, however, it may assign inaccurate attenuation coefficients which will also affect the correction in neighbouring tissue.
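
    The bilinear transformation can be sketched as a piecewise-linear mapping with a break at 0 HU. The bone endpoint and the 0.172 cm⁻¹ value below are illustrative assumptions; 0.093 cm⁻¹ is the germanium-68 water value quoted above:

```python
def ct_to_pet_mu(hu, mu_water=0.093, mu_bone=0.172, hu_bone=1000.0):
    # Hedged sketch of a bilinear CT-to-511 keV mapping (not the scanner's
    # exact calibration): below 0 HU, scale linearly against water so that
    # air (-1000 HU) maps to zero; above 0 HU, use a shallower slope toward
    # an assumed cortical-bone endpoint.
    if hu <= 0.0:
        return mu_water * (hu + 1000.0) / 1000.0
    return mu_water + hu * (mu_bone - mu_water) / hu_bone

mu_soft = ct_to_pet_mu(40.0)   # soft tissue sits just above the water point
```

    Replacing the narrow-beam water value 0.096 cm⁻¹ with the transmission-derived 0.093 cm⁻¹, as the study does, simply shifts the soft-tissue segment of this mapping downward.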

  17. Solar Geometry

    Atmospheric Science Data Center

    2014-09-25

    Solar Noon (GMT time) The time when the sun is due south in the northern hemisphere or due north in the southern ... The average cosine of the angle between the sun and directly overhead during daylight hours.   Cosine solar ...

  18. An iris recognition algorithm based on DCT and GLCM

    NASA Astrophysics Data System (ADS)

    Feng, G.; Wu, Ye-qing

    2008-04-01

    With the enlargement of mankind's range of activity, establishing a person's identity is becoming more and more important, and many different identity-verification techniques have been proposed for practical use. Conventional methods such as passwords and identification cards are not always reliable. A wide variety of biometrics has been developed for this challenge. Among biological characteristics, the iris pattern has gained increasing attention for its stability, reliability, uniqueness, noninvasiveness, and difficulty to counterfeit. These distinct merits of the iris lead to its high reliability for personal identification, and iris identification has become a hot research topic in the past several years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT). To obtain more representative iris features, features from both the spatial domain and the DCT transform domain are extracted: both GLCM and DCT are applied to the iris image to form the feature sequence. The combination of GLCM and DCT makes the iris features more distinct, as the extracted eigenvector reflects features of both spatial and frequency transformations. Experimental results show that the algorithm is effective and feasible for iris recognition.
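
    The GLCM half of the feature extraction can be sketched as follows, with a horizontal offset of one pixel, four gray levels, and a textbook-style test image (all assumptions for illustration, not the paper's iris data):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    # Normalized co-occurrence counts of gray-level pairs at offset (dy, dx)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h - dy):
        for c in range(w - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1.0
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)
# A typical Haralick texture feature derived from the matrix
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
```

    Statistics such as contrast, correlation, and energy computed from p would then be concatenated with DCT-domain features to form the combined iris feature sequence.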

  19. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.

  20. Large-aspect-ratio limit of neoclassical transport theory.

    PubMed

    Wong, S K; Chan, V S

    2003-06-01

    This paper presents a comprehensive description of neoclassical transport theory in the banana regime for large-aspect-ratio flux surfaces of arbitrary shapes. The method of matched-asymptotic expansions is used to obtain analytical solutions for plasma distribution functions and to compute transport coefficients. The method provides justification for retaining only the part of the Fokker-Planck operator that involves the second derivative with respect to the cosine of the pitch angle for the trapped and barely circulating particles. It leads to a simple equation for the freely circulating particles with boundary conditions that embody a discontinuity separating particles moving in opposite directions. Corrections to the transport coefficients are obtained by generalizing an existing boundary layer analysis. The system of moment and field equations is consistently taken in the cylinder limit, which facilitates the discussion of the treatment of dynamical constraints. It is shown that the nonlocal nature of Ohm's law in neoclassical theory renders the mathematical problem of plasma transport with changing flux surfaces nonstandard.

  1. Converting Differential Photometry Results to the Standard System using Transform Generator and Transform Applier (Abstract)

    NASA Astrophysics Data System (ADS)

    Ciocca, M.

    2016-12-01

    (Abstract only) Since the fall of 2014, the AAVSO has made available two very useful software tools: transform generator (tg) and transform applier (ta). tg, authored by Gordon Myers (gordonmyers@hotmail.com), is a program running under Python that allows users to obtain the transformation coefficients of their imaging train. ta, authored by George Silvis, allows users to apply the previously obtained transformation coefficients to their photometric observations. The data so processed then become directly comparable to those of other observers. I will show how to obtain transform coefficients using two standard fields (M67 and NGC 7790) and how consistent the results are, and, as an application, I will present transformed data for two AAVSO target stars, AE UMa and RR Cet.

  2. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    PubMed

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors.

  3. Spatio-temporal phase retrieval in speckle interferometry with Hilbert transform and two-dimensional phase unwrapping

    NASA Astrophysics Data System (ADS)

    Li, Xiangyu; Huang, Zhanhua; Zhu, Meng; He, Jin; Zhang, Hao

    2014-12-01

    The Hilbert transform (HT) is widely used in temporal speckle pattern interferometry, but errors from low modulation can propagate and corrupt the calculated phase. A spatio-temporal method for phase retrieval using the temporal HT and spatial phase unwrapping is presented. In the time domain, the wrapped phase difference between the initial and current states is determined directly using the HT. To avoid the influence of low modulation intensity, the phase information between the two states is ignored; as a result, the phase unwrapping is shifted from the time domain to the space domain. A phase unwrapping algorithm based on the discrete cosine transform is adopted, taking advantage of the information in adjacent pixels. An experiment is carried out with a Michelson-type interferometer to study the out-of-plane deformation field. High-quality whole-field phase distribution maps with different fringe densities are obtained. Under the experimental conditions, the maximum number of fringes resolvable in a 416×416 frame is 30, which corresponds to a 15λ deformation along the direction of loading.
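A minimal sketch of the temporal step, assuming a single pixel's bias-subtracted intensity signal and an FFT-based Hilbert transform (the carrier frequency and deformation phase are illustrative, not from the paper):

```python
import numpy as np

def analytic_signal(x):
    """One-sided-spectrum (Hilbert) analytic signal via the FFT (len(x) even)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1          # keep DC and Nyquist bins as-is
    h[1:n // 2] = 2               # double positive frequencies, drop negatives
    return np.fft.ifft(X * h)

n = 1024
t = np.arange(n)
carrier = 2 * np.pi * 128 * t / n                  # temporal carrier phase
deform = 1.0 * np.sin(2 * np.pi * 4 * t / n)       # phase change between states
intensity = np.cos(carrier + deform)               # bias-subtracted speckle signal

# Wrapped deformation phase from the analytic signal, carrier removed.
# A spatial unwrapper (e.g. the DCT-based least-squares method) would then
# remove any 2*pi jumps across the image.
wrapped = np.angle(analytic_signal(intensity) * np.exp(-1j * carrier))
```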

  4. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

    As an important measurement technique, white light interferometry enables fast, non-contact measurement and is now widely used in ultra-precision engineering. However, traditional algorithms for recovering surface topography have flaws and limitations. In this paper, we propose a new algorithm to solve these problems, combining the Fourier transform with an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with scanning position. The interference signal is first processed by the Fourier transform; the positive-frequency part is then selected and shifted back to the center of the amplitude-frequency curve. To restore the surface topography, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and extract the corresponding height information. The new method is compared to the traditional algorithms, and it is shown that the aforementioned drawbacks are effectively overcome. The relative error is less than 0.8%.
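The recovery chain (forward FFT, positive-lobe selection, inverse FFT, fit of the amplitude curve) can be sketched as follows. All signal parameters are illustrative, and a parabola fit to the log-envelope stands in for the paper's improved polynomial fitting:

```python
import numpy as np

n = 2048
z = np.linspace(-10.0, 10.0, n)       # scan position (illustrative units)
z0, sigma, f = 1.3, 2.0, 1.5          # peak position, coherence width, fringe freq
signal = np.exp(-((z - z0) / sigma) ** 2) * np.cos(2 * np.pi * f * z)

# Keep only the positive-frequency lobe, inverse-transform, take the magnitude:
S = np.fft.fft(signal)
S[n // 2:] = 0                        # crude one-sided selection
envelope = 2 * np.abs(np.fft.ifft(S))

peak = z[np.argmax(envelope)]         # coarse surface-height estimate

# Sub-sample refinement: the log of a Gaussian envelope is a parabola,
# so a quadratic fit around the peak recovers its vertex.
i = np.argmax(envelope)
c = np.polyfit(z[i - 2:i + 3], np.log(envelope[i - 2:i + 3]), 2)
z_peak = -c[1] / (2 * c[0])
```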

  5. Embedding multiple watermarks in the DFT domain using low- and high-frequency bands

    NASA Astrophysics Data System (ADS)

    Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.

    2005-03-01

    Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
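The conjugate-symmetry property the scheme relies on is easy to verify numerically; the image here is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                 # any real-valued image
F = np.fft.fft2(img)

# Conjugate symmetry of the DFT of a real image: F[u, v] == conj(F[-u, -v]),
# with the negative indices taken modulo the image size.
u, v = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
symmetric = np.allclose(F, np.conj(F[(-u) % 64, (-v) % 64]))

# Consequently the magnitude spectrum is centrally symmetric, so any watermark
# written at (u, v) must also be written at (-u, -v) to keep the image real.
```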

  6. Correlation, Cost Risk, and Geometry

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1992-01-01

    The geometric viewpoint identifies the choice of a correlation matrix for the simulation of cost risk with the pairwise choice of data vectors corresponding to the parameters used to obtain cost risk. The correlation coefficient is the cosine of the angle between the data vectors after translation to an origin at the mean and normalization for magnitude. Thus correlation is equivalent to expressing the data in terms of a non-orthogonal basis. Understanding the many resulting phenomena requires the tensor concept of raising the index, which transforms the measured and observed covariant components into contravariant components before vector addition can be applied. The geometric viewpoint also demonstrates that correlation and covariance are geometric properties, as opposed to purely statistical properties, of the variates. Thus, variates from different distributions may be correlated, as desired, after selection from independent distributions. By determining the principal components of the correlation matrix, variates with the desired mean, magnitude, and correlation can be generated through linear transforms involving the eigenvalues and eigenvectors of the correlation matrix. The conversion of the data to a non-orthogonal basis uses a compound linear transformation which distorts or stretches the data space. Hence, the correlated data do not have the same properties as the uncorrelated data used to generate them. This phenomenon is responsible for seemingly strange observations, such as the fact that the marginal distributions of the correlated data can be quite different from the distributions used to generate the data. The joint effect of statistical distributions and correlation remains a fertile area for further research.
In terms of application to cost estimating, the geometric approach demonstrates that the estimator must have data, and must understand those data, in order to choose the correlation matrix appropriate for a given estimate. There is a general feeling among employers and managers that the field of cost requires little technical or mathematical background. Contrary to that opinion, this paper demonstrates that a mathematical background equivalent to that needed for typical engineering and scientific disciplines at the master's or doctoral level is appropriate within the field of cost risk.
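The identification of the correlation coefficient with the cosine of the angle between mean-centered data vectors can be checked directly (the two samples here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(size=50)     # two correlated cost-driver samples

# Translate to an origin at the mean, then take the cosine of the angle:
xc, yc = x - x.mean(), y - y.mean()
cos_angle = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

r = np.corrcoef(x, y)[0, 1]           # Pearson correlation coefficient
# cos_angle and r agree to machine precision
```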

  7. Low-order auditory Zernike moment: a novel approach for robust music identification in the compressed domain

    NASA Astrophysics Data System (ADS)

    Li, Wei; Xiao, Chuan; Liu, Yaduo

    2013-12-01

    Audio identification via fingerprint has been an active research field for years. However, most previously reported methods work on the raw audio format, despite the fact that compressed-format audio, especially MP3 music, has grown into the dominant way to store music on personal computers and transmit it over the Internet. It would be attractive if a compressed unknown audio fragment could be recognized directly from the database without first decompressing it to wave format. So far, very few algorithms run directly in the compressed domain for music information retrieval, and most of them rely on modified discrete cosine transform coefficients or on derived cepstrum and energy features. As a first attempt, we propose in this paper utilizing a compressed-domain auditory Zernike moment, adapted from image processing techniques, as the key feature of a novel robust audio identification algorithm. Such a fingerprint exhibits strong robustness, due to its statistically stable nature, against various audio signal distortions such as recompression, noise contamination, echo adding, equalization, band-pass filtering, pitch shifting, and slight time scale modification. Experimental results show that in a music database of 21,185 MP3 songs, a 10-s music segment is able to identify its original near-duplicate recording with an average top-5 hit rate of 90% or above, even under severe audio signal distortions.

  8. Microlens array processor with programmable weight mask and direct optical input

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and the DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator serving as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is as compact and robust as a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real time, allowing adaptive optical signal processing.

  9. [Estimation of organic matter content of north fluvo-aquic soil based on the coupling model of wavelet transform and partial least squares].

    PubMed

    Wang, Yan-Cang; Yang, Gui-Jun; Zhu, Jin-Shan; Gu, Xiao-He; Xu, Peng; Liao, Qin-Hong

    2014-07-01

    To improve the estimation accuracy of soil organic matter content for north fluvo-aquic soil, wavelet transform technology is introduced. The soil samples were collected from the Tongzhou and Shunyi districts of Beijing, and the data source is soil hyperspectral data obtained under laboratory conditions. First, the discrete wavelet transform efficiently decomposes the hyperspectra into approximate coefficients and detail coefficients. Then, the correlation between the approximate coefficients, the detail coefficients, and organic matter content is analyzed, and the bands sensitive to organic matter are screened. Finally, models are established to estimate soil organic content using partial least squares regression (PLSR). Results show that the NIR bands contribute more than the visible bands to the organic matter estimation models; the approximate coefficients estimate organic matter content better than the detail coefficients; the estimation precision of the detail coefficients for soil organic matter content decreases as the spectral resolution becomes lower; and, compared with the three commonly used soil spectral reflectance transforms, the wavelet transform improves the estimation ability of soil spectra for organic content. The best models established from the approximate and detail coefficients achieve high accuracy: the coefficient of determination (R2) and root mean square error (RMSE) of the best approximate-coefficient model are 0.722 and 0.221, respectively, while those of the best detail-coefficient model are 0.670 and 0.255.
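The split into approximate and detail coefficients can be illustrated with a single-level Haar transform, the simplest DWT; the wavelet actually used in the paper is not specified here, and the test spectrum is synthetic:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT: approximation and detail parts."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximate coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

# Synthetic reflectance spectrum: a broad absorption trend plus fine noise
t = np.linspace(0.0, 1.0, 256)
spectrum = np.exp(-t) + 0.01 * np.random.default_rng(3).normal(size=256)
approx, detail = haar_step(spectrum)

# The approximation keeps the broad shape that carries the organic-matter
# signal; the detail coefficients hold mostly high-frequency noise.
```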

  10. Electro-mechanical sine/cosine generator

    NASA Technical Reports Server (NTRS)

    Flagge, B. (Inventor)

    1972-01-01

    An electromechanical device for generating both sine and cosine functions is described. A motor rotates a cylinder about an axis parallel to, and a slight distance from, the central axis of the cylinder. Two noncontacting displacement sensing devices are placed ninety degrees apart, at equal distances from the axis of rotation and short distances above the surface of the cylinder. Each of these sensing devices produces an electrical signal proportional to its distance from the cylinder. Consequently, as the cylinder is rotated, the outputs of the two sensing devices are the sine and cosine functions.
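To first order in the eccentricity, the gap seen by each sensor varies as the cosine of the rotation angle, so two sensors 90 degrees apart read out cosine and sine. A sketch under that small-eccentricity assumption, with illustrative dimensions (the eccentricity e and nominal gap g0 are not from the original design):

```python
import numpy as np

e, g0 = 0.2, 1.0                               # eccentricity and nominal gap
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

# Each sensor's gap varies as a cosine of the rotation angle; the second
# sensor sits 90 degrees around the cylinder from the first.
gap_a = g0 - e * np.cos(theta)
gap_b = g0 - e * np.cos(theta - np.pi / 2.0)   # cos(theta - 90 deg) = sin(theta)

cosine = (g0 - gap_a) / e                      # recovered cos(theta)
sine = (g0 - gap_b) / e                        # recovered sin(theta)
```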

  11. Cosine-Gauss plasmon beam: a localized long-range nondiffracting surface wave.

    PubMed

    Lin, Jiao; Dellinger, Jean; Genevet, Patrice; Cluzel, Benoit; de Fornel, Frederique; Capasso, Federico

    2012-08-31

    A new surface wave is introduced, the cosine-Gauss beam, which does not diffract while it propagates in a straight line and tightly bound to the metallic surface for distances up to 80 μm. The generation of this highly localized wave is shown to be straightforward and highly controllable, with varying degrees of transverse confinement and directionality, by fabricating a plasmon launcher consisting of intersecting metallic gratings. Cosine-Gauss beams have potential for applications in plasmonics, notably for efficient coupling to nanophotonic devices, opening up new design possibilities for next-generation optical interconnects.

  12. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance, for electrocardiogram (ECG) compression, of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. A binary lookup table is then made to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the lookup table is encoded by Huffman coding. The results show that the LWT gives the best results of the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by ? (where ? is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded, after which the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) with LWT and normalisation is improved compared to that without normalisation.
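The bisection step can be sketched for any orthonormal transform, where Parseval's relation lets the PRD be evaluated on the coefficients themselves; the Laplacian coefficient model and the 5% target below are illustrative, not taken from the article:

```python
import numpy as np

def prd(x, xr):
    """Percentage root mean square difference between x and a reconstruction."""
    return 100.0 * np.linalg.norm(x - xr) / np.linalg.norm(x)

def threshold_for_prd(coeffs, target, iters=60):
    """Bisect the hard threshold until the reconstruction meets the target PRD.
    For an orthonormal transform, the PRD measured on the coefficients equals
    the PRD of the reconstructed signal (Parseval)."""
    lo, hi = 0.0, np.abs(coeffs).max()
    thr = hi / 2
    for _ in range(iters):
        thr = (lo + hi) / 2
        kept = np.where(np.abs(coeffs) >= thr, coeffs, 0.0)
        if prd(coeffs, kept) < target:
            lo = thr            # error budget left: discard more coefficients
        else:
            hi = thr
    return thr

rng = np.random.default_rng(7)
c = rng.laplace(scale=1.0, size=4096)     # stand-in transform coefficients
thr = threshold_for_prd(c, target=5.0)
kept = np.where(np.abs(c) >= thr, c, 0.0)   # PRD of `kept` lands near 5.0
```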

  13. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

    wavelet transforms. Whereas the term “evolved” pertains only to the altered wavelet coefficients used during the inverse transform process. 2...words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, x(t) = Σ_{k,n} d_{k,n} ψ_{k,n}(t) ...reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus

  14. Long-life electromechanical sine-cosine generator

    NASA Technical Reports Server (NTRS)

    Flagge, B.

    1971-01-01

    Sine-cosine generator with no sliding parts is capable of withstanding a 20 Hz oscillation for more than 14 hours. Tests show that generator is electrically equal to potentiometer and that it has excellent dynamic characteristics. Generator shows promise of higher-speed applications than was previously possible.

  15. Detecting Disease in Radiographs with Intuitive Confidence

    PubMed Central

    2015-01-01

    This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433

  16. Sampling functions for geophysics

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.; Lunquist, C. A.

    1972-01-01

    A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.

  17. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  18. Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.

    PubMed

    Goodin, Christopher

    2013-05-01

    The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.

  19. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  20. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2011-05-01

    UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left panel) demonstrating that the panel of 18... UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as heat maps demonstrating the efficacy of using all 13

  1. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output of the inverse ST is then modulated with a random phase mask and further spiral phase transformed to obtain the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.
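The Arnold-transform scrambling stage is well defined and easy to reproduce. The map and its inverse on an N×N array are shown below; the random binary array standing in for the QR code is illustrative:

```python
import numpy as np

def arnold(img, k=1):
    """k iterations of the Arnold cat map, scrambling a square image."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(k):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def arnold_inverse(img, k=1):
    """Inverse map (2x - y, -x + y) mod n undoes the scrambling."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(k):
        out = out[(2 * x - y) % n, (y - x) % n]
    return out

rng = np.random.default_rng(5)
qr = rng.integers(0, 2, size=(64, 64))       # stand-in for a binary QR code
scrambled = arnold(qr, k=10)
assert np.array_equal(arnold_inverse(scrambled, k=10), qr)
```

The iteration count k acts as an additional key, since the receiver must apply the inverse map the same number of times.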

  2. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    NASA Astrophysics Data System (ADS)

    Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela

    2016-01-01

    Image steganography is one of the ever-growing computational approaches and has found application in many fields. Frequency domain techniques are highly preferred for image steganography, but they have significant drawbacks: in transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image, and these coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) is explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.

  3. Principle and analysis of a rotational motion Fourier transform infrared spectrometer

    NASA Astrophysics Data System (ADS)

    Cai, Qisheng; Min, Huang; Han, Wei; Liu, Yixuan; Qian, Lulu; Lu, Xiangning

    2017-09-01

    Fourier transform infrared spectroscopy is an important technique for studying molecular energy levels, analyzing material composition, and detecting environmental pollutants. A novel rotational-motion Fourier transform infrared spectrometer with high stability and ultra-rapid scanning is proposed in this paper; its basic principle, optical path difference (OPD) calculation, and tolerance analysis are elaborated. The OPD of this spectrometer is generated by the continuous rotational motion of a pair of parallel mirrors instead of the translational motion of a traditional Michelson interferometer. Because of the rotational motion, it avoids the tilt problems that occur in translational-motion Michelson interferometers. There is a cosine relationship between the OPD and the rotation angle of the parallel mirrors. An optical model is set up in the non-sequential mode of the ZEMAX software, and the interferogram of a monochromatic source is simulated by ray tracing; the simulated interferogram is consistent with the theoretical calculation. As the rotating mirrors are the only moving elements in this spectrometer, the parallelism of the rotating mirrors and the vibration during the scan are analyzed; vibration of the parallel mirrors is the main error source during rotation. This high-stability, ultra-rapid-scanning Fourier transform infrared spectrometer is a suitable candidate for airborne and spaceborne remote sensing.
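The stated cosine relation between OPD and rotation angle implies non-uniform OPD sampling, so the interferogram must be resampled onto a uniform OPD grid before the Fourier transform. A sketch with assumed values for the peak OPD and source wavenumber (neither is from the paper):

```python
import numpy as np

d_max = 1.0                      # cm, assumed peak optical path difference
sigma0 = 100.0                   # cm^-1, assumed monochromatic wavenumber

theta = np.linspace(0.0, np.pi, 4096)        # half a revolution
delta = d_max * np.cos(theta)                # non-uniform OPD samples
igram = np.cos(2 * np.pi * sigma0 * delta)   # measured interferogram

# Resample onto a uniform OPD grid (np.interp needs ascending abscissae)
n = 2048
delta_u = np.linspace(-d_max, d_max, n, endpoint=False)
igram_u = np.interp(delta_u, delta[::-1], igram[::-1])

spectrum = np.abs(np.fft.rfft(igram_u))
freqs = np.fft.rfftfreq(n, d=2 * d_max / n)  # cycles per cm = wavenumber
sigma_est = freqs[np.argmax(spectrum[1:]) + 1]   # recovered source wavenumber
```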

  4. The Law of Cosines for an "n"-Dimensional Simplex

    ERIC Educational Resources Information Center

    Ding, Yiren

    2008-01-01

    Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
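With A_i denoting the (n-1)-dimensional volume of facet i of the simplex and θ_ij the dihedral angle between facets i and j, the generalization being described can be written as follows (notation assumed here, not taken from the paper):

```latex
A_0^2 \;=\; \sum_{i=1}^{n} A_i^2 \;-\; 2 \sum_{1 \le i < j \le n} A_i A_j \cos\theta_{ij}
```

When every θ_ij is a right angle the cosine terms vanish, and the identity reduces to the n-dimensional Pythagorean theorem of Eifler and Rhee.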

  5. An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision

    ERIC Educational Resources Information Center

    Johansson, B. Tomas

    2018-01-01

    Evaluation of the cosine function is done via a simple Cordic-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented with a discussion around errors and implementation.
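A floating-point sketch of such a CORDIC-like scheme (rotation mode; the fixed iteration count and double precision here stand in for the paper's arbitrary-precision arithmetic):

```python
import math

def cordic_cos(theta, n=40):
    """Rotation-mode CORDIC: drive the residual angle to zero with shift-add
    micro-rotations.  Valid for |theta| <= ~1.74 rad (the convergence range)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]  # micro-rotation angles
    gain = 1.0
    for i in range(n):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))       # accumulated CORDIC gain
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0                    # rotate toward z = 0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * angles[i])
    return x / gain        # x converges to gain * cos(theta)

print(cordic_cos(1.0))     # approx. 0.5403, matching math.cos(1.0)
```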

  6. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  7. An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform

    NASA Astrophysics Data System (ADS)

    Khurana, Mehak; Singh, Hukum

    2017-09-01

    To enhance the security of the system and protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT) approach, which adds nonlinearity by including a cube operation in the encryption path and a cube-root operation in the decryption path. In this cryptosystem, random phase masks are used as encryption keys, the phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and the cube-root operation is required to decrypt the image. The cube and cube-root operations make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified with respect to various parameters by simulation in MATLAB 7.9.0 (R2008a). The experimental results highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.

  8. Low complexity 1D IDCT for 16-bit parallel architectures

    NASA Astrophysics Data System (ADS)

    Bivolarski, Lazar

    2007-09-01

This paper shows that, using the Loeffler, Ligtenberg, and Moschytz factorization of the 8-point one-dimensional (1-D) IDCT [2] as a fast approximation of the Discrete Cosine Transform (DCT) and using only 16-bit numbers, it is possible to create an IEEE 1180-1990 compliant, multiplierless algorithm with low computational complexity. Owing to its structure, the algorithm is efficiently implemented on parallel high-performance architectures, and its low complexity makes it suitable for a wide range of other architectures. An additional constraint on this work was the requirement of compliance with the existing MPEG standards. Low hardware implementation complexity and low resource usage were also part of the design criteria for this algorithm. The implementation is likewise compliant with the precision requirements described in the MPEG IDCT precision specification ISO/IEC 23002-1. Complexity analysis is performed as an extension of the simple count of shifts and adds for a multiplierless algorithm: additional operations are included in the complexity measure to better describe the actual implementation complexity of the transform.
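
For reference, the exact floating-point 8-point DCT/IDCT pair that IEEE 1180-style compliance tests compare fixed-point designs against can be written directly from the definition. This is the ideal transform, not the multiplierless factorization itself:

```python
import numpy as np

def dct8(x):
    # Orthonormal 8-point DCT-II from the definition.
    n = 8
    i = np.arange(n)
    X = np.empty(n)
    for k in range(n):
        ck = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        X[k] = ck * np.sum(x * np.cos(np.pi * (2 * i + 1) * k / (2 * n)))
    return X

def idct8(X):
    # Orthonormal 8-point IDCT (DCT-III): the floating-point reference
    # against which fixed-point, multiplierless designs are measured.
    n = 8
    k = np.arange(n)
    c = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    x = np.empty(n)
    for i in range(n):
        x[i] = np.sum(c * X * np.cos(np.pi * (2 * i + 1) * k / (2 * n)))
    return x

x = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0])  # sample pixel row
X = dct8(x)
x_rec = idct8(X)        # round-trips to x up to floating-point error
```

A 16-bit fast factorization is judged compliant when its outputs stay within the specified error bounds of this reference over the prescribed test inputs.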

  9. COMPARISON OF MICROBIAL TRANSFORMATION RATE COEFFICIENTS OF XENOBIOTIC CHEMICALS BETWEEN FIELD-COLLECTED AND LABORATORY MICROCOSM MICROBIOTA

    EPA Science Inventory

    Two second-order transformation rate coefficients--kb, based on total plate counts, and kA, based on periphyton-colonized surface areas--were used to compare xenobiotic chemical transformation by laboratory-developed (microcosm) and by field-collected microbiota. Similarity of tr...

  10. The mathematical modeling of the experiment on the determination of correlation coefficients in neutron beta-decay

    NASA Astrophysics Data System (ADS)

    Serebrov, A. P.; Zherebtsov, O. M.; Klyushnikov, G. N.

    2018-05-01

An experiment on the measurement of the ratio of the axial coupling constant to the vector one is under development. The main idea of the experiment is to measure the values of A and B in the same setup; an additional measurement of the polarization is not necessary. The accuracy achieved to date in measuring λ is 2 × 10^-3; in the experiment, an accuracy of the order of 10^-4 is expected. Some particular problems of mathematical modeling concerning the experiment on the measurement of the ratio of the axial coupling constant to the vector one are considered. The force lines for the given tabular field of a magnetic trap are studied, and the dependences of the longitudinal and transverse field non-uniformity coefficients on the coordinates are examined. A special computational algorithm, based on the motion of a charged particle along a local magnetic force line, is used to calculate the motion times of electrons and protons as well as to evaluate the total number of electrons colliding with the detector surface. The average values of the cosines of the angles entering the correlation coefficients a, A, and B have been estimated.

  11. Cocaine profiling for strategic intelligence, a cross-border project between France and Switzerland: part II. Validation of the statistical methodology for the profiling of cocaine.

    PubMed

    Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P

    2008-05-20

Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated over the whole data set) followed by the Cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralising the analyses in a single laboratory is therefore no longer a required condition for comparing samples seized in different countries, which allows collaboration while retaining jurisdictional control over data.
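
The winning combination reported above, the N+S pre-treatment followed by a cosine score, is easy to reproduce in outline. The data below are synthetic stand-ins for peak-area profiles, not real seizure data:

```python
import numpy as np

def ns_pretreatment(areas):
    # "N+S": divide each peak's area by that peak's standard deviation
    # computed over the whole data set (rows = samples, columns = peaks).
    return areas / areas.std(axis=0, ddof=1)

def cosine_score(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(2)
profile = rng.random(12) * 100.0                 # a "true" peak-area profile
linked = profile * rng.normal(1.0, 0.02, 12)     # same batch, small analytical noise
unlinked = rng.random(12) * 100.0                # an unrelated seizure

z = ns_pretreatment(np.vstack([profile, linked, unlinked]))
s_linked = cosine_score(z[0], z[1])      # close to 1 for linked samples
s_unlinked = cosine_score(z[0], z[2])    # lower for non-linked samples
```

Because the cosine score ignores overall scale, a fixed decision threshold can separate linked from non-linked pairs even across instruments.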

  12. Wakes behind surface-mounted obstacles: Impact of aspect ratio, incident angle, and surface roughness

    NASA Astrophysics Data System (ADS)

    Tobin, Nicolas; Chamorro, Leonardo P.

    2018-03-01

    The so-called wake-moment coefficient C˜h and lateral wake deflection of three-dimensional windbreaks are explored in the near and far wake. Wind-tunnel experiments were performed to study the functional dependence of C˜h with windbreak aspect ratio, incidence angle, and the ratio of the windbreak height and surface roughness (h /z0 ). Supported with the data, we also propose basic models for the wake deflection of the windbreak in the near and far fields. The near-wake model is based on momentum conservation considering the drag on the windbreak, whereas the far-wake counterpart is based on existing models for wakes behind surface-mounted obstacles. Results show that C˜h does not change with windbreak aspect ratios of 10 or greater; however, it may be lower for an aspect ratio of 5. C˜h is found to change roughly with the cosine of the incidence angle, and to depend strongly on h /z0 . The data broadly support the proposed wake-deflection models, though better predictions could be made with improved knowledge of the windbreak drag coefficient.

  13. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

Planning of the irradiance distribution (PID) is one of the foremost factors for on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID is proposed according to the grading of PWS with a three-dimensional digital illumination instrument. First, the point clouds of the lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Second, the parameters of each triangular patch needed for optimization, such as area, normal vector, and orthocenter, were calculated, and the weighting coefficients were determined from the erythema indexes and areas of the patches. The initial point of the optimization was then calculated from the normal vectors and orthocenters to optimize the light direction. Finally, the irradiation is optimized according to the cosines of the irradiance angles and the weighting coefficients. Comparing the irradiance distributions before and after optimization shows that the proposed weighted optimization method matches the irradiance distribution better to the characteristics of the lesions and has the potential to improve therapeutic efficacy.

  14. Observations of the directional distribution of the wind energy input function over swell waves

    NASA Astrophysics Data System (ADS)

    Shabani, Behnam; Babanin, Alex V.; Baldock, Tom E.

    2016-02-01

    Field measurements of wind stress over shallow water swell traveling in different directions relative to the wind are presented. The directional distribution of the measured stresses is used to confirm the previously proposed but unverified directional distribution of the wind energy input function. The observed wind energy input function is found to follow a much narrower distribution (β∝cos⁡3.6θ) than the Plant (1982) cosine distribution. The observation of negative stress angles at large wind-wave angles, however, indicates that the onset of negative wind shearing occurs at about θ≈ 50°, and supports the use of the Snyder et al. (1981) directional distribution. Taking into account the reverse momentum transfer from swell to the wind, Snyder's proposed parameterization is found to perform exceptionally well in explaining the observed narrow directional distribution of the wind energy input function, and predicting the wind drag coefficients. The empirical coefficient (ɛ) in Snyder's parameterization is hypothesised to be a function of the wave shape parameter, with ɛ value increasing as the wave shape changes between sinusoidal, sawtooth, and sharp-crested shoaling waves.

  15. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

This paper presents an ECG compression algorithm based on the wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are treated as important and kept. Otherwise, the energy loss in the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined from this energy loss. The important coefficients, which include the coefficients of the ROI and the coefficients outside the ROI that are larger than the threshold, are put into a linear quantizer. The map recording the positions of the important coefficients in the original wavelet coefficient vector is compressed with a run-length encoder, and Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality, and compression ratio are obtained.
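
The ROI-preserving thresholding step can be illustrated with a one-level Haar transform in place of the paper's multi-layer wavelet. The signal, ROI location, and threshold below are all invented for the sketch:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar wavelet transform.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def prd(x, y):
    # Percentage root-mean-square difference (baseline assumed removed).
    return 100.0 * np.linalg.norm(x - y) / np.linalg.norm(x)

t = np.linspace(0.0, 1.0, 512, endpoint=False)
ecg = np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
ecg += 5.0 * np.exp(-((t - 0.5) ** 2) / 1e-4)    # sharp QRS-like spike: the ROI

a, d = haar_dwt(ecg)
idx = np.arange(d.size)
roi = (idx > 110) & (idx < 146)   # detail coefficients under the spike: always kept

kept = d.copy()
threshold = 0.05                  # would be derived from the target PRDBE in the paper
outside = ~roi
kept[outside] = np.where(np.abs(d[outside]) > threshold, d[outside], 0.0)

recon = haar_idwt(a, kept)
err = prd(ecg, recon)             # small: detail inside the ROI is untouched
```

Zeroing most coefficients outside the ROI is what makes the position map highly run-length compressible.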

  16. Inverse modeling of the overpressure distribution in an extension fracture with an arbitrary aperture variation: application to non-feeder dikes in the Miyake-jima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu; Geshi, Nobuo; Gudmundsson, Agust

    2010-05-01

We derived a solution for the overpressure distribution acting on the walls (surfaces) of an extension fracture (a hydrofracture) with an arbitrary opening-displacement (or aperture) variation. In the proposed model, we assume that the overpressure distribution can be described by a Fourier cosine series. We first present a solution for the forward model, giving the fracture aperture when the fracture is opened by an irregular overpressure variation expressed through the Fourier cosine series. Next, by rearranging the solution for the forward model, we obtain a matrix equation that can be used to estimate the Fourier coefficients, and hence the overpressure distribution, from the fracture aperture variation. As simple examples of this inverse analysis, we estimate the overpressure conditions from crack apertures given analytically for two cases, namely, 1) the overpressure in the crack is constant, and 2) the overpressure in the crack varies linearly from its center. The estimated overpressure distributions were found to be correct, although a small amount of 'noise' was present. Since the method presented here gives the overpressure distribution as a Fourier series from aperture data measured at a finite number of points, the overpressure conditions that formed the fracture can be determined for each wavelength. The Fourier coefficient for n = 0 is important because it gives the average overpressure acting inside the crack, while the coefficient for n = 1 expresses the longest-wavelength component of the irregular overpressure. Because these two coefficients together determine the longest-wavelength behaviour of the overpressure, they may be important indicators of the overpressure conditions that decide the basic form of the crack. 
We applied the inverse solution to the thickness data of 19 non-feeder dikes exposed in the caldera wall of the Miyake-jima Volcano, Japan. In the analysis, the host-rock Young's modulus and Poisson's ratio were taken as 1 GPa and 0.25, respectively. The results show that most of the estimated overpressures increase toward the tips of the dikes and reach about 5 to 15 MPa (average 8 MPa). In addition, the results indicate host-rock fracture toughnesses between 60 MPa m^1/2 and 170 MPa m^1/2 (average 100 MPa m^1/2). For comparison, we also estimated the magma overpressure by the least-squares method, assuming constant overpressure. This method gives overpressures between 1.5 MPa and 4 MPa (average 2.8 MPa). Similarly, the fracture toughnesses estimated in this way range between 30 MPa m^1/2 and 120 MPa m^1/2 (average 55 MPa m^1/2). These methods and assumptions thus yield somewhat different results, as expected, but they indicate the likely ranges of the magma overpressures and host-rock fracture toughnesses, both of which are very reasonable and agree with earlier results obtained by different methods.
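
The core of the inverse step, recovering Fourier cosine coefficients from sampled data by a matrix (least-squares) equation, can be sketched as follows. This omits the elastic mapping from overpressure to aperture and just shows the coefficient estimation under assumed values:

```python
import numpy as np

L = 1.0                                         # fracture length scale (illustrative)
x = np.linspace(0.0, L, 200)
true_coeffs = np.array([2.0, 0.7, -0.3, 0.1])   # assumed a_0..a_3, e.g. in MPa

def cosine_design(x, n_terms, L):
    # Design matrix whose columns are cos(n*pi*x/L), n = 0..n_terms-1.
    n = np.arange(n_terms)
    return np.cos(np.pi * n[None, :] * x[:, None] / L)

# Forward model: overpressure profile from its Fourier cosine series.
G = cosine_design(x, true_coeffs.size, L)
pressure = G @ true_coeffs

# Inverse step: recover the coefficients from the sampled profile by least
# squares (noise-free here, so the recovery is essentially exact).
est, *_ = np.linalg.lstsq(G, pressure, rcond=None)
```

The n = 0 column is constant, so its coefficient is exactly the average value of the profile, matching the interpretation in the abstract.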

  17. In vivo repeatability of the pulse wave inverse problem in human carotid arteries.

    PubMed

    McGarry, Matthew; Nauleau, Pierre; Apostolakis, Iason; Konofagou, Elisa

    2017-11-07

    Accurate arterial stiffness measurement would improve diagnosis and monitoring for many diseases. Atherosclerotic plaques and aneurysms are expected to involve focal changes in vessel wall properties; therefore, a method to image the stiffness variation would be a valuable clinical tool. The pulse wave inverse problem (PWIP) fits unknown parameters from a computational model of arterial pulse wave propagation to ultrasound-based measurements of vessel wall displacements by minimizing the difference between the model and measured displacements. The PWIP has been validated in phantoms, and this study presents the first in vivo demonstration. The common carotid arteries of five healthy volunteers were imaged five times in a single session with repositioning of the probe and subject between each scan. The 1D finite difference computational model used in the PWIP spanned from the start of the transducer to the carotid bifurcation, where a resistance outlet boundary condition was applied to approximately model the downstream reflection of the pulse wave. Unknown parameters that were estimated by the PWIP included a 10-segment linear piecewise compliance distribution and 16 discrete cosine transformation coefficients for each of the inlet boundary conditions. Input data was selected to include pulse waves resulting from the primary pulse and dicrotic notch. The recovered compliance maps indicate that the compliance increases close to the bifurcation, and the variability of the average pulse wave velocity estimated through the PWIP is on the order of 11%, which is similar to that of the conventional processing technique which tracks the wavefront arrival time (13%). Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes which are shapes made by connecting four equal-sized squares. A fusion method for polarization images based on tetrolet transform is proposed. Firstly, the magnitude of polarization image and angle of polarization image can be decomposed into low-frequency coefficients and high-frequency coefficients with multi-scales and multi-directions using tetrolet transform. For the low-frequency coefficients, the average fusion method is used. According to edge distribution differences in high frequency sub-band images, for the directional high-frequency coefficients are used to select the better coefficients by region spectrum entropy algorithm for fusion. At last the fused image can be obtained by utilizing inverse transform for fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and the fused image has better subjective visual effect

  19. Partial-fraction expansion and inverse Laplace transform of a rational function with real coefficients

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.; Mott, H.

    1974-01-01

    This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor's series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
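
For the common special case of simple (non-repeated) real poles, the expansion coefficients reduce to residues N(p)/D'(p), which is quick to check numerically:

```python
import numpy as np

# F(s) = 1 / ((s + 1)(s + 2)), a ratio of polynomials with real coefficients.
num = np.array([1.0])
den = np.polymul([1.0, 1.0], [1.0, 2.0])   # s^2 + 3s + 2
poles = np.array([-1.0, -2.0])

# For a simple pole p, the partial-fraction coefficient is N(p) / D'(p).
residues = np.polyval(num, poles) / np.polyval(np.polyder(den), poles)
# F(s) = 1/(s+1) - 1/(s+2), so the inverse Laplace transform is exp(-t) - exp(-2t).
```

The Taylor/Laurent-series technique of the paper generalizes this to repeated poles, where the simple residue formula no longer suffices.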

  20. Transparency of the ab Planes of Bi2Sr2CaCu2O8+δ to Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Kossler, W. J.; Dai, Y.; Petzinger, K. G.; Greer, A. J.; Williams, D. Ll.; Koster, E.; Harshman, D. R.; Mitzi, D. B.

    1998-01-01

    A sample composed of many Bi2Sr2CaCu2O8+δ single crystals was cooled to 2 K in a magnetic field of 100 G at 45° from the c axis. Muon-spin-rotation measurements were made for which the polarization was initially approximately in the ab plane. The time dependent polarization components along this initial direction and along the c axis were obtained. Cosine transforms of these and subsequent measurements were made. Upon removing the applied field, still at 2 K, only the c axis component of the field remained in the sample, thus providing microscopic evidence for extreme 2D behavior for the vortices even at this temperature.

  1. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

Image compression is essential to handle a large volume of digital images including CT, MR, CR, and digitized films in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.

  2. Blood perfusion construction for infrared face recognition based on bio-heat transfer.

    PubMed

    Xie, Zhihua; Liu, Guodong

    2014-01-01

To improve the performance of infrared face recognition for time-lapse data, a new construction of blood perfusion is proposed based on bio-heat transfer. First, by quantifying the blood perfusion based on the Pennes equation, the thermal information is converted into the blood perfusion rate, a stable biological feature of the face image. Then, the separability discriminant criterion in the Discrete Cosine Transform (DCT) domain is applied to extract the discriminative features of the blood perfusion information. Experimental results demonstrate that the blood perfusion features are more concentrated and more discriminative for recognition than the raw thermal information. Infrared face recognition based on the proposed blood perfusion construction is robust and achieves better recognition performance than other state-of-the-art approaches.

  3. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. 
Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
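
The advocated switch from Euclidean to cosine distance can be motivated with a toy example: the cosine metric ignores magnitude and compares direction only, which suits the directional scattering of supervectors noted above. The vectors here are random stand-ins, not GMM supervectors from real speech:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity: depends only on the angle between u and v.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(3)
sv = rng.normal(size=64)       # stand-in for an utterance's mean supervector
scaled = 2.5 * sv              # same direction, different magnitude
other = rng.normal(size=64)    # an unrelated utterance

d_cos_same = cosine_distance(sv, scaled)   # ~0: scaling does not change direction
d_euc_same = np.linalg.norm(sv - scaled)   # large: Euclidean is scale-sensitive
d_cos_other = cosine_distance(sv, other)   # substantially larger than d_cos_same
```

Two utterances whose supervectors differ mainly in magnitude (e.g. through duration or gain effects) are near-identical under the cosine metric but far apart under the Euclidean one.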

  4. Some Cosine Relations and the Regular Heptagon

    ERIC Educational Resources Information Center

    Osler, Thomas J.; Heng, Phongthong

    2007-01-01

    The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)

  5. Similarity Measures in Scientometric Research: The Jaccard Index versus Salton's Cosine Formula.

    ERIC Educational Resources Information Center

    Hamers, Lieve; And Others

    1989-01-01

    Describes two similarity measures used in citation and co-citation analysis--the Jaccard index and Salton's cosine formula--and investigates the relationship between the two measures. It is shown that Salton's formula yields a numerical value that is twice Jaccard's index in most cases, and an explanation is offered. (13 references) (CLB)
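
The roughly factor-of-two relationship between the two measures is easy to verify for equally sized sets with a small overlap:

```python
def jaccard(A, B):
    # |A intersect B| / |A union B|
    return len(A & B) / len(A | B)

def salton_cosine(A, B):
    # |A intersect B| / sqrt(|A| * |B|)
    return len(A & B) / (len(A) * len(B)) ** 0.5

# Two equally sized citation sets (100 items each) with an overlap of 10.
A = set(range(0, 100))
B = set(range(90, 190))

j = jaccard(A, B)          # 10 / 190
c = salton_cosine(A, B)    # 10 / 100
ratio = c / j              # 1.9, close to the factor of two noted in the article
```

For sets of equal size a with overlap c, the ratio is exactly 2 - c/a, which approaches 2 as the overlap becomes small relative to the set size.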

  6. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly hampers the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, because valid signals and random noise have different distributions, shearlet coefficients can be shrunk by thresholding, so the choice of threshold is vital for suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signal. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each directional subband. In each subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the shearlet coefficients. Experimental denoising results on synthetic records and field data illustrate that the proposed method performs better at suppressing random noise and preserving valid signal than the conventional shearlet denoising method.

  7. Quantum transition probabilities during a perturbing pulse: Differences between the nonadiabatic results and Fermi's golden rule forms

    NASA Astrophysics Data System (ADS)

    Mandal, Anirban; Hunt, Katharine L. C.

    2018-05-01

    For a perturbed quantum system initially in the ground state, the coefficient ck(t) of excited state k in the time-dependent wave function separates into adiabatic and nonadiabatic terms. The adiabatic term ak(t) accounts for the adjustment of the original ground state to form the new ground state of the instantaneous Hamiltonian H(t), by incorporating excited states of the unperturbed Hamiltonian H0 without transitions; ak(t) follows the adiabatic theorem of Born and Fock. The nonadiabatic term bk(t) describes excitation into another quantum state k; bk(t) is obtained as an integral containing the time derivative of the perturbation. The true transition probability is given by |bk(t)|2, as first stated by Landau and Lifshitz. In this work, we contrast |bk(t)|2 and |ck(t)|2. The latter is the norm-square of the entire excited-state coefficient which is used for the transition probability within Fermi's golden rule. Calculations are performed for a perturbing pulse consisting of a cosine or sine wave in a Gaussian envelope. When the transition frequency ωk0 is on resonance with the frequency ω of the cosine wave, |bk(t)|2 and |ck(t)|2 rise almost monotonically to the same final value; the two are intertwined, but they are out of phase with each other. Off resonance (when ωk0 ≠ ω), |bk(t)|2 and |ck(t)|2 differ significantly during the pulse. They oscillate out of phase and reach different maxima but then fall off to equal final values after the pulse has ended, when ak(t) ≡ 0. If ωk0 < ω, |bk(t)|2 generally exceeds |ck(t)|2, while the opposite is true when ωk0 > ω. While the transition probability is rising, the midpoints between successive maxima and minima fit Gaussian functions of the form a exp[-b(t - d)2]. To our knowledge, this is the first analysis of nonadiabatic transition probabilities during a perturbing pulse.

  8. Discovering Trigonometric Relationships Implied by the Law of Sines and the Law of Cosines

    ERIC Educational Resources Information Center

    Skurnick, Ronald; Javadi, Mohammad

    2006-01-01

    The Law of Sines and The Law of Cosines are of paramount importance in the field of trigonometry because these two theorems establish relationships satisfied by the three sides and the three angles of any triangle. In this article, the authors use these two laws to discover a host of other trigonometric relationships that exist within any…

  9. Tachometer

    NASA Technical Reports Server (NTRS)

    Nola, F. J. (Inventor)

    1977-01-01

    A tachometer in which sine and cosine signals responsive to the angular position of a shaft as it rotates are each multiplied by like, sine or cosine, functions of a carrier signal, the products summed, and the resulting frequency signal converted to fixed height, fixed width pulses of a like frequency. These pulses are then integrated, and the resulting dc output is an indication of shaft speed.
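
The mixing step relies on the identity cos(a)cos(b) + sin(a)sin(b) = cos(b - a), which shifts the carrier frequency by the shaft frequency; the resulting tone's frequency therefore encodes shaft speed. A numerical sketch (all parameters invented):

```python
import numpy as np

fs = 1000.0                          # sample rate (Hz), chosen for the sketch
t = np.arange(0.0, 2.0, 1.0 / fs)
f_shaft = 7.0                        # shaft rotation frequency to be measured
f_carrier = 100.0                    # carrier frequency

theta = 2 * np.pi * f_shaft * t      # shaft angle as it rotates
carrier = 2 * np.pi * f_carrier * t

# Sine and cosine position signals, each multiplied by the like function of
# the carrier and summed: cos(a)cos(b) + sin(a)sin(b) = cos(b - a).
mixed = np.cos(theta) * np.cos(carrier) + np.sin(theta) * np.sin(carrier)

# The sum is a single tone at f_carrier - f_shaft.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(mixed.size, 1.0 / fs)
peak = freqs[np.argmax(spectrum)]    # 100 - 7 = 93 Hz
```

In the patented device, this frequency signal is converted to fixed-height, fixed-width pulses and integrated, so the dc output tracks the frequency offset and hence the shaft speed.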

  10. An Ellipse Morphs to a Cosine Graph!

    ERIC Educational Resources Information Center

    King, L .R.

    2013-01-01

    We produce a continuum of curves all of the same length, beginning with an ellipse and ending with a cosine graph. The curves in the continuum are made by cutting and unrolling circular cones whose section is the ellipse; the initial cone is degenerate (it is the plane of the ellipse); the final cone is a circular cylinder. The curves of the…

  11. Enabling Technologies for Medium Additive Manufacturing (MAAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, Bradley S.; Love, Lonnie J.; Chesser, Phillip C.

    ORNL has worked with Cosine Additive, Inc. on the design of MAAM extrusion components. The objective is to improve the print speed and part quality. A pellet extruder has been procured and integrated into the MAAM printer. Print speed has been greatly enhanced. In addition, ORNL and Cosine Additive have worked on alternative designs for a pellet drying and feed system.

  12. Dark and bright solitons for the two-dimensional complex modified Korteweg-de Vries and Maxwell-Bloch system with time-dependent coefficient

    NASA Astrophysics Data System (ADS)

    Shaikhova, G.; Ozat, N.; Yesmakhanova, K.; Bekova, G.

    2018-02-01

In this work, we present a Lax pair for the two-dimensional complex modified Korteweg-de Vries and Maxwell-Bloch (cmKdV-MB) system with a time-dependent coefficient. Dark and bright soliton solutions for the variable-coefficient cmKdV-MB system are obtained via the Darboux transformation. Moreover, the determinant representation of the one-fold and two-fold Darboux transformations for the cmKdV-MB system with a time-dependent coefficient is presented.

  13. Cubic Equations and the Ideal Trisection of the Arbitrary Angle

    ERIC Educational Resources Information Center

    Farnsworth, Marion B.

    2006-01-01

    In the year 1837 mathematical proof was set forth authoritatively stating that it is impossible to trisect an arbitrary angle with a compass and an unmarked straightedge in the classical sense. The famous proof depends on an incompatible cubic equation having the cosine of an angle of 60 and the cube of the cosine of one-third of an angle of 60 as…

  14. YORP torque as the function of shape harmonics

    NASA Astrophysics Data System (ADS)

    Breiter, Sławomir; Michalska, Hanna

    2008-08-01

    The second-order analytical approximation of the mean Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) torque components is given as an explicit function of the shape spherical harmonics coefficients for a sufficiently regular minor body. The results are based upon a new expression for the insolation function, significantly simpler than in previous works. Linearized plane-parallel model of the temperature distribution derived from the insolation function allows us to take into account a non-zero conductivity. Final expressions for the three average components of the YORP torque related with rotation period, obliquity and precession are given in a form of the Legendre series of the cosine of obliquity. The series have good numerical properties and can be easily truncated according to the degree of the Legendre polynomials or associated functions, with first two terms playing the principal role.

  15. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS thus consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...) and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
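
    The variance-stabilization idea can be illustrated with the classical Anscombe transform, a scalar VST (MS-VSTS combines a VST with multi-scale transforms on the sphere, which is beyond a short sketch). Everything below is an illustrative stand-alone experiment, not the paper's code:

```python
import math
import random

def anscombe(k):
    """Anscombe transform 2*sqrt(k + 3/8): maps a Poisson count k to a
    value whose variance is asymptotically 1, whatever the Poisson mean."""
    return 2.0 * math.sqrt(k + 3.0 / 8.0)

def poisson(lam, rng):
    """Draw a Poisson(lam) variate by counting unit-rate arrivals in [0, lam)."""
    k, t = 0, rng.expovariate(1.0)
    while t < lam:
        k += 1
        t += rng.expovariate(1.0)
    return k

def stabilized_variance(lam, n=2000, seed=0):
    """Sample variance of Anscombe-transformed Poisson(lam) draws."""
    rng = random.Random(seed)
    vals = [anscombe(poisson(lam, rng)) for _ in range(n)]
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / (n - 1)

# The variance of the transformed counts is ~1 for very different means:
print(stabilized_variance(10.0), stabilized_variance(200.0))
```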

  16. A direct method to transform between expansions in the configuration state function and Slater determinant bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Jeppe, E-mail: jeppe@chem.au.dk

    2014-07-21

    A novel algorithm is introduced for the transformation of wave functions between the bases of Slater determinants (SD) and configuration state functions (CSF) in the genealogical coupling scheme. By modifying the expansion coefficients as each electron is spin-coupled, rather than performing a single many-electron transformation, the large transformation matrix that plagues previous approaches is avoided and the required number of operations is drastically reduced. As an example of the efficiency of the algorithm, the transformation for a configuration with 30 unpaired electrons and singlet spin is discussed. For this case, the 10 × 10^6 coefficients in the CSF basis are obtained from the 150 × 10^6 coefficients in the SD basis in 1 min, which should be compared with the seven years that the previously employed method is estimated to require.

  17. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    NASA Astrophysics Data System (ADS)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in multimedia formats such as image and audio, and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. The results show the method to be time-efficient and effective. The algorithm is also tested for various numbers of substituted bits; for each, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n <= 4, because of the better PSNR achieved by this technique. The final results obtained after the steganography process do not reveal the presence of any hidden message, thus satisfying the criterion of an imperceptible message.
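
    A minimal sketch of plain LSB substitution (not the paper's full scheme, which additionally mixes a noise-based secret key into the message) might look like this; all names are illustrative:

```python
def embed_lsb(cover, message_bits, n=1):
    """Embed message bits into the n least significant bits of each cover
    sample (assumed 8-bit, e.g. grayscale pixel values). The cover must
    have at least len(message_bits)/n samples."""
    stego = list(cover)
    mask = ~((1 << n) - 1) & 0xFF
    for i in range(0, len(message_bits), n):
        value = int("".join(map(str, message_bits[i:i + n])), 2)
        stego[i // n] = (stego[i // n] & mask) | value
    return stego

def extract_lsb(stego, n_bits, n=1):
    """Recover the first n_bits message bits from the stego samples."""
    bits = []
    for sample in stego:
        for shift in range(n - 1, -1, -1):
            bits.append((sample >> shift) & 1)
            if len(bits) == n_bits:
                return bits
    return bits
```

    Round-tripping works as long as the message length is a multiple of `n`: `extract_lsb(embed_lsb(cover, bits), len(bits))` returns `bits`, while each cover sample changes by at most `2**n - 1`.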

  18. Study of Fourier transform spectrometer based on Michelson interferometer wave-meter

    NASA Astrophysics Data System (ADS)

    Peng, Yuexiang; Wang, Liqiang; Lin, Li

    2008-03-01

    A wave-meter based on a Michelson interferometer consists of a reference channel and a measurement channel. A voice-coil motor under PID control moves the mirror at a stable velocity. The wavelength of the measurement laser is obtained by counting the interference fringes of the reference and measurement lasers. The frequency-stabilized reference laser produces a cosine interferogram signal whose frequency is proportional to the velocity of the moving motor. The interferogram of the reference laser is converted to a pulse signal, which is then subdivided by a factor of 16. To acquire the optical spectrum, the analog signal of the measurement channel must be collected; the Analog-to-Digital Converter (ADC) for the measurement channel is triggered by the 16-times-subdivided pulse signal of the reference laser. The sampling therefore depends only on the frequency of the reference laser and is independent of the motor velocity, which means the measurement-channel signal is sampled on a uniform optical-path scale. The optical spectrum of the measurement channel is then computed with the Fast Fourier Transform (FFT) by a DSP and displayed on an LCD.
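
    The fringe-counting principle reduces to a one-line ratio: over the same mirror travel, the fringe counts of the two beams are inversely proportional to their wavelengths. The fringe counts in the example are illustrative only:

```python
def measured_wavelength(lambda_ref_nm, fringes_ref, fringes_meas):
    """Michelson wave-meter principle: over identical mirror travel,
    N_r * lambda_r = N_m * lambda_m, so lambda_m = lambda_r * N_r / N_m."""
    return lambda_ref_nm * fringes_ref / fringes_meas

# Example with a stabilized 632.8 nm HeNe reference (counts are made up):
print(measured_wavelength(632.8, 100000, 81246))
```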

  19. 2.5D multi-view gait recognition based on point cloud registration.

    PubMed

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-03-28

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by the discrete cosine transform and 2D principal component analysis. Gait recognition is then achieved via a 2.5D view-invariant method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.

  20. Hardware Design and Implementation of Fixed-Width Standard and Truncated 4×4, 6×6, 8×8 and 12×12-BIT Multipliers Using Fpga

    NASA Astrophysics Data System (ADS)

    Rais, Muhammad H.

    2010-06-01

    This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as the finite impulse response (FIR) filter and the discrete cosine transform (DCT). A remarkable reduction in FPGA resources, delay, and power can be achieved by using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement compared to standard multipliers. Results show that the average connection delay and maximum pin delay observed in the Spartan-3AN device are efficiently reduced in the Virtex-4 device.

  1. Alternate forms of the associated Legendre functions for use in geomagnetic modeling.

    USGS Publications Warehouse

    Alldredge, L.R.; Benton, E.R.

    1986-01-01

    An inconvenience attending traditional use of associated Legendre functions in global modeling is that the functions are not separable with respect to the 2 indices (order and degree). In 1973 Merilees suggested a way to avoid the problem by showing that associated Legendre functions of order m and degree m+k can be expressed in terms of elementary functions. This note calls attention to some possible gains in time savings and accuracy in geomagnetic modeling based upon this form. For this purpose, expansions of associated Legendre polynomials in terms of sines and cosines of multiple angles are displayed up to degree and order 10. Examples are also given explaining how some surface spherical harmonics can be transformed into true Fourier series for selected polar great circle paths. -from Authors

  2. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. For archiving, however, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
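
    The energy-compaction property that makes DCT coding effective on smooth brightness variations can be demonstrated with a reference O(N²) orthonormal DCT-II (a sketch, not the CCITT JPEG pipeline):

```python
import math

def dct2(signal):
    """Orthonormal DCT-II of a 1-D signal (reference implementation, O(N^2))."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A smooth brightness ramp: almost all of the energy lands in the first
# coefficients, which is what makes coarse quantization of the rest cheap.
ramp = [float(i) for i in range(8)]
coeffs = dct2(ramp)
energy = sum(c * c for c in coeffs)   # equals sum(x*x) by Parseval
low = sum(c * c for c in coeffs[:2])
print(round(low / energy, 4))
```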

  3. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  4. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120
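
    The trade-off between integer bit width and approximation error can be sketched generically. This is not the paper's specific integer set, just a rounding experiment against the exact DCT matrix, measuring MSE as a function of the scaling shift:

```python
import math

def dct_matrix(n):
    """Exact orthonormal DCT-II matrix (rows are basis vectors)."""
    rows = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        rows.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                     for i in range(n)])
    return rows

def integer_approx_mse(n, shift):
    """Round each DCT entry to an integer after scaling by 2**shift and
    measure the mean squared error against the exact matrix. Smaller
    shifts mean shorter bit values (cheaper hardware) but larger MSE."""
    exact = dct_matrix(n)
    scale = float(1 << shift)
    err = 0.0
    for row in exact:
        for v in row:
            err += (v - round(v * scale) / scale) ** 2
    return err / (n * n)

for shift in (2, 4, 6):
    print(shift, integer_approx_mse(8, shift))
```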

  5. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, the motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to hold the visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. Besides, the discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  6. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, the motion vector residuals of macroblocks with the smallest partition size are selected to hide the copyright information, in order to hold the visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as far as possible. Besides, the discrete cosine transform (DCT), with its energy-compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  7. Analytical inversions in remote sensing of particle size distributions. IV - Comparison of Fymat and Box-McKellar solutions in the anomalous diffraction approximation

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.; Smith, C. B.

    1979-01-01

    It is shown that the inverse analytical solutions, provided separately by Fymat and Box-McKellar, for reconstructing particle size distributions from remote spectral transmission measurements under the anomalous diffraction approximation can be derived using a cosine and a sine transform, respectively. Sufficient conditions of validity of the two formulas are established. Their comparison shows that the former solution is preferable to the latter in that it requires less a priori information (knowledge of the particle number density is not needed) and has wider applicability. For gamma-type distributions, and either a real or a complex refractive index, explicit expressions are provided for retrieving the distribution parameters; such expressions are, interestingly, proportional to the geometric area of the polydispersion.

  8. Air Force Academy Aeronautics Digest, Spring/Summer 1980

    DTIC Science & Technology

    1980-10-01

    [The OCR text of this report is garbled; the recoverable fragment states that the transformation matrix developed under the direction cosine method can be simplified to four equations (USAFA-TR-80-17).]

  9. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract locally robust features in infrared face images, LBP is chosen to get the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to get the LBP patterns which are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, the discrete cosine transform (DCT) or principal component analysis (PCA).

  10. Evaluation of Fourier transform coefficients for the diagnosis of rheumatoid arthritis from diffuse optical tomography images

    NASA Astrophysics Data System (ADS)

    Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.

    2013-03-01

    We apply the Fourier transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operating characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers than absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, and an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
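
    The Youden index used as the figure of merit above is a one-line statistic combining sensitivity and specificity:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1.
    J = 1 is a perfect classifier; J = 0 is chance performance."""
    return sensitivity + specificity - 1.0

# The best single feature reported above: 90% sensitivity, 100% specificity.
print(youden_index(0.90, 1.00))
```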

  11. Retrieving atmospheric transmissivity for biologically active daily dose, in various european sites

    NASA Astrophysics Data System (ADS)

    de La Casinière, A.; Touré, M. L.; Lenoble, J.; Cabot, T.

    2003-04-01

    In the framework of the European project EDUCE, global UV irradiance spectra recorded throughout the year at several European sites are stored in a common database located in Finland. From the spectra of some of these stations, atmospheric transmissivities are calculated for the daily doses of four biologically active UV radiation quantities, namely UV-B, erythema, DNA damage, and plant damage. A transmissivity is defined as the ratio of the ground-level value of the daily dose of interest to its corresponding extra-atmospheric value. Multiple linear correlations of the various transmissivities with three predictors (daily sunshine fraction, cosine of the daily minimum SZA, and daily total ozone column), assumed to be independent variables, are performed for the year 2000. The coefficients obtained from the year-2000 correlation at a given site are expected to retrieve, from the local predictors, the daily dose for the year 2001 at the same site, the average error being less than 10% for monthly mean values and less than 5% for three-monthly mean values, depending on the daily dose type. A comparison of yearly mean daily doses retrieved at a given site from coefficients obtained at other sites is also presented.

  12. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
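
    The one-minus-cosine gust mentioned above has a standard closed form; a sketch, with all parameter names being my own:

```python
import math

def one_minus_cosine_gust(t, gust_amplitude, gust_length, speed):
    """Standard one-minus-cosine gust profile: while the vehicle,
    travelling at speed V, is inside a gust of length L, the gust
    velocity is w = (A/2) * (1 - cos(2*pi*V*t/L)); zero outside."""
    s = speed * t  # distance penetrated into the gust
    if 0.0 <= s <= gust_length:
        return 0.5 * gust_amplitude * (1.0 - math.cos(2.0 * math.pi * s / gust_length))
    return 0.0
```

    The profile rises smoothly from zero, peaks at the gust amplitude halfway through, and returns to zero, which is why it is a common discrete-gust excitation for response analyses like the one above.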

  13. Nonlinear oscillator with power-form elastic-term: Fourier series expansion of the exact solution

    NASA Astrophysics Data System (ADS)

    Beléndez, Augusto; Francés, Jorge; Beléndez, Tarsicio; Bleda, Sergio; Pascual, Carolina; Arribas, Enrique

    2015-05-01

    A family of conservative, truly nonlinear oscillators with integer or non-integer order nonlinearity is considered. These oscillators have only one odd power-form elastic term, and exact expressions for their period and solution were found in terms of Gamma functions and a cosine-Ateb function, respectively. Only for a few values of the order of nonlinearity is it possible to obtain the periodic solution in terms of more common functions. However, for this family of conservative, truly nonlinear oscillators we show in this paper that it is possible to obtain the Fourier series expansion of the exact solution, even though this exact solution is unknown. The coefficients of the Fourier series expansion of the exact solution are obtained as an integral expression in which a regularized incomplete Beta function appears. These coefficients are a function of the order of nonlinearity only and are computed numerically. One application of this technique is to compare the amplitudes of the different harmonics of the solution obtained using approximate methods with the exact ones computed numerically, as shown in this paper. As an example, the approximate amplitudes obtained via a modified Ritz method are compared with the exact ones computed numerically.
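
    Computing Fourier coefficients of a periodic solution numerically can be sketched with the composite trapezoidal rule. Since the cosine-Ateb function is not a standard library function, an ordinary cosine serves as the stand-in test function here:

```python
import math

def fourier_cosine_coefficient(f, period, n, samples=4096):
    """Coefficient of cos(2*pi*n*t/T) in the Fourier series of the even
    periodic function f, by the trapezoidal rule:
    a_n = (2/T) * integral_0^T f(t) * cos(2*pi*n*t/T) dt."""
    h = period / samples
    total = 0.0
    for i in range(samples + 1):
        t = i * h
        w = 0.5 if i in (0, samples) else 1.0
        total += w * f(t) * math.cos(2.0 * math.pi * n * t / period)
    return 2.0 * total * h / period

# Sanity check against a solution whose series is known exactly:
# f(t) = cos(t) on period 2*pi has a_1 = 1 and a_3 = 0.
print(fourier_cosine_coefficient(math.cos, 2.0 * math.pi, 1),
      fourier_cosine_coefficient(math.cos, 2.0 * math.pi, 3))
```

    The trapezoidal rule is spectrally accurate on smooth periodic integrands, so even this naive sketch recovers the low harmonics essentially to machine precision.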

  14. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.

  15. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and the wavelet transform.
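
    The scale-wise thresholding step can be sketched generically; here the paper's local threshold factors per curvelet scale are replaced by a single factor, and the coefficient list is illustrative:

```python
def hard_threshold(coeffs, factor, noise_sigma):
    """Keep transform coefficients whose magnitude reaches
    factor * noise_sigma; zero the rest. Signal energy concentrates in
    few large coefficients, so this suppresses random noise."""
    tau = factor * noise_sigma
    return [c if abs(c) >= tau else 0.0 for c in coeffs]

# Two strong "signal" coefficients survive; the small noisy ones are zeroed.
noisy = [9.1, -0.4, 0.2, 7.5, -0.9, 0.1]
print(hard_threshold(noisy, 3.0, 0.5))  # [9.1, 0.0, 0.0, 7.5, 0.0, 0.0]
```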

  16. Thermal wave propagation in blood perfused tissues under hyperthermia treatment for unique oscillatory heat flux at skin surface and appropriate initial condition

    NASA Astrophysics Data System (ADS)

    Dutta, Jaideep; Kundu, Balaram

    2018-05-01

    This paper develops an analytical study of heat propagation in biological tissues for constant and variable heat flux at the skin surface, in the context of hyperthermia treatment. We impose two distinct oscillating boundary conditions relevant to practical biomedical engineering, while the initial condition is constructed as spatially dependent, in accordance with a realistic situation. We employ the Laplace transform method (LTM) and the Green's function (GF) method to solve the single-phase-lag (SPL) thermal wave model of the bioheat equation (TWMBHE). This work focuses on non-invasive therapy employing an oscillating heat flux. The heat flux at the skin surface is considered in constant, sinusoidal, and cosine forms. A comparative study of the impact of the different kinds of heat flux on the temperature field in living tissue shows that a sinusoidal heat flux is more effective when the therapeutic heating time is long. Cosine heating is also applicable in hyperthermia treatment because of the precision of its thermal waveform. The results also emphasize that careful attention is required in selecting the phase angle and frequency of the oscillating heat flux. Comparison with published experimental and mathematical studies yields differences in the temperature distribution of 5.33% and 4.73%, respectively. A parametric analysis is devoted to suggesting an appropriate procedure for selecting the important design variables with a view to effective heating in hyperthermia treatment.

  17. B-spline goal-oriented error estimators for geometrically nonlinear rods

    DTIC Science & Technology

    2011-04-01

    [The OCR text of this report is garbled; the recoverable fragments discuss the errors of the output functionals q2-q4 (linear and nonlinear, with the trigonometric functions sine and cosine) for B-spline orders p = 1, 2 in all the tests considered.]

  18. Muon detector for the COSINE-100 experiment

    NASA Astrophysics Data System (ADS)

    Prihtiadi, H.; Adhikari, G.; Adhikari, P.; Barbosa de Souza, E.; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W.; Kang, W. G.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.

    2018-02-01

    The COSINE-100 dark matter search experiment has started taking physics data with the goal of performing an independent measurement of the annual modulation signal observed by DAMA/LIBRA. A muon detector was constructed by using plastic scintillator panels in the outermost layer of the shield surrounding the COSINE-100 detector. It detects cosmic ray muons in order to understand the impact of the muon annual modulation on dark matter analysis. Assembly and initial performance tests of each module have been performed at a ground laboratory. The installation of the detector in the Yangyang Underground Laboratory (Y2L) was completed in the summer of 2016. Using three months of data, the muon underground flux was measured to be 328 ± 1(stat.)± 10(syst.) muons/m2/day. In this report, the assembly of the muon detector and the results from the analysis are presented.
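
    The flux arithmetic, with its Poisson statistical uncertainty, can be sketched as follows; the detector area and live time below are illustrative only, since the abstract does not state them:

```python
import math

def muon_flux(counts, area_m2, live_days):
    """Flux and its statistical (Poisson, sqrt(N)) uncertainty,
    in muons per square meter per day."""
    exposure = area_m2 * live_days
    flux = counts / exposure
    stat = math.sqrt(counts) / exposure
    return flux, stat

# Hypothetical numbers: 3.0e6 counts over 100 m^2 for ~91 days (3 months).
flux, stat = muon_flux(3.0e6, 100.0, 91.0)
print(round(flux, 1), round(stat, 2))
```

    The systematic uncertainty quoted in the abstract (±10) would come from detector effects such as efficiency and geometry, which this sketch does not model.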

  19. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features, and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. A better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity measure.

  20. Increases to Biogenic Secondary Organic Aerosols from SO2 and NOx in the Southeastern US

    NASA Astrophysics Data System (ADS)

    Russell, L. M.; Liu, J.; Ruggeri, G.; Takahama, S.; Claflin, M. S.; Ziemann, P. J.; Lee, A.; Murphy, B.; Pye, H. O. T.; Ng, N. L.; McKinney, K. A.; Surratt, J. D.

    2017-12-01

    During the 2013 Southern Oxidant and Aerosol Study, Fourier Transform Infrared Spectroscopy (FTIR) and Aerosol Mass Spectrometer (AMS) measurements of submicron mass were collected at Look Rock, Tennessee, and Centreville, Alabama. Low NOx, low wind, little rain, and increased daytime isoprene emissions led to multi-day stagnation events at Look Rock that provided clear evidence of particle-phase sulfate enhancing biogenic secondary organic aerosol (bSOA) by selective uptake. Organic mass (OM) sources were apportioned as 42% "vehicle-related" and 54% bSOA, with the latter including "sulfate-related bSOA" that correlated to sulfate (r=0.72) and "nitrate-related bSOA" that correlated to nitrate (r=0.65). Single-particle mass spectra showed three composition types that corresponded to the mass-based factors, with spectral cosine similarity of 0.93 and time series correlations of r>0.4. The vehicle-related OM with m/z 44 was correlated to black carbon, "sulfate-related bSOA" was on particles with high sulfate, and "nitrate-related bSOA" was on all particles. The similarity of the m/z spectra (cosine similarity=0.97) and the time series correlation (r=0.80) of the "sulfate-related bSOA" to the sulfate-containing single-particle type provide evidence for particle composition contributing to selective uptake of isoprene oxidation products onto particles that contain sulfate from power plants. Since Look Rock had much less NOx than Centreville, comparing the bSOA at the two sites provides an evaluation of the role of NOx in bSOA formation. CO and submicron sulfate and OM concentrations were 15-60% higher at Centreville than at Look Rock, but their time series had moderate correlations of r=0.51, 0.54, and 0.47, respectively. However, NOx had no correlation (r=0.08) between the two sites. OM correlated with the higher NOx levels at Centreville but with O3 at Look Rock.
    OM sources identified by Positive Matrix Factorization had three very similar factors at both sites from FTIR, one of which was Biological Organic Aerosols. The FTIR spectrum for this factor is similar (cosine similarity > 0.6) to that of lab-generated particle mass from both isoprene and monoterpene under high-NOx conditions in chamber experiments, providing verification that these reactions are relevant to atmospheric conditions.
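
    The cosine similarity used above to compare factor and single-particle spectra is simply the normalized dot product of two intensity vectors. A minimal sketch (with made-up m/z intensity vectors, not the study's data):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two spectra (1 = identical shape)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical m/z intensity vectors for a factor and a particle type
factor = [0.1, 0.5, 0.3, 0.1]
particle_type = [0.12, 0.48, 0.31, 0.09]
print(round(cosine_similarity(factor, particle_type), 3))
```

    A value of 1 means identical spectral shape regardless of total mass; the 0.93 and 0.97 values quoted above are on this scale.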

  1. Personalized Medicine in Veterans with Traumatic Brain Injuries

    DTIC Science & Technology

    2012-05-01

    UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile, 3) In the DA top 50%-tile, selected probe sets...GeneMaths XT following row mean centering of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using...hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as a heat map (left
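
    The pipeline in this excerpt (row mean centering of log2 values, then UPGMA clustering under a cosine similarity metric) can be sketched with SciPy, where UPGMA corresponds to average linkage; the data here are synthetic, not the study's probe sets:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical log2 signal matrix: rows = probe sets, columns = samples
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 6))

# Row mean centering, as described in the excerpt
centered = data - data.mean(axis=1, keepdims=True)

# UPGMA = average linkage; SciPy's "cosine" metric is 1 - cosine similarity
Z = linkage(centered, method='average', metric='cosine')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```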

  2. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent circuit method.

  3. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    PubMed Central

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
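
    The dimension-reduction step, mapping an image to a small block of low-frequency DCT coefficients, can be sketched as follows (random data standing in for a Color Gait Curvature Image; the paper's additional 2D PCA stage is omitted):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 2D "gait image"; keep only the low-frequency DCT corner
img = np.random.default_rng(1).random((64, 64))
coeffs = dctn(img, norm='ortho')

k = 8                      # retain a k x k corner of coefficients
reduced = coeffs[:k, :k]   # 64*64 pixels -> 64 features

# Rough reconstruction from the retained coefficients alone
approx = np.zeros_like(coeffs)
approx[:k, :k] = reduced
recon = idctn(approx, norm='ortho')
print(reduced.shape, recon.shape)
```

    With norm='ortho' the transform is orthonormal, so coefficient energy equals pixel energy, which is why truncating small high-frequency coefficients loses little information.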

  4. Optimization of self-acting step thrust bearings for load capacity and stiffness.

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.

    1972-01-01

    Linearized analysis of a finite-width rectangular step thrust bearing. Dimensionless load capacity and stiffness are expressed in terms of a Fourier cosine series. The dimensionless load capacity and stiffness were found to be a function of the dimensionless bearing number, the pad length-to-width ratio, the film thickness ratio, the step location parameter, and the feed groove parameter. The equations obtained in the analysis were verified. The assumptions imposed were substantiated by comparing the results with an existing exact solution for the infinite width bearing. A digital computer program was developed which determines optimal bearing configuration for maximum load capacity or stiffness. Simple design curves are presented. Results are shown for both compressible and incompressible lubrication. Through a parameter transformation the results are directly usable in designing optimal step sector thrust bearings.

  5. Wide band stepped frequency ground penetrating radar

    DOEpatents

    Bashforth, M.B.; Gardner, D.; Patrick, D.; Lewallen, T.A.; Nammath, S.R.; Painter, K.D.; Vadnais, K.G.

    1996-03-12

    A wide band ground penetrating radar system is described embodying a method wherein a series of radio frequency signals is produced by a single radio frequency source and provided to a transmit antenna for transmission to a target and reflection therefrom to a receive antenna. A phase modulator modulates those portions of the radio frequency signals to be transmitted and the reflected modulated signal is combined in a mixer with the original radio frequency signal to produce a resultant signal which is demodulated to produce a series of direct current voltage signals, the envelope of which forms a cosine wave shaped plot which is processed by a Fast Fourier Transform Unit 44 into frequency domain data wherein the position of a preponderant frequency is indicative of distance to the target and magnitude is indicative of the signature of the target. 6 figs.
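
    The signal chain in the claim, a cosine-shaped envelope whose preponderant frequency after an FFT indicates distance, can be sketched as follows (illustrative numbers, not taken from the patent):

```python
import numpy as np

# Simulated envelope of the DC voltage series: a cosine whose frequency
# encodes target distance (bin index chosen arbitrarily for illustration)
n, beat_bin = 256, 12
samples = np.cos(2 * np.pi * beat_bin * np.arange(n) / n)

spectrum = np.abs(np.fft.rfft(samples))
peak = int(np.argmax(spectrum[1:])) + 1   # skip the DC term
print(peak)   # bin index of the preponderant frequency -> distance
```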

  6. A face and palmprint recognition approach based on discriminant DCT feature extraction.

    PubMed

    Jing, Xiao-Yuan; Zhang, David

    2004-12-01

    In the field of image processing and recognition, discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts the linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze in detail the theoretical advantages of our approach in feature extraction. The experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.

  7. Measurement of the temperature coefficient of ratio transformers

    NASA Technical Reports Server (NTRS)

    Briggs, Matthew E.; Gammon, Robert W.; Shaumeyer, J. N.

    1993-01-01

    We have measured the temperature coefficient of the output of several ratio transformers at ratios near 0.500,000 using an ac bridge and a dual-phase, lock-in amplifier. The two orthogonal output components were each resolved to +/- ppb of the bridge drive signal. The results for three commercial ratio transformers between 20 and 50 C range from 0.5 to 100 ppb/K for the signal component in phase with the bridge drive, and from 4 to 300 ppb/K for the quadrature component.

  8. Evaluation of Contact Heat Transfer Coefficient and Phase Transformation during Hot Stamping of a Hat-Type Part

    PubMed Central

    Kim, Heung-Kyu; Lee, Seong Hyeon; Choi, Hyunjoo

    2015-01-01

    Using an inverse analysis technique, the heat transfer coefficient on the die-workpiece contact surface of a hot stamping process was evaluated as a power law function of contact pressure. This evaluation was to determine whether the heat transfer coefficient on the contact surface could be used for finite element analysis of the entire hot stamping process. By comparing results of the finite element analysis and experimental measurements of the phase transformation, an evaluation was performed to determine whether the obtained heat transfer coefficient function could provide reasonable finite element prediction for workpiece properties affected by the hot stamping process. PMID:28788046

  9. High-Aperture-Efficiency Horn Antenna

    NASA Technical Reports Server (NTRS)

    Pickens, Wesley; Hoppe, Daniel; Epp, Larry; Kahn, Abdur

    2005-01-01

    A horn antenna (see Figure 1) has been developed to satisfy requirements specific to its use as an essential component of a high-efficiency Ka-band amplifier: The combination of the horn antenna and an associated microstrip-patch antenna array is required to function as a spatial power divider that feeds 25 monolithic microwave integrated-circuit (MMIC) power amplifiers. The foregoing requirement translates to, among other things, a further requirement that the horn produce a uniform, vertically polarized electromagnetic field that illuminates the patches identically so that the MMICs can operate at maximum efficiency. The horn is fed from a square waveguide of 5.9436-mm-square cross section via a transition piece. The horn features cosine-tapered, dielectric-filled longitudinal corrugations in its vertical walls to create a hard boundary condition: This aspect of the horn design causes the field in the horn aperture to be substantially vertically polarized and to be nearly uniform in amplitude and phase. As used here, cosine-tapered signifies that the depth of the corrugations is a cosine function of distance along the horn. Preliminary results of finite-element simulations of performance have shown that by virtue of the cosine taper the impedance response of this horn can be expected to be better than has been achieved previously in a similar horn having linearly tapered dielectric-filled longitudinal corrugations. It is possible to create a hard boundary condition by use of a single dielectric-filled corrugation in each affected wall, but better results can be obtained with more corrugations. Simulations were performed for a one- and a three-corrugation cosine-taper design. For comparison, a simulation was also performed for a linear-taper design (see Figure 2). The three-corrugation design was chosen to minimize the cost of fabrication while still affording acceptably high performance.
Future designs using more corrugations per wavelength are expected to provide better field responses and, hence, greater aperture efficiencies.

  10. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    DTIC Science & Technology

    2005-04-01

    coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.

  11. Multi-Focus Image Fusion Based on NSCT and NSST

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen

    2015-12-01

    In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe, and Qw) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms other methods. It retains highly detailed edges and contours.
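
    The choose-max selection rule for high frequency coefficients can be sketched generically as below; the NSCT/NSST decompositions themselves are not implemented here, and the absolute-value salience is a stand-in for the paper's combined salience measure:

```python
import numpy as np

def fuse_high_freq(c1, c2, s1, s2):
    """Choose-max fusion: keep the coefficient whose salience is larger.
    c1/c2: high-frequency coefficients of the two source images;
    s1/s2: their salience maps (here, simple absolute-value activity)."""
    return np.where(s1 >= s2, c1, c2)

rng = np.random.default_rng(2)
c1, c2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
fused = fuse_high_freq(c1, c2, np.abs(c1), np.abs(c2))

# With |c| as salience, fusion keeps the larger-magnitude coefficient
print(np.array_equal(np.abs(fused), np.maximum(np.abs(c1), np.abs(c2))))
```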

  12. Sines and Cosines. Part 2 of 3

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1993-01-01

    The Law of Sines and the Law of Cosines are introduced and demonstrated in this 'Project Mathematics' series video using both film footage and computer animation. This video deals primarily with the mathematical field of Trigonometry and explains how these laws were developed and their applications. One significant use is geographical and geological surveying. This includes both the triangulation method and the spirit leveling method. With these methods, it is shown how the height of the tallest mountain in the world, Mt. Everest, was determined.

  13. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  14. Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.

    PubMed

    Sun, Dong; Zhao, Daomu

    2005-08-01

    By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.

  15. Star tracker error analysis: Roll-to-pitch nonorthogonality

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1979-01-01

    An error analysis is described for an anomaly isolated in the star tracker software line of sight (LOS) rate test. The LOS rate cosine was found to be greater than one in certain cases, which implied that one or both of the star tracker measured end point unit vectors used to compute the LOS rate cosine had lengths greater than unity. The roll/pitch nonorthogonality matrix in the TNB CL module of the IMU software is examined as the source of error.
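
    The anomaly is easy to reproduce numerically: if a supposedly unit end-point vector is slightly too long, the raw dot product used as the LOS cosine can exceed one, while normalizing first keeps it bounded (illustrative vectors, not flight data):

```python
import numpy as np

u = np.array([0.6, 0.0, 0.8010])   # slightly longer than a unit vector
v = np.array([0.6, 0.0, 0.8000])

cos_raw = float(np.dot(u, v))      # treated as if u, v were unit vectors
cos_ok = cos_raw / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_raw > 1.0, cos_ok <= 1.0)
```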

  16. Sines and Cosines. Part 3 of 3

    NASA Technical Reports Server (NTRS)

    Apostol, Tom M. (Editor)

    1994-01-01

    In this 'Project Mathematics' series video, the addition formulas of sines and cosines are explained and their real life applications are demonstrated. Both film footage and computer animation are used. Several mathematical concepts are discussed, including: Ptolemy's theorem concerning quadrilaterals; the difference between a central angle and an inscribed angle; sines and chord lengths; special angles; subtraction formulas; and an application to simple harmonic motion. A brief history of the city of Alexandria, its mathematicians, and their contributions to the field of mathematics is shown.

  17. Application of DFT Filter Banks and Cosine Modulated Filter Banks in Filtering

    NASA Technical Reports Server (NTRS)

    Lin, Yuan-Pei; Vaidyanathan, P. P.

    1994-01-01

    None given. This is a proposal for a paper to be presented at APCCAS '94 in Taipei, Taiwan. (From outline): This work is organized as follows: Sec. II is devoted to the construction of the new 2m channel under-decimated DFT filter bank. Implementation and complexity of this DFT filter bank are discussed therein. In a similar manner, the new 2m channel cosine modulated filter bank is discussed in Sec. III. Design examples are given in Sec. IV.

  18. Replacing backscattering with reduced scattering. A better formulation of reflectance function?

    NASA Astrophysics Data System (ADS)

    Piskozub, Jacek; McKee, David; Freda, Wlodzimierz

    2014-05-01

    Modern reflectance formulas all involve the backscattering coefficient divided by the absorption coefficient (bb/a). The backscattering (or backward scattering) coefficient describes how much of the incident radiation is scattered at angles between 90 and 180 deg. However, water leaving photons are not necessarily backscattered, because a variable fraction can exit after multiple forward scattering events. Therefore the whole angular function of scattering probability (the phase function) influences the reflectance signal. This is the reason why phase functions of identical backscattering ratio may result in different reflectance values, contrary to the universally used formula. This raises the question of whether there may exist a better formula using a parameter that describes phase function shape better than the backscattering ratio. The asymmetry parameter g (the average scattering cosine) is commonly used to parametrize phase functions. A replacement for backscattering should decrease with increasing g. Therefore, the simplest candidate to replace backscattering has the form b(1-g), where b is the scattering coefficient. Such a parameter is well known in biomedical optics under the name of reduced scattering (sometimes transport scattering). It has even been used in parametrizing reflectance in (highly turbid) human tissues. However, no attempt has been made to check its usefulness in marine optics. We perform Monte Carlo radiative transfer calculations of reflectance for multiple combinations of inherent optical properties, including different phase functions. The results are used to create a new reflectance formula as a function of reduced scattering and absorption and to test its robustness to changes in phase function shape compared to the traditional bb/a formula. We discuss its usefulness as well as its advantages and disadvantages compared to the traditional formulation.
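
    The asymmetry parameter is the average scattering cosine, g = 2*pi * Integral of mu*p(mu) d(mu), and the reduced scattering coefficient is b(1-g). A sketch using the Henyey-Greenstein phase function (a common analytic choice, assumed here for illustration, not one of the paper's Monte Carlo inputs):

```python
import numpy as np
from scipy.integrate import trapezoid

def hg_phase(mu, g):
    """Henyey-Greenstein phase function, normalized over the full sphere."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * mu) ** 1.5)

g_true, b = 0.9, 2.0                 # illustrative asymmetry and scattering coeff.
mu = np.linspace(-1.0, 1.0, 200001)  # mu = cos(scattering angle)
p = hg_phase(mu, g_true)

g_num = 2 * np.pi * trapezoid(mu * p, mu)   # average scattering cosine
b_reduced = b * (1 - g_num)                 # reduced (transport) scattering
print(round(g_num, 3), round(b_reduced, 3))
```

    Strongly forward-peaked scattering (g near 1) drives the reduced scattering coefficient toward zero, matching the intuition that forward-scattered photons barely change direction.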

  19. Collaborative Wideband Compressed Signal Detection in Interplanetary Internet

    NASA Astrophysics Data System (ADS)

    Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei

    2014-07-01

    With the development of autonomous radio in deep space networks, it is possible to actualize communication between explorers, aircraft, rovers and satellites, e.g. from different countries, adopting different signal modes. The first task of autonomous radio is to detect signals of the explorer autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of IPN Internet communication signals, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a certain fusion rule to gain spatial diversity. A couple of novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of our proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, our DCT and WHT based methods reduce computational complexity, decrease processing time, save energy and enhance the probability of detection.

  20. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular mesh implementation. The triangular mesh is adaptive to the image content; least mean square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis - a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is that it combines the flexibility of image-adaptive irregular meshes with a simple form of pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).

  1. Canonic FFT flow graphs for real-valued even/odd symmetric inputs

    NASA Astrophysics Data System (ADS)

    Lao, Yingjie; Parhi, Keshab K.

    2017-12-01

    Canonic real-valued fast Fourier transform (RFFT) has been proposed to reduce arithmetic complexity by eliminating redundancies. In a canonic N-point RFFT, the number of signal values at each stage is canonic with respect to the number of input samples, i.e., N. The major advantage of the canonic RFFTs is that they require the least number of butterfly operations and only real datapaths when mapped to architectures. In this paper, we consider the FFT computation whose inputs are not only real but also even/odd symmetric, which indeed leads to the well-known discrete cosine and sine transforms (DCTs and DSTs). Novel algorithms for generating the flow graphs of canonic RFFTs with even/odd symmetric inputs are proposed. It is shown that the proposed algorithms lead to canonic structures with N/2 + 1 signal values at each stage for an N-point real even symmetric FFT (REFFT), or N/2 - 1 signal values at each stage for an N-point real odd symmetric FFT (ROFFT). In order to remove butterfly operations, several twiddle factor transformations are proposed in this paper. We also discuss the design of canonic REFFT for any composite length. Performances of the canonic REFFT/ROFFT are also discussed. It is shown that the flow graph of canonic REFFT/ROFFT has fewer interconnections, fewer butterfly operations, and fewer twiddle factor operations, compared to prior works.
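
    The link between symmetric-input FFTs and the DCT can be checked numerically: the DFT of a real, even-symmetric length-2M sequence is purely real, and its first M+1 bins equal the unnormalized DCT-I of the half sequence. This is a sanity check of that identity, not the paper's canonic flow-graph construction:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
M = 8
half = rng.random(M + 1)                      # x[0..M]
full = np.concatenate([half, half[-2:0:-1]])  # even-symmetric, length 2M

X = np.fft.fft(full)                          # spectrum is purely real
same = np.allclose(X.real[:M + 1], dct(half, type=1))
print(np.max(np.abs(X.imag)) < 1e-9, same)
```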

  2. An integral transform approach for a mixed boundary problem involving a flowing partially penetrating well with infinitesimal well skin

    NASA Astrophysics Data System (ADS)

    Chang, Chien-Chieh; Chen, Chia-Shyun

    2002-06-01

    A flowing partially penetrating well with infinitesimal well skin is a mixed boundary because a Cauchy condition is prescribed along the screen length and a Neumann condition of no flux is stipulated over the remaining unscreened part. An analytical approach based on the integral transform technique is developed to determine the Laplace domain solution for such a mixed boundary problem in a confined aquifer of finite thickness. First, the mixed boundary is changed into a homogeneous Neumann boundary by substituting the Cauchy condition with a Neumann condition in terms of well bore flux that varies along the screen length and is time dependent. Despite the well bore flux being unknown a priori, the modified model containing this homogeneous Neumann boundary can be solved with the Laplace and the finite Fourier cosine transforms. To determine well bore flux, screen length is discretized into a finite number of segments, to which the Cauchy condition is reinstated. This reinstatement also restores the relation between the original model and the solutions obtained. For a given time, the numerical inversion of the Laplace domain solution yields the drawdown distributions, well bore flux, and the well discharge. This analytical approach provides an alternative for dealing with the mixed boundary problems, especially when aquifer thickness is assumed to be finite.

  3. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the radiation exposure of X-ray in the population. It is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because the quantum noise can be simulated by a Poisson process, we first transform the quantum noise by using the Anscombe variance stabilizing transform (VST), producing an approximately Gaussian noise with unitary variance. Second, the non-noise shearlet coefficients are obtained by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, which produces the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated from high frequency sub-bands effectively. A number of experiments are performed over some LDCT images by using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing the subtle details. It has certain value in clinical application.
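
    The first step, the Anscombe VST, maps Poisson counts x to 2*sqrt(x + 3/8), which has approximately unit variance regardless of the Poisson rate; a quick check on synthetic counts:

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: maps Poisson(lam) data to ~N(2*sqrt(lam), 1)."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

rng = np.random.default_rng(4)
for lam in (5.0, 20.0, 100.0):
    noisy = rng.poisson(lam, size=200_000)
    print(lam, round(anscombe(noisy).std(), 3))   # close to 1 for each lam
```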

  4. Random Matrix Theory in molecular dynamics analysis.

    PubMed

    Palese, Luigi Leonardo

    2015-01-01

    It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low index projections. Because this is reminiscent of the results obtained by performing PCA on a multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices makes it easy to differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic system properties. Our results clearly show that protein dynamics is not really Brownian, even in the presence of the cosine-shaped low index projections on principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
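
    The "cosine content" behind these cosine-shaped projections is conventionally measured by Hess's formula, which scores a projection p(t) against cos(i*pi*t/T) on a 0-to-1 scale. A sketch of that standard diagnostic (this is the general measure, not the paper's Random Matrix analysis):

```python
import numpy as np
from scipy.integrate import trapezoid

def cosine_content(p, i):
    """Cosine content (0..1) of projection p against mode cos(i*pi*t/T)."""
    t = np.linspace(0.0, 1.0, len(p))          # time rescaled to T = 1
    c = np.cos(i * np.pi * t)
    return 2.0 * trapezoid(c * p, t) ** 2 / trapezoid(p * p, t)

pure = np.cos(np.pi * np.linspace(0.0, 1.0, 500))   # content is exactly 1
walk = np.cumsum(np.random.default_rng(6).normal(size=20_000))
walk -= walk.mean()                                  # mean-centered random walk
print(round(cosine_content(pure, 1), 3), round(cosine_content(walk, 1), 3))
```

    A mean-centered random walk tends to score high on this measure, which is exactly why high cosine content alone cannot establish that the underlying dynamics is Brownian.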

  5. Simulation and Evaluation of Small Scale Solar Power Tower Performance under Malaysia Weather Conditions

    NASA Astrophysics Data System (ADS)

    Gamil, A. M.; Gilani, S. I.; Al-Kayiem, H. H.

    2013-06-01

    Solar energy is the most available, clean, and inexpensive source of energy among the renewable sources of energy. Malaysia is an encouraging location for the development of solar energy systems due to abundant sunshine (10 hours daily, with average solar energy received between 1400 and 1900 kWh/m2). In this paper, the design of a heliostat field of 3 dual-axis heliostat units located in Ipoh, Malaysia is introduced. A mathematical model was developed to estimate the sun position and calculate the cosine losses in the field. The study includes calculating the incident solar power to a fixed target on the tower by analysing the tower height and the ground distance between the heliostat and the tower base. The cosine efficiency was found for each heliostat according to the sun movement. TRNSYS software was used to simulate the cosine efficiencies and the field's hourly incident solar power input to the fixed target. The results show the heliostat field parameters and the total incident solar input to the receiver.
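
    The cosine loss computed by such a model comes from the heliostat's incidence angle: the mirror normal must bisect the sun direction and the mirror-to-target direction, so the efficiency is the cosine of half the sun-target angle. A geometric sketch with illustrative vectors (not the Ipoh field geometry):

```python
import numpy as np

def cosine_efficiency(sun_dir, to_target):
    """Cosine efficiency of a heliostat: the mirror normal bisects the
    sun direction and the mirror-to-target direction, so efficiency is
    the cosine of the incidence angle (half the sun-target angle)."""
    s = np.asarray(sun_dir, float)
    t = np.asarray(to_target, float)
    s, t = s / np.linalg.norm(s), t / np.linalg.norm(t)
    n = (s + t) / np.linalg.norm(s + t)     # bisecting mirror normal
    return float(np.dot(s, n))

sun = [0.0, 0.0, 1.0]        # sun directly overhead (illustrative)
target = [0.0, 1.0, 0.0]     # horizontal direction to the receiver
eff = cosine_efficiency(sun, target)
print(round(eff, 4))         # cos(45 deg) for this 90 deg geometry
```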

  6. Ontology-based structured cosine similarity in document summarization: with applications to mobile audio-based knowledge management.

    PubMed

    Yuan, Soe-Tsyr; Sun, Jerry

    2005-10-01

    Development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most text-clustering methods are grounded in a term-based measurement of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (which lack rich document features) obtained from wireless oral experience sharing conducted by the mobile workforce of enterprises, fulfilling audio-based knowledge management. In other words, this problem aims to facilitate knowledge acquisition and sharing by speech. The evaluations also show fairly promising results for our method of structured cosine similarity.

  7. A scientific report on heat transfer analysis in mixed convection flow of Maxwell fluid over an oscillating vertical plate.

    PubMed

    Khan, Ilyas; Shah, Nehad Ali; Dennis, L C C

    2017-03-15

    This scientific report investigates the heat transfer analysis in mixed convection flow of Maxwell fluid over an oscillating vertical plate with constant wall temperature. The problem is modelled in terms of coupled partial differential equations with initial and boundary conditions. Suitable non-dimensional variables are introduced in order to transform the governing problem into dimensionless form. The resulting problem is solved via the Laplace transform method, and exact solutions for velocity, shear stress and temperature are obtained. These solutions are greatly influenced by the variation of embedded parameters, which include the Prandtl number and the Grashof number, for various times. In the absence of free convection, the corresponding solutions representing the mechanical part of velocity reduce to the well-known solutions in the literature. The total velocity is presented as a sum of cosine and sine velocities. The unsteady velocity in each case is arranged in the form of transient and post-transient parts. It is found that the post-transient parts are independent of time. The solutions corresponding to Newtonian fluids are recovered as a special case, and a comparison between Newtonian fluid and Maxwell fluid is shown graphically.

  8. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.

  9. Dynamic Forms. Part 1: Functions

    NASA Technical Reports Server (NTRS)

    Meyer, George; Smith, G. Allan

    1993-01-01

    The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives, to various orders, of the large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis and nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations on scalar dynamic forms are developed first. Vector and matrix operations, and transformations between parameterizations of rotations, are developed at the next level of the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
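
The core idea, a variable carried together with its time derivatives, can be sketched for the scalar case. This minimal class (a hypothetical illustration, not the paper's FORTRAN library, which also covers vectors, matrices, and rotation parameterizations) shows how sums differentiate term-wise and products propagate via the Leibniz rule.

```python
from math import comb

class DynForm:
    """A scalar dynamic form: a value plus time derivatives to a fixed order.
    Minimal sketch of the formalism for scalars only."""
    def __init__(self, derivs):
        self.d = list(derivs)          # d[k] = k-th time derivative

    def __add__(self, other):          # differentiation is linear
        return DynForm(a + b for a, b in zip(self.d, other.d))

    def __mul__(self, other):          # Leibniz: (fg)^(k) = sum C(k,i) f^(i) g^(k-i)
        n = len(self.d)
        return DynForm(
            sum(comb(k, i) * self.d[i] * other.d[k - i] for i in range(k + 1))
            for k in range(n)
        )

# f(t) = t^2 and g(t) = t^3 evaluated at t = 2, derivatives up to order 3.
f = DynForm([4.0, 4.0, 2.0, 0.0])     # t^2, 2t, 2, 0     at t = 2
g = DynForm([8.0, 12.0, 12.0, 6.0])   # t^3, 3t^2, 6t, 6  at t = 2
h = f * g                             # t^5 and its derivatives at t = 2
```

The product `h` carries [t^5, 5t^4, 20t^3, 60t^2] at t = 2 without any symbolic differentiation, which is exactly the book-keeping the formalism automates for nested composite functions.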

  10. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    NASA Astrophysics Data System (ADS)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

    With the internet, anyone can publish their creations as digital data simply and inexpensively, making them easily accessible to everyone. However, a problem arises when someone else claims the creation as their own property or modifies part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, allows the watermark to be completely invisible when inserted in a carrier image. Ideally, the carrier image does not lose quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low-frequency bands is robust to Gaussian blur, rescaling, and JPEG compression attacks, whereas embedding in high-frequency bands is robust to Gaussian noise.
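
A simplified sketch of the additive transform-domain embedding with a small scaling factor α, as in the trade-off discussed above. This is not the paper's SVD-DCT or SVD-DWT scheme: it is a hypothetical one-dimensional DCT-domain example (with a pure-Python DCT for self-containment), where the band indices, α, and signal values are all illustrative.

```python
import math, random

def dct(x):                      # orthonormal DCT-II, O(N^2) for clarity
    N = len(x)
    s = [math.sqrt(1.0 / N)] + [math.sqrt(2.0 / N)] * (N - 1)
    return [s[k] * sum(x[n] * math.cos(math.pi * (2*n + 1) * k / (2*N))
                       for n in range(N)) for k in range(N)]

def idct(X):                     # inverse of the orthonormal DCT-II
    N = len(X)
    s = [math.sqrt(1.0 / N)] + [math.sqrt(2.0 / N)] * (N - 1)
    return [sum(s[k] * X[k] * math.cos(math.pi * (2*n + 1) * k / (2*N))
                for k in range(N)) for n in range(N)]

def embed(signal, mark, alpha=0.05):
    """Additively embed a +/-1 watermark in mid-band DCT coefficients."""
    X = dct(signal)
    for i, m in enumerate(mark):
        X[4 + i] += alpha * m    # band starting at 4: arbitrary for the sketch
    return idct(X)

def detect(signal, mark, alpha=0.05):
    """Correlate mid-band coefficients with the known mark."""
    X = dct(signal)
    return sum(X[4 + i] * m for i, m in enumerate(mark))

random.seed(1)
host = [random.uniform(0, 255) for _ in range(32)]
mark = [random.choice((-1, 1)) for _ in range(8)]
marked = embed(host, mark)
```

A small α keeps the watermarked signal close to the host (invisibility), while the correlation detector still separates marked from unmarked content (robustness); raising α strengthens detection at the cost of visibility, which is the trade-off the abstract describes.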

  11. A scientific report on heat transfer analysis in mixed convection flow of Maxwell fluid over an oscillating vertical plate

    NASA Astrophysics Data System (ADS)

    Khan, Ilyas; Shah, Nehad Ali; Dennis, L. C. C.

    2017-03-01

    This scientific report investigates heat transfer in mixed convection flow of a Maxwell fluid over an oscillating vertical plate with constant wall temperature. The problem is modelled in terms of coupled partial differential equations with initial and boundary conditions. Suitable non-dimensional variables are introduced to transform the governing problem into dimensionless form. The resulting problem is solved via the Laplace transform method, and exact solutions for velocity, shear stress, and temperature are obtained. These solutions are greatly influenced by the variation of embedded parameters, which include the Prandtl number and the Grashof number, for various times. In the absence of free convection, the corresponding solutions representing the mechanical part of velocity reduce to well-known solutions in the literature. The total velocity is presented as a sum of cosine and sine velocities. The unsteady velocity in each case is arranged in the form of transient and post-transient parts. It is found that the post-transient parts are independent of time. The solutions corresponding to Newtonian fluids are recovered as a special case, and a comparison between the Newtonian and Maxwell fluids is shown graphically.

  12. A scientific report on heat transfer analysis in mixed convection flow of Maxwell fluid over an oscillating vertical plate

    PubMed Central

    Khan, Ilyas; Shah, Nehad Ali; Dennis, L. C. C.

    2017-01-01

    This scientific report investigates heat transfer in mixed convection flow of a Maxwell fluid over an oscillating vertical plate with constant wall temperature. The problem is modelled in terms of coupled partial differential equations with initial and boundary conditions. Suitable non-dimensional variables are introduced to transform the governing problem into dimensionless form. The resulting problem is solved via the Laplace transform method, and exact solutions for velocity, shear stress, and temperature are obtained. These solutions are greatly influenced by the variation of embedded parameters, which include the Prandtl number and the Grashof number, for various times. In the absence of free convection, the corresponding solutions representing the mechanical part of velocity reduce to well-known solutions in the literature. The total velocity is presented as a sum of cosine and sine velocities. The unsteady velocity in each case is arranged in the form of transient and post-transient parts. It is found that the post-transient parts are independent of time. The solutions corresponding to Newtonian fluids are recovered as a special case, and a comparison between the Newtonian and Maxwell fluids is shown graphically. PMID:28294186

  13. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high-atomic-number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. The resulting quantitation accuracy is presented here.
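
The coefficient transformation step can be sketched as a 2x2 linear solve: at the low- and high-energy windows, the attenuation synthesized from the calibration basis is re-expressed in the desired basis by matching it at both energies. The attenuation values below are illustrative placeholders, not measured data.

```python
def solve2(a11, a12, a21, a22, r1, r2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11 * a22 - a12 * a21
    return (r1 * a22 - r2 * a12) / det, (a11 * r2 - a21 * r1) / det

# Hypothetical attenuation-like values at the (low E, high E) windows.
mu_acrylic = (0.25, 0.18)      # calibration basis component 1
mu_alumin  = (0.50, 0.30)      # calibration basis component 2
mu_iodmix  = (1.20, 0.45)      # desired basis component 1: iodine-water mix
mu_water   = (0.27, 0.19)      # desired basis component 2

def transform(a1, a2):
    """Map (acrylic, aluminum) coefficients to (iodine-mix, water) ones by
    matching the synthesized attenuation at both window energies."""
    r_low  = a1 * mu_acrylic[0] + a2 * mu_alumin[0]
    r_high = a1 * mu_acrylic[1] + a2 * mu_alumin[1]
    return solve2(mu_iodmix[0], mu_water[0],
                  mu_iodmix[1], mu_water[1], r_low, r_high)

b1, b2 = transform(0.8, 0.1)   # a voxel reconstructed as mostly acrylic
```

By construction the new coefficients reproduce the same attenuation at both window energies, which is the consistency condition the transformation enforces.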

  14. A method for optimizing the cosine response of solar UV diffusers

    NASA Astrophysics Data System (ADS)

    Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki

    2013-07-01

    Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with an angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication, as compared to purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors of these two detectors, which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance, were calculated to be f2 = 1.4% and 0.66%, respectively.
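
The integrated cosine error can be computed directly from an angular response curve. The sketch below uses one plausible convention (relative deviation from the ideal cosine, weighted for isotropic sky radiance); published definitions of f2 differ in detail, and the diffuser response model here is hypothetical.

```python
import math

def integrated_cosine_error(s, n=20000):
    """f2-style figure: deviation of s(theta) from the ideal cosine response,
    weighted for isotropic radiance.  One plausible convention; published
    definitions of the integrated cosine error vary between references."""
    num = den = 0.0
    for i in range(n):                          # midpoint rule on [0, pi/2]
        th = (i + 0.5) * (math.pi / 2) / n
        w = math.sin(th) * (math.pi / 2) / n    # solid-angle weight
        num += abs(s(th) / s(0.0) - math.cos(th)) * w
        den += math.cos(th) * w
    return num / den

ideal = lambda th: math.cos(th)
# Hypothetical diffuser: slightly over-responds toward grazing angles.
real = lambda th: math.cos(th) * (1.0 + 0.03 * math.sin(th) ** 2)
```

An ideal cosine response gives exactly zero, while the hypothetical 3% grazing-angle excess integrates to a small percent-level figure, the same order as the f2 values quoted above.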

  15. How the laser-induced ionization of transparent solids can be suppressed

    NASA Astrophysics Data System (ADS)

    Gruzdev, Vitaly

    2013-12-01

    A capability to suppress laser-induced ionization of dielectric crystals in a controlled and predictable way could potentially result in a substantial improvement of the laser damage threshold of optical materials. Traditional models that employ the Keldysh formula do not predict any suppression of the ionization because of the oversimplified description of electronic energy bands underlying the Keldysh formula. To address this gap, we performed numerical simulations of the time evolution of conduction-band electron density for a realistic cosine model of the electronic bands characteristic of wide-band-gap cubic crystals. The simulations include contributions from photo-ionization (evaluated by the Keldysh formula and by the formula for the cosine band of volume-centered cubic crystals) and from avalanche ionization (evaluated by the Drude model). The maximum conduction-band electron density is evaluated from a single rate equation as a function of the peak intensity of femtosecond laser pulses for alkali halide crystals. Results obtained for high-intensity femtosecond laser pulses demonstrate that the ionization can be suppressed by a proper choice of laser parameters. In the case of the Keldysh formula, the peak electron density exhibits saturation followed by a gradual increase. For the cosine band, the electron density increases with irradiance within the low-intensity multiphoton regime, then decreases as the intensity approaches the threshold of the strong singularity of the ionization rate that is characteristic of the cosine band. These trends are explained by specific modifications of the band structure by the electric field of the laser pulses.

  16. Inf-sup estimates for the Stokes problem in a periodic channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkening, Jon

    2007-06-27

    We derive estimates of the Babuska-Brezzi inf-sup constant {beta} for two-dimensional incompressible flow in a periodic channel with one flat boundary and the other given by a periodic, Lipschitz continuous function h. If h is a constant function (so the domain is rectangular), we show that periodicity in one direction but not the other leads to an interesting connection between {beta} and the unitary operator mapping the Fourier sine coefficients of a function to its Fourier cosine coefficients. We exploit this connection to determine the dependence of {beta} on the aspect ratio of the rectangle. We then show how to transfer this result to the case that h is C{sup 1,1} or even C{sup 0,1} by a change of variables. We avoid non-constructive theorems of functional analysis in order to explicitly exhibit the dependence of {beta} on features of the geometry such as the aspect ratio, the maximum slope, and the minimum gap thickness (if h passes near the substrate). We give an example to show that our estimates are optimal in their dependence on the minimum gap thickness in the C{sup 1,1} case, and nearly optimal in the Lipschitz case.

  17. Inf-sup estimates for the Stokes problem in a periodic channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkening, Jon

    2008-12-10

    We derive estimates of the Babuska-Brezzi inf-sup constant {beta} for two-dimensional incompressible flow in a periodic channel with one flat boundary and the other given by a periodic, Lipschitz continuous function h. If h is a constant function (so the domain is rectangular), we show that periodicity in one direction but not the other leads to an interesting connection between {beta} and the unitary operator mapping the Fourier sine coefficients of a function to its Fourier cosine coefficients. We exploit this connection to determine the dependence of {beta} on the aspect ratio of the rectangle. We then show how to transfer this result to the case that h is C{sup 1,1} or even C{sup 0,1} by a change of variables. We avoid non-constructive theorems of functional analysis in order to explicitly exhibit the dependence of {beta} on features of the geometry such as the aspect ratio, the maximum slope, and the minimum gap thickness (if h passes near the substrate). We give an example to show that our estimates are optimal in their dependence on the minimum gap thickness in the C{sup 1,1} case, and nearly optimal in the Lipschitz case.

  18. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method, while the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function, and the results are compared with those for a constant shape parameter.
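
The "determination of weight coefficients" step can be illustrated with the standard polynomial-based DQ formulas (Lagrange-interpolation weights), as a stand-in for the paper's cosine-expansion and RBF variants, which differ only in how these weights are obtained.

```python
def dq_weights(x):
    """First-derivative differential-quadrature weights a[i][j] such that
    f'(x_i) ~ sum_j a[i][j] f(x_j), exact for polynomials of degree < len(x).
    Standard Lagrange-polynomial DQ formulas (not the paper's CDQ/RBF-DQ)."""
    n = len(x)
    M = [1.0] * n                      # M[i] = prod over k != i of (x_i - x_k)
    for i in range(n):
        for k in range(n):
            if k != i:
                M[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

grid = [0.0, 0.3, 0.7, 1.0]
A = dq_weights(grid)
# d/dx of x^3 at every grid point: exact, since degree 3 = n - 1.
deriv = [sum(A[i][j] * grid[j] ** 3 for j in range(4)) for i in range(4)]
```

Each row of the weight matrix turns global function samples into a derivative at one point, which is the defining property of DQ stated in the abstract.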

  19. Multiscale image contrast amplification (MUSICA)

    NASA Astrophysics Data System (ADS)

    Vuylsteke, Pieter; Schoeters, Emile P.

    1994-05-01

    This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at some specific scale and at a specific position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremity, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
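
The decompose, amplify, reconstruct pipeline can be sketched in one dimension. This is a hypothetical two-level moving-average decomposition with a power-law detail gain, standing in for MUSICA's 2D basis-function decomposition and its published gain curve, which differ in detail.

```python
def smooth(x):
    """3-tap moving average with edge clamping (a stand-in for the paper's
    2D multiscale basis-function decomposition)."""
    n = len(x)
    return [(x[max(i-1, 0)] + x[i] + x[min(i+1, n-1)]) / 3.0 for i in range(n)]

def decompose(x, levels=2):
    bands = []
    for _ in range(levels):
        s = smooth(x)
        bands.append([a - b for a, b in zip(x, s)])   # detail at this scale
        x = s
    return bands, x                                    # details + residual

def reconstruct(bands, base):
    for d in reversed(bands):
        base = [a + b for a, b in zip(base, d)]
    return base

def amplify(d, p=0.7, m=64.0):
    """Power-law detail gain: boosts small coefficients more than large ones
    (illustrative; MUSICA's actual gain function differs in detail)."""
    return [(1 if c >= 0 else -1) * m * (abs(c) / m) ** p for c in d]

signal = [10, 10, 12, 40, 41, 39, 90, 91, 90, 30]
bands, base = decompose([float(v) for v in signal])
enhanced = reconstruct([amplify(d) for d in bands], base)
exact    = reconstruct(bands, base)      # identity gain recovers the input
```

With the identity gain the decomposition is perfectly invertible, and the exponent p < 1 raises low-amplitude detail relative to strong edges, which is the subtle-detail enhancement the abstract describes.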

  20. Analysis, design, and control of a transcutaneous power regulator for artificial hearts.

    PubMed

    Qianhong Chen; Siu Chung Wong; Tse, C K; Xinbo Ruan

    2009-02-01

    Based on a generic transcutaneous transformer model, a remote power supply using a resonant topology for use in artificial hearts is analyzed and designed for easy controllability and high efficiency. The primary and secondary windings of the transcutaneous transformer are positioned outside and inside the human body, respectively. In such a transformer, the alignment and gap may change with external positioning. As a result, the coupling coefficient of the transcutaneous transformer varies, and so do the two large leakage inductances and the mutual inductance. Resonant-tank circuits with varying resonant frequency are formed from the transformer inductors and external capacitors. For a given range of coupling coefficients, an operating frequency corresponding to a particular coupling coefficient can be found for which the voltage transfer function is insensitive to load. Prior works have used frequency modulation to regulate the output voltage under varying load and transformer coupling. The use of frequency modulation may require a wide control frequency range, which may extend well above the load-insensitive frequency. In this paper, a study of the input-to-output voltage transfer function is carried out, and a control method is proposed to lock the switching frequency just above the load-insensitive frequency for optimized efficiency at heavy loads. Specifically, above-resonance operation of the resonant circuits is maintained under varying coupling coefficient. Using a digital phase-locked loop (PLL), zero-voltage switching is achieved in a full-bridge converter, which is also programmed to provide output voltage regulation via pulse-width modulation (PWM). A prototype transcutaneous power regulator was built and found to perform excellently, with high efficiency and tight regulation under variations of the transcutaneous transformer's alignment or gap, the load, and the input voltage.

  1. High frequency wide-band transformer uses coax to achieve high turn ratio and flat response

    NASA Technical Reports Server (NTRS)

    De Parry, T.

    1966-01-01

    A center-tapped push-pull transformer with a toroidal core helically wound with a single coaxial cable forms a high-frequency wideband transformer. This transformer has a high turns ratio, a high coupling coefficient, and a flat broadband response.

  2. The whole number axis integer linear transformation reversible information hiding algorithm on wavelet domain

    NASA Astrophysics Data System (ADS)

    Jiang, Zhuo; Xie, Chengjun

    2013-12-01

    This paper improves the reversible integer linear transform algorithm on the finite interval [0, 255] so that it can realize a reversible integer linear transform on the whole number axis while shielding the data LSB (least significant bit). First, the method uses a lifting-scheme integer wavelet transform on the original image and selects the transformed high-frequency areas as the information-hiding region; the high-frequency coefficient blocks are then transformed in an integer linear way, and the secret information is embedded in the LSB of each coefficient. To extract the data bits and recover the host image, a similar reverse procedure is conducted, and the original host image can be losslessly recovered. Simulation results with the CDF(m, n) and DD(m, n) families of wavelet transforms show that the method has good secrecy and concealment. The method can be applied in information security domains such as medicine, law, and the military.

  3. Tasseled cap transformation for HJ multispectral remote sensing data

    NASA Astrophysics Data System (ADS)

    Han, Ling; Han, Xiaoyong

    2015-12-01

    The tasseled cap transformation of remote sensing data has been widely used in environmental, agricultural, forestry, and ecological applications. A tasseled cap transformation coefficient matrix for HJ multispectral data has been established, based on 24 year-round scenes of HJ multispectral remote sensing data, by using Givens rotation matrices to rotate the principal component transform vectors toward the whiteness, greenness, and blueness directions of ground objects. The whiteness component enhances brightness differences between ground objects; the greenness component preserves more detailed information about vegetation change while enhancing vegetation characteristics; and the blueness component significantly enhances factories with blue plastic roofs around towns and can also enhance the brightness of water. This coefficient matrix should broaden the effective application of HJ multispectral remote sensing data in these fields.

  4. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
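
The cosine resemblance can be checked deterministically: the time covariance of ideal diffusion is min(s, t), and after removing the time average (as PCA does), its leading eigenvector is almost exactly a half-period cosine. The sketch below verifies this with a pure-Python power iteration; the grid size and iteration count are arbitrary choices.

```python
import math

T = 120
# Covariance of ideal 1-D diffusion between discrete time steps: min(s, t).
M = [[float(min(s, t) + 1) for t in range(T)] for s in range(T)]

# Centre in time, since PCA removes the time-average of the trajectory.
row = [sum(r) / T for r in M]
tot = sum(row) / T
C = [[M[s][t] - row[s] - row[t] + tot for t in range(T)] for s in range(T)]

def power_iteration(A, iters=100):
    """Leading eigenvector of a symmetric PSD matrix."""
    v = [len(A) / 2.0 - i for i in range(len(A))]   # ramp start: overlaps PC1
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(r, v)) for r in A]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def corr(u, w):
    mu = sum(u) / len(u); mw = sum(w) / len(w)
    du = [x - mu for x in u]; dw = [x - mw for x in w]
    num = sum(a * b for a, b in zip(du, dw))
    return num / math.sqrt(sum(a * a for a in du) * sum(b * b for b in dw))

pc1 = power_iteration(C)
cosine = [math.cos(math.pi * (t + 0.5) / T) for t in range(T)]
```

The near-perfect correlation between `pc1` and the half-period cosine is exactly the noise signature the abstract warns about: a cosine-shaped principal component is consistent with unconverged, diffusion-like sampling rather than a genuine collective motion.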

  5. Cosine-Gaussian Schell-model sources.

    PubMed

    Mei, Zhangrong; Korotkova, Olga

    2013-07-15

    We introduce a new class of partially coherent sources of Schell type with cosine-Gaussian spectral degree of coherence and confirm that such sources are physically genuine. Further, we derive the expression for the cross-spectral density function of a beam generated by the novel source propagating in free space and analyze the evolution of the spectral density and the spectral degree of coherence. It is shown that at sufficiently large distances from the source the degree of coherence of the propagating beam assumes Gaussian shape while the spectral density takes on the dark-hollow profile.

  6. Magnetic field generator

    DOEpatents

    Krienin, Frank

    1990-01-01

    A magnetic field generating device provides a useful magnetic field within a specific region, while keeping nearby surrounding regions virtually field-free. By placing an appropriate current density along a flux line of the source, the stray field effects of the generator may be contained. One current carrying structure may support a truncated cosine distribution, and it may be surrounded by a current structure which follows a flux line that would occur in a full coaxial double cosine distribution. Strong magnetic fields may be generated and contained using superconducting cables to approximate the required current surfaces.

  7. Interaction phenomenon to dimensionally reduced p-gBKP equation

    NASA Astrophysics Data System (ADS)

    Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing

    2018-02-01

    By searching for combinations of quadratic and exponential (or hyperbolic cosine) functions in the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of these solutions, is observed in three-dimensional plots and density plots for particular choices of the parameters involved in the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interaction between the two solitary waves is inelastic.

  8. Angular measurement system

    NASA Technical Reports Server (NTRS)

    Currie, J. R.; Kissel, R. R.

    1986-01-01

    A system for the measurement of shaft angles is disclosed wherein a synchro resolver is sequentially pulsed and, alternately, a sine- and then a cosine-representative voltage output is sampled. Two succeeding outputs of like type, sine or cosine (V sub S1, V sub S2), are averaged and algebraically related to the opposite-type output pulse (V sub c) occurring between the averaged pulses to provide a precise indication of the angle of a shaft coupled to the resolver at the instant of the intermediately occurring pulse (V sub c).
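
For a steady shaft, the algebraic relation described above reduces to a two-argument arctangent of the averaged sine samples against the bracketed cosine sample. A hypothetical numerical sketch (the amplitude and angle are made-up values):

```python
import math

def shaft_angle(v_s1, v_c, v_s2):
    """Recover the shaft angle from two sine samples bracketing one cosine
    sample.  For a steady shaft the average of the two sine samples pairs
    exactly with the intermediate cosine sample; the common amplitude
    cancels in the atan2 ratio."""
    return math.atan2((v_s1 + v_s2) / 2.0, v_c)

theta = math.radians(37.0)             # true shaft angle (held steady)
amp = 5.0                              # resolver output amplitude (arbitrary)
measured = shaft_angle(amp * math.sin(theta),
                       amp * math.cos(theta),
                       amp * math.sin(theta))
```

Averaging the two like-type samples is what referencing both to the instant of the intermediate pulse buys: for a slowly turning shaft it cancels the first-order change in angle between samples.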

  9. A VLSI implementation of DCT using pass transistor technology

    NASA Technical Reports Server (NTRS)

    Kamath, S.; Lynn, Douglas; Whitaker, Sterling

    1992-01-01

    A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in real time, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.
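
The arithmetic at the heart of those MAC cells, radix-4 modified Booth recoding, can be modeled in software: the multiplier is recoded into digits from {-2, -1, 0, 1, 2}, halving the number of partial products relative to bit-serial multiplication. A behavioral sketch (not the paper's circuit):

```python
def booth_multiply(a, b, width=16):
    """Radix-4 modified Booth multiplication: recode the multiplier b into
    digits in {-2, -1, 0, 1, 2}, then sum shifted partial products.
    `width` must be even and wide enough to hold b in two's complement."""
    mask = (1 << width) - 1
    tb = b & mask                              # two's-complement image of b
    bit = lambda i: (tb >> i) & 1 if i >= 0 else 0
    acc = 0
    for i in range(width // 2):
        # Overlapping 3-bit window -> one radix-4 digit per 2 bits.
        digit = bit(2*i - 1) + bit(2*i) - 2 * bit(2*i + 1)
        acc += (digit * a) << (2 * i)          # partial product, 2-bit shift
    return acc
```

Each loop iteration corresponds to one partial-product row in the hardware array; the overlapping 3-bit windows guarantee the signed digits sum back to b exactly.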

  10. Vibration Power Flow In A Constrained Layer Damping Cylindrical Shell

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Zheng, Gangtie

    2012-07-01

    In this paper, the vibration power flow in a constrained layer damping (CLD) cylindrical shell is investigated using the wave propagation approach. The dynamic equations of the shell are derived with the Hamilton principle in conjunction with the Donnell shell assumption. With these equations, the dynamic responses of the system under a line circumferential cosine harmonic exciting force are obtained by employing the Fourier transform and the residue theorem. The vibration power flows input to the system and transmitted along the shell's axial direction are both studied. The results show that the input power flow varies with driving frequency and circumferential mode order, and that the constrained damping layer can markedly restrict the exciting force from inputting power flow into the base shell, especially for a thicker viscoelastic layer, a thicker or stiffer constraining layer (CL), or a higher circumferential mode order; it can also rapidly attenuate the vibration power flow transmitted along the axial direction of the base shell.

  11. Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision

    PubMed Central

    Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao

    2015-01-01

    In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic point trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is optimized. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863

  12. Method of detecting system function by measuring frequency response

    NASA Technical Reports Server (NTRS)

    Morrison, John L. (Inventor); Morrison, William H. (Inventor); Christophersen, Jon P. (Inventor)

    2012-01-01

    A real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from a one-time record, which enables battery diagnostics. The excitation current to the battery is a sum of equal-amplitude sine waves at frequencies that are octave harmonics spread over a range of interest. The sample frequency is also octave- and harmonically related to all frequencies in the sum. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average deleted, gives the impedance of the battery in the time domain. Since the excitation frequencies are known and octave-harmonically related, a simple algorithm, FST, processes the time record by rectifying it relative to the sine and cosine of each frequency. Another algorithm then yields the real and imaginary components for each frequency.
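
The rectify-against-sine-and-cosine step is synchronous detection, and the octave-harmonic frequency plan makes the tones exactly orthogonal over one period of the lowest frequency. A sketch with a hypothetical per-frequency amplitude and phase response standing in for the battery's impedance:

```python
import math

f0 = 1.0                               # lowest excitation frequency (Hz)
freqs = [f0, 2 * f0, 4 * f0]           # octave harmonics
fs = 64 * f0                           # sample rate, also octave-related
N = int(fs / f0)                       # one period of the lowest frequency

# Hypothetical battery response: each frequency returns with its own
# amplitude and phase (i.e., a complex impedance per frequency).
amps = [0.80, 0.50, 0.30]
phis = [0.20, 0.45, 0.90]
v = [sum(A * math.sin(2 * math.pi * f * n / fs + p)
         for A, p, f in zip(amps, phis, freqs)) for n in range(N)]

def detect(v, f, fs):
    """Rectify the record against sin/cos of one frequency and average:
    returns the (in-phase, quadrature) components at f."""
    N = len(v)
    i = 2.0 / N * sum(x * math.sin(2 * math.pi * f * n / fs)
                      for n, x in enumerate(v))
    q = 2.0 / N * sum(x * math.cos(2 * math.pi * f * n / fs)
                      for n, x in enumerate(v))
    return i, q

components = [detect(v, f, fs) for f in freqs]
```

Because f, 2f, and 4f complete whole numbers of cycles in the record, the cross terms cancel exactly, and each (in-phase, quadrature) pair recovers that frequency's amplitude and phase from the single time record, which is the essence of FST's one-shot spectrum.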

  13. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show the promise of the DCT-domain SR synthesis approach.

  14. Wide band stepped frequency ground penetrating radar

    DOEpatents

    Bashforth, Michael B.; Gardner, Duane; Patrick, Douglas; Lewallen, Tricia A.; Nammath, Sharyn R.; Painter, Kelly D.; Vadnais, Kenneth G.

    1996-01-01

    A wide band ground penetrating radar system (10) embodying a method wherein a series of radio frequency signals (60) is produced by a single radio frequency source (16) and provided to a transmit antenna (26) for transmission to a target (54) and reflection therefrom to a receive antenna (28). A phase modulator (18) modulates those portions of the radio frequency signals (62) that are to be transmitted, and the reflected modulated signal (62) is combined in a mixer (34) with the original radio frequency signal (60) to produce a resultant signal (53), which is demodulated to produce a series of direct current voltage signals (66), the envelope of which forms a cosine-wave-shaped plot (68). This plot is processed by a Fast Fourier Transform unit (44) into frequency domain data (70), wherein the position of a preponderant frequency is indicative of the distance to the target (54) and its magnitude is indicative of the signature of the target (54).
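
The principle behind the cosine-shaped envelope can be sketched numerically: stepping the carrier frequency makes the mixed-down DC voltage trace a cosine in the step index whose frequency is proportional to target range, so the transform's peak bin locates the target. The step size, step count, and range below are illustrative values chosen so the peak falls exactly on a bin.

```python
import math

c = 3.0e8                              # propagation speed (m/s)
N = 64                                 # number of frequency steps
df = 2.0e6                             # frequency step size (Hz): illustrative
d = 75.0 * 8 / 64                      # target range chosen to land on bin 8

# DC voltage per step: mixing the echo with the transmit tone leaves a
# cosine in the step index n with frequency proportional to range.
v = [math.cos(2 * math.pi * (2 * d * df / c) * n) for n in range(N)]

def dft_mag(x):
    """Naive DFT magnitude spectrum (stands in for the FFT unit)."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N))) for k in range(N)]

spectrum = dft_mag(v)
peak = max(range(1, N // 2), key=lambda k: spectrum[k])
range_estimate = peak * c / (2 * N * df)   # invert bin index -> range
```

The bin-to-range conversion shows the design trade directly: total swept bandwidth N*df sets the range resolution, while the number of steps sets the unambiguous range.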

  15. Method of detecting system function by measuring frequency response

    DOEpatents

    Morrison, John L [Butte, MT; Morrison, William H [Manchester, CT; Christophersen, Jon P [Idaho Falls, ID

    2012-04-03

    A real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from a one-time record, enabling battery diagnostics. The excitation current to the battery is a sum of equal-amplitude sine waves whose frequencies are octave harmonics spread over the range of interest. The sample frequency is also octave- and harmonically related to all frequencies in the sum. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known and octave- and harmonically related, a simple algorithm, FST, processes the time record by rectifying it relative to the sine and cosine of each frequency. Another algorithm then yields the real and imaginary components for each frequency.
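    The rectification step described above is essentially a sine/cosine correlation at each known excitation frequency. A sketch using plain lock-in-style correlation as a stand-in for the FST summation (the frequencies, sample rate, and synthetic impedances are illustrative):

```python
import numpy as np

freqs = np.array([1.0, 2.0, 4.0, 8.0])          # octave harmonics, Hz (assumed)
Z = np.array([0.9 - 0.30j, 0.8 - 0.20j,
              0.7 - 0.10j, 0.6 - 0.05j])         # synthetic impedances, ohms
fs = 256.0                                       # octave-related sample rate
T = 4.0                                          # a few periods of the lowest frequency
t = np.arange(0, T, 1 / fs)

# Voltage response: each unit-amplitude excitation sine is scaled and
# phase-shifted by the impedance at that frequency.
v = sum(abs(z) * np.sin(2 * np.pi * f * t + np.angle(z))
        for f, z in zip(freqs, Z))

# Correlate against the sine and cosine of each known frequency to
# recover real and imaginary parts (the cross terms average to zero
# because the harmonics are orthogonal over the record length).
Z_est = []
for f in freqs:
    re = 2 / len(t) * np.sum(v * np.sin(2 * np.pi * f * t))
    im = 2 / len(t) * np.sum(v * np.cos(2 * np.pi * f * t))
    Z_est.append(re + 1j * im)
Z_est = np.array(Z_est)
```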

  16. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  17. Quantum communication through an unmodulated spin chain.

    PubMed

    Bose, Sougato

    2003-11-14

    We propose a scheme for using an unmodulated and unmeasured spin chain as a channel for short distance quantum communications. The state to be transmitted is placed on one spin of the chain and received later on a distant spin with some fidelity. We first obtain simple expressions for the fidelity of quantum state transfer and the amount of entanglement sharable between any two sites of an arbitrary Heisenberg ferromagnet using our scheme. We then apply this to the realizable case of an open ended chain with nearest neighbor interactions. The fidelity of quantum state transfer is obtained as an inverse discrete cosine transform and as a Bessel function series. We find that in a reasonable time, a qubit can be directly transmitted with better than classical fidelity across the full length of chains of up to 80 spins. Moreover, our channel allows distillable entanglement to be shared over arbitrary distances.

  18. Some fundamentals regarding kinematics and generalized forces for multibody dynamics

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.

    1990-01-01

    In order to illustrate the various forms in which generalized forces can arise from diverse subsystem analyses in multibody dynamics, intrinsic dynamical equations for the rotational dynamics of a rigid body are derived from Hamilton's principle. Two types of generalized forces are derived: (1) those associated with the virtual rotation vector in some orthogonal basis, and (2) those associated with the variations of generalized coordinates. Because a physical or kinematical result (such as a frequency or a specific direction cosine) cannot depend on this choice, a 'blind' coupling of two models in which generalized forces are computed in different ways would be incorrect. Both models should either express the virtual rotation in the same basis, as in method 1, or use common rotational coordinates and their variations, as in method 2. Alternatively, the generalized forces and coordinates of one model may be transformed to those of the other.

  19. Incorporating Technology and Cooperative Learning to Teach Function Transformations

    ERIC Educational Resources Information Center

    Boz, Burçak; Erbilgin, Evrim

    2015-01-01

    When teaching transformations of functions, teachers typically have students vary the coefficients of equations and examine the resulting changes in the graph. This approach, however, may lead students to memorise rules related to transformations. Students need opportunities to think deeply about transformations beyond superficial observations…

  20. Heat transfer coefficient as parameter describing ability of insulating liquid to heat transfer

    NASA Astrophysics Data System (ADS)

    Nadolny, Zbigniew; Gościński, Przemysław; Bródka, Bolesław

    2017-10-01

    The paper presents the results of measurements of the heat transfer coefficient of insulating liquids used in transformers. The coefficient describes the ability of a liquid to transport heat, and from it the effectiveness of the cooling system of electric power devices can be estimated. The following liquids were used for the measurements: mineral oil, synthetic ester and natural ester. It was assumed that the surface heat load is about 2500 W·m-2, which is equal to the load of transformer windings. The height of the heating element was 1.6 m, because this makes possible a steady distribution of temperature on its surface. The measurements of the heat transfer coefficient were made as a function of the position of the heating element (vertical, horizontal). For the horizontal position, three orientations of the heating element were analysed: top, bottom, and side.

  1. High-resolution Fourier transform measurements of air-induced broadening and shift coefficients in the 0002-0000 main isotopologue band of nitrous oxide

    NASA Astrophysics Data System (ADS)

    Werwein, Viktor; Li, Gang; Serdyukov, Anton; Brunzendorf, Jens; Werhahn, Olav; Ebert, Volker

    2018-06-01

    In the present study, we report highly accurate air-induced broadening and shift coefficients for the nitrous oxide (N2O) 0002-0000 band at 2.26 μm of the main isotopologue, retrieved from high-resolution Fourier transform infrared (FTIR) measurements with metrologically determined pressure, temperature, absorption path length and chemical composition. Most of our retrieved air-broadening coefficients agree with previously generated datasets within the expanded uncertainties (95% confidence interval). For the air-shift coefficients, our results suggest a rotational dependence different from the literature. The present study benefits from improved measurement conditions and a detailed metrological uncertainty description. Compared to the literature, the uncertainties of the broadening and shift coefficients are improved by factors of up to 39 and up to 22, respectively.

  2. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)

  3. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. The phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which efficiently separates the frequency coefficients of the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been demonstrated by a clinical example of a woman affected by a recurrent tumor. PMID:25214889

  4. Discrete Haar transform and protein structure.

    PubMed

    Morosetti, S

    1997-12-01

    The discrete Haar transform of the sequence of the backbone dihedral angles (phi and psi) was performed over a set of X-ray protein structures of high resolution from the Brookhaven Protein Data Bank. Afterwards, new dihedral angles were calculated by the inverse transform, using a growing number of Haar functions, from the lower to the higher degree. New structures were obtained using these dihedral angles, with standard values for bond lengths and angles, and with omega = 0 degrees. The reconstructed structures were compared with the experimental ones, and analyzed by visual inspection and statistical analysis. When half of the Haar coefficients were used, the reconstructed structures had not yet collapsed to a tertiary folding, but they already exhibited most of the secondary motifs. These results indicate a substantial separation of structural information in the Haar transform space, with the secondary structural information mainly present in the Haar coefficients of lower degree and the tertiary information present in the higher-degree coefficients. Because of this separation, the representation of folded structures in Haar transform space seems a promising candidate for overcoming the problem of premature convergence in genetic algorithms.
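    The truncation experiment described above can be sketched with a hand-rolled orthonormal Haar transform: forward-transform a sequence, zero the finest detail coefficients, and invert. (This illustrates the transform itself, not the authors' protein pipeline; function names are ours.)

```python
import numpy as np

def haar_forward(x):
    """Orthonormal discrete Haar transform of a length-2^n sequence.
    Returns [final average, coarsest details, ..., finest details]."""
    x = np.asarray(x, float).copy()
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)
        det = (x[0::2] - x[1::2]) / np.sqrt(2)
        coeffs.append(det)          # details, fine to coarse
        x = avg
    coeffs.append(x)                # final average
    return coeffs[::-1]             # reorder coarse to fine

def haar_inverse(coeffs):
    """Invert haar_forward exactly."""
    x = coeffs[0].copy()
    for det in coeffs[1:]:
        nxt = np.empty(2 * len(x))
        nxt[0::2] = (x + det) / np.sqrt(2)
        nxt[1::2] = (x - det) / np.sqrt(2)
        x = nxt
    return x

def truncate(coeffs, levels_kept):
    """Keep the average plus the levels_kept coarsest detail levels,
    zeroing the finer ones (the 'lower degree' reconstruction)."""
    out = [c.copy() for c in coeffs]
    for i in range(1 + levels_kept, len(out)):
        out[i][:] = 0.0
    return out
```

    Reconstructing from only the low-degree coefficients yields a coarse, locally averaged version of the sequence, which mirrors the paper's observation that secondary-structure information survives truncation while fine tertiary detail does not.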

  5. Accurate determination of the diffusion coefficient of proteins by Fourier analysis with whole column imaging detection.

    PubMed

    Zarabadi, Atefeh S; Pawliszyn, Janusz

    2015-02-17

    Analysis in the frequency domain is considered a powerful tool for eliciting precise information from spectroscopic signals. In this study, the Fourier transformation technique is employed to determine the diffusion coefficient (D) of a number of proteins in the frequency domain. Analytical approaches are investigated for the determination of D from both experimental and data-treatment viewpoints. The diffusion process is modeled to calculate diffusion coefficients based on the Fourier transformation solution to Fick's law equation, and its results are compared to time-domain results. The simulations characterize optimum spatial and temporal conditions and demonstrate the noise tolerance of the method. The proposed model is validated by applying it to electropherograms from the diffusion path of a set of proteins. Real-time dynamic scanning is conducted to monitor dispersion by employing whole-column imaging detection technology in combination with capillary isoelectric focusing (CIEF) and the imaging plug flow (iPF) experiment. These experimental techniques provide different peak shapes, which are utilized to demonstrate the ability of the Fourier transformation to extract diffusion coefficients from irregularly shaped signals. Experimental results confirmed that the Fourier transformation procedure substantially enhanced the accuracy of the determined values compared to those obtained in the time domain.
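    The core of the frequency-domain approach is that each spatial Fourier mode of a diffusing profile decays as exp(-D q^2 t), so D can be read off the amplitude ratio of one mode at two times. A sketch under idealized assumptions (Gaussian profile, noiseless data; all parameter values are illustrative, not from the paper):

```python
import numpy as np

D_true = 1e-10      # m^2/s, a typical protein diffusion coefficient
L = 0.05            # column length, m
N = 1024
x = np.linspace(0, L, N, endpoint=False)
x0 = L / 2

def profile(t, sigma0=1e-4):
    """Gaussian concentration profile after diffusing for time t (s)."""
    s2 = sigma0**2 + 2 * D_true * t
    return np.exp(-(x - x0)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

t1, t2 = 60.0, 600.0
c1, c2 = np.fft.rfft(profile(t1)), np.fft.rfft(profile(t2))
q = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)   # angular spatial frequency

# Each mode decays as exp(-D q^2 t): estimate D from one mode's decay.
k = 10                                         # a well-resolved low mode
D_est = np.log(np.abs(c1[k]) / np.abs(c2[k])) / (q[k]**2 * (t2 - t1))
```

    Because the estimate uses only mode amplitudes, it is insensitive to the exact peak shape in real space, which is the advantage the abstract claims for irregularly shaped signals.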

  6. Impact of conditions at start-up on thermovibrational convective flow.

    PubMed

    Melnikov, D E; Shevtsova, V M; Legros, J C

    2008-11-01

    The development of thermovibrational convection in a cubic cell filled with a high-Prandtl-number liquid (isopropanol) is studied. Direct nonlinear simulations are performed by solving the three-dimensional Navier-Stokes equations in the Boussinesq approximation. The cell is subjected to high-frequency periodic oscillations perpendicular to the applied temperature gradient under zero gravity. Two types of vibrations are imposed: either a sine or a cosine function of time. It is shown that the initial vibrational phase plays a significant role in the transient behavior of thermovibrational convective flow. Such knowledge is important for correctly interpreting short-duration experiments performed in microgravity, among which the most accessible are drop towers (approximately 5 s) and parabolic flights (approximately 20 s). It is found that under sine vibrations the flow reaches steady state within less than one thermal time; under cosine acceleration this time is twice as long. For cosine excitations, the Nusselt number is approximately 10 times smaller than in the sine case. Moreover, in the cosine case the Nusselt number oscillates at double the frequency. However, at steady state, the time-averaged and oscillatory characteristics of the flow are independent of the vibrational start-up. The only feature that always distinguishes the two cases is the phase difference between the velocity, temperature, and acceleration. We have found that, due to the nonlinear response of the system to the imposed vibrations, the phase shift between velocity and temperature is never exactly equal to π/2, at least in weightlessness. Thus, heat transport always exists from the beginning of vibrations, although it might be weak.

  7. Enhancing micro-seismic P-phase arrival picking: EMD-cosine function-based denoising with an application to the AIC picker

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun

    2018-03-01

    Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high-frequency noise and power-frequency noise (50 Hz), which can considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low-pass filters, which are ineffective when the seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low-pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results show that the EMD-cosine function denoising can effectively estimate high-frequency and power-frequency noise and can easily be adapted to signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the AIC picker, as well as more reliable source localization results, thus showing the potential of this combined P-phase picking technique.
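    The AIC picker that the denoising feeds into is itself simple: it partitions the trace at the sample minimizing a two-segment variance criterion. A minimal stand-alone version (the synthetic trace and onset are ours, not from the IMS dataset):

```python
import numpy as np

def aic_pick(x):
    """Akaike-Information-Criterion onset picker for a single trace.

    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])); the global
    minimum marks the best noise/signal partition point.
    """
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: low-amplitude noise, then a stronger arriving phase
rng = np.random.default_rng(0)
n = 1000
onset = 600
trace = 0.1 * rng.standard_normal(n)
trace[onset:] += np.sin(2 * np.pi * 0.05 * np.arange(n - onset))

pick = aic_pick(trace)
```

    On noisy field data the picker is only as good as the variance contrast at the arrival, which is why the EMD-based denoising step above matters.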

  8. Nocturnal heart rate variability in 1-year-old infants analyzed by using the Least Square Cosine Spectrum Method.

    PubMed

    Kochiya, Yuko; Hirabayashi, Akari; Ichimaru, Yuhei

    2017-09-16

    To evaluate the dynamic nature of nocturnal heart rate variability, RR intervals recorded with a wearable heart rate sensor were analyzed using the Least Square Cosine Spectrum Method. Six 1-year-old infants participated in the study. A wearable heart rate sensor was placed on their chests to measure RR intervals and 3-axis acceleration. The heartbeat time series was analyzed in 30-s segments using the Least Square Cosine Spectrum Method, and an original parameter quantifying the regularity of the respiratory-related heart rate rhythm was extracted, referred to as "RA" (RA-COSPEC: Respiratory Area obtained by COSPEC). The RA value is higher when a cosine curve fits the original data series well. The time-sequential changes of RA showed cyclic changes with significant rhythm during the night. The mean cycle length of RA was 70 ± 15 min, which is shorter than the cycle of young adults in our previous study. When RA was above a threshold of 3, the HR was significantly decreased compared with RA values below 3. The regularity of the heart rate rhythm showed dynamic changes during the night in 1-year-old infants. The significant decrease of HR at times of higher RA suggests an increase in parasympathetic activity. We suspect that higher RA reflects a regular respiratory pattern during the night. This analysis system may be useful for quantitative assessment of the regularity and dynamic changes of nocturnal heart rate variability in infants.
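    The least-squares cosine idea, fitting a cosine of trial frequency to a short window and reading off the fitted amplitude, can be sketched as follows (the window length, sampling, and 0.4 Hz respiratory component are illustrative; real RR series are irregularly sampled, which is exactly where a least-squares fit beats a plain FFT):

```python
import numpy as np

def ls_cosine_spectrum(t, y, freqs):
    """For each trial frequency f, fit y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c
    by least squares and return the fitted amplitude sqrt(a^2 + b^2)."""
    amps = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        amps.append(np.hypot(coef[0], coef[1]))
    return np.array(amps)

# RR-interval-like series with a respiratory-frequency oscillation (~0.4 Hz)
t = np.arange(0, 30, 0.25)                    # a 30-s analysis window
y = 0.8 + 0.05 * np.cos(2 * np.pi * 0.4 * t + 0.3)
freqs = np.arange(0.1, 1.0, 0.05)
amps = ls_cosine_spectrum(t, y, freqs)
peak_f = freqs[np.argmax(amps)]
```

    Nothing in the fit requires the samples in `t` to be equally spaced, so the same code applies directly to beat-by-beat RR times.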

  9. Low frequency AC waveform generator

    DOEpatents

    Bilharz, Oscar W.

    1986-01-01

    Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.

  10. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered to calculate the cosine factor which is utilized in Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different number of mirrors are taken into consideration. Cosine factor is determined by considering instantaneous and time-average approaches. For instantaneous method, different design days and design hours are selected. For the time average method, daily time average, monthly time average, seasonally time average, and yearly time averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small scale heliostat field optimization. Consequently, it is proposed to consider the design period as the second design variable to ensure the best outcome. For medium and large scale heliostat fields, selecting an appropriate design period is more important. Therefore, it is more reliable to select one of the recommended time average methods to optimize the field layout. Optimum annual weighted efficiency for heliostat fields (small, medium, and large) containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
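    The cosine factor in question follows from heliostat geometry: the mirror normal must bisect the sun and receiver directions, so the effective reflecting area is scaled by the cosine of the incidence half-angle. A geometric sketch (function and variable names are ours):

```python
import numpy as np

def cosine_factor(sun_dir, helio_pos, receiver_pos):
    """Cosine factor of one heliostat: the mirror normal bisects the
    sun direction and the heliostat-to-receiver direction, so the
    effective area is scaled by cos(theta) = s . n."""
    s = sun_dir / np.linalg.norm(sun_dir)          # unit vector toward the sun
    r = receiver_pos - helio_pos
    r = r / np.linalg.norm(r)                      # unit vector toward the receiver
    n = (s + r) / np.linalg.norm(s + r)            # bisecting mirror normal
    return float(np.dot(s, n))
```

    An instantaneous design evaluates this factor at one design hour; the time-averaged variants discussed above average it over sampled sun directions for a day, month, season, or year.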

  11. Watermarking scheme for authentication of compressed image

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo

    2003-11-01

    As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding watermark in compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of being attacked on the unwatermarked coefficients. The watermarking is done through registering/blending the zero-valued coefficients with a binary sequence to create the watermark and involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of the watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformation such as cropping, and common signal operations such as lowpass filtering.

  12. A robust method of computing finite difference coefficients based on Vandermonde matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Gao, Jinghuai; Peng, Jigen; Han, Weimin

    2018-05-01

    When the finite difference (FD) method is employed to simulate wave propagation, a high-order FD method is preferred in order to achieve better accuracy. However, if the order of the FD scheme is high enough, the coefficient matrix of the formula for calculating the finite difference coefficients is close to singular. In this case, when the FD coefficients are computed by the matrix inverse operator of MATLAB, inaccurate results can be produced. To overcome this problem, we suggest an algorithm based on the Vandermonde matrix. After a suitable mathematical transformation, the coefficient matrix is transformed into a Vandermonde matrix, and the FD coefficients of the high-order FD method can then be computed by a dedicated Vandermonde algorithm, which avoids inverting the nearly singular matrix. The dispersion analysis and numerical results for a homogeneous elastic model and a geophysical model of an oil and gas reservoir demonstrate that the Vandermonde-based algorithm has better accuracy than the matrix inverse operator of MATLAB.
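    The Vandermonde system in question is the standard one for FD weights: requiring the stencil to be exact on monomials gives sum_i c_i * x_i^k = k! * delta(k, m). The sketch below sets up that system explicitly; it uses a generic dense solve rather than the specialized Vandermonde recursion the paper advocates, so it illustrates the formulation, not the paper's numerical remedy:

```python
import numpy as np
from math import factorial

def fd_coefficients(nodes, m):
    """Finite-difference weights for the m-th derivative at x = 0,
    from the Vandermonde system  sum_i c_i * x_i**k = k! * delta(k, m)."""
    nodes = np.asarray(nodes, float)
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T   # V[k, i] = x_i**k
    rhs = np.zeros(n)
    rhs[m] = factorial(m)
    return np.linalg.solve(V, rhs)
```

    For long stencils this V becomes badly conditioned, which is precisely the failure mode the abstract describes; structure-exploiting Vandermonde solvers sidestep the general-purpose inverse.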

  13. Determination of the Accomodation Coefficient Using Vapor/Gas Bubble Dynamics in an Acoustic Field

    NASA Technical Reports Server (NTRS)

    Gumerov, Nail A.

    1999-01-01

    Non-equilibrium liquid/vapor phase transformations can occur in superheated or subcooled liquids in fast processes such as evaporation in a vacuum, processing of molten metals, and vapor explosions. The rate at which such a phase transformation occurs can be described by the Hertz-Knudsen-Langmuir formula. More than a century of accommodation coefficient measurements shows many problems with its determination. The coefficient depends on the temperature, is sensitive to the conditions at the interface, and is influenced by small amounts of impurities. Even recent measurements of the accommodation coefficient for water (Hagen et al., 1989) showed a huge variation, from 1 for 1-micron droplets to 0.006 for 15-micron droplets. Moreover, existing measurement techniques for the accommodation coefficient are complex and expensive. Thus the development of a relatively inexpensive and reliable technique for measuring the accommodation coefficient for a wide range of substances and temperatures is of great practical importance.

  14. Coefficient Alpha and Reliability of Scale Scores

    ERIC Educational Resources Information Center

    Almehrizi, Rashid S.

    2013-01-01

    The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (a; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…

  15. Cosine problem in EPRL/FK spinfoam model

    NASA Astrophysics Data System (ADS)

    Vojinović, Marko

    2014-01-01

    We calculate the classical limit effective action of the EPRL/FK spinfoam model of quantum gravity coupled to matter fields. By employing the standard QFT background field method adapted to the spinfoam setting, we find that the model has many different classical effective actions. Most notably, these include the ordinary Einstein-Hilbert action coupled to matter, but also an action which describes antigravity. All those multiple classical limits appear as a consequence of the fact that the EPRL/FK vertex amplitude has cosine-like large spin asymptotics. We discuss some possible ways to eliminate the unwanted classical limits.

  16. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high-frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, both in visual effect and objective evaluation.
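    The local-energy fusion rule for the high-frequency subbands can be sketched independently of NSST: at each position, keep the coefficient whose squared magnitude, summed over a small window, is larger. (The window size and function names are ours; the paper's exact rule may differ in detail.)

```python
import numpy as np

def fuse_highfreq(a, b, win=3):
    """Local-energy fusion of two high-frequency subbands: at each pixel
    keep the coefficient whose win x win windowed energy is larger."""
    pad = win // 2

    def local_energy(c):
        cp = np.pad(c**2, pad, mode="edge")
        out = np.zeros(c.shape, dtype=float)
        # Sum of squares over the win x win neighbourhood via shifts
        for di in range(win):
            for dj in range(win):
                out += cp[di:di + c.shape[0], dj:dj + c.shape[1]]
        return out

    return np.where(local_energy(a) >= local_energy(b), a, b)
```

    The windowed energy makes the choice robust to single-pixel noise, compared with picking the larger absolute coefficient pointwise.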

  17. Cone-shaped source characteristics and inductance effect of transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Yang, Hai-Yan; Li, Feng-Ping; Yue, Jian-Hua; Guo, Fu-Sheng; Liu, Xu-Hua; Zhang, Hua

    2017-03-01

    Small multi-turn coil devices are used with the transient electromagnetic method (TEM) in areas with limited space, particularly in underground environments such as coal-mine roadways and engineering tunnels, and for detecting shallow geological targets in environmental and engineering fields. However, the equipment involved has strong mutual inductance coupling, which causes a lengthy turn-off time and a deep "blind zone". This study proposes a new transmitter device with a conical-shaped source and derives the radius formula of each coil and the mutual inductance coefficient of the cone. Theoretical analysis of the primary field of the conical source in a uniform medium, together with a comparison of the inductance of the new device with that of the multi-turn coil, shows that the inductance of the multi-turn coil is nine times greater than that of the conical source for the same equivalent magnetic moment of 926.1 A·m2. This indicates that the new source leads to a much shallower "blind zone". Furthermore, increasing the bottom radius and the number of turns of the cone creates a larger mutual inductance, but increasing the cone height results in a lower mutual inductance. Using the superposition principle, the primary and secondary magnetic fields for a conical source in a homogeneous medium are calculated; the results indicate that the magnetic behavior of the cone is the same as that of the multi-turn coils, but the transient responses of the secondary field and the total field are stronger than those of the multi-turn coils. To study the transient response characteristics of a cone-shaped source in a layered earth, a numerical filtering algorithm is then developed using the fast Hankel transform and the improved cosine transform, again using the superposition principle. An average apparent resistivity, inverted from the induced electromotive force of each coil, is defined to represent the comprehensive resistivity of the conical source. To verify the forward calculation method, the transient responses of H-type and KH-type models are calculated, and the data are inverted using a "smoke ring" inversion. The inversion results agree well with the original models and show that the forward calculation method is effective. The results of this study provide an option for solving the problem of a deep "blind zone" and also provide a theoretical indicator for further research.

  18. Fourier analysis of human soft tissue facial shape: sex differences in normal adults.

    PubMed Central

    Ferrario, V F; Sforza, C; Schmitz, J H; Miani, A; Taroni, G

    1995-01-01

    Sexual dimorphism in human facial form involves both size and shape variations of the soft tissue structures. These variations are conventionally assessed using linear and angular measurements, as well as ratios, taken from photographs or radiographs. Unfortunately, this metric approach provides adequate quantitative information about size only, eluding the problem of shape definition. Mathematical methods such as the Fourier series allow a correct quantitative analysis of shape and of its changes. A method for the reconstruction of outlines starting from selected landmarks, and for their Fourier analysis, has been developed and applied to analyse sex differences in the shape of the soft tissue facial contour in a group of healthy young adults. When standardised for size, no sex differences were found in either the cosine or the sine coefficients of the Fourier series expansion. This shape similarity was largely overwhelmed by the very evident size differences and could be measured only using the proper mathematical methods. PMID:8586558
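    The size/shape separation exploited above can be illustrated with the simplest outline parameterization: sample the contour as radii at equally spaced angles, expand in cosine and sine harmonics, and normalise by the mean radius so that size drops out. (The sampling density and harmonic count are illustrative, not the paper's protocol.)

```python
import numpy as np

def fourier_shape_coeffs(r, n_harm=5):
    """Cosine/sine Fourier coefficients of a closed outline sampled as
    radii r(theta) at equally spaced angles. a0 carries size, so the
    normalised coefficients a_k/a0, b_k/a0 describe shape alone."""
    N = len(r)
    theta = 2 * np.pi * np.arange(N) / N
    a0 = r.mean()
    a = np.array([2 / N * np.sum(r * np.cos(k * theta))
                  for k in range(1, n_harm + 1)])
    b = np.array([2 / N * np.sum(r * np.sin(k * theta))
                  for k in range(1, n_harm + 1)])
    return a0, a / a0, b / a0

# Two circles of different size have identical (zero) shape coefficients
small = np.full(256, 1.0)
large = np.full(256, 2.5)
_, a1, b1 = fourier_shape_coeffs(small)
_, a2, b2 = fourier_shape_coeffs(large)
```

    Comparing the normalised coefficient sets between groups, as done above for males and females, tests shape while ignoring overall size.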

  19. Phytochemical screening and chemical variability in volatile oils of aerial parts of Morinda morindoides.

    PubMed

    Kiazolu, J Boima; Intisar, Azeem; Zhang, Lingyi; Wang, Yun; Zhang, Runsheng; Wu, Zhongping; Zhang, Weibing

    2016-10-01

    Morinda morindoides is an important Liberian traditional medicine for the treatment of malaria, fever, worms, etc. The plant was subjected to integrated approaches including phytochemical screening and gas chromatography-mass spectrometry (GC-MS) analyses. Phytochemical investigation of the powdered plant revealed the presence of phenolics, tannins, flavonoids, saponins, terpenes, steroidal compounds and volatile oil. Steam distillation followed by GC-MS resulted in the identification of 47 volatiles in its aerial parts, 28 of which, including various bioactive volatiles, were common to leaves and stem. Major constituents of the leaves were phytol (43.63%), palmitic acid (8.55%) and geranyl linalool (6.95%), and of the stem were palmitic acid (14.95%), eicosane (9.67%) and phytol (9.31%); hence, a significant difference in the percentage composition of the aerial parts was observed. To study seasonal changes, similarity analysis was carried out by calculating the correlation coefficient (r) and vector angle cosine (z), which were more than 0.91 for stem-to-stem and leaf-to-leaf batches, indicating considerable consistency.
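    The two similarity measures used above are the Pearson correlation coefficient r and the vector-angle cosine z between composition vectors. A minimal sketch (the percentage vectors are illustrative, not the paper's data):

```python
import numpy as np

def similarity(p, q):
    """Correlation coefficient r and vector-angle cosine z between two
    percentage-composition vectors over the same set of volatiles."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    r = np.corrcoef(p, q)[0, 1]                                  # Pearson r
    z = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))   # cos(angle)
    return r, z

leaf_batch1 = [43.63, 8.55, 6.95, 2.1, 1.4]   # illustrative percentages
leaf_batch2 = [41.20, 9.10, 7.30, 2.5, 1.1]
r, z = similarity(leaf_batch1, leaf_batch2)
```

    Values of both measures near 1, as reported above for the seasonal batches, indicate compositions that are nearly proportional to one another.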

  20. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    NASA Astrophysics Data System (ADS)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated how tumor-targeted nanoparticles influence localized heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in surrounding areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (the mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level sufficient to burn the targeted tumor. The amount of heat generated by inserting smart agents, owing to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissue are simulated by Mie theory and the Monte Carlo method, respectively; the latter is a statistical method that accurately tracks photon trajectories through the simulation volume.

  1. Impact of Aspect Ratio, Incident Angle, and Surface Roughness on Windbreak Wakes

    NASA Astrophysics Data System (ADS)

    Tobin, Nicolas; Chamorro, Leonardo P.

    2017-11-01

    Wind-tunnel results are presented on the wakes behind three-dimensional windbreaks in a simulated atmospheric boundary layer. Sheltering by upwind windbreaks, and surface-mounted obstacles (SMOs) in general, is parameterized by the wake-moment coefficient C_h, which is a complex function of obstacle geometry and flow conditions. Values of C_h are presented for several windbreak aspect ratios, incident angles, and windbreak-height-to-surface-roughness ratios. Lateral wake deflection is further presented for several incident angles and aspect ratios, and compared to a simple analytical formulation including a near- and far-wake solution. It is found that C_h does not change with aspect ratios of 10 or greater, though C_h may be lower for an aspect ratio of 5. C_h is found to change roughly with the cosine of the incident angle, and to depend strongly on the windbreak-height-to-surface-roughness ratio. The data broadly support the proposed wake-deflection model.

  2. Estimates of oceanic surface wind speed and direction using orthogonal beam scatterometer measurements and comparison of recent sea scattering theories

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Fung, A. K.; Dome, G. J.; Birrer, I. J.

    1978-01-01

    The wind direction properties of radar backscatter from the sea were empirically modelled using a cosine Fourier series through the 4th harmonic in wind direction (referenced to upwind). A comparison with 1975 JONSWAP (Joint North Sea Wave Project) scatterometer data, at incidence angles of 40° and 65°, indicates that the contributions of the third and fourth harmonics are negligible. Another important result is that the Fourier coefficients through the second harmonic are related to wind speed by a power-law expression. A technique is also proposed to estimate the wind speed and direction over the ocean from two orthogonal scattering measurements. A comparison between two different types of sea scatter theories, one type represented by the work of Wright and the other by that of Chan and Fung, was made with recent scatterometer measurements. It demonstrates that a complete scattering model must include some provision for the anisotropic characteristics of sea scatter, and use a sea spectrum that depends upon wind speed.
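
    The truncated cosine series described above can be fitted by linear least squares. A minimal sketch under assumed (invented) coefficient values, keeping only the harmonics the abstract finds significant:

```python
import numpy as np

# Model: sigma0(chi) = A0 + A1*cos(chi) + A2*cos(2*chi),
# chi = azimuth relative to upwind. Coefficient values are illustrative only.
chi = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
A0_true, A1_true, A2_true = 1.0, 0.25, 0.4
sigma0 = A0_true + A1_true * np.cos(chi) + A2_true * np.cos(2 * chi)

# Design matrix for the truncated cosine series, solved by least squares
X = np.column_stack([np.ones_like(chi), np.cos(chi), np.cos(2 * chi)])
A0, A1, A2 = np.linalg.lstsq(X, sigma0, rcond=None)[0]
```

    In the retrieval technique the abstract proposes, two such measurements at orthogonal azimuths constrain the harmonics, and the power-law relation between the coefficients and wind speed closes the system.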

  3. Comparison of (GTG)5-oligonucleotide and ribosomal intergenic transcribed spacer (ITS)-PCR for molecular typing of Klebsiella isolates.

    PubMed

    Ryberg, Anna; Olsson, Crister; Ahrné, Siv; Monstein, Hans-Jürg

    2011-02-01

    Molecular typing of Klebsiella species has become important for monitoring dissemination of β-lactamase producers in hospital environments. The present study was designed to evaluate poly-trinucleotide (GTG)(5)- and rDNA intergenic transcribed spacer (ITS)-PCR fingerprint analysis for typing of Klebsiella pneumoniae and Klebsiella oxytoca isolates. Multiple displacement amplified DNA derived from 19 K. pneumoniae isolates (some with an ESBL phenotype), 35 K. oxytoca isolates, and five K. pneumoniae, two K. oxytoca, three Raoultella, and one Enterobacter aerogenes type and reference strains underwent (GTG)(5) and ITS-PCR analysis. Dendrograms were constructed using the cosine coefficient and the neighbour-joining method. (GTG)(5) and ITS-PCR analysis revealed that K. pneumoniae and K. oxytoca isolates, reference and type strains formed distinct cluster groups, and tentative subclusters could be established. We conclude that (GTG)(5) and ITS-PCR analysis combined with automated capillary electrophoresis provides promising tools for molecular typing of Klebsiella isolates. Copyright © 2010 Elsevier B.V. All rights reserved.
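
    The cosine coefficient used for the dendrograms above is the pairwise cosine similarity between fingerprint band patterns. A minimal sketch with invented binary presence/absence profiles (the study's actual profiles come from capillary electrophoresis):

```python
import numpy as np

# Hypothetical PCR fingerprint band patterns (1 = band present at that position)
profiles = np.array([
    [1, 1, 0, 1, 0, 1],   # e.g. K. pneumoniae isolate 1
    [1, 1, 0, 1, 0, 0],   # e.g. K. pneumoniae isolate 2
    [0, 1, 1, 0, 1, 1],   # e.g. K. oxytoca isolate
], dtype=float)

# Pairwise cosine similarity matrix; clustering (e.g. neighbour joining)
# would then operate on 1 - S as a distance matrix
norms = np.linalg.norm(profiles, axis=1)
S = (profiles @ profiles.T) / np.outer(norms, norms)
```

    Here the two similar fingerprints score higher against each other than against the third, which is how distinct cluster groups emerge.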

  4. On Association Coefficients for 2x2 Tables and Properties that Do Not Depend on the Marginal Distributions

    ERIC Educational Resources Information Center

    Warrens, Matthijs J.

    2008-01-01

    We discuss properties that association coefficients may have in general, e.g., zero value under statistical independence, and we examine coefficients for 2x2 tables with respect to these properties. Furthermore, we study a family of coefficients that are linear transformations of the observed proportion of agreement given the marginal…

  5. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is derived theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is comparable to the number of object pixels, the rank of the discrete Fourier transform matrix equals that of the random measurement matrix; the PSNRs of images reconstructed by the FGI and PGI algorithms are then similar, while the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, whereas the PSNRs of the PGI and CGI reconstructions decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, achieving a denoising capability higher than that of the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
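
    The core reconstruction step, measuring with rows of a DFT matrix and inverting with the pseudo-inverse, can be sketched on a 1-D toy object. This is only an illustration of the linear algebra, not the paper's imaging setup:

```python
import numpy as np

# N x N DFT measurement matrix: row m is the m-th illumination pattern
N = 16
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Toy 1-D "object" with two bright pixels (values invented)
x = np.zeros(N)
x[3], x[9] = 1.0, 0.5

y = F @ x                        # simulated bucket measurements
x_hat = np.linalg.pinv(F) @ y    # pseudo-inverse reconstruction
```

    With a full set of measurements F has full rank, so the pseudo-inverse recovers the object exactly; dropping measurement rows is what degrades the reconstruction, as the abstract discusses.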

  6. Season of Sampling and Season of Birth Influence Serotonin Metabolite Levels in Human Cerebrospinal Fluid

    PubMed Central

    Luykx, Jurjen J.; Bakker, Steven C.; Lentjes, Eef; Boks, Marco P. M.; van Geloven, Nan; Eijkemans, Marinus J. C.; Janson, Esther; Strengman, Eric; de Lepper, Anne M.; Westenberg, Herman; Klopper, Kai E.; Hoorn, Hendrik J.; Gelissen, Harry P. M. M.; Jordan, Julian; Tolenaar, Noortje M.; van Dongen, Eric P. A.; Michel, Bregt; Abramovic, Lucija; Horvath, Steve; Kappen, Teus; Bruins, Peter; Keijzers, Peter; Borgdorff, Paul; Ophoff, Roel A.; Kahn, René S.

    2012-01-01

    Background Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. Methodology/Principal Findings Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MHPG) were measured using high performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season-of-sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a similar pattern to the cosine in both the season-of-sampling and season-of-birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. Conclusion In subjects without mental illness, 5-HT turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall. PMID:22312427
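
    A circannual cosine model of this kind is linear once expanded into cosine and sine terms. The sketch below fits it by ordinary least squares rather than the quantile regression used in the study, on synthetic monthly means chosen to mimic the reported April peak and October trough:

```python
import numpy as np

# Model: y(t) = c + a*cos(w*t) + b*sin(w*t), w = 2*pi/12 (12-month period).
# Synthetic monthly 5-HIAA means (nmol/L), invented to peak in month 4 (April).
months = np.arange(1, 13)
w = 2 * np.pi / 12
y = 140 + 33 * np.cos(w * (months - 4))

X = np.column_stack([np.ones(12), np.cos(w * months), np.sin(w * months)])
c, a, b = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(a, b)                     # peak-to-mean swing
peak_month = months[np.argmax(X @ [c, a, b])]  # month of the fitted zenith
```

    The fitted mean ± amplitude (140 ± 33) reproduces predicted extremes of 173 and 107 nmol/L, close to the PCmax/PCmin reported above.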

  7. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems that involve latent variables. One method for analyzing a model representing such a system is path analysis. Latent variables measured with questionnaires based on an attitude-scale model yield data in the form of scores, which must be transformed to scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained with the method of successive intervals (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better: the transformation that produces scale data yielding path coefficients with smaller variances is the more efficient. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that the MSI and SRS transformations are equally efficient. For simulation data with high correlation between items (0.7-0.9), however, the MSI method is about 1.3 times more efficient than the SRS method.

  8. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the chromatic (λ) and reciprocal (ν) aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λ_h1h2h3 = λ_111 with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ_111, the phase-velocity factor ν_λ = λ_h1h2h3/λ_111 becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  9. Broadband W-band Rapid Frequency Sweep Considerations for Fourier Transform EPR.

    PubMed

    Strangeway, Robert A; Hyde, James S; Camenisch, Theodore G; Sidabras, Jason W; Mett, Richard R; Anderson, James R; Ratke, Joseph J; Subczynski, Witold K

    2017-12-01

    A multi-arm W-band (94 GHz) electron paramagnetic resonance spectrometer that incorporates a loop-gap resonator with high bandwidth is described. A goal of the instrumental development is detection of free induction decay following rapid sweep of the microwave frequency across the spectrum of a nitroxide radical at physiological temperature, which is expected to lead to a capability for Fourier transform electron paramagnetic resonance. Progress toward this goal is a theme of the paper. Because of the low Q-value of the loop-gap resonator, it was found necessary to develop a new type of automatic frequency control, which is described in an appendix. Path-length equalization, which is accomplished at the intermediate frequency of 59 GHz, is analyzed. A directional coupler is favored for separation of incident and reflected power between the bridge and the loop-gap resonator. Microwave leakage of this coupler is analyzed. An oversize waveguide with hyperbolic-cosine tapers couples the bridge to the loop-gap resonator, which results in reduced microwave power and signal loss. Benchmark sensitivity data are provided. The most extensive application of the instrument to date has been the measurement of T1 values using pulse saturation recovery. An overview of that work is provided.

  10. The effects of spatial autoregressive dependencies on inference in ordinary least squares: a geometric approach

    NASA Astrophysics Data System (ADS)

    Smith, Tony E.; Lee, Ka Lok

    2012-01-01

    There is a common belief that the presence of residual spatial autocorrelation in ordinary least squares (OLS) regression leads to inflated significance levels in beta coefficients and, in particular, inflated levels relative to the more efficient spatial error model (SEM). However, our simulations show that this is not always the case. Hence, the purpose of this paper is to examine this question from a geometric viewpoint. The key idea is to characterize the OLS test statistic in terms of angle cosines and examine the geometric implications of this characterization. Our first result is to show that if the explanatory variables in the regression exhibit no spatial autocorrelation, then the distribution of test statistics for individual beta coefficients in OLS is independent of any spatial autocorrelation in the error term. Hence, inferences about betas exhibit all the optimality properties of the classic uncorrelated error case. However, a second more important series of results show that if spatial autocorrelation is present in both the dependent and explanatory variables, then the conventional wisdom is correct. In particular, even when an explanatory variable is statistically independent of the dependent variable, such joint spatial dependencies tend to produce "spurious correlation" that results in over-rejection of the null hypothesis. The underlying geometric nature of this problem is clarified by illustrative examples. The paper concludes with a brief discussion of some possible remedies for this problem.
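
    The angle-cosine characterization mentioned above has a concrete elementary form: for a single regressor with intercept, the OLS t-statistic is a monotone function of the cosine of the angle between the centred variable vectors, t = r·sqrt(n-2)/sqrt(1-r²) with r = cos(angle). A minimal numerical check on synthetic (non-spatial) data, not a reproduction of the paper's spatial simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
yv = 0.5 * x + rng.normal(size=n)

# Cosine of the angle between the centred vectors = sample correlation
xc, yc = x - x.mean(), yv - yv.mean()
r = np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
t_from_cos = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)

# Same statistic from the usual OLS formulas
beta = np.dot(xc, yc) / np.dot(xc, xc)
resid = yc - beta * xc
se = np.sqrt(np.dot(resid, resid) / (n - 2) / np.dot(xc, xc))
t_ols = beta / se
```

    Because the test statistic depends on the data only through this angle, the paper's geometric argument reduces inference questions to questions about how spatial autocorrelation distorts the distribution of angle cosines.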

  11. Thermal Aspects of Using Alternative Nuclear Fuels in Supercritical Water-Cooled Reactors

    NASA Astrophysics Data System (ADS)

    Grande, Lisa Christine

    A SuperCritical Water-cooled Nuclear Reactor (SCWR) is a Generation IV concept currently being developed worldwide. Unique to this reactor type is the use of light-water coolant above its critical point. The current research presents a thermal-hydraulic analysis of a single fuel channel within a Pressure Tube (PT)-type SCWR with a single-reheat cycle. Since this reactor is in its early design phase, many fuel-channel components are being investigated in various combinations. Analysis inputs are: steam cycle, Axial Heat Flux Profile (AHFP), fuel-bundle geometry, and thermophysical properties of reactor coolant, fuel sheath and fuel. Uniform and non-uniform AHFPs for average channel power were applied to a variety of alternative fuels (mixed oxide, thorium dioxide, uranium dicarbide, uranium nitride and uranium carbide) enclosed in an Inconel-600 43-element bundle. The results depict bulk-fluid, outer-sheath and fuel-centreline temperature profiles together with the Heat Transfer Coefficient (HTC) profiles along the heated length of the fuel channel. The objective is to identify the best options in terms of fuel, sheath material and AHFPs for which the outer-sheath and fuel-centreline temperatures remain below the accepted limits of 850°C and 1850°C, respectively. The 43-element Inconel-600 fuel bundle is suitable for SCWR use, as the sheath-temperature design limit of 850°C was maintained for all analyzed cases at average channel power. Thoria, UC2, UN and UC fuels are acceptable for all AHFPs, since the maximum fuel-centreline temperature does not exceed the industry-accepted limit of 1850°C. Conversely, the fuel-centreline temperature limit was exceeded for MOX at all AHFPs, and for UO2 at both cosine and downstream-skewed cosine AHFPs. Therefore, fuel-bundle modifications are required for UO2 and MOX to be feasible nuclear fuels for SCWRs.

  12. Hypergeometric Series Solution to a Class of Second-Order Boundary Value Problems via Laplace Transform with Applications to Nanofluids

    NASA Astrophysics Data System (ADS)

    Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.

    2017-03-01

    Very recently, it was observed that the temperature of nanofluids is governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients were transformed to polynomial type by introducing new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type is solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results have been applied to selected nanofluid problems in the literature, whose exact solutions are recovered as special cases of our generalized analytical solution.

  13. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes the application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.

  14. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from different bacteria species. This approach also proved to be robust, and potentially useful for freeing us from time alignment of GC signals.
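
    The feature-extraction idea, fit an AR model and use the complex roots of its characteristic polynomial as a compact signature, can be sketched as follows. The signal here is a synthetic stand-in for a transformed GC trace, and the least-squares fit is a generic AR estimator, not necessarily the estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(400)
x = np.sin(0.3 * t) + 0.1 * rng.normal(size=400)   # stand-in for a GC-derived signal

# Fit AR(p): x[t] ~ a1*x[t-1] + ... + ap*x[t-p] by least squares
p = 4
X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
a = np.linalg.lstsq(X, x[p:], rcond=None)[0]

# Complex roots of z^p - a1*z^(p-1) - ... - ap form the feature vector
roots = np.roots(np.concatenate(([1.0], -a)))
```

    Root patterns (magnitudes and angles) encode the oscillatory content of the trace in a fixed-length vector, which is what makes them usable as classifier inputs regardless of trace length.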

  15. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

    An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between a low-frequency image and the defocused image. Generally, the NSCT decomposes the detailed image information residing at different scales and directions into the bandpass subband coefficients. In order to correctly select the pre-fused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly identifies the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  16. Journal of Computational Electronics: Proceedings of the International Workshop on Computational Electronics (8th) (IWCE-8), Beckman Institute, University of Illinois, 15-18 October 2001. Volume 1, Issue 1-2

    DTIC Science & Technology

    2002-01-01

    [OCR-garbled excerpt; recoverable fragments mention the fully coupled electrical and optical systems in VCSELs (Oyafuso et al. 2000), a cosine term that weakly depends on the phase-space variables and decays with increasing time, and the application of the approach to noise studies in phase-coherent devices.]

  17. Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation

    NASA Astrophysics Data System (ADS)

    Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.

    2017-11-01

    In this work, the lump solution and the kink solitary-wave solution of the (2+1)-dimensional third-order evolution equation are obtained using the Hirota bilinear method, through symbolic computation with Maple. We have assumed that the lump solution is centered at the origin when t = 0. By combining a positive quadratic function with an exponential function, as well as with a hyperbolic cosine function, interaction solutions of lump-exponential and lump-hyperbolic-cosine type are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.

  18. Geometric optimisation of an accurate cosine correcting optic fibre coupler for solar spectral measurement.

    PubMed

    Cahuantzi, Roberto; Buckley, Alastair

    2017-09-01

    Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors, applicable to either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluoroethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimisation, near-perfect cosine dependence of the angular response of the coupler can be achieved. The PTFE designs represent a significant improvement over the state of the art, with less than 0.01% error compared with the ideal cosine response for angles of incidence up to 50°.
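
    The cosine-error figure of merit quoted above can be computed directly: compare the normalised angular response against cos θ over the incidence range. The "measured" response below is a synthetic, slightly flattened cosine used purely for illustration:

```python
import numpy as np

theta = np.radians(np.arange(0, 51, 5))             # incidence angles 0..50 degrees
measured = np.cos(theta) ** 1.01                    # invented near-ideal response
rel_error = np.abs(measured / np.cos(theta) - 1.0)  # relative cosine error per angle
max_error = rel_error.max()
```

    A coupler meeting the paper's specification would keep max_error below 1e-4 (0.01%) over this range; the synthetic response here deliberately misses that target.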

  19. Coronagraphic Observations of the Lunar Sodium Exosphere Near the Lunar Surface

    NASA Technical Reports Server (NTRS)

    Potter, A. E.; Morgan, T. H.

    1998-01-01

    The sodium exosphere of the Moon was observed using a solar coronagraph to occult the illuminated surface of the Moon. Exceptionally dust-free atmospheric conditions were required to allow the faint emission from sunlight scattered by lunar sodium atoms to be distinguished from moonlight scattered by atmospheric dust. At 0300 UT on April 22, 1994, ideal conditions prevailed for a few hours, and one excellent image of the sodium exosphere was measured, with the Moon at a phase angle of 51 deg, 81% illuminated. Analysis of the image data showed that the weighted mean temperature of the exosphere was 1280 K and that the sodium column density varied approximately as the cosine cubed of the latitude. A cosine-cubed variation is an unexpected result, since the flux per unit area of solar photons and solar particles varies as the cosine of latitude. It is suggested that this can be explained by a temperature dependence in the sputtering of sodium atoms from the surface. This is a characteristic feature of chemical sputtering, which has previously been proposed to explain the sodium exosphere of Mercury. A possible interaction between chemical sputtering and solar photons is suggested.
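
    The mismatch that makes the cos³ result surprising is easy to see numerically: the observed column-density fall-off is much steeper than the cos fall-off of the incident solar flux. A two-line sketch with an arbitrary equatorial normalization N0:

```python
import numpy as np

lat = np.radians(np.array([0.0, 30.0, 60.0]))   # sample latitudes
N0 = 1.0                                        # equatorial column density, arbitrary units
column_density = N0 * np.cos(lat) ** 3          # observed cos^3 variation
solar_flux = np.cos(lat)                        # flux per unit area ~ cos(latitude)
```

    At 60° latitude the column density has fallen to 1/8 of its equatorial value while the driving flux has only halved, which is the gap the chemical-sputtering hypothesis is invoked to close.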

  20. Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas

    NASA Astrophysics Data System (ADS)

    Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun

    2017-10-01

    As used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by large size and enduring high power density. Structural flexibility and the microwave radiation pressure (MRP) lead to the phenomenon of structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of SLFRA is presented, and the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is along the normal of the reflector surface, and its magnitude is proportional to the power density and to the squared cosine of the incidence angle. For a typical cosine-distributed electric field, the MRP follows a cosine-squared distribution across the diameter. The maximum deflection of the SLFRA increases linearly with microwave power density and with the square of the reflector diameter, and varies inversely with film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10^2 W/cm^2, the gain loss of the 6.3 μm-thick reflector goes beyond 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.

  1. Multichannel loudness compensation method based on segmented sound pressure level for digital hearing aids

    NASA Astrophysics Data System (ADS)

    Liang, Ruiyu; Xi, Ji; Bao, Yongqiang

    2017-07-01

    To improve on gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL is proposed. First, a uniform cosine-modulated filter bank was designed, and adjacent channels with low or gradual slopes were adaptively merged into a non-uniform cosine-modulated filter bank according to the audiogram of the hearing-impaired listener. Second, the input speech was decomposed into sub-band signals and the SPL of each sub-band signal was computed; the audible range from 0 dB SPL to 120 dB SPL was divided into eight equal segments, and a prescription formula based on these segments computed a more detailed compensation gain from the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine-modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments with six hearing-impaired listeners showed that the proposed algorithm effectively improves speech recognition.
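
    The segmentation step above, eight equal 15-dB segments spanning 0-120 dB SPL, each mapped to a prescription gain, can be sketched as follows. The per-segment gains and the test tone are invented; a real prescription formula would derive gains from the audiogram:

```python
import numpy as np

P_REF = 20e-6                        # reference pressure, 20 micropascals

def band_spl(x):
    """Sound pressure level of one sub-band signal, in dB SPL."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms / P_REF)

def segment_index(spl):
    """Index 0..7 of the 15-dB-wide segment containing this SPL."""
    return int(np.clip(spl // 15.0, 0, 7))

# Hypothetical per-segment compensation gains (dB) for one channel
gains = [30, 28, 24, 18, 12, 6, 2, 0]

x = 0.02 * np.sin(2 * np.pi * np.arange(480) / 48)   # synthetic sub-band tone
spl = band_spl(x)
gain = gains[segment_index(spl)]
```

    Finer segmentation is the point of the method: a tone near 57 dB SPL lands in segment 3 of 8 and gets its own gain, where a three-segment scheme would lump a much wider SPL range into one gain value.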

  2. [Surface electromyography signal classification using gray system theory].

    PubMed

    Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai

    2004-12-01

    A new method based on gray correlation was introduced to improve the identification rate in artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by the wavelet transform. Singular value decomposition (SVD) was then used to extract feature vectors from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computation cost and requires fewer training samples.
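
    The "maximum gray correlation" decision rule can be sketched with Deng's grey relational coefficient, the standard formulation in grey system theory; the feature vectors and class names below are invented, and the abstract does not specify which variant the authors used:

```python
import numpy as np

def grey_relational_grade(reference, candidate, zeta=0.5):
    """Mean of Deng's grey relational coefficients between two feature vectors."""
    delta = np.abs(np.asarray(reference, float) - np.asarray(candidate, float))
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean()

# Hypothetical class templates (e.g. SVD features for two hand movements)
templates = {"open": [0.9, 0.1, 0.4], "close": [0.2, 0.8, 0.6]}
test = [0.85, 0.15, 0.35]

# Classify by maximum grey relational grade against the templates
decision = max(templates, key=lambda k: grey_relational_grade(templates[k], test))
```

    Compared with training a neural network, this rule needs only one template per class, which matches the abstract's claim of lower computation cost and fewer training samples.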

  3. Products of multiple Fourier series with application to the multiblade transformation

    NASA Technical Reports Server (NTRS)

    Kunz, D. L.

    1981-01-01

    A relatively simple and systematic method for forming the products of multiple Fourier series using tensor-like operations is demonstrated. This symbolic multiplication can be performed for any arbitrary number of series, and the use of these products to transform a set of linear differential equations with periodic coefficients from a rotating coordinate system to a nonrotating system is also demonstrated. It is shown that using Fourier operations to perform this transformation makes it easily understood, simple to apply, and generally applicable.

  4. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.

  5. A surface spherical harmonic expansion of gravity anomalies on the ellipsoid

    NASA Astrophysics Data System (ADS)

    Claessens, S. J.; Hirt, C.

    2015-10-01

    A surface spherical harmonic expansion of gravity anomalies with respect to a geodetic reference ellipsoid can be used to model the global gravity field and reveal its spectral properties. In this paper, a direct and rigorous transformation between solid spherical harmonic coefficients of the Earth's disturbing potential and surface spherical harmonic coefficients of gravity anomalies in ellipsoidal approximation with respect to a reference ellipsoid is derived. This transformation cannot rigorously be achieved by the Hotine-Jekeli transformation between spherical and ellipsoidal harmonic coefficients. The method derived here is used to create a surface spherical harmonic model of gravity anomalies with respect to the GRS80 ellipsoid from the EGM2008 global gravity model. Internal validation of the model shows a global RMS precision of 1 nGal. This is significantly more precise than previous solutions based on spherical approximation or approximations to order or , which are shown to be insufficient for the generation of surface spherical harmonic coefficients with respect to a geodetic reference ellipsoid. Numerical results of two applications of the new method (the computation of ellipsoidal corrections to gravimetric geoid computation, and area means of gravity anomalies in ellipsoidal approximation) are provided.

  6. N-fold Darboux transformation and double-Wronskian-typed solitonic structures for a variable-coefficient modified Korteweg-de Vries equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lei, E-mail: wanglei2239@126.com; Gao, Yi-Tian; State Key Laboratory of Software Development Environment, Beijing University of Aeronautics and Astronautics, Beijing 100191

    2012-08-15

    Under investigation in this paper is a variable-coefficient modified Korteweg-de Vries (vc-mKdV) model describing certain situations from fluid mechanics, ocean dynamics and plasma physics. The N-fold Darboux transformation (DT) of a variable-coefficient Ablowitz-Kaup-Newell-Segur spectral problem is constructed via a gauge transformation. Multi-solitonic solutions in terms of the double Wronskian for the vc-mKdV model are derived by the reduction of the N-fold DT. Three types of solitonic interactions are discussed through figures: (1) overtaking collision; (2) head-on collision; (3) parallel solitons. Nonlinear, dispersive and dissipative terms affect the velocities of the solitonic waves, while the amplitudes of the waves depend on the perturbation term. Highlights: the N-fold DT is first applied to a vc-AKNS spectral problem; seeking a double Wronskian solution is changed into solving two systems; effects of the variable coefficients on the multi-solitonic waves are discussed in detail; this work solves the problem from Yi Zhang [Ann. Phys. 323 (2008) 3059].

  7. Generation of electromagnetic energy in a magnetic cumulation generator with the use of inductively coupled circuits with a variable coupling coefficient

    NASA Astrophysics Data System (ADS)

    Gilev, S. D.; Prokopiev, V. S.

    2017-07-01

    A method of generation of electromagnetic energy and magnetic flux in a magnetic cumulation generator is proposed. The method is based on dynamic variation of the circuit coupling coefficient. This circuit is compared with other available circuits of magnetic energy generation with the help of magnetic cumulation (classical magnetic cumulation generator, generator with transformer coupling, and generator with a dynamic transformer). It is demonstrated that the proposed method allows obtaining high values of magnetic energy. The proposed circuit is found to be more effective than the known transformer circuit. Experiments on electromagnetic energy generation are performed, which demonstrate the efficiency of the proposed method.

  8. Centrifugal distortion coefficients of asymmetric-top molecules: Reduction of the octic terms of the rotational Hamiltonian

    NASA Astrophysics Data System (ADS)

    Ramachandra Rao, Ch. V. S.

    1983-11-01

    The rotational Hamiltonian of an asymmetric-top molecule in its standard form, containing terms up to eighth degree in the components of the total angular momentum, is transformed by a unitary transformation with parameters S_pqr to a reduced Hamiltonian so as to avoid the indeterminacies inherent in fitting the complete Hamiltonian to observed energy levels. Expressions are given for the nine determinable combinations of octic constants Θ'_i (i = 1 to 9) which are invariant under the unitary transformation. A method of reduction suitable for energy calculations by matrix diagonalization is considered. The relations between the coefficients of the transformed Hamiltonian, for a suitable choice of the parameters S_pqr, and those of the reduced Hamiltonian are given. This enables the determination of the nine octic constants Θ'_i in terms of the experimental constants.

  9. Gravity field error analysis for pendulum formations by a semi-analytical approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico

    2017-03-01

    Many geoscience disciplines push for ever higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure compared to Grace. One possibility to increase the sensitivity and isotropy by adding cross-track information is a pair of satellites flying in a pendulum formation. This formation contains two satellites which have different ascending nodes and arguments of latitude, but have the same orbital height and inclination. In this study, the semi-analytical approach for efficient pre-mission error assessment is presented, and the transfer coefficients of range, range-rate and range-acceleration gravitational perturbations are derived analytically for the pendulum formation considering a set of opening angles. The new challenge is the time variations of the opening angle and the range, leading to temporally variable transfer coefficients. This is solved by Fourier expansion of the sine/cosine of the opening angle and the central angle. The transfer coefficients are further applied to assess the error patterns which are caused by different orbital parameters. The simulation results indicate that a significant improvement in accuracy and isotropy is obtained for small and medium initial opening angles of single polar pendulums, compared to Grace. The optimal initial opening angles are 45° and 15° for accuracy and isotropy, respectively. For a Bender configuration, which is constituted by a polar Grace and an inclined pendulum in this paper, the behaviour of results is dependent on the inclination (prograde vs. retrograde) and on the relative baseline orientation (left or right leading). The simulation for a sun-synchronous orbit shows better results for the left leading case.

  10. Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.

    DTIC Science & Technology

    1983-06-01

    Decimation in Frequency Form, Fast Inverse Transform ... Part of Decimation in Time Form, Fast Inverse Transform ... Intermediate Variables in a Fast Inverse Transform ... component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients. The inverse transform of

  11. Soliton interactions, Bäcklund transformations, Lax pair for a variable-coefficient generalized dispersive water-wave system

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Tian, Bo; Zhen, Hui-Ling; Liu, De-Yin; Xie, Xi-Yang

    2018-04-01

    Under investigation in this paper is a variable-coefficient generalized dispersive water-wave system, which can simulate the propagation of long weakly non-linear and weakly dispersive surface waves of variable depth in shallow water. Under certain variable-coefficient constraints, by virtue of the Bell polynomials, the Hirota method and symbolic computation, the bilinear forms and one- and two-soliton solutions are obtained. Bäcklund transformations and a new Lax pair are also obtained. Our Lax pair is different from that previously reported. Based on the asymptotic and graphic analysis, with different forms of the variable coefficients, we find that the interactions are elastic for u, while either elastic or inelastic for v, where u and v are the horizontal velocity field and the deviation height from the equilibrium position of the water, respectively. When the interactions are inelastic, fission and fusion phenomena are observed.

  12. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress the noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrated that the proposed approach has a better enhancement effect in terms of both objective and subjective metrics.
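
    The two coefficient-domain operations can be sketched generically as below. The NSST decomposition itself and the paper's improved adaptive threshold are not reproduced; what follows is plain gamma correction plus a soft threshold with a hypothetical k*sigma rule and a median-based noise estimate:

```python
import numpy as np

def gamma_correct(band, gamma=0.7):
    # Gamma correction of the low-frequency sub-band: rescale to [0, 1],
    # raise to the power gamma (< 1 brightens dark regions), rescale back.
    lo, hi = band.min(), band.max()
    norm = (band - lo) / (hi - lo + 1e-12)
    return norm ** gamma * (hi - lo) + lo

def soft_threshold(band, k=3.0):
    # Soft thresholding of high-frequency coefficients. The threshold
    # k*sigma with sigma = median(|x|)/0.6745 is a common textbook rule,
    # assumed here -- it is not the paper's improved adaptive threshold.
    sigma = np.median(np.abs(band)) / 0.6745
    t = k * sigma
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)

rng = np.random.default_rng(0)
high = rng.normal(0, 1, 1000)     # a pure-noise high-frequency band
den = soft_threshold(high)        # almost all coefficients zeroed
```

On a pure-noise band nearly every coefficient falls below 3*sigma, so the denoised band is almost entirely zero while any strong edge coefficients would survive (shrunk by t).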

  13. Texture formation in FePt thin films via thermal stress management

    NASA Astrophysics Data System (ADS)

    Rasmussen, P.; Rui, X.; Shield, J. E.

    2005-05-01

    The transformation variant of the fcc to fct transformation in FePt thin films was tailored by controlling the stresses in the thin films, thereby allowing selection of in-plane or out-of-plane c-axis orientation. FePt thin films were deposited at ambient temperature on several substrates with differing coefficients of thermal expansion relative to the FePt, which generated thermal stresses during the ordering heat treatment. X-ray diffraction analysis revealed preferential out-of-plane c-axis orientation for FePt films deposited on substrates with similar coefficients of thermal expansion, and random orientation for FePt films deposited on substrates with a very low coefficient of thermal expansion, which is consistent with theoretical analysis considering residual stresses.

  14. Content Based Image Retrieval based on Wavelet Transform coefficients distribution

    PubMed Central

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. Instead of extracting significant features, we characterize images by building signatures from the distribution of their wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% for some cases using an optimization process. PMID:18003013

  15. A method for the measurement and the statistical analysis of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Tieleman, H. W.; Tavoularis, S. C.

    1974-01-01

    The instantaneous values of output voltages representing the wind velocity vector and the temperature at different elevations of the 250-foot meteorological tower located at NASA Wallops Flight Center are provided by the three-dimensional split-film TSI Model 1080 anemometer system. The output voltages are sampled at a rate of one every 5 milliseconds, digitized and stored on digital magnetic tapes for a time period of approximately 40 minutes, with the use of a specially designed data acquisition system. A new calibration procedure permits the conversion of the digital voltages to the respective values of the temperature and the velocity components in a Cartesian coordinate system attached to the TSI probe with considerable accuracy. Power, cross, coincidence and quadrature spectra of the wind components and the temperature are obtained with the use of the fast Fourier transform. The cosine taper data window and ensemble and frequency smoothing techniques are used to provide smooth estimates of the spectral functions.
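
    The cosine taper data window mentioned above has the standard split-cosine (Tukey) form: flat in the middle with raised-cosine ramps at each end. The 10% taper fraction below is a common convention, assumed rather than taken from the report:

```python
import numpy as np

def cosine_taper(n, alpha=0.1):
    # Cosine (Tukey) taper: unity in the middle, raised-cosine ramps
    # over a fraction alpha/2 of each end of the record.
    w = np.ones(n)
    m = int(alpha * n / 2)
    if m > 0:
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(m) / m))
        w[:m] = ramp
        w[-m:] = ramp[::-1]
    return w

# Apply before the FFT to reduce spectral leakage in the spectra.
x = np.random.default_rng(1).normal(size=2048)
w = cosine_taper(len(x))
spectrum = np.fft.rfft(x * w)
```

Tapering trades a small loss of variance (often corrected by rescaling) for much lower sidelobe leakage than a plain rectangular window.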

  16. Blind technique using blocking artifacts and entropy of histograms for image tampering detection

    NASA Astrophysics Data System (ADS)

    Manu, V. T.; Mehtre, B. M.

    2017-06-01

    The tremendous technological advancements of recent times have enabled people to create, edit and circulate images more easily than ever before. As a result, ensuring the integrity and authenticity of images has become challenging. Malicious editing of images to deceive the viewer is referred to as image tampering. A widely used image tampering technique is image splicing or compositing, in which regions from different images are copied and pasted. In this paper, we propose a tamper detection method utilizing the blocking and blur artifacts which are the footprints of splicing. The classification of images as tampered or not is done based on the standard deviations of the entropy histograms and the block discrete cosine transform coefficients. If an image is classified as tampered, we can detect the exact boundaries of the tampered area. Experimental results on publicly available image tampering datasets show that the proposed method outperforms existing methods in terms of accuracy.
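
    The block DCT underlying such features is the standard orthonormal 8x8 transform. The sketch below shows only that transform (the entropy-histogram classifier of the paper is not reproduced), and the 8x8 block size and exact-multiple image dimensions are assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: C @ x computes the 1-D DCT of x.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct(img, b=8):
    # 2-D DCT of each b x b block (image sides assumed multiples of b);
    # the 2-D transform of a block B is C @ B @ C.T.
    c = dct_matrix(b)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(0, h, b):
        for x in range(0, w, b):
            blk = img[y:y + b, x:x + b].astype(float)
            out[y:y + b, x:x + b] = c @ blk @ c.T
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
coef = block_dct(img)
```

For the orthonormal DCT the DC coefficient of an 8x8 block equals the block sum divided by 8, which gives a quick sanity check on the implementation.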

  17. A two layer chaotic encryption scheme of secure image transmission for DCT precoded OFDM-VLC transmission

    NASA Astrophysics Data System (ADS)

    Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao

    2018-03-01

    In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. In the proposed scheme, the transmitted image is first encrypted in the upper layer by a chaos scrambling sequence generated from a hybrid 4-D hyperchaotic map and an Arnold map. The encrypted image is then converted into a digital QAM-modulated signal, which is re-encrypted in the physical layer by a chaos scrambling sequence based on the Arnold map to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC. Furthermore, DCT precoding can reduce the PAPR and improve the BER performance of OFDM-based VLC.
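
    The Arnold-map permutation stage can be sketched as below for a square N x N image; this shows only the pixel-position scrambling ((x, y) -> (x + y, x + 2y) mod N), not the paper's hybrid 4-D hyperchaotic keystream or the OFDM chain:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    # Arnold cat map permutation on a square N x N image. The map has
    # determinant 1 mod N, so it is a bijection: every pixel value is
    # preserved, only positions change.
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(64).reshape(8, 8)
scrambled = arnold_scramble(img, iterations=2)
```

Because the map is periodic on any finite grid, descrambling can be done either with the inverse map or by iterating forward until the period completes; the iteration count acts as part of the key.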

  18. The importance of robust error control in data compression applications

    NASA Technical Reports Server (NTRS)

    Woolley, S. I.

    1993-01-01

    Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviour. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.

  19. Method, system and computer-readable media for measuring impedance of an energy storage device

    DOEpatents

    Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.

    2016-01-26

    Real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. An excitation current to a battery is a sum of equal amplitude sine waves of frequencies that are octave harmonics spread over a range of interest. A sample frequency is also octave and harmonically related to all frequencies in the sum. A time profile of this sampled signal has a duration that is a few periods of the lowest frequency. A voltage response of the battery, average deleted, is an impedance of the battery in a time domain. Since the excitation frequencies are known and octave and harmonically related, a simple algorithm, FST, processes the time profile by rectifying relative to sine and cosine of each frequency. Another algorithm yields real and imaginary components for each frequency.
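
    The sine/cosine rectification step amounts to synchronous detection at the known octave harmonics: correlating the record with sin and cos of each excitation frequency yields its in-phase and quadrature components. The sketch below is textbook correlation detection on a synthetic record, not the exact FST algorithm of the patent:

```python
import numpy as np

def sine_cosine_detect(signal, fs, freqs):
    # For each known frequency, multiply by cos and sin and average;
    # over an integer number of periods the other octave harmonics
    # integrate to zero, leaving that tone's I and Q components.
    t = np.arange(len(signal)) / fs
    out = {}
    for f in freqs:
        s = np.sin(2 * np.pi * f * t)
        c = np.cos(2 * np.pi * f * t)
        out[f] = complex(2 * np.mean(signal * c), 2 * np.mean(signal * s))
    return out

fs = 1024                      # sample rate octave-related to the tones
freqs = [1, 2, 4, 8]           # octave harmonics (Hz)
t = np.arange(fs) / fs         # one second: integer periods of all tones
x = sum(np.sin(2 * np.pi * f * t + 0.1 * i) for i, f in enumerate(freqs))
resp = sine_cosine_detect(x, fs, freqs)
```

Each unit-amplitude tone is recovered with magnitude 1 regardless of its phase, because the I and Q components capture sin and cos of the phase separately.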

  20. Generalized spherical and simplicial coordinates

    NASA Astrophysics Data System (ADS)

    Richter, Wolf-Dieter

    2007-12-01

    Elementary trigonometric quantities are defined in l_{2,p} analogously to those in l_{2,2}; the sine and cosine functions are generalized for each p>0 as functions sin_p and cos_p such that they satisfy the basic equation |cos_p(φ)|^p + |sin_p(φ)|^p = 1. The p-generalized radius coordinate of a point ξ ∈ R^n is defined for each p>0 as r_p(ξ) = (|ξ_1|^p + ... + |ξ_n|^p)^{1/p}. On combining these quantities, l_{n,p}-spherical coordinates are defined. It is shown that these coordinates are closely related to l_{n,p}-simplicial coordinates. The Jacobians of these generalized coordinate transformations are derived. Applications and interpretations from analysis deal especially with the definition of a generalized surface content on l_{n,p}-spheres, which is closely related to a modified co-area formula and an extension of Cavalieri's and Torricelli's method of indivisibles, and with differential equations. Applications from probability theory deal especially with a geometric interpretation of the uniform probability distribution on the l_{n,p}-sphere and with the derivation of certain generalized statistical distributions.
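
    A pair satisfying the generalized identity can be obtained by normalizing the ordinary sine and cosine, which matches the spirit of this construction; treat the formulas below as a sketch of one such realization:

```python
import numpy as np

def n_p(phi, p):
    # Normalizer N_p(phi) = (|cos phi|^p + |sin phi|^p)^(1/p); strictly
    # positive for every phi, so the divisions below are always safe.
    return (np.abs(np.cos(phi)) ** p + np.abs(np.sin(phi)) ** p) ** (1.0 / p)

def cos_p(phi, p):
    return np.cos(phi) / n_p(phi, p)

def sin_p(phi, p):
    return np.sin(phi) / n_p(phi, p)

# The generalized identity |cos_p|^p + |sin_p|^p = 1 holds for any p > 0.
phi = np.linspace(0.0, 2 * np.pi, 101)
checks = [np.abs(cos_p(phi, p)) ** p + np.abs(sin_p(phi, p)) ** p
          for p in (0.5, 1.0, 3.0)]
```

For p = 2 the normalizer is identically 1 and the ordinary sine and cosine are recovered.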

  1. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to the advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that totally generates all the random entries of measurement matrix, our scheme owns efficiency superiority while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has a good security performance and has robustness against noise and occlusion.

  2. A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.

    PubMed

    Mohades Deylami, Ali; Mohammadzadeh Asl, Babak

    2017-06-01

    The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions than in the standard BS. This is because of the symmetric beampatterns of the beams in the proposed BS, compared with the asymmetric ones in the standard BS. This allows the data dimensions to be reduced to two, so a highly complex algorithm, such as the MVB, can be applied faster in this BS. The results indicated that by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB), with only 0.44% of the needed flops. This beamformer is also more robust against sound speed estimation errors than the SMVB.

  3. Normal compression wave scattering by a permeable crack in a fluid-saturated poroelastic solid

    NASA Astrophysics Data System (ADS)

    Song, Yongjia; Hu, Hengshan; Rudnicki, John W.

    2017-04-01

    A mathematical formulation is presented for the dynamic stress intensity factor (mode I) of a finite permeable crack subjected to a time-harmonic propagating longitudinal wave in an infinite poroelastic solid. In particular, the effect of the wave-induced fluid flow due to the presence of a liquid-saturated crack on the dynamic stress intensity factor is analyzed. Fourier sine and cosine integral transforms in conjunction with Helmholtz potential theory are used to formulate the mixed boundary-value problem as dual integral equations in the frequency domain. The dual integral equations are reduced to a Fredholm integral equation of the second kind. It is found that the stress intensity factor monotonically decreases with increasing frequency, decreasing the fastest when the crack width and the slow wave wavelength are of the same order. The characteristic frequency at which the stress intensity factor decays the fastest shifts to higher frequency values when the crack width decreases.

  4. Combinational logic for generating gate drive signals for phase control rectifiers

    NASA Technical Reports Server (NTRS)

    Dolland, C. R.; Trimble, D. W. (Inventor)

    1982-01-01

    Control signals for phase-delay rectifiers, which require a variable firing angle that ranges from 0 deg to 180 deg, are derived from line-to-line 3-phase signals and from both positive and negative firing angle control signals which are generated by comparing the current command and the actual current. Line-to-line phases are transformed into line-to-neutral phases and integrated to produce 90 deg phase-delayed signals that are inverted to produce three cosine signals, such that the maximum of each occurs at the intersection of the positive half cycles of the other two phases, which are inputs to other inverters. At the same time, both positive and negative (inverted) phase sync signals are generated for each phase by comparing each with the next and producing a square wave when it is greater. Ramp, sync and firing angle control signals are then used in combinational logic to generate the SCR gate drive signals, which fire the SCR devices in a bridge circuit.

  5. Map-invariant spectral analysis for the identification of DNA periodicities

    PubMed Central

    2012-01-01

    Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are, however, obtained using a very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented, providing a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and give a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
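
    The Voss-representation spectrum referenced above can be computed directly: one binary indicator sequence per base, each transformed by the DFT, with squared magnitudes summed. The toy sequence below is an artificial exactly period-3 repeat chosen so the period-3 peak at frequency index N/3 is unmistakable; real coding regions show a weaker version of the same peak:

```python
import numpy as np

def voss_spectrum(seq):
    # Voss representation: u_b[n] = 1 where base b occurs, else 0, for
    # each of A, C, G, T; the DNA spectrum is the sum over bases of the
    # squared DFT magnitudes of these indicator sequences.
    n = len(seq)
    spectrum = np.zeros(n)
    for base in "ACGT":
        u = np.array([1.0 if ch == base else 0.0 for ch in seq])
        spectrum += np.abs(np.fft.fft(u)) ** 2
    return spectrum

seq = "ATG" * 100             # toy period-3 sequence, N = 300
spec = voss_spectrum(seq)
# Largest non-DC component below the Nyquist index:
peak = int(np.argmax(spec[1:len(seq) // 2])) + 1
```

For a period-3 sequence every indicator is an impulse train of period 3, so all the spectral energy (apart from DC) concentrates at multiples of N/3, and `peak` equals N/3 = 100.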

  6. Heat storage in alloy transformations

    NASA Technical Reports Server (NTRS)

    Birchenall, C. E.; Gueceri, S. I.; Farkas, D.; Labdon, M. B.; Nagaswami, N.; Pregger, B.

    1981-01-01

    The feasibility of using metal alloys as thermal energy storage media was determined. The following major elements were studied: (1) identification of congruently transforming alloys and thermochemical property measurements; (2) development of a precise and convenient method for measuring volume change during phase transformation and thermal expansion coefficients; (3) development of a numerical modeling routine for calculating heat flow in cylindrical heat exchangers containing phase change materials; and (4) identification of materials that could be used to contain the metal alloys. Several eutectic alloys and ternary intermetallic phases were determined. A method employing X-ray absorption techniques was developed to determine the coefficients of thermal expansion of both the solid and liquid phases and the volume change during phase transformation from data obtained during one continuous experimental test. The method and apparatus are discussed and the experimental results are presented. The development of the numerical modeling method is presented and results are discussed for both salt and metal alloy phase change media.

  7. Tunable features of magnetoelectric transformers.

    PubMed

    Dong, Shuxiang; Zhai, Junyi; Priya, Shashank; Li, Jie-Fang; Viehland, Dwight

    2009-06-01

    We have found that magnetostrictive FeBSiC alloy ribbons laminated with piezoelectric Pb(Zr,Ti)O(3) fiber can act as a tunable transformer when driven under resonant conditions. These composites were also found to exhibit the strongest resonant magnetoelectric voltage coefficient of 750 V/cm-Oe. The tunable features were achieved by applying small dc magnetic biases of -5

  8. Singularity-free extraction of a quaternion from a direction-cosine matrix. [for spacecraft control and guidance

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1976-01-01

    A computer algorithm for extracting a quaternion from a direction-cosine matrix (DCM) is described. The quaternion provides a four-parameter representation of rotation, as against the nine-parameter representation afforded by a DCM. Commanded attitude in space shuttle steering is conveniently computed by DCM, while actual attitude is computed most compactly as a quaternion, as is attitude error. The unit length of the rotation quaternion, and the interchangeability of a quaternion and its negative, are used to advantage in the extraction algorithm. Protection of the algorithm against square-root failure and division overflow is considered. Necessary and sufficient conditions for handling the rotation vector element of largest magnitude are discussed.
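
    The protection idea can be illustrated by branching on the largest of the four candidate magnitudes, so every square-root argument and divisor stays well away from zero. The particular branching below is the common Shepperd-style variant, shown as a sketch rather than Klumpp's exact algorithm, and assumes the convention that the DCM r rotates vectors (r @ v):

```python
import numpy as np

def quat_from_dcm(r):
    # Return a unit quaternion [w, x, y, z] for rotation matrix r.
    # The four candidates 1+tr, 1+2*r_ii-tr are 4w^2, 4x^2, 4y^2, 4z^2;
    # picking the largest keeps the sqrt and divisions well-conditioned,
    # including the w = 0 (180-degree) case where the naive trace
    # formula fails.
    cand = [np.trace(r), r[0, 0], r[1, 1], r[2, 2]]
    i = int(np.argmax(cand))
    if i == 0:
        w = 0.5 * np.sqrt(1.0 + cand[0])
        q = [w, (r[2, 1] - r[1, 2]) / (4 * w),
                (r[0, 2] - r[2, 0]) / (4 * w),
                (r[1, 0] - r[0, 1]) / (4 * w)]
    elif i == 1:
        x = 0.5 * np.sqrt(1.0 + r[0, 0] - r[1, 1] - r[2, 2])
        q = [(r[2, 1] - r[1, 2]) / (4 * x), x,
             (r[0, 1] + r[1, 0]) / (4 * x),
             (r[0, 2] + r[2, 0]) / (4 * x)]
    elif i == 2:
        y = 0.5 * np.sqrt(1.0 - r[0, 0] + r[1, 1] - r[2, 2])
        q = [(r[0, 2] - r[2, 0]) / (4 * y),
             (r[0, 1] + r[1, 0]) / (4 * y), y,
             (r[1, 2] + r[2, 1]) / (4 * y)]
    else:
        z = 0.5 * np.sqrt(1.0 - r[0, 0] - r[1, 1] + r[2, 2])
        q = [(r[1, 0] - r[0, 1]) / (4 * z),
             (r[0, 2] + r[2, 0]) / (4 * z),
             (r[1, 2] + r[2, 1]) / (4 * z), z]
    q = np.array(q)
    return q / np.linalg.norm(q)

# 120-degree rotation about (1,1,1): quaternion [0.5, 0.5, 0.5, 0.5].
r120 = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
q120 = quat_from_dcm(r120)
```

The sign ambiguity mentioned in the abstract (q and -q represent the same rotation) means any caller comparing quaternions should accept either sign.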

  9. Site selection and directional models of deserts used for ERBE validation targets

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.

    1986-01-01

    Broadband shortwave and longwave radiance measurements obtained from the Nimbus 7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara, Gibson, and Saudi Deserts. These deserts will serve as in-flight validation targets for the Earth Radiation Budget Experiment being flown on the Earth Radiation Budget Satellite and two National Oceanic and Atmospheric Administration polar satellites. The directional reflectance model derived for the deserts was a function of the sum and product of the cosines of the solar and viewing zenith angles, and thus reciprocity existed between these zenith angles. The emittance model was related by a power law of the cosine of the viewing zenith angle.

  10. Scaling behaviour of Fisher and Shannon entropies for the exponential-cosine screened coulomb potential

    NASA Astrophysics Data System (ADS)

    Abdelmonem, M. S.; Abdel-Hady, Afaf; Nasser, I.

    2017-07-01

    The scaling laws are given for the entropies of information theory, including the Shannon entropy, its power, the Fisher information and the Fisher-Shannon product, using the exponential-cosine screened Coulomb potential. The scaling laws are specified, in the r-space, as a function of |μ - μ_{c,nℓ}|, where μ is the screening parameter and μ_{c,nℓ} its critical value for the specific quantum numbers n and ℓ. Scaling laws for other physical quantities, such as energy eigenvalues, the moments, static polarisability, transition probabilities, etc., are also given. Some of these are reported for the first time. The outcome is compared with results available in the literature.

  11. Anisotropy model for modern grain oriented electrical steel based on orientation distribution function

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Rossi, Mathieu; Parent, Guillaume

    2018-05-01

    Accurately modeling the anisotropic behavior of electrical steel is mandatory in order to perform good end simulations. Several approaches for that purpose can be found in the literature, but most of them are unable to deal with grain-oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain-oriented laminations. In particular, two solutions are proposed to increase the accuracy of the results. The first consists in increasing the number of terms retained in the cosine series on which the method is based. The second consists in modifying how the terms of this cosine series are determined.

  12. Recovery of singularities from a backscattering Born approximation for a biharmonic operator in 3D

    NASA Astrophysics Data System (ADS)

    Tyni, Teemu

    2018-04-01

    We consider a backscattering Born approximation for a perturbed biharmonic operator in three space dimensions. Previous results on this approach for the biharmonic operator used the fact that the coefficients are real-valued to obtain the reconstruction of singularities in the coefficients. In this text we drop the assumption of real-valued coefficients and establish the recovery of singularities for complex coefficients as well. The proof uses mapping properties of the Radon transform.

  13. Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure

    DTIC Science & Technology

    1982-11-01

    systematic channel code. 1. Take the inverse transform of the received sequence. 2. Isolate the error syndrome from the inverse transform and use... inverse transform is identical with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) code, we let the set...in accordance with the transform of equation (4). If we were to apply the inverse transform of equation (6) to the coefficient sequence of A(z), we

  14. Time dependence of 50 Hz magnetic fields in apartment buildings with indoor transformer stations.

    PubMed

    Yitzhak, Nir-Mordechay; Hareuveny, Ronen; Kandel, Shaiela; Ruppin, Raphael

    2012-04-01

    Twenty-four hour measurements of 50 Hz magnetic fields (MFs) in apartment buildings containing transformer stations have been performed. The apartments were classified into four types, according to their location relative to the transformer room. Temporal correlation coefficients between the MF in various apartments, as well as between MF and transformer load curves, were calculated. It was found that, in addition to their high average MF, the apartments located right above the transformer room also exhibit unique temporal correlation properties.

  15. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken on a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most applicable imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency content, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  16. Interpreting spectral unmixing coefficients: From spectral weights to mass fractions

    NASA Astrophysics Data System (ADS)

    Grumpe, Arne; Mengewein, Natascha; Rommel, Daniela; Mall, Urs; Wöhler, Christian

    2018-01-01

    It is well known that many common planetary minerals exhibit prominent absorption features. Consequently, the analysis of spectral reflectance measurements has become a major tool of remote sensing. Quantifying the mineral abundances, however, is not a trivial task. The interaction between the incident light rays and particulate surfaces, e.g., the lunar regolith, leads to a non-linear relationship between the reflectance spectra of the pure minerals, the so-called "endmembers", and the surface's reflectance spectrum. It is, however, possible to transform the non-linear reflectance mixture into a linear mixture of single-scattering albedos of the Hapke model. The abundances obtained by inverting the linear single-scattering albedo mixture may be interpreted as volume fractions which are weighted by the endmember's extinction coefficient. Commonly, identical extinction coefficients are assumed throughout all endmembers and the obtained volume fractions are converted to mass fractions using either measured or assumed densities. In theory, the proposed method may cover different grain sizes if each grain size range of a mineral is treated as a distinct endmember. Here, we present a method to transform the mixing coefficients to mass fractions for arbitrary combinations of extinction coefficients and densities. The required parameters are computed from reflectance measurements of well-defined endmember mixtures. Consequently, additional measurements, e.g., of the endmember density, are no longer required. We evaluate the method based on laboratory measurements and various results presented in the literature. It is shown that the procedure transforms the mixing coefficients to mass fractions with an accuracy comparable to carefully calibrated laboratory measurements, without additional knowledge. For our laboratory measurements, the square root of the mean squared error is less than 4.82 wt%. In addition, the method corrects for systematic effects originating from mixtures of endmembers showing a highly varying albedo, e.g., plagioclase and pyroxene.

  17. Effects of Unsaturated Zones on Baseflow Recession: Analytical Solution and Application

    NASA Astrophysics Data System (ADS)

    Zhan, H.; Liang, X.; Zhang, Y. K.

    2017-12-01

    Unsaturated flow is an important process in baseflow recessions, and its effect is rarely investigated. A mathematical model of coupled unsaturated-saturated flow in a horizontally unconfined aquifer with time-dependent infiltration is presented. Semi-analytical solutions for hydraulic heads and discharges are derived using the Laplace transform and the cosine transform. The solutions are compared with solutions of the linearized Boussinesq equation (LB solution) and the linearized Laplace equation (LL solution), respectively. The results indicate that a larger dimensionless constitutive exponent κD of the unsaturated zone leads to a smaller discharge during the infiltration period and a larger discharge after the infiltration. The lateral discharge of the unsaturated zone is significant when κD ≤ 1 and becomes negligible when κD ≥ 100. For late times, the power index b of the recession curve -dQ/dt = aQ^b is 1 and independent of κD, where Q is the baseflow and a is a constant lumped aquifer parameter. For early times, b is approximately equal to 3 but approaches infinity when t→1. The present solution is applied to synthetic and field cases. It matched the synthetic data better than both the LL and LB solutions, with a minimum relative error of 16% for the estimate of hydraulic conductivity. Applied to observed streamflow discharge in Iowa, it yielded reasonable estimates of the aquifer parameters.
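
    The recession exponent b in -dQ/dt = aQ^b is commonly estimated from a log-log fit of -dQ/dt against Q. A minimal numpy sketch on synthetic data (an exponential recession, for which b = 1 exactly; the parameter values are illustrative, not from the paper):

```python
import numpy as np

# Synthetic late-time recession: Q(t) = Q0*exp(-a*t) obeys -dQ/dt = a*Q,
# i.e. the power law -dQ/dt = a*Q**b with b = 1 (values illustrative).
a_true, Q0 = 0.05, 10.0
t = np.linspace(0.0, 100.0, 2001)
Q = Q0 * np.exp(-a_true * t)

dQdt = np.gradient(Q, t)                             # numerical derivative
# Fit log(-dQ/dt) = log(a) + b*log(Q) by least squares: slope = b.
b, log_a = np.polyfit(np.log(Q), np.log(-dQdt), 1)
```

    The same fit applied to observed discharge records separates the early-time (b ≈ 3) and late-time (b = 1) regimes described above.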

  18. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
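
    The general shape of a hybrid DWT-DCT pipeline can be sketched in a few lines of numpy: a one-level 2-D Haar DWT, a DCT applied to the low-frequency band, coefficient selection, and the inverse chain. This is a toy illustration under our own simplifications (Haar wavelet, an 8x8 random patch, a crude threshold), not the authors' MATLAB implementation:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar DWT: returns the LL band and (LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0    # row-pair averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0    # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, bands):
    """Inverse of haar2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def dct_matrix(n):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8)).cumsum(axis=0).cumsum(axis=1)  # smooth test patch
ll, bands = haar2(img)                  # DWT stage
C = dct_matrix(ll.shape[0])
coeff = C @ ll @ C.T                    # DCT of the low-frequency band
coeff[np.abs(coeff) < 0.1] = 0.0        # crude coefficient selection
recon = ihaar2(C.T @ coeff @ C, bands)  # inverse DCT, then inverse DWT
```

    With no coefficients dropped, the chain reconstructs the input exactly; dropping small DCT coefficients of the LL band is where the extra compression comes from.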

  19. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite-precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with a small chip area. In addition, the choice of wavelet is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  20. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
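
    The l1-minimization at the core of such methods can be illustrated with plain iterative shrinkage-thresholding (ISTA, a simpler relative of SpaRSA, not the authors' solver); the matrix and "force" below are synthetic stand-ins:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A@x - b||^2 + lam*||x||_1 by iterative shrinkage-
    thresholding; the soft-threshold zeroes small coefficients, so the
    number of active basis functions is selected automatically."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L      # gradient step on the l2 term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 50))         # stand-in transfer matrix
x_true = np.zeros(50)
x_true[[5, 20]] = [3.0, -2.0]              # sparse "impact force" coefficients
b = A @ x_true + 0.01 * rng.standard_normal(100)   # noisy response
x_hat = ista(A, b, lam=0.5)
```

    The recovered coefficient vector is sparse: only the two active basis functions survive the thresholding, which is exactly the adaptive model-order selection the abstract describes.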

  1. In-flight adaptive performance optimization (APO) control using redundant control effectors of an aircraft

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B. (Inventor)

    1999-01-01

    Practical application of real-time (or near-real-time) Adaptive Performance Optimization (APO) is provided for a transport aircraft in steady climb, cruise, turn, descent or other flight conditions, based on measurements and calculations of incremental drag from a forced-response maneuver of one or more redundant control effectors, defined as those in excess of the minimum set of control effectors required to maintain the steady flight condition in progress. The method comprises the steps of applying excitation in a raised-cosine form over an interval of 100 to 500 s at a rate of 1 to 10 sets/s of excitation; data for analysis are gathered in sets of measurements made during the excitation to calculate the lift and drag coefficients C_L and C_D from two equations, one for each coefficient. A third equation is an expansion of C_D as a function of parasitic drag, induced drag, Mach and altitude drag effects, and control-effector drag, and assumes a quadratic variation of drag with the positions δ_i of the redundant control effectors i = 1 to n. The third equation is then solved for δ_i,opt, the optimal position of redundant control effector i, which is then used to set control effector i for optimum performance during the remainder of the steady flight, or until monitored flight conditions change by some predetermined amount as determined automatically, or a predetermined minimum flight time has elapsed.
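
    The two numerical ingredients of the method, a raised-cosine excitation and a quadratic drag model solved for its vertex, can be sketched as follows. All numbers (interval, sample rate, deflection amplitude, drag coefficients) are illustrative, not the patent's:

```python
import numpy as np

# Raised-cosine excitation of a control effector: rises smoothly from zero
# to a peak deflection and back over the interval T, avoiding sharp transients.
T, rate, amp = 200.0, 10.0, 2.0          # interval (s), samples/s, peak (deg)
t = np.arange(0.0, T, 1.0 / rate)
delta = 0.5 * amp * (1.0 - np.cos(2.0 * np.pi * t / T))

# Quadratic drag model C_D(d) = c0 + c1*d + c2*d**2 fitted to (synthetic)
# measurements gathered during the excitation; the optimal deflection is
# the vertex of the fitted parabola.
d_meas = np.linspace(-2.0, 2.0, 9)
cd_meas = 0.020 - 0.004 * d_meas + 0.002 * d_meas**2
c2, c1, c0 = np.polyfit(d_meas, cd_meas, 2)
delta_opt = -c1 / (2.0 * c2)             # minimizing deflection
```

    For these made-up coefficients the fitted parabola bottoms out at delta_opt = 1.0, which would then be commanded as the effector's optimum position.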

  2. Wing-Fixed PIV and force measurements of a large transverse gust encounter

    NASA Astrophysics Data System (ADS)

    Perrotta, Gino

    2015-11-01

    The unsteady aerodynamics of an aspect ratio 4 flat plate wing encountering a large-amplitude transverse gust were investigated using PIV in the wing-fixed reference frame and direct unsteady force measurements. Using a new experimental facility at the University of Maryland, the wing was towed at Reynolds number 20,000 through a 7m-long tank of nominally quiescent water containing a single cross-stream planar jet with velocity equal to the wing's towed velocity - a transverse gust ratio equal to one. The planar jet was created by pumping water through 30 cylindrical nozzles arranged in a single row. PIV confirms that the individual jets converge into a single, narrow, planar gust with a streamwise velocity profile resembling a canonical cosine-squared gust. Forces and fluid velocities of this wing-gust interaction will be presented for two pre-gust conditions: attached flow on the wing and stalled flow over the wing. In both cases, the gust encounter results in a momentary spike in lift coefficient. The peak lift coefficient was measured between 3 and 6 and varies with angle of attack. At low angle of attack, the attached flow wing produces less lift before the gust and much more (non-circulatory) lift during the gust than the stalled wing. Although the flow over the wing at low angle of attack separates during the gust and reattaches afterwards, the recovery time is similar to that of the high angle case, on the order of 10 chord lengths travelled.

  3. PET-CT image fusion using random forest and à-trous wavelet transform.

    PubMed

    Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo

    2018-03-01

    New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by the random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with 4 existing measures, has been presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Microbial Transformation of Esters of Chlorinated Carboxylic Acids

    PubMed Central

    Paris, D. F.; Wolfe, N. L.; Steen, W. C.

    1984-01-01

    Two groups of compounds were selected for microbial transformation studies. In the first group were carboxylic acid esters having a fixed aromatic moiety and an increasing length of the alkyl component. Ethyl esters of chlorine-substituted carboxylic acids were in the second group. Microorganisms from environmental waters and a pure culture of Pseudomonas putida U were used. The bacterial populations were monitored by plate counts, and disappearance of the parent compound was followed by gas-liquid chromatography as a function of time. The products of microbial hydrolysis were the respective carboxylic acids. Octanol-water partition coefficients (Kow) for the compounds were measured. These values spanned three orders of magnitude, whereas microbial transformation rate constants (kb) varied only 50-fold. The microbial rate constants of the carboxylic acid esters with a fixed aromatic moiety increased with an increasing length of alkyl substituents. The regression coefficient for the linear relationships between log kb and log Kow was high for group 1 compounds, indicating that these parameters correlated well. The regression coefficient for the linear relationships for group 2 compounds, however, was low, indicating that these parameters correlated poorly. PMID:16346459

  5. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, its acquisition can be interfered with by many external factors. Because the heart sound is a weak electrical signal, even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The signal processing functions of MATLAB are first used to transform noisy heart sound signals into the wavelet domain and decompose them over multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, so that the denoising of the signal is significantly improved. The denoised signals are reconstructed by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz electromechanical interference signals are eliminated using a notch filter.
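
    The decompose / soft-threshold / reconstruct scheme can be sketched in pure numpy with a Haar wavelet (a minimal stand-in for the MATLAB toolbox calls; the "heart sound" below is a synthetic decaying tone and the threshold is a made-up value):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation, detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=4, thr=0.5):
    """Multi-level decomposition, soft-thresholding of the detail
    coefficients, then stepwise reconstruction."""
    approx, details = x, []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

fs, n = 1000.0, 1024
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 30 * t) * np.exp(-5 * t)    # toy "heart sound"
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(n)
denoised = denoise(noisy)
```

    Soft thresholding shrinks every detail coefficient toward zero by the threshold and zeroes the small ones, which is what suppresses broadband noise while leaving the low-frequency signal content in the approximation band largely intact.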

  6. On the Angular Dependence of the Vicinal Fluorine-Fluorine Coupling Constant in 1,2-Difluoroethane:  Deviation from a Karplus-like Shape.

    PubMed

    Provasi, Patricio F; Sauer, Stephan P A

    2006-07-01

    The angular dependence of the vicinal fluorine-fluorine coupling constant, (3)JFF, for 1,2-difluoroethane has been investigated with several polarization propagator methods. (3)JFF and its four Ramsey contributions were calculated using the random phase approximation (RPA), its multiconfigurational generalization, and both second-order polarization propagator approximations (SOPPA and SOPPA(CCSD)), using locally dense basis sets. The geometries were optimized for each dihedral angle at the level of density functional theory using the B3LYP functional and fourth-order Møller-Plesset perturbation theory. The resulting coupling constant curves were fitted to a cosine series with 8 coefficients. Our results are compared with those obtained previously and values estimated from experiment. It is found that the inclusion of electron correlation in the calculation of (3)JFF reduces the absolute values. This is mainly due to changes in the FC contribution, which for dihedral angles around the trans conformation even changes its sign. This sign change is responsible for the breakdown of the Karplus-like curve.
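
    Fitting a coupling-constant curve to a cosine series in the dihedral angle is a linear least-squares problem. A short numpy sketch with an 8-term series as in the abstract (the coefficient values are made up for the demo, not the computed (3)JFF data):

```python
import numpy as np

# Fit J(phi) = sum_{k=0}^{7} c_k * cos(k*phi) to sampled values.
phi = np.linspace(0.0, np.pi, 37)                    # dihedral angles, 5-deg grid
c_true = np.array([10.0, -5.0, 3.0, -1.0, 0.5, 0.2, -0.1, 0.05])
basis = np.cos(np.outer(phi, np.arange(8)))          # design matrix, 37 x 8
J = basis @ c_true                                   # "computed" couplings
c_fit, *_ = np.linalg.lstsq(basis, J, rcond=None)    # recover the 8 coefficients
```

    A Karplus-like curve corresponds to keeping only the k = 0, 1, 2 terms; the higher-order coefficients quantify the deviation from that shape.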

  7. Analysis of spike-wave discharges in rats using discrete wavelet transform.

    PubMed

    Ubeyli, Elif Derya; Ilbay, Gül; Sahin, Deniz; Ateş, Nurbay

    2009-03-01

    A feature is a distinctive or characteristic measurement, transform, or structural component extracted from a segment of a pattern. Features are used to represent patterns with the goal of minimizing the loss of important information. The discrete wavelet transform (DWT) as a feature extraction method was used in representing the spike-wave discharge (SWD) records of Wistar Albino Glaxo/Rijswijk (WAG/Rij) rats. The SWD records of WAG/Rij rats were decomposed into time-frequency representations using the DWT, and statistical features were calculated to depict their distribution. The obtained wavelet coefficients were used to identify characteristics of the signal that were not apparent from the original time-domain signal. The present study demonstrates that the wavelet coefficients are useful in determining the dynamics in the time-frequency domain of SWD records.

  8. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions. Copyright © 2014 Elsevier Inc. All rights reserved.
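
    The translation-invariance argument can be illustrated with a one-level Haar example (a numpy sketch, unrelated to the authors' MRI code): circularly shifting the input shifts the undecimated (SWT) detail coefficients by the same amount, while the decimated (DWT) coefficients change entirely.

```python
import numpy as np

def swt_detail(x):
    """Level-1 undecimated (stationary) Haar detail coefficients,
    computed with a circular difference -- no downsampling."""
    return (x - np.roll(x, 1)) / 2.0

def dwt_detail(x):
    """Level-1 decimated Haar detail coefficients (downsampled by 2)."""
    return (x[0::2] - x[1::2]) / 2.0

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
x_shift = np.roll(x, 1)

# The SWT detail of the shifted signal is just the shifted SWT detail...
swt_ok = np.allclose(swt_detail(x_shift), np.roll(swt_detail(x), 1))
# ...while the decimated DWT detail does not commute with the shift.
dwt_changed = not np.allclose(dwt_detail(x_shift), np.roll(dwt_detail(x), 1))
```

    This shift covariance is why penalizing SWT coefficients avoids the position-dependent pseudo-Gibbs artifacts that DWT penalties produce.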

  9. Identification of large geomorphological anomalies based on 2D discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Doglioni, A.; Simeone, V.

    2012-04-01

    The identification and analysis, based on quantitative evidence, of large geomorphological anomalies is an important stage in the study of large landslides. Numerical geomorphic analyses represent an interesting approach to this kind of study, allowing a detailed and fairly accurate identification of hidden topographic anomalies that may be related to large landslides. Here a numerical geomorphic analysis of the Digital Terrain Model (DTM) is presented. The introduced approach is based on the 2D discrete wavelet transform (Antoine et al., 2003; Bruun and Nilsen, 2003; Booth et al., 2009). The 2D wavelet decomposition of the DTM, and in particular the analysis of the detail coefficients of the wavelet transform, can provide evidence of anomalies or singularities, i.e., discontinuities of the land surface. These discontinuities are not very evident from the DTM as it is, whereas the 2D wavelet transform allows a grid-based analysis of the DTM and a mapping of the decomposition. In fact, the grid-based DTM can be treated as a matrix on which a discrete wavelet transform (Daubechies, 1992) is performed columnwise and linewise, i.e., along the horizontal and vertical directions. The outcomes of this analysis are low-frequency approximation coefficients and high-frequency detail coefficients. The detail coefficients are analyzed, since their variations are associated with discontinuities of the DTM. They are estimated by performing the 2D wavelet transform both in the horizontal (east-west) and in the vertical (north-south) direction, and are then mapped for both cases, making it possible to visualize and quantify potential anomalies of the land surface. Moreover, the wavelet decomposition can be pushed to further levels, assuming a higher scale number of the transform, which may return further interesting results in terms of identification of land-surface anomalies.
In this kind of approach, the choice of a proper mother wavelet function is a tricky point, since it conditions the analysis and its outcomes. Therefore multiple levels as well as multiple wavelet analyses are considered. The introduced approach is applied here to some interesting case studies in southern Italy, in particular the identification of large anomalies associated with large landslides at the transition between the Apennine chain domain and the foredeep domain; the lower Biferno valley and the Fortore valley are analyzed. Finally, the wavelet transforms are performed on multiple levels, thus addressing the question of which level extent yields an accurate analysis fit to a specific problem. Antoine J.P., Carrette P., Murenzi R., and Piette B. (2003), Image analysis with two-dimensional continuous wavelet transform, Signal Processing, 31(3), pp. 241-272, doi:10.1016/0165-1684(93)90085-O. Booth A.M., Roering J.J., and Taylor Perron J. (2009), Automated landslide mapping using spectral analysis and high-resolution topographic data: Puget Sound lowlands, Washington, and Portland Hills, Oregon, Geomorphology, 109(3-4), pp. 132-147, doi:10.1016/j.geomorph.2009.02.027. Bruun B.T., and Nilsen S. (2003), Wavelet representation of large digital terrain models, Computers and Geoscience, 29(6), pp. 695-703, doi:10.1016/S0098-3004(03)00015-3. Daubechies, I. (1992), Ten lectures on wavelets, SIAM.
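
    How level-1 detail coefficients flag a land-surface discontinuity can be sketched on a synthetic grid (a minimal numpy illustration with a Haar wavelet, not the case-study DTMs):

```python
import numpy as np

def haar_detail_2d(z):
    """Level-1 Haar detail coefficients of a grid: row-pair (N-S)
    differences, then column-pair (E-W) differences of the row averages."""
    a = (z[0::2, :] + z[1::2, :]) / 2.0
    v = (z[0::2, :] - z[1::2, :]) / 2.0        # vertical (N-S) detail
    h = (a[:, 0::2] - a[:, 1::2]) / 2.0        # horizontal (E-W) detail
    return h, v

# Synthetic 64x64 DTM: a gentle eastward slope with a sharp 10 m scarp
# between columns 32 and 33, standing in for a landslide-related anomaly.
x = np.arange(64)
dtm = 0.1 * x[None, :] + np.zeros((64, 64))
dtm[:, 33:] -= 10.0

h, v = haar_detail_2d(dtm)
col_energy = np.abs(h).sum(axis=0)             # detail energy per column pair
anomaly_col = int(np.argmax(col_energy))       # pair 16 = original cols 32-33
```

    The smooth slope produces small, uniform detail coefficients, while the scarp concentrates energy in a single column of the detail map, which is the signature that mapping the decomposition makes visible.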

  10. Sparsity prediction and application to a new steganographic technique

    NASA Astrophysics Data System (ADS)

    Phillips, David; Noonan, Joseph

    2004-10-01

    Steganography is a technique of embedding information in innocuous data such that only the innocent data is visible. The wavelet transform lends itself to image steganography because it generates a large number of coefficients representing the information in the image. Altering a small set of these coefficients allows embedding of information (payload) into an image (cover) without noticeably altering the original image. We propose a novel, dual-wavelet steganographic technique, using transforms selected such that the transform of the cover image has low sparsity, while the payload transform has high sparsity. Maximizing the sparsity of the payload transform reduces the amount of information embedded in the cover, and minimizing the sparsity of the cover increases the locations that can be altered without significantly altering the image. Making this system effective on any given image pair requires a metric to indicate the best (maximum sparsity) and worst (minimum sparsity) wavelet transforms to use. This paper develops the first stage of this metric, which can predict, averaged across many wavelet families, which of two images will have a higher sparsity. A prototype implementation of the dual-wavelet system as a proof of concept is also developed.

  11. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  12. Optical recognition of statistical patterns

    NASA Astrophysics Data System (ADS)

    Lee, S. H.

    1981-12-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
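
    Setting the optics aside, the numerical core of the FKT can be sketched as follows: whiten with respect to the summed class correlation matrices, then diagonalize one class. In the resulting basis the two classes share eigenvectors whose eigenvalues sum to one, so features most expressive for one class are least expressive for the other, which is why a couple of F-K coefficients can already separate two classes. The synthetic Gaussian data and function names below are illustrative.

```python
import numpy as np

def fkt(X1, X2):
    """Fukunaga-Koontz transform for two sample sets (rows = samples).
    Returns the FK basis B and the class-1 eigenvalues lam; in this
    basis class 2 has eigenvalues 1 - lam."""
    S1 = X1.T @ X1 / len(X1)
    S2 = X2.T @ X2 / len(X2)
    w, V = np.linalg.eigh(S1 + S2)
    P = V @ np.diag(1 / np.sqrt(w))         # whitening for S1 + S2
    lam, U = np.linalg.eigh(P.T @ S1 @ P)   # eigenproblem for class 1
    return P @ U, lam

rng = np.random.default_rng(1)
X1 = rng.standard_normal((500, 4)) * np.array([3.0, 1, 1, 1])
X2 = rng.standard_normal((500, 4)) * np.array([1.0, 1, 1, 3])
B, lam = fkt(X1, X2)
S2 = X2.T @ X2 / len(X2)
# Complementarity: class-2 eigenvalues in the FK basis are 1 - lam.
assert np.allclose(np.diag(B.T @ S2 @ B), 1 - lam, atol=1e-8)
```

    Eigenvalues near 1 or 0 mark the directions most useful for discriminating the two classes, which is the selection rule behind incorporating only the best basis functions into the hologram.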

  14. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
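
    The dissertation's allocation algorithm is not reproduced here, but the standard greedy marginal-returns scheme that such work refines can be sketched under the usual high-rate model, in which a quantizer with b bits incurs distortion proportional to sigma^2 * 2^(-2b):

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy marginal-returns bit allocation: each successive bit goes
    to the coefficient whose distortion (sigma^2 * 2**(-2b)) it reduces
    the most; one extra bit quarters that coefficient's distortion."""
    b = np.zeros(len(variances), dtype=int)
    d = np.asarray(variances, dtype=float)   # current per-coefficient distortions
    for _ in range(total_bits):
        i = np.argmax(d - d / 4)             # largest distortion reduction
        b[i] += 1
        d[i] /= 4
    return b

bits = allocate_bits([16.0, 4.0, 1.0, 0.25], 8)
assert bits.sum() == 8
assert bits[0] >= bits[1] >= bits[2] >= bits[3]  # big variances get more bits
```

    Because each step takes the largest available distortion reduction and the reductions for any one coefficient are decreasing, this greedy rule is optimal for this convex distortion model.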

  15. A compact spin-exchange optical pumping system for 3He polarization based on a solenoid coil, a VBG laser diode, and a cosine theta RF coil

    NASA Astrophysics Data System (ADS)

    Lee, Sungman; Kim, Jongyul; Moon, Myung Kook; Lee, Kye Hong; Lee, Seung Wook; Ino, Takashi; Skoy, Vadim R.; Lee, Manwoo; Kim, Guinyun

    2013-02-01

    For use as a neutron spin polarizer or analyzer in the neutron beam lines of the HANARO (High-flux Advanced Neutron Application ReactOr) nuclear research reactor, a 3He polarizer was designed based on both a compact solenoid coil and a VBG (volume Bragg grating) diode laser with a narrow spectral linewidth of 25 GHz. The nuclear magnetic resonance (NMR) signal was measured and analyzed using both a built-in cosine radio-frequency (RF) coil and a pick-up coil. Using a neutron transmission measurement, we estimated the polarization ratio of the 3He cell as 18% for an optical pumping time of 8 hours.

  16. On the Symmetry of Molecular Flows Through the Pipe of an Arbitrary Shape (I) Diffusive Reflection

    NASA Astrophysics Data System (ADS)

    Kusumoto, Yoshiro

    Molecular gas flow through a pipe of arbitrary shape is considered mathematically based on a diffusive reflection model. To avoid a perpetual motion, the magnitude of the molecular flow rate must remain invariant under the exchange of inlet and outlet pressures. For this flow symmetry, the cosine-law reflection at the pipe wall was found to be necessary and sufficient, on the assumption that the molecular flux is conserved in a collision with the wall. It was also shown that a spontaneous flow occurs in a hemispherical apparatus if the reflection obeys the n-th power of the cosine law with n other than unity. This apparatus could work as a molecular pump with no moving parts.
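
    The cosine-law (diffuse) reflection at the wall can be sampled directly by inverse-transform sampling: p(theta) is proportional to cos(theta) sin(theta), so cos(theta) = sqrt(U) for uniform U. This is the building block of Monte Carlo transmission-probability calculations for such pipes; the sketch below is generic, not the paper's derivation.

```python
import numpy as np

def sample_cosine_law(rng, n):
    """Unit directions after a diffuse (cosine-law) wall reflection,
    measured from the local surface normal (z axis)."""
    u = rng.random(n)
    cos_t, sin_t = np.sqrt(u), np.sqrt(1 - u)   # cos(theta) = sqrt(U)
    phi = 2 * np.pi * rng.random(n)             # uniform azimuth
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

rng = np.random.default_rng(2)
v = sample_cosine_law(rng, 100_000)
assert np.allclose(np.linalg.norm(v, axis=1), 1.0)   # unit directions
assert abs(v[:, 2].mean() - 2 / 3) < 0.01            # E[cos theta] = 2/3
```

    An n-th power cosine law would instead use cos(theta) = U**(1/(n+1)), which is exactly the deviation from n = 1 that the paper shows breaks the flow symmetry.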

  17. Resonant circuit which provides dual frequency excitation for rapid cycling of an electromagnet

    DOEpatents

    Praeg, Walter F.

    1984-01-01

    Disclosed is a ring magnet control circuit that permits synchrotron repetition rates much higher than the frequency of the cosinusoidal guide field of the ring magnet during particle acceleration. The control circuit generates cosinusoidal excitation currents of different frequencies in the two half waves. During radio-frequency acceleration of the particles in the synchrotron, the control circuit operates with a lower frequency cosine wave, and thereafter the electromagnets are reset with a higher frequency half cosine wave. Flat-bottom and flat-top wave-shaping circuits maintain the magnetic guide field in a relatively time-invariant mode during times when the particles are being injected into the ring magnets and when the particles are being ejected from the ring magnets.
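
    The dual-frequency excitation can be illustrated by stitching together two half cosines of different frequencies; the 15 Hz and 45 Hz values below are arbitrary illustrative numbers, not taken from the patent.

```python
import numpy as np

def dual_frequency_cycle(f_accel, f_reset, fs=100_000):
    """One magnet cycle: a slow half cosine for the acceleration ramp
    followed by a faster half cosine that resets the field, joined
    continuously at the peak."""
    t1 = np.arange(0, 0.5 / f_accel, 1 / fs)
    t2 = np.arange(0, 0.5 / f_reset, 1 / fs)
    rise = -np.cos(2 * np.pi * f_accel * t1)   # -1 -> +1, slow half wave
    fall = np.cos(2 * np.pi * f_reset * t2)    # +1 -> -1, fast half wave
    return np.concatenate([rise, fall])

i = dual_frequency_cycle(f_accel=15, f_reset=45)
```

    With these values one full cycle lasts 1/(2*15) + 1/(2*45) s, about 44 ms, i.e. a repetition rate of roughly 22.5 Hz, higher than the 15 Hz guide-field frequency during acceleration, which is the point of the scheme.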

  18. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    NASA Astrophysics Data System (ADS)

    Huang, Darong; Bai, Xing-Rong

    Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of an intelligent transport system is constructed. First, the scale and wavelet coefficients are extracted from the online measured raw traffic-flow data via the wavelet transform. Second, an artificial neural network model for traffic-flow prediction is constructed and trained, using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, the operating principle of the optimal control system for the traffic-flow forecasting model, the network topology, and the data-transmission model are designed. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and algorithms have broad prospects for practical application.

  19. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  20. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
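
    The classical Anscombe transform that this VST extends is a one-liner, and its stabilizing effect is easy to check numerically:

```python
import numpy as np

def anscombe(x):
    """Anscombe variance stabilizing transform: maps Poisson(lam) counts
    to approximately unit-variance Gaussian data for lam not too small."""
    return 2 * np.sqrt(x + 3 / 8)

rng = np.random.default_rng(3)
z = anscombe(rng.poisson(20.0, 200_000))
assert abs(z.std() - 1.0) < 0.02   # variance stabilized near 1
```

    After stabilization, ordinary Gaussian thresholding can be applied to the transformed coefficients, which is what the MS-VST extends to filtered, multiscale data.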

  1. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands of the same scale, and de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is applied to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it and several common image reconstruction methods were tested on synthetic, motion-blurred, and hyperspectral images. The experimental results show that, compared with traditional single-image algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved, and noise is suppressed to some extent.

  2. Time-Frequency-Wavenumber Analysis of Surface Waves Using the Continuous Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Poggi, V.; Fäh, D.; Giardini, D.

    2013-03-01

    A modified approach to surface wave dispersion analysis using active sources is proposed. The method is based on continuous recordings, and uses the continuous wavelet transform to analyze the phase velocity dispersion of surface waves. This makes it possible to accurately localize the phase information in time and to isolate the most significant contribution of the surface waves. To extract the dispersion information, a hybrid technique is then applied to the narrowband-filtered seismic recordings. The technique combines the flexibility of the slant stack method in identifying waves that propagate in space and time with the resolution of f-k approaches. This is particularly beneficial for higher-mode identification in cases of high noise levels. To compute the continuous wavelet transform, a new mother wavelet is presented and compared to the classical and widely used Morlet type. The proposed wavelet is obtained from a raised-cosine envelope function (Hanning type). The proposed approach is particularly suitable when using continuous recordings (e.g., from seismological-like equipment) since it does not require any hardware-based source triggering; this can be done subsequently with the proposed method. Estimation of the surface wave phase delay is performed in the frequency domain by means of a covariance matrix averaging procedure over successive wave field excitations. Thus, no record stacking is necessary in the time domain, and a large number of consecutive shots can be used. This leads to a certain simplification of the field procedures. To demonstrate the effectiveness of the method, we tested it on synthetics as well as on real field data. For the real case we also combine dispersion curves from ambient vibrations and active measurements.
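
    A complex wavelet with a raised-cosine (Hanning) envelope of the general kind described can be sketched as follows; the 5-cycle length and the test frequencies are illustrative choices, not the paper's parameters.

```python
import numpy as np

def hanning_wavelet(f0, cycles, fs):
    """Complex analytic wavelet with a raised-cosine (Hanning) envelope,
    an alternative to the Gaussian envelope of the Morlet wavelet."""
    n = int(cycles * fs / f0)
    t = np.arange(n) / fs
    env = 0.5 - 0.5 * np.cos(2 * np.pi * t / t[-1])   # Hanning window
    w = env * np.exp(2j * np.pi * f0 * t)
    return w / np.sqrt(np.sum(np.abs(w) ** 2))        # unit energy

fs = 1000.0
w = hanning_wavelet(f0=10.0, cycles=5, fs=fs)
sig10 = np.cos(2 * np.pi * 10.0 * np.arange(2000) / fs)
sig40 = np.cos(2 * np.pi * 40.0 * np.arange(2000) / fs)
resp10 = np.abs(np.convolve(sig10, w, mode='valid')).max()
resp40 = np.abs(np.convolve(sig40, w, mode='valid')).max()
assert resp10 > 5 * resp40   # band-pass: strong response only near f0
```

    The Hanning envelope's compact support and low sidelobes are what make the time localization of the phase information sharp.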

  3. A Semi-Analytical Solution to Time Dependent Groundwater Flow Equation Incorporating Stream-Wetland-Aquifer Interactions

    NASA Astrophysics Data System (ADS)

    Boyraz, Uǧur; Melek Kazezyılmaz-Alhan, Cevza

    2017-04-01

    Groundwater is a vital element of the hydrologic cycle, and the analytical and numerical solutions of different forms of the groundwater flow equation play an important role in understanding the hydrological behavior of subsurface water. The interaction between groundwater and surface water bodies can be determined using these solutions. In this study, new hypothetical approaches are applied to a groundwater flow system in order to contribute to the studies on surface water/groundwater interactions. A time-dependent problem is considered in a 2-dimensional stream-wetland-aquifer system. A sloped stream boundary is used to represent the interaction between stream and aquifer; the remaining aquifer boundaries are assumed to be no-flux boundaries. In addition, a wetland is considered as a surface water body which lies over the whole aquifer. The effect of the interaction between the wetland and the aquifer is taken into account with a source/sink term in the groundwater flow equation, and the interaction flow is calculated using Darcy's approach. A semi-analytical solution is developed for the 2-dimensional groundwater flow equation in 5 steps. First, Laplace and Fourier cosine transforms are employed to obtain the general solution in the Fourier and Laplace domains. Then, the initial and boundary conditions are applied to obtain the particular solution. Finally, the inverse Fourier transform is carried out analytically and the inverse Laplace transform numerically to obtain the final solution in the space and time domains. In order to verify the semi-analytical solution, an explicit finite difference algorithm is developed, and the analytical and numerical solutions are compared for synthetic examples. The comparison shows that the analytical solution gives accurate results.
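
    The role of the Fourier cosine transform can be seen in a much-simplified 1-D analogue: with no-flux (Neumann) boundaries, the cosine basis diagonalizes the diffusion operator, so each mode simply decays. The toy example below (using the discrete DCT-II as the cosine transform, with all parameters illustrative) is not the paper's 2-D stream-wetland-aquifer solution:

```python
import numpy as np
from scipy.fft import dct, idct

def diffuse(h0, L, T, D):
    """Cosine-series solution of h_t = D h_xx with no-flux boundaries:
    each DCT mode decays as exp(-D * (k*pi/L)**2 * T)."""
    k = np.arange(len(h0)) * np.pi / L
    a = dct(h0, norm='ortho')                       # forward cosine transform
    return idct(a * np.exp(-D * k**2 * T), norm='ortho')

L_dom, n = 1.0, 128
x = (np.arange(n) + 0.5) * L_dom / n                # midpoint grid (DCT-II)
h0 = np.where(x < 0.5, 1.0, 0.0)                    # initial head: a step
h = diffuse(h0, L_dom, T=0.05, D=0.1)
assert abs(h.mean() - h0.mean()) < 1e-9             # no-flux: mass conserved
assert np.ptp(h) < np.ptp(h0)                       # profile flattens
```

    The same diagonalization is what lets the paper invert the Fourier cosine transform analytically, leaving only the Laplace inversion to be done numerically.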

  4. Identification of speech transients using variable frame rate analysis and wavelet packets.

    PubMed

    Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung

    2006-01-01

    Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. succeeded in identifying speech transients and, by emphasizing them, improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for their packet, amplifying coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. An inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
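
    A transitivity function of the general kind described, a normalized local measure of how fast coefficients change, can be sketched as follows; the window length and normalization are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def transitivity(coeffs, win=3):
    """Local mean absolute change of a coefficient sequence,
    normalized to [0, 1]: near 1 during transients, near 0 when steady."""
    d = np.abs(np.diff(coeffs, prepend=coeffs[0]))
    t = np.convolve(d, np.ones(win) / win, mode='same')
    return t / (t.max() + 1e-12)

# Steady segment, a transient burst, then steady again:
sig = np.concatenate([np.ones(50), np.linspace(1, 5, 10), 5 * np.ones(50)])
t = transitivity(sig)
emphasized = sig * t   # transients kept, steady parts attenuated
assert t[:40].max() < 0.1   # steady region scores near zero
```

    Multiplying each packet's coefficients by such a function is what amplifies the transient-bearing coefficients and attenuates the steady ones before the inverse transform.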

  5. Fabric wrinkle characterization and classification using modified wavelet coefficients and optimized support-vector-machine classifier

    USDA-ARS?s Scientific Manuscript database

    This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...

  6. Association of HPA axis hormones with copeptin after psychological stress differs by sex.

    PubMed

    Spanakis, Elias K; Wand, Gary S; Ji, Nan; Golden, Sherita Hill

    2016-01-01

    Copeptin levels are elevated in severe medical conditions, an effect attributed to elevated arginine vasopressin (AVP) levels in response to physiological stress, resulting in activation of the hypothalamic-pituitary-adrenal (HPA) axis. In the current study, we wanted to determine whether copeptin is responsive to psychological stress, whether it correlates with cortisol and adrenocorticotropin hormone (ACTH), and whether these associations differ by sex. In a cross-sectional study that included 100 healthy men (41%) and women (59%) (aged 18-30 years; mean 24.6 ± 3 years), who underwent the Trier Social Stress Test (TSST), we examined the association between the percent change ((peak-baseline)/baseline) in copeptin levels and the percent change in log ACTH and cortisol. Three baseline samples were drawn, followed by blood sampling at 20, 35, 50, 65 and 85 min after the TSST. There was a significant positive association between the percent change in copeptin and the percent change in log-transformed salivary cortisol (β-coefficient=0.95; p=0.02). The association between the percent change in copeptin and log-transformed serum cortisol was not statistically significant in the overall population. There was a trend toward a non-significant association between the percent change in copeptin and the percent change in log-transformed ACTH (β-coefficient=1.14; p=0.06). In males, there was a significant positive association between the percent change in copeptin levels and log-transformed salivary (β-coefficient=1.33, p=0.016) and serum cortisol (β-coefficient=0.69, p=0.01), whereas in women there was no statistically significant association. We found a significant positive association between percent change in copeptin and percent change in salivary and serum cortisol among males only. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Downhole microseismic signal-to-noise ratio enhancement via strip matching shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-04-01

    The shearlet transform has proved effective in noise attenuation. However, because of the low magnitude and high frequency of downhole microseismic signals, the coefficient values of valid signals and noise are similar in the shearlet domain, which makes the noise hard to suppress. In this paper, we present a novel signal-to-noise ratio enhancement scheme called the strip matching shearlet transform. The method takes into account the directivity of both microseismic events and shearlets: through strip matching, the directional match between them is improved, so that the coefficient values of valid signals become much larger than those of the noise. Consequently, the two can be separated well with the help of thresholding. Experimental results on both synthetic records and field data illustrate that the proposed method preserves the useful components and attenuates the noise well.
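
    The separate-by-thresholding step can be illustrated generically; the sketch below uses a DCT as a stand-in for the shearlet domain and a 3-sigma hard threshold with a median-based noise estimate, all illustrative choices rather than the paper's method:

```python
import numpy as np
from scipy.fft import dct, idct

def hard_threshold_denoise(x, k_sigma=3.0):
    """Transform-domain denoising: keep only coefficients that rise
    above a noise-scaled threshold, then invert the transform."""
    c = dct(x, norm='ortho')
    sigma = np.median(np.abs(c - np.median(c))) / 0.6745  # robust noise scale
    c[np.abs(c) < k_sigma * sigma] = 0.0                  # hard threshold
    return idct(c, norm='ortho')

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(512)
den = hard_threshold_denoise(noisy)
assert np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean)
```

    Thresholding only works when signal coefficients stand clear of the noise floor, which is exactly the separation that strip matching is meant to restore for low-magnitude microseismic events.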

  8. Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.

    PubMed

    Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu

    2016-07-19

    Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds; interns can practice with the template database to shorten the training time for diagnosis of OA. A knee sound signal was obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data. The signal was Fourier transformed on the correlated frequency segment, and an inverse Fourier transform was performed to obtain the time-domain signal. A Haar wavelet transform was then applied. The median and the mean of the wavelet coefficients were chosen to inverse transform the synthesized signal in each KL category. The quality of the synthesized signal was assessed by a clinician, and the sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median-coefficient algorithm (93 %) was better than that of the mean-coefficient algorithm (88 %) in cross-validation by a clinician using the synthesized VAG signals. The artificial signals we synthesized have the potential to support a learning system for medical students, interns, and paramedical personnel in the diagnosis of OA. Our method therefore provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.

  9. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi gamma-ray space telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary, such as wavelets or curvelets, and then applying a VST to the coefficients in order to obtain almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask; the gaps then have to be interpolated, so an extension to inpainting is also proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.

  10. Rogue waves and unbounded solutions of the NLSE

    NASA Astrophysics Data System (ADS)

    Lechuga, Antonio

    2017-04-01

    Since the pioneering work of Zakharov, it has been generally accepted that rogue waves can be studied in the framework of the Nonlinear Schrödinger Equation (NLSE). Many researchers, Akhmediev, Peregrine, and Matveev among others, gave different solutions to this equation that, in some way, could be linked to rogue waves and also to their most important characteristic: their unexpectedness. Janssen (2003, 2004), Onorato (2004, 2006) and Waseda (2006) linked the coefficient of the nonlinear term of the Schrödinger equation with the Benjamin-Feir index (BFI), which is known to be a measure of the modulational instability of the waves. From this point of view, the value of this coefficient of the NLSE could be known from statistics, and thus the relationship between sea states and the mechanism of generation of rogue waves could be found out. Following the well-known Lie group theory, researchers have been studying the Lie point symmetries of the NLSE: scaling transformations, Galilean transformations and phase transformations. Basically, these transformations turn the NLSE into a nonlinear ordinary differential equation called the Duffing equation (also called the eikonal equation). There are different ways to do this, but in most of them the independent variable, which could be seen as a space variable, is a kind of moving frame with time incorporated in this way. The main aim of this work is to classify solutions of the Duffing equation (periodic and nonperiodic waves, and also bounded and unbounded waves), bearing in mind that the coefficient of the nonlinear term in the NLSE is left unaltered in the process of the transformation.

  11. Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements

    NASA Technical Reports Server (NTRS)

    Staylor, W. F.; Suttles, J. T.

    1986-01-01

    Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of the cosines of the solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power law of the cosine of the viewing zenith angle.

  12. The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Sembiring, C. A.; Budiman, M. A.

    2018-03-01

    The rapidly increasing number of web pages and documents leads to topic-specific filtering in order to find web pages or documents efficiently. This preliminary research uses cosine similarity to implement text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text pre-processing: punctuation is removed from a document, the document is converted to lower case, stop words are removed, and the root words are extracted using the Porter stemming algorithm. The second part is keyword weighting, which is used by the third part, the text relevance calculation. The text relevance calculation yields a value between 0 and 1; the closer the value is to 1, the more related the two documents are, and vice versa.
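
    The three-part pipeline maps directly onto a few lines of code. The sketch below follows the described steps with term-frequency vectors, though it omits Porter stemming and uses a toy stop-word list:

```python
import math
import re
from collections import Counter

STOP = {"a", "an", "the", "is", "to", "of", "and", "in", "or"}

def preprocess(text):
    """Lower-case, strip punctuation, drop stop words (Porter stemming,
    used in the paper, is omitted here for brevity)."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]

def cosine_similarity(doc1, doc2):
    """Cosine of the angle between term-frequency vectors:
    0 = no shared terms, 1 = identical term distribution."""
    v1, v2 = Counter(preprocess(doc1)), Counter(preprocess(doc2))
    dot = sum(v1[w] * v2[w] for w in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

assert cosine_similarity("web page filtering", "filtering web pages?") > 0.5
assert cosine_similarity("cosine transform", "neutron polarizer") == 0.0
```

    Stemming would map "pages" and "page" to the same root, pushing the first pair's score toward 1, which is precisely why the paper applies it before weighting.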

  13. Phase relations of natural 65 year SST variations, ocean sea level variations over 260 years, and Arctic sea-ice retreat of the satellite era - issues of cause and effect.

    NASA Astrophysics Data System (ADS)

    Asten, Michael

    2017-04-01

    We study sea level variations over the past 300yr in order to quantify what fraction of variations may be considered cyclic, and what phase relations exist with respect to those cycles. The 64yr cycle detected by Chambers et al (2012) is found in the 1960-2000 data set which Hamlington et al (2013) interpreted as an expression of the PDO; we show that fitting a 64yr cycle is a better fit, accounting for 92% of variance. In a 300yr GMSL tide gauge record Jevrejeva et al (2008) identified a 60-65yr cycle superimposed on an upward trend from 1800CE. Using break-points and removal of centennial trends identified by Kemp et al (2015), we produce a detrended GMSL record for 1700-2000CE which emphasizes the 60-65yr oscillations. A least-squares fit using a 64yr period cosine yields an amplitude of 12mm and an origin at year 1958.6, which accounts for 30% of the variance. A plot of the cosine against the entire length of the 300yr detrended GMSL record shows a clear phase lock for the interval 1740 to 2000CE, denoting either a very consistent timing of an internally generated natural variation, or adding to evidence for an external forcing of astronomical origin (Scafetta 2012, 2013). Barcikowska et al (2016) have identified a 65yr cyclic variation in sea surface temperature in the first multidecadal component of Multi-Channel Singular Spectrum Analysis (MSSA) on the Hadley SST data set (RC60). A plot of RC60 versus our fitted cosine shows the phase shift to be 16 yr, close to a 90 degree phase lag of GMSL relative to RC60. This is the relation to be expected for a simple low-pass or integrating filter, which suggests that cyclic natural variations in sea-surface temperature drive similar variations in GMSL. We compare the extent of Arctic sea-ice using the time interval of 1979-2016 (window of satellite imagery). The decrease in summer ice cover has been subject of many predictions as to when summer ice will reach zero.
The plot of measured ice area can be fitted with many speculative curves, and we show three such best fit curves, a parabola (zero ice cover by 2028), a linear fit (zero by 2060) and a 64yr period cosine, where the cosine is a shape chosen as a hypothesis, given the relation we observe between SST natural variations and 260 years of detrended sea level data. The cosine best fit shows a maximum ice coverage in 1985.6 and predicted minimum in 2017.6, which compares with the detrended sea level cyclic component minimum at 1990.6 and predicted maximum at 2023.6CE. Thus the sea-ice retreat lags RC60 by about 10 yr or 60deg in phase. The consistent phase of sea-level change over 260yr, and the phase lags of sea-ice retreat and sea-level change relative to the natural 65yr cyclic component of SST, have implications in the debate over internal versus external drivers of the cyclic components of change, and in hypotheses on cause and effect of the non-anthropogenic components of change.
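
    The least-squares fit of a fixed-period cosine described above can be sketched as a linear regression on cosine and sine components; the synthetic data below merely mimic the quoted numbers (64 yr period, 12 mm amplitude, origin 1958.6) and are not the GMSL record:

```python
import numpy as np

def fit_cosine(t, y, period):
    """Least-squares fit of y ~ m + a*cos(w t) + b*sin(w t) at a fixed
    period; returns the amplitude and the phase-origin time."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (m, a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a) / w   # amplitude, origin (mod period)

# Synthetic stand-in mimicking the quoted fit (NOT the detrended GMSL data):
rng = np.random.default_rng(5)
t = np.arange(1700.0, 2001.0)
y = 12 * np.cos(2 * np.pi * (t - 1958.6) / 64) + rng.standard_normal(t.size)
amp, t0 = fit_cosine(t, y, period=64)
assert abs(amp - 12) < 0.5                        # amplitude recovered
assert abs((t0 - 1958.6 + 32) % 64 - 32) < 1.0    # phase agrees mod one period
```

    Holding the period fixed keeps the problem linear in the unknowns; scanning over candidate periods and comparing explained variance is the natural extension for testing the 60-65yr band.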

  14. The Investigation of Strain-Induced Martensite Reverse Transformation in AISI 304 Austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Cios, G.; Tokarski, T.; Żywczak, A.; Dziurka, R.; Stępień, M.; Gondek, Ł.; Marciszko, M.; Pawłowski, B.; Wieczerzak, K.; Bała, P.

    2017-10-01

    This paper presents a comprehensive study on the strain-induced martensitic transformation and reversion transformation of the strain-induced martensite in AISI 304 stainless steel using a number of complementary techniques such as dilatometry, calorimetry, magnetometry, and in-situ X-ray diffraction, coupled with high-resolution microstructural transmission Kikuchi diffraction analysis. Tensile deformation was applied at temperatures between room temperature and 213 K (-60 °C) in order to obtain different volume fractions of strain-induced martensite (up to 70 pct). The volume fraction of the strain-induced martensite, measured by the magnetometric method, was correlated with the total elongation, hardness, and linear thermal expansion coefficient. The thermal expansion coefficient and the hardness of the strain-induced martensitic phase were evaluated. The in-situ thermal treatment experiments showed unusual changes in the kinetics of the reverse transformation (α' → γ). The X-ray diffraction analysis revealed that the reverse transformation may be stress assisted—strains inherited from the martensitic transformation may increase its kinetics at the lower annealing temperature range. More importantly, the transmission Kikuchi diffraction measurements showed that the reverse transformation of the strain-induced martensite proceeds through a displacive, diffusionless mechanism, maintaining the Kurdjumov-Sachs crystallographic relationship between the martensite and the reverted austenite. This finding is in contradiction to the results reported by other researchers for a similar alloy composition.

  15. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We state a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients, and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. By the weighting method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  16. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    PubMed

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression followed by rescaling in the spatial domain is a very expensive method. We studied downscaled inverses, where the image is decompressed partially and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.
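
The downscaled-inverse idea can be illustrated for the block DCT, the simplest member of the transform family treated here: keep only the low-frequency quarter of a block's coefficients and invert them with a smaller inverse transform, yielding a half-resolution block directly. The numpy sketch below is an illustration, not the paper's general M-channel construction; the 1/2 scale factor follows from the orthonormal normalization used:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
    k = np.arange(n)[:, None]   # frequency index (rows)
    j = np.arange(n)[None, :]   # sample index (columns)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)        # orthonormal scaling of the DC row
    return C

D8, D4 = dct_matrix(8), dct_matrix(4)

# A smooth 8x8 test block (a bilinear ramp)
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = (i + j).astype(float)

coeffs = D8 @ block @ D8.T      # forward 2D DCT
full = D8.T @ coeffs @ D8       # full inverse: lossless

# Downscaled inverse: invert only the low-frequency 4x4 corner with a
# 4x4 IDCT; dividing by 2 keeps the orthonormal scaling consistent, so
# the half-resolution block preserves the mean of the original block.
half = D4.T @ (coeffs[:4, :4] / 2.0) @ D4
```

For smooth blocks, `half` approximates the 2x downscaled image while touching only a quarter of the coefficients, which is the synthesis saving the abstract describes.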

  17. Integrable equations of the infinite nonlinear Schrödinger equation hierarchy with time variable coefficients.

    PubMed

    Kedziora, D J; Ankiewicz, A; Chowdury, A; Akhmediev, N

    2015-10-01

    We present an infinite nonlinear Schrödinger equation hierarchy of integrable equations, together with the recurrence relations defining it. To demonstrate integrability, we present the Lax pairs for the whole hierarchy, specify its Darboux transformations and provide several examples of solutions. These resulting wavefunctions are given in exact analytical form. We then show that the Lax pair and Darboux transformation formalisms still apply in this scheme when the coefficients in the hierarchy depend on the propagation variable (e.g., time). This extension thus allows for the construction of complicated solutions within a greatly diversified domain of generalised nonlinear systems.

  18. Standard UBV Observations at the Çanakkale University Observatory (ÇUO)

    NASA Astrophysics Data System (ADS)

    Bakis, Hicran; Bakis, Volkan; Demircan, Osman; Budding, Edwin

    2005-07-01

    By using standard and comparison star observations carried out at different times of the year at Çanakkale Onsekiz Mart University Observatory, we obtained the atmospheric extinction coefficients at the observatory. We also obtained transformation coefficients and zero-point constants for the transformation to the standard Johnson UBV system of observations in the local system carried out with the SSP5A photometer and T40 telescope. The transmission curves and mean wavelengths of the UBV filters as measured in the laboratory appear not much different from those of the standard Johnson system, and are found to lie inside the transmission curve of the standard mean atmosphere.

  19. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.

  20. A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2006-01-01

    This paper will discuss and compare the execution times of three examples of the Discrete Fourier Transform (DFT). The first two examples will demonstrate the direct implementation of the algorithm. In the first example, the Fourier coefficients are generated at the execution of the DFT. In the second example, the coefficients are generated prior to execution and the DFT coefficients are indexed at execution. The last example will demonstrate the Cooley-Tukey algorithm, better known as the Fast Fourier Transform. All examples were written in C and executed on a PC using a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct implementation of the DFT executes, as expected, at order N^2, and the FFT executes at order N log2 N. At N=16K, there is an increase in processing time beyond what is expected. This is not caused by the implementation but is a consequence of the effect that machine architecture and memory hierarchy have on execution. This paper will include a brief overview of digital signal processing, along with a discussion of contemporary work with discrete Fourier processing.
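
The direct O(N^2) implementation with a precomputed, indexed coefficient table can be sketched in a few lines; numpy's FFT stands in here for the Cooley-Tukey routine the paper times (this is a correctness sketch in Python, not a reproduction of the C timing experiment):

```python
import numpy as np

def dft_direct(x):
    """Direct O(N^2) DFT using a precomputed coefficient table
    (the 'coefficients generated prior to execution' variant)."""
    N = len(x)
    n = np.arange(N)
    # W[k, m] = exp(-2j*pi*k*m/N), generated once and indexed at execution
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(1)
x = rng.normal(size=64) + 1j * rng.normal(size=64)
X_direct = dft_direct(x)
X_fft = np.fft.fft(x)   # Cooley-Tukey FFT, O(N log2 N)
```

Both routines compute the same transform; only the operation count differs, which is what the execution-time comparison in the paper measures.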

  1. Internal friction between fluid particles of MHD tangent hyperbolic fluid with heat generation: Using coefficients improved by Cash and Karp

    NASA Astrophysics Data System (ADS)

    Salahuddin, T.; Khan, Imad; Malik, M. Y.; Khan, Mair; Hussain, Arif; Awais, Muhammad

    2017-05-01

    The present work examines the internal resistance between fluid particles of tangent hyperbolic fluid flow due to a non-linear stretching sheet with heat generation. Using similarity transformations, the governing system of partial differential equations is transformed into a coupled non-linear ordinary differential system with variable coefficients. Unlike the current analytical works on such flow problems in the literature, the main concern here is to work out the solution numerically using the Runge-Kutta-Fehlberg coefficients improved by Cash and Karp (Naseer et al., Alexandria Eng. J. 53, 747 (2014)). To determine the relevant physical features of the numerous mechanisms acting on the considered problem, it is sufficient to have the velocity profile and temperature field, together with the drag force and heat transfer rate, all of which are given in the current paper.
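
A single embedded Cash-Karp step can be sketched as follows, applied to a toy scalar ODE rather than the paper's boundary-layer system; the coefficients are the standard Cash-Karp tableau as found in common references:

```python
import math

# Cash-Karp embedded Runge-Kutta coefficients (Butcher tableau)
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [3/10, -9/10, 6/5],
    [-11/54, 5/2, -70/27, 35/27],
    [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096],
]
B5 = [37/378, 0, 250/621, 125/594, 0, 512/1771]                 # 5th order
B4 = [2825/27648, 0, 18575/48384, 13525/55296, 277/14336, 1/4]  # 4th order

def cash_karp_step(f, t, y, h):
    """One embedded Cash-Karp step: returns the 5th-order solution and
    an error estimate (difference of the 5th- and 4th-order results),
    which an adaptive integrator would use to control the step size."""
    k = []
    for i in range(6):
        ti = t + h * sum(A[i]) if i else t      # c_i is the row sum of A
        yi = y + h * sum(a * kk for a, kk in zip(A[i], k))
        k.append(f(ti, yi))
    y5 = y + h * sum(b * kk for b, kk in zip(B5, k))
    y4 = y + h * sum(b * kk for b, kk in zip(B4, k))
    return y5, abs(y5 - y4)

# Toy problem y' = -y, y(0) = 1; exact solution is exp(-t)
y5, err = cash_karp_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```

The error estimate comes nearly for free from the embedded pair, which is the practical advantage of the Cash-Karp coefficients over a plain Runge-Kutta scheme.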

  2. The Delicate Analysis of Short-Term Load Forecasting

    NASA Astrophysics Data System (ADS)

    Song, Changwei; Zheng, Yuan

    2017-05-01

    This paper proposes a new method for short-term load forecasting based on the similar-day method, the correlation coefficient and the Fast Fourier Transform (FFT), to achieve a precise analysis of load variation from three aspects (typical day, correlation coefficient, spectral analysis) and three dimensions (time dimension, industry dimension, and the main factors influencing the load characteristic, such as national policies, regional economics, holidays, electricity price and so on). First, the branch algorithm one-class-SVM is adopted to select the typical day. Second, the correlation coefficient method is used to obtain the direction and strength of the linear relationship between two random variables, which can reflect the influence of customer macro policy and the scale of production on the electricity price. Third, a Fourier-transform residual error correction model is proposed to reflect the nature of the load extracted from the residual error. Finally, simulation results indicate the validity and engineering practicability of the proposed method.

  3. Breather management in the derivative nonlinear Schrödinger equation with variable coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Wei-Ping; Belić, Milivoj

    2015-04-15

    We investigate breather solutions of the generalized derivative nonlinear Schrödinger (DNLS) equation with variable coefficients, which is used in the description of femtosecond optical pulses in inhomogeneous media. The solutions are constructed by means of the similarity transformation, which reduces a particular form of the generalized DNLS equation into the standard one, with constant coefficients. Examples of bright and dark breathers of different orders, that ride on finite backgrounds and may be related to rogue waves, are presented. - Highlights: • Exact solutions of a generalized derivative NLS equation are obtained. • The solutions are produced by means of a transformation to the usual integrable equation. • The validity of the solutions is verified by comparing them to numerical counterparts. • Stability of the solutions is checked by means of direct simulations. • The model applies to the propagation of ultrashort pulses in optical media.

  4. Significant Figure Rules for General Arithmetic Functions.

    ERIC Educational Resources Information Center

    Graham, D. M.

    1989-01-01

    Provides some significant figure rules used in chemistry including the general theoretical basis; logarithms and antilogarithms; exponentiation (with exactly known exponents); sines and cosines; and the extreme value rule. (YP)

  5. Wavelet pressure reactivity index: A validation study.

    PubMed

    Liu, Xiuyun; Czosnyka, Marek; Donnelly, Joseph; Cardim, Danilo; Cabeleira, Manuel; Hutchinson, Peter J; Hu, Xiao; Smielewski, Peter; Brady, Ken

    2018-04-17

    The brain is vulnerable to damage from too little or too much blood flow. A physiological mechanism called cerebral autoregulation (CA) exists to maintain stable blood flow even if cerebral perfusion pressure (CPP) is changing. A robust method for assessing CA is not yet available, and there are still some problems with the traditional measure, the pressure reactivity index (PRx). We introduce a new method, the wavelet transform method (wPRx), to assess CA using data from two sets of controlled hypotension experiments in piglets: one set with artificially manipulated ABP oscillations, the other with spontaneous ABP waves. A significant linear relationship was found between wPRx and PRx in both groups, with wPRx giving a more stable result for the spontaneous waves. Although both methods showed similar accuracy in distinguishing intact and impaired CA, wPRx tends to perform better than PRx, though not significantly. We present a novel method to monitor cerebral autoregulation (CA) using the wavelet transform (WT). The new method is validated against the pressure reactivity index (PRx) in two piglet experiments with controlled hypotension. The first experiment (n = 12) had controlled haemorrhage with artificial stationary arterial blood pressure (ABP) and intracranial pressure (ICP) oscillations induced by sinusoidal slow changes in positive end-expiratory pressure ('PEEP group'). The second experiment (n = 17) had venous balloon inflation during spontaneous, non-stationary ABP and ICP oscillations ('non-PEEP group'). The wavelet transform phase shift (WTP) between ABP and ICP was calculated in the frequency range 0.0067-0.05 Hz. Wavelet semblance, the cosine of WTP, was used to make the values comparable to PRx, and the new index was termed the wavelet pressure reactivity index (wPRx). The traditional PRx, the running correlation coefficient between ABP and ICP, was also calculated. 
The results showed a significant linear relationship between wPRx and PRx in the PEEP group (R = 0.88) and the non-PEEP group (R = 0.56). In the non-PEEP group, wPRx showed better performance than PRx in distinguishing CPP above and below the lower limit of autoregulation (LLA). When CPP was decreased below the LLA, wPRx increased from 0.43 ± 0.28 to 0.69 ± 0.12 (p = 0.003), while PRx increased from 0.07 ± 0.21 to 0.27 ± 0.37 (p = 0.04). Moreover, wPRx gave a more stable result than PRx (SD of PRx was 0.40 ± 0.07, and SD of wPRx was 0.28 ± 0.11, p = 0.001). Assessment of CA using the wavelet-derived phase shift between ABP and ICP is feasible. This article is protected by copyright. All rights reserved.
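
The semblance construction, the cosine of the phase shift between two signals, can be illustrated with a simplified stand-in: here the phase is read off a single FFT cross-spectrum bin rather than computed from a continuous wavelet transform, an assumption made purely for brevity of the sketch:

```python
import numpy as np

def semblance_at_freq(x, y, fs, f0):
    """Cosine of the phase shift between x and y at frequency f0.

    Simplified stand-in for the wavelet-based wPRx: the phase comes
    from one FFT cross-spectrum bin instead of a wavelet transform.
    """
    n = len(x)
    k = int(round(f0 * n / fs))             # FFT bin nearest f0
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    phase = np.angle(X[k] * np.conj(Y[k]))  # phase of x relative to y
    return np.cos(phase)

# Two slow oscillations at 0.02 Hz, with y lagging x by 60 degrees
fs = 1.0
t = np.arange(0, 2000.0, 1.0 / fs)
f0 = 0.02
x = np.cos(2 * np.pi * f0 * t)
y = np.cos(2 * np.pi * f0 * t - np.pi / 3)
s = semblance_at_freq(x, y, fs, f0)   # cos(60 deg) = 0.5
```

A semblance near +1 means the two pressures move in phase (impaired reactivity in the wPRx interpretation), near 0 a quarter-cycle lag, and near -1 anti-phase.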

  6. Pearson's Correlation between Three Variables; Using Students' Basic Knowledge of Geometry for an Exercise in Mathematical Statistics

    ERIC Educational Resources Information Center

    Vos, Pauline

    2009-01-01

    When studying correlations, how do the three bivariate correlation coefficients between three variables relate? After transforming Pearson's correlation coefficient r into a Euclidean distance, undergraduate students can tackle this problem using their secondary school knowledge of geometry (Pythagoras' theorem and similarity of triangles).…
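
The geometric link the abstract exploits can be checked numerically: for z-scored vectors of length n, the Euclidean distance d and Pearson's r satisfy d^2 = 2n(1 - r), so the distance is a monotone transform of the correlation (a small numpy check on synthetic data):

```python
import numpy as np

def zscore(v):
    """Standardize to zero mean and unit (population) standard deviation."""
    return (v - v.mean()) / v.std()

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)   # correlated with x

zx, zy = zscore(x), zscore(y)
r = np.corrcoef(x, y)[0, 1]
d = np.linalg.norm(zx - zy)
# Identity: d^2 = 2 n (1 - r), since ||zx||^2 = ||zy||^2 = n and zx.zy = n r
```

This is the step that lets students apply Pythagoras' theorem and triangle similarity to the three pairwise correlations.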

  7. Some operational tools for solving fractional and higher integer order differential equations: A survey on their mutual relations

    NASA Astrophysics Data System (ADS)

    Kiryakova, Virginia S.

    2012-11-01

    The Laplace Transform (LT) serves as a basis of the Operational Calculus (OC), widely explored by engineers and applied scientists in solving mathematical models for their practical needs. This transform is closely related to the exponential and trigonometric functions (exp, cos, sin) and to the classical differentiation and integration operators, reducing them to simple algebraic operations. Thus, the classical LT and the OC give a useful tool to handle differential equations and systems with constant coefficients. Several generalizations of the LT have been introduced to allow solving, in a similar way, differential equations with variable coefficients and of higher integer orders, as well as of fractional (arbitrary non-integer) orders. Note that fractional order mathematical models are now widely used to better describe various systems and phenomena of the real world. This paper briefly surveys some of our results on classes of such integral transforms that can be obtained from the LT by means of "transmutations", which are operators of the generalized fractional calculus (GFC). On the list of these Laplace-type integral transforms, we consider the Borel-Dzrbashjan, Meijer, Krätzel, Obrechkoff, generalized Obrechkoff (multi-index Borel-Dzrbashjan) transforms, etc. All of them are G- and H-integral transforms of convolutional type, having as kernels Meijer's G- or Fox's H-functions. Besides, some special functions (also being G- and H-functions), among them the generalized Bessel-type and Mittag-Leffler (M-L) type functions, generate Gel'fond-Leontiev (G-L) operators of generalized differentiation and integration, which happen to be also operators of GFC. 
Our integral transforms have operational properties analogous to those of the LT - they do algebrize the G-L generalized integrations and differentiations, and thus can serve for solving wide classes of differential equations with variable coefficients of arbitrary, including non-integer order. Throughout the survey, we illustrate the parallels in the relationships: Laplace type integral transforms - special functions as kernels - operators of generalized integration and differentiation generated by special functions - special functions as solutions of related differential equations. The role of the so-called Special Functions of Fractional Calculus is emphasized.
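
As a concrete instance of the operational property referred to above (a standard textbook identity, not taken from the survey itself), the classical LT turns differentiation into multiplication by s:

```latex
\mathcal{L}\{f'\}(s) = s F(s) - f(0), \qquad
\mathcal{L}\{f''\}(s) = s^2 F(s) - s f(0) - f'(0),
```

so a constant-coefficient equation y'' + a y' + b y = g(t) becomes the algebraic equation

```latex
(s^2 + a s + b)\, Y(s) = G(s) + (s + a)\, y(0) + y'(0),
```

which is solved for Y(s) and inverted. The generalized transforms surveyed here play the same algebrizing role for the G-L operators of generalized (including fractional-order) differentiation.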

  8. An intelligent data model for the storage of structured grids

    NASA Astrophysics Data System (ADS)

    Clyne, John; Norton, Alan

    2013-04-01

    With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis, of high-resolution simulation outputs using only a commodity desktop computer. The enabling technology behind VAPOR's ability to interact with a data set whose size would overwhelm all but the largest analysis computing resources is a progressive-access file format called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction, the process is lossless (up to floating point round-off). If only a subset of the bins is used, an approximation of the original data is produced. A crucial point here is that the principal benefit of reconstruction from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g. disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection, and will present real world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
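
The compaction-and-progressive-access idea can be illustrated with a single-level orthonormal Haar transform, a far simpler wavelet than a production format like the VDC would use (the signal and the one-bin truncation below are purely illustrative):

```python
import numpy as np

def haar_forward(x):
    """One level of an orthonormal Haar transform (length must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    return np.concatenate([s, d])

def haar_inverse(c):
    """Invert haar_forward; with all coefficients this is lossless."""
    n = len(c) // 2
    s, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=1024))   # smooth-ish signal (random walk)

c = haar_forward(x)
exact = haar_inverse(c)                # all coefficients: lossless

# Progressive access: keep only the larger-magnitude half of the
# coefficients (one 'bin'), zero the rest, and reconstruct.
c_trunc = np.where(np.abs(c) >= np.median(np.abs(c)), c, 0.0)
approx = haar_inverse(c_trunc)
```

For a smooth signal, most of the energy sits in the smooth coefficients, so reading half the coefficients already yields a close approximation, which is exactly the I/O saving the VDC exploits.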

  9. Mapped grid methods for long-range molecules and cold collisions

    NASA Astrophysics Data System (ADS)

    Willner, K.; Dulieu, O.; Masnou-Seeuws, F.

    2004-01-01

    The paper discusses ways of improving the accuracy of numerical calculations for vibrational levels of diatomic molecules close to the dissociation limit or for ultracold collisions, in the framework of a grid representation. In order to avoid the implementation of very large grids, Kokoouline et al. [J. Chem. Phys. 110, 9865 (1999)] have proposed a mapping procedure through introduction of an adaptive coordinate x subjected to the variation of the local de Broglie wavelength as a function of the internuclear distance R. Some unphysical levels ("ghosts") then appear in the vibrational series computed via a mapped Fourier grid representation. In the present work the choice of the basis set is reexamined, and two alternative expansions are discussed: Sine functions and Hardy functions. It is shown that use of a basis set with fixed nodes at both grid ends is efficient to eliminate "ghost" solutions. It is further shown that the Hamiltonian matrix in the sine basis can be calculated very accurately by using an auxiliary basis of cosine functions, overcoming the problems arising from numerical calculation of the Jacobian J(x) of the R→x coordinate transformation.

  10. CSI feedback-based CS for underwater acoustic adaptive modulation OFDM system with channel prediction

    NASA Astrophysics Data System (ADS)

    Kuai, Xiao-yan; Sun, Hai-xin; Qi, Jie; Cheng, En; Xu, Xiao-ka; Guo, Yu-hui; Chen, You-gan

    2014-06-01

    In this paper, we investigate the performance of an adaptive modulation (AM) orthogonal frequency division multiplexing (OFDM) system in underwater acoustic (UWA) communications. The aim is to solve the problem of the large feedback overhead for channel state information (CSI) on every subcarrier. A novel CSI feedback scheme is proposed based on the theory of compressed sensing (CS), in which the receiver feeds back only the sparse channel parameters. Additionally, the channel state is predicted every few symbols to realize AM in practice. We describe a linear channel prediction algorithm which is used in adaptive transmission. The system has been tested in a real underwater acoustic channel. The linear channel prediction makes the AM transmission techniques more feasible for acoustic channel communications. The simulations and experiments show that significant improvements can be obtained both in bit error rate (BER) and throughput in the AM scheme compared with the fixed Quadrature Phase Shift Keying (QPSK) modulation scheme. Moreover, the performance with standard CS outperforms the Discrete Cosine Transform (DCT) method.

  11. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  12. Shape and 3D acoustically induced vibrations of the human eardrum characterized by digital holography

    NASA Astrophysics Data System (ADS)

    Khaleghi, Morteza; Furlong, Cosme; Cheng, Jeffrey Tao; Rosowski, John J.

    2014-07-01

    The eardrum or Tympanic Membrane (TM) transfers acoustic energy from the ear canal (at the external ear) into mechanical motions of the ossicles (at the middle ear). The acousto-mechanical-transformer behavior of the TM is determined by its shape and mechanical properties. For a better understanding of the mysteries of hearing, full-field-of-view techniques are required to quantify the shape, nanometer-scale sound-induced displacement, and mechanical properties of the TM in 3D. In this paper, full-field-of-view, three-dimensional shape and sound-induced displacement of the surface of the TM are obtained by the methods of multiple wavelengths and multiple sensitivity vectors with lensless digital holography. Using our digital holographic systems, unique 3D information such as shape (with micrometer resolution), 3D acoustically-induced displacement (with nanometer resolution), the full strain tensor (with nano-strain resolution), 3D phase of motion, and 3D direction cosines of the displacement vectors can be obtained in full-field-of-view with a spatial resolution of about 3 million points on the surface of the TM and a temporal resolution of 15 Hz.

  13. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    NASA Astrophysics Data System (ADS)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of Service (QoS) guarantees in real-time communication for multimedia applications are significantly important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes Moving Pictures Expert Group (MPEG) coding, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.

  14. Authentication Based on Pole-zero Models of Signature Velocity

    PubMed Central

    Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad

    2013-01-01

    With the increase of communication and financial transactions through the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authenticity and authorization in modern society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is the modeling of the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise method is proposed for modeling, and features are then extracted from strokes. Using linear, Parzen window and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and has demonstrated the good potential of this technique. The signatures were collected from three different databases: a proprietary database, and the SVC2004 and Sabanci University (SUSIG) signature benchmark databases. Experimental results based on the Persian, SVC2004 and SUSIG databases show that our method achieves an equal error rate of 5.91%, 5.62% and 3.91% on skilled forgeries, respectively. PMID:24696797

  15. Analysis of the impact of digital watermarking on computer-aided diagnosis in medical imaging.

    PubMed

    Garcia-Hernandez, Jose Juan; Gomez-Flores, Wilfrido; Rubio-Loyola, Javier

    2016-01-01

    Medical images (MI) are relevant sources of information for detecting and diagnosing a large number of illnesses and abnormalities. Due to their importance, this study is focused on breast ultrasound (BUS), which is the main adjunct for mammography to detect common breast lesions among women worldwide. On the other hand, aiming to enhance data security, image fidelity, authenticity, and content verification in e-health environments, MI watermarking has been widely used, whose main goal is to embed patient meta-data into MI so that the resulting image keeps its original quality. In this sense, this paper deals with the comparison of two watermarking approaches, namely spread spectrum based on the discrete cosine transform (SS-DCT) and the high-capacity data-hiding (HCDH) algorithm, so that the watermarked BUS images are guaranteed to be adequate for a computer-aided diagnosis (CADx) system, whose two principal outcomes are lesion segmentation and classification. Experimental results show that HCDH algorithm is highly recommended for watermarking medical images, maintaining the image quality and without introducing distortion into the output of CADx. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    PubMed

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance in disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists and make diagnoses based on non-objective personal experience. With research on pulse acquisition platforms and computerized analysis methods, objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from the pulse waveform based on the discrete Fourier series (DFS). It regards the waveform as a signal that consists of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample using a discrete Fourier series by least squares. The feature vector is composed of the coefficients of the discrete Fourier series function. Compared with a fitting method using a Gaussian mixture function, the fitting errors of the proposed method are smaller, which indicates that our method can represent the original signal better. The classification performance of the proposed feature is superior to that of other features extracted from the waveform, such as the auto-regressive model and the Gaussian mixture model. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms thus give better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
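
The DFS fitting step is an ordinary least-squares problem in the harmonic coefficients. A numpy sketch on a synthetic periodic waveform (the harmonic count and waveform here are illustrative, not the paper's pulse data):

```python
import numpy as np

def fit_fourier_series(t, y, period, n_harmonics):
    """Least-squares fit of a truncated Fourier series
    y ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)],  w = 2*pi/period.
    Returns the coefficient vector (the feature vector) and the fit."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    G = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(G, y, rcond=None)
    return coef, G @ coef

# Synthetic periodic 'pulse-like' waveform over one period
t = np.linspace(0.0, 1.0, 200, endpoint=False)
y = (1.0 + 0.8 * np.sin(2 * np.pi * t)
         + 0.3 * np.cos(4 * np.pi * t)
         + 0.1 * np.sin(6 * np.pi * t))
coef, y_fit = fit_fourier_series(t, y, 1.0, 3)
```

Because the synthetic waveform lies in the span of the first three harmonics, the fit is exact; for a measured pulse the residual shrinks as harmonics are added, and the coefficient vector is what feeds the classifier.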

  17. Performance assessment of different day-of-the-year-based models for estimating global solar radiation - Case study: Egypt

    NASA Astrophysics Data System (ADS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Ali, Mohamed A.; Mohamed, Zahraa E.; Shehata, Ali I.

    2016-11-01

    Different models have been introduced to predict the daily global solar radiation at different locations, but no specific model based on the day of the year has been proposed for many locations around the world. In this study, more than 20 years of measured data for daily global solar radiation on a horizontal surface are used to develop and validate seven models to estimate the daily global solar radiation by day of the year for ten cities around Egypt as a case study. Moreover, the generalization capability of the best models is examined all over the country. Regression analysis is employed to calculate the coefficients of the different suggested models. The statistical indicators RMSE, MABE, MAPE, r and R2 are calculated to evaluate the performance of the developed models. Based on validation with the available data, the results show that the hybrid sine and cosine wave model and the 4th order polynomial model have the best performance among the suggested models. Consequently, these two models, coupled with suitable coefficients, can be used for estimating the daily global solar radiation on a horizontal surface for each city, and also for all locations around the studied region. It is believed that the established models in this work are applicable and significant for quick estimation of the average daily global solar radiation on a horizontal surface with higher accuracy. The values of global solar radiation generated by this approach can be utilized in the design and performance estimation of different solar applications.

  18. Improved detection of DNA-binding proteins via compression technology on PSSM information.

    PubMed

    Wang, Yubo; Ding, Yijie; Guo, Fei; Wei, Leyi; Tang, Jijun

    2017-01-01

    Since the importance of DNA-binding proteins in multiple biomolecular functions was recognized, an increasing number of researchers have attempted to identify them. In recent years, as protein sequence data have soared, machine learning methods have become increasingly compelling because of their favorable speed and accuracy. In this paper, we extract three features from the protein sequence, namely NMBAC (Normalized Moreau-Broto Autocorrelation), PSSM-DWT (Position-Specific Scoring Matrix-Discrete Wavelet Transform), and PSSM-DCT (Position-Specific Scoring Matrix-Discrete Cosine Transform). We also apply a feature selection algorithm to these feature vectors. The selected features are then fed to an SVM (support vector machine) classifier to predict DNA-binding proteins. Our method uses three datasets, namely PDB1075, PDB594 and PDB186, to evaluate the performance of the approach. The PDB1075 and PDB594 datasets are employed for the jackknife test, and the PDB186 dataset is used for the independent test. Our method achieves the best accuracy in the jackknife test, improving from 79.20% to 86.23% on PDB1075 and from 80.5% to 86.20% on PDB594. In the independent test, the accuracy of our method reaches 76.3%, which shows that it can be used effectively for DNA-binding protein prediction. The data and source code are at https://doi.org/10.6084/m9.figshare.5104084.
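
    The PSSM-DCT idea — compressing a length-dependent L x 20 scoring matrix into a fixed-length vector of low-frequency DCT coefficients — can be sketched as below. An orthonormal DCT-II matrix is built by hand so the example needs only NumPy; the truncation size of 10 rows is an arbitrary illustrative choice, not necessarily the paper's:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        M = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
        M[0] /= np.sqrt(2.0)                      # DC row scaling for orthonormality
        return M

    def pssm_dct_feature(pssm, n_rows=10):
        """2-D DCT of an L x 20 PSSM; keep the first n_rows low-frequency rows
        so the feature length is fixed regardless of sequence length L."""
        L, A = pssm.shape
        spectrum = dct_matrix(L) @ pssm @ dct_matrix(A).T
        return spectrum[:n_rows].ravel()

    # random stand-in for a real PSSM of a length-50 protein
    rng = np.random.default_rng(0)
    feat = pssm_dct_feature(rng.normal(size=(50, 20)), n_rows=10)
    M = dct_matrix(8)                             # small basis for a sanity check
    ```

    Because the low-frequency block has a fixed size, proteins of different lengths all map to vectors of the same dimension, which is what the downstream SVM requires.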

  19. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    An M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned; the direction cosine matrix or the quaternion is used to represent the rotation, and a 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. If the outliers are not large, an analytical solution can provide the initial approximation for the iteration. The iteration is carried out using an iteratively reweighted least-squares scheme: in each iteration after the first, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed from the residuals of the previous iteration, and the parameter estimates with their variance-covariance matrix are then calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to demonstrate the robustness theoretically. In the solution process, the parameters are rescaled to improve numerical stability. Monte Carlo experiments are conducted to check the developed method, including cases in which the assumed stochastic model may or may not be correct. Results with simulated data slightly deviating from the true model show the developed method's statistical efficiency under the assumed stochastic model, its robustness against deviations from that model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
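
    The analytical initialization mentioned above is commonly obtained with an SVD-based closed-form similarity-transform estimate (Umeyama/Horn style). The sketch below shows that initialization step only, not the paper's full iteratively reweighted M-estimator; all names and the synthetic data are our own:

    ```python
    import numpy as np

    def helmert_init(src, dst):
        """Closed-form estimate of scale s, rotation R, translation t such that
        dst ~ s * R @ src + t, via SVD of the cross-covariance (Umeyama-style)."""
        n = len(src)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        xs, yd = src - mu_s, dst - mu_d
        sigma2 = (xs ** 2).sum() / n               # variance of centered source
        U, d, Vt = np.linalg.svd(yd.T @ xs / n)    # cross-covariance SVD
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0                         # enforce a proper rotation
        R = U @ S @ Vt
        s = np.trace(np.diag(d) @ S) / sigma2
        t = mu_d - s * R @ mu_s
        return s, R, t

    # noiseless synthetic check: a known similarity transform is recovered
    a, b = 0.4, 0.2
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(b), -np.sin(b)], [0, np.sin(b), np.cos(b)]])
    R_true = Rz @ Rx
    rng = np.random.default_rng(1)
    src = rng.normal(size=(30, 3))
    dst = 1.3 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
    s_hat, R_hat, t_hat = helmert_init(src, dst)
    ```

    Note that no small-angle assumption is needed here either: the rotation is recovered as a full direction cosine matrix, matching the paper's parameterization.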

  20. Representation of deformable motion for compression of dynamic cardiac image data

    NASA Astrophysics Data System (ADS)

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2012-02-01

    We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data such as 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by a displacement vector field that indicates, for each voxel of a slice at a fixed position in the third dimension, the position in the corresponding previous slice from which it has moved. Our deformation model represents the motion compactly using a down-sampled potential function of the displacement vector field. This function is obtained by Gauss-Newton minimization of the estimation error image, i.e., the difference between the current slice and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method achieves better compression ratios for medical volume data than conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, whole-image transforms such as wavelet decomposition, as well as intra-slice prediction methods, particularly benefit from this approach. We show that with the discrete cosine transform as well as the Karhunen-Loève transform, the method achieves better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction-error energy.
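
    The energy-compaction criterion used in the comparison can be quantified as the fraction of total energy captured by a small share of the largest transform coefficients. A toy comparison of a smooth (deformation-style) error image against a noise-like one under an orthonormal 2-D DCT; the function names, the test images, and the 5% threshold are our own illustrative choices:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        M = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
        M[0] /= np.sqrt(2.0)
        return M

    def energy_compaction(img, keep=0.05):
        """Fraction of total energy captured by the largest `keep` share of
        2-D DCT coefficients -- a proxy for how compressible a residual is."""
        spec = dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T
        e = np.sort((spec ** 2).ravel())[::-1]
        k = max(1, int(keep * e.size))
        return e[:k].sum() / e.sum()

    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    smooth_residual = np.sin(2 * np.pi * x) * np.cos(np.pi * y)  # smooth error image
    noise_residual = np.random.default_rng(0).normal(size=(64, 64))  # noise-like error
    cs = energy_compaction(smooth_residual)
    cn = energy_compaction(noise_residual)
    ```

    A smooth residual concentrates its energy in a few low-frequency coefficients, while a noise-like residual spreads it across the spectrum — which is why a block-artifact-free prediction benefits whole-image transform coding.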
