Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for compressing digital images using the discrete cosine transform, and for operating a discrete cosine transform-based digital image compression and decompression system.
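The pipeline above (gather DCT statistics, compute per-quantizer rate and distortion, optimize) can be sketched as follows. This is a simplified per-frequency Lagrangian search, not the patent's dynamic-programming formulation; the function name, the entropy rate proxy, and the lambda weight are illustrative assumptions.

```python
import numpy as np

def rd_optimal_qtable(dct_coeffs, lam=0.1, qsteps=range(1, 65)):
    """Pick, for each of the 64 DCT frequencies, the quantization step
    that minimizes distortion + lam * rate (a simplified Lagrangian
    stand-in for the patent's dynamic-programming search).

    dct_coeffs: array of shape (n_blocks, 8, 8) holding the DCT
    statistics gathered from the input image."""
    table = np.zeros((8, 8), dtype=int)
    for u in range(8):
        for v in range(8):
            coeffs = dct_coeffs[:, u, v]
            best_cost, best_q = np.inf, 1
            for q in qsteps:
                quantized = np.round(coeffs / q)
                distortion = np.mean((coeffs - quantized * q) ** 2)
                # crude rate proxy: entropy of the quantized symbols
                _, counts = np.unique(quantized, return_counts=True)
                p = counts / counts.sum()
                rate = -(p * np.log2(p)).sum()
                cost = distortion + lam * rate
                if cost < best_cost:
                    best_cost, best_q = cost, q
            table[u, v] = best_q
    return table
```

With lam = 0 the search favors the smallest step (minimum distortion); raising lam trades distortion for a lower bit rate, which is the rate-distortion curve the patent's optimizer traverses.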
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain by a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients with different lengths, which may help to develop the digital filter algorithm for sine and cosine transforms and promote its application.
A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.
Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif
2012-08-01
This paper proposes a biometric color-image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information-hiding technique (for audio, video, color images, and gray images). It has been commonly applied to digital objects as technology has developed in the last few years. One of the common methods for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
Removing tidal-period variations from time-series data using low-pass digital filters
Walters, Roy A.; Heston, Cynthia
1982-01-01
Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water surface elevation for San Francisco Bay. The most efficient filter is the one applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increases in the following order: 1) cosine-Lanczos filter; 2) cosine-Lanczos squared filter; 3) Godin filter; and 4) transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of the 2-3 day variations in surface elevation resulting from weather events.
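The transform filter described above, which zeroes Fourier coefficients in the tidal band and inverts, can be sketched as follows. The 30-hour cutoff and the function name are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def transform_lowpass(h, dt_hours=1.0, cutoff_hours=30.0):
    """Low-pass filter a water-level series by zeroing the Fourier
    coefficients at periods shorter than cutoff_hours, then
    recovering the series with an inverse transform."""
    H = np.fft.rfft(h)
    freqs = np.fft.rfftfreq(len(h), d=dt_hours)   # cycles per hour
    H[freqs > 1.0 / cutoff_hours] = 0.0
    return np.fft.irfft(H, n=len(h))
```

On a synthetic record of a 12-hour tide plus a 72-hour weather signal, the tide is removed exactly while the weather signal passes unattenuated; a real M2 tide (12.42 h) would additionally call for windowing to limit spectral leakage.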
High-speed real-time image compression based on all-optical discrete cosine transformation
NASA Astrophysics Data System (ADS)
Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong
2017-02-01
In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) which is three orders of magnitude higher than the switching rate of a digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which implicates broad applications in industrial quality control and biomedical imaging.
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2002-01-01
A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
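The ICT idea of integer-only transforms can be illustrated with the 4-point integer transform later adopted in H.264/AVC. This is not the ICT of this article, but it rests on the same principle: a matrix of small integers, implementable with adds and shifts, that approximates the DCT while the scaling is folded into quantization.

```python
import numpy as np

# 4-point integer transform (H.264/AVC style, shown for illustration):
# all entries are small integers, so the forward transform needs only
# integer adds, subtracts, and shifts.
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def fwd(x):
    """Forward integer transform; exact integer arithmetic only."""
    return T @ x

def inv(y):
    """Exact inverse via the rational inverse matrix. Real codecs
    instead fold the non-integer scaling into dequantization."""
    return np.linalg.inv(T.astype(float)) @ y
```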
Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform
NASA Astrophysics Data System (ADS)
Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An
2017-02-01
We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-colour image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily cancelled to give excellent image quality. Moreover, the experimental set-up is very simple.
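The measurement-and-reconstruction loop (project DCT basis patterns, record one bucket value per pattern, inverse transform) might look like this in sketch form. The `keep` truncation stands in for the paper's sub-Nyquist sampling, and all names are illustrative.

```python
import numpy as np

def dct_basis(N):
    """Orthonormal N-point DCT-II basis matrix."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / N)

def single_pixel_dct(scene, keep):
    """Sketch of DCT single-pixel imaging: 'illuminate' the scene with
    each 2-D DCT pattern, record one bucket value per pattern (the
    single-pixel measurement), keep only the low-frequency patterns
    with u + v < keep, then reconstruct by the inverse 2-D DCT."""
    N = scene.shape[0]
    C = dct_basis(N)
    spectrum = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            if u + v < keep:                    # sub-Nyquist subset
                pattern = np.outer(C[u], C[v])  # illumination pattern
                spectrum[u, v] = np.sum(pattern * scene)  # bucket signal
    return C.T @ spectrum @ C                   # inverse 2-D DCT
```

With all patterns kept the reconstruction is exact; shrinking `keep` reduces the number of measurements at the cost of high-frequency detail, which is the compression the abstract exploits.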
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless noisy channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel-optimized quantizers yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
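The bit-assignment step has a well-known closed-form cousin under high-rate assumptions, which allocates bits in proportion to the log variance of each transform coefficient. This is a textbook formula, not the paper's channel-optimized steepest-descent algorithm.

```python
import numpy as np

def bit_allocation(variances, total_bits):
    """Classic high-rate bit allocation across transform coefficients:
    b_k = B/N + 0.5 * log2(var_k / geometric_mean). Coefficients with
    larger variance receive proportionally more bits."""
    variances = np.asarray(variances, dtype=float)
    N = len(variances)
    geo_mean = np.exp(np.mean(np.log(variances)))
    bits = total_bits / N + 0.5 * np.log2(variances / geo_mean)
    return np.clip(bits, 0, None)   # no negative bit counts
```

For variances (16, 4, 1, 1) and 8 total bits this yields (3.25, 2.25, 1.25, 1.25): the allocation sums to the budget and favors the high-variance coefficient, the same qualitative behavior the steepest-descent search converges to.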
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with the MPEG-4 Visual and H.264/AVC standards. The inverse quantised discrete cosine and integer transform core can perform both the inverse quantised discrete cosine transform and the inverse quantised inverse integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, and the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
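The shift-and-add character of CORDIC, which the core above relies on, can be seen in a floating-point sketch of rotation-mode CORDIC computing a cosine. A hardware version would replace the floating-point halvings with literal bit shifts and the precomputed angles with a small ROM.

```python
import math

def cordic_cos(theta, iterations=24):
    """Compute cos(theta) for |theta| < pi/2 with rotation-mode
    CORDIC: each iteration rotates by +/- atan(2**-i) using only
    additions, subtractions, and halvings (shifts in hardware)."""
    # precomputed rotation angles and the accumulated CORDIC gain
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta          # start pre-scaled by the gain
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0  # steer the residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x
```

The iteration count is the knob the abstract refers to: fewer iterations mean less hardware and lower precision, more iterations the reverse.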
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
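The Arnold map scrambling applied to the binary watermark before embedding can be sketched as follows. This is the standard cat map (x, y) -> (x + y, x + 2y) mod N on a square image; the paper's additional iterated sine chaotic system is omitted here.

```python
import numpy as np

def arnold_scramble(img, rounds=1):
    """Arnold map scrambling of a square watermark image:
    (x, y) -> (x + y, x + 2y) mod N, applied `rounds` times."""
    N = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, rounds=1):
    """Inverse map, from the inverse matrix [[2, -1], [-1, 1]]:
    (x, y) -> (2x - y, y - x) mod N."""
    N = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(2 * x - y) % N, (y - x) % N] = out[x, y]
        out = nxt
    return out
```

Because the map matrix has determinant 1, scrambling is exactly invertible given the round count, which acts as part of the key.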
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0,J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A f' lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like-transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form that is easy for everyone to access. However, a problem appears when someone else claims that the creation is their property or modifies some part of it. This makes the protection of copyright necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, can make the embedded mark entirely invisible in the carrier image; ideally, the carrier image undergoes no decrease in quality and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of the image watermark. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in the low frequencies is robust to Gaussian blur, rescaling, and JPEG compression attacks, while embedding in the high frequencies is robust to Gaussian noise.
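A minimal sketch of SVD-domain embedding with a scaling factor alpha, the quantity the paper reports should stay below 0.1. This is the generic additive singular-value scheme, not necessarily the exact SVD-DCT/SVD-DWT hybrid evaluated in the paper, and it is non-blind (extraction needs the original singular values).

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Perturb the host's singular values with the watermark's:
    S' = S + alpha * Sw. Smaller alpha means better invisibility,
    larger alpha better robustness (the paper's trade-off)."""
    U, S, Vt = np.linalg.svd(host)
    Sw = np.linalg.svd(watermark)[1]
    marked = U @ np.diag(S + alpha * Sw) @ Vt
    return marked, S            # keep S for (non-blind) extraction

def svd_extract(marked, original_S, alpha=0.05):
    """Recover the watermark's singular values from the marked image."""
    S_marked = np.linalg.svd(marked)[1]
    return (S_marked - original_S) / alpha
```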
Information Hiding In Digital Video Using DCT, DWT and CvT
NASA Astrophysics Data System (ADS)
Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb
2018-05-01
The type of video used in the proposed secret-information hiding technique is .AVI; the technique embeds secret information into video frames using the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB); the secret information is embedded in the red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extraction, the robustness of the proposed technique is measured by computing the degradation of the extracted secret information, comparing it with the original secret information via the Normalized cross Correlation (NC). The experiments show an error ratio of 8% and an accuracy ratio of 92% when the Curvelet Transform (CvT) is used; with the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), the error rates are 11% and 14%, and the accuracy ratios 89% and 86%, respectively. The experiments also show that Poisson noise gives better results than other types of noise, while speckle noise gives the worst results. The proposed technique was implemented in the MATLAB R2016a programming language.
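The Normalized cross Correlation (NC) used above as the robustness measure is a one-liner: the inner product of the original and extracted data, normalized by their magnitudes.

```python
import numpy as np

def normalized_correlation(w, w_extracted):
    """NC(w, w') = sum(w * w') / sqrt(sum(w^2) * sum(w'^2)).
    Equals 1.0 for a perfect extraction, 0.0 for uncorrelated data."""
    w = np.asarray(w, dtype=float).ravel()
    w2 = np.asarray(w_extracted, dtype=float).ravel()
    return float(np.dot(w, w2) / np.sqrt(np.dot(w, w) * np.dot(w2, w2)))
```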
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
Statistical Characterization of MP3 Encoders for Steganalysis: ’CHAMP3’
2004-04-27
[Record garbled by text extraction; recoverable fragments:] The capacity of MP3 compression exceeds that of typical steganographic tools (e.g., LSB image embedding); commented source code for MP3 encoders is available; the approach was developed by testing on known and unknown reference data. Subject terms: EOARD, steganography, digital watermarking. The remainder is an acronym glossary (kbps: kilobits per second; LGPL: Lesser General Public License; LSB: least significant bit; MB: megabyte; MDCT: modified discrete cosine transform; MP3).
Effect of data compression on diagnostic accuracy in digital hand and chest radiography
NASA Astrophysics Data System (ADS)
Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita
1992-05-01
Image compression is essential to handle a large volume of digital images including CT, MR, CR, and digitized films in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images at a compression ratio as high as 20:1.
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the 'SHA-1' hash function. The purpose of embedding a fragile watermark is to prove the authenticity of the image (i.e. tamper-proofing). The proposed technique was developed in response to new challenges posed by piracy of remote sensing imagery, which has led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, or "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as a cover and a colored EIAST logo is used as a watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation, and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
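The fragile, hash-based side of such a scheme can be sketched as follows. This uses LSB storage of a SHA-1 digest as an illustrative carrier, whereas the paper embeds through its own encoder; the function names are assumptions.

```python
import hashlib
import numpy as np

def embed_fragile(img):
    """Fragile-watermark sketch: clear every pixel's least significant
    bit, hash that cleared content with SHA-1, and hide the 160-bit
    digest in the LSBs of the first 160 pixels (image must have at
    least 160 pixels)."""
    out = np.asarray(img, dtype=np.uint8).copy()
    flat = out.ravel()
    flat &= 0xFE                      # the digest covers this version
    digest = hashlib.sha1(flat.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    flat[:160] |= bits                # hide the digest bit by bit
    return out

def verify_fragile(img):
    """Authentic iff the stored digest matches a fresh hash of the
    LSB-cleared content; any tampering breaks the match."""
    flat = np.asarray(img, dtype=np.uint8).ravel().copy()
    stored = np.packbits(flat[:160] & 1).tobytes()
    flat &= 0xFE
    return hashlib.sha1(flat.tobytes()).digest() == stored
```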
Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm
NASA Technical Reports Server (NTRS)
Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.
2005-01-01
A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
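Clenshaw's algorithm, the comparison baseline, evaluates a cosine series S(theta) = sum_k a_k cos(k*theta) with a three-term recurrence, via the identity cos(k*theta) = T_k(cos theta). A minimal sketch:

```python
import math

def clenshaw_cosine_sum(a, theta):
    """Clenshaw recurrence for S(theta) = sum_k a[k] * cos(k*theta):
    b_k = 2*x*b_{k+1} - b_{k+2} + a_k for k = n..1, with x = cos(theta),
    then S = a_0 + x*b_1 - b_2."""
    x = math.cos(theta)
    b1 = b2 = 0.0
    for ak in reversed(a[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + ak, b1
    return a[0] + x * b1 - b2
```

The recurrence needs one multiplication per term and no cosine evaluations beyond the initial cos(theta), which is what makes it a natural complexity baseline for Bakhvalov's method.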
NASA Technical Reports Server (NTRS)
Jones, H. W.; Hein, D. N.; Knauer, S. C.
1978-01-01
A general class of even/odd transforms is presented that includes the Karhunen-Loeve transform, the discrete cosine transform, the Walsh-Hadamard transform, and other familiar transforms. The more complex even/odd transforms can be computed by combining a simpler even/odd transform with a sparse matrix multiplication. A theoretical performance measure is computed for some even/odd transforms, and two image compression experiments are reported.
Toward a perceptual video-quality metric
NASA Astrophysics Data System (ADS)
Watson, Andrew B.
1998-07-01
The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce the bit rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
NASA Astrophysics Data System (ADS)
Strang, Gilbert
1994-06-01
Several methods are compared that are used to analyze and synthesize a signal. Three ways are mentioned to transform a symphony: into cosine waves (Fourier transform), into pieces of cosines (short-time Fourier transform), and into wavelets (little waves that start and stop). Choosing the best basis, higher dimensions, fast wavelet transform, and Daubechies wavelets are discussed. High-definition television is described. The use of wavelets in identifying fingerprints in the future is related.
2015-12-01
[Record garbled by text extraction; recoverable fragments:] Samples were assigned to groups by unsupervised hierarchical clustering with the Unweighted Pair-Group Method using Arithmetic averages (UPGMA), based on log2-transformed MAS5.0 signal values; probe-set clustering used the UPGMA method with cosine correlation as the similarity metric, and the differentially regulated genes identified were likewise subjected to unsupervised hierarchical clustering using the UPGMA algorithm with cosine correlation.
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on Robust Feature Extraction. The segmentation is achieved by Simple Linear Iterative Clustering (SLIC) based on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC. This novel method can extract the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency domain and high-frequency domain by Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients in the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method has good performance under various attacks. Furthermore, the proposed method can obtain a trade-off between high robustness and good image quality.
A Posteriori Restoration of Block Transform-Compressed Data
NASA Technical Reports Server (NTRS)
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
NASA Astrophysics Data System (ADS)
Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun
2018-07-01
Based on hyper-chaotic system and discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originated from the high dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signal, especially can encrypt multiple images once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.
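The Zigzag scanning used to incise and splice the spectra orders coefficients along anti-diagonals, low frequencies first. A sketch of the standard (JPEG-style) order:

```python
import numpy as np

def zigzag(block):
    """Zigzag scan of a square coefficient block, low frequencies
    first: traverse anti-diagonals i + j = s, alternating direction
    so that even diagonals run bottom-left to top-right and odd
    diagonals top-right to bottom-left."""
    N = block.shape[0]
    order = sorted(((i, j) for i in range(N) for j in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))
    return np.array([block[i, j] for i, j in order])
```

Cutting the resulting 1-D sequence after the first k entries keeps the lowest-frequency DCT coefficients, which is the spectrum-cutting compression step the abstract describes.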
Testing Fixture For Microwave Integrated Circuits
NASA Technical Reports Server (NTRS)
Romanofsky, Robert; Shalkhauser, Kurt
1989-01-01
Testing fixture facilitates radio-frequency characterization of microwave and millimeter-wave integrated circuits. Includes base onto which two cosine-tapered ridge waveguide-to-microstrip transitions fastened. Length and profile of taper determined analytically to provide maximum bandwidth and minimum insertion loss. Each cosine taper provides transformation from high impedance of waveguide to characteristic impedance of microstrip. Used in conjunction with automatic network analyzer to provide user with deembedded scattering parameters of device under test. Operates from 26.5 to 40.0 GHz, but operation extends to much higher frequencies.
Study of Fourier transform spectrometer based on Michelson interferometer wave-meter
NASA Astrophysics Data System (ADS)
Peng, Yuexiang; Wang, Liqiang; Lin, Li
2008-03-01
A wave-meter based on a Michelson interferometer consists of a reference channel and a measurement channel. A voice-coil motor under PID control provides stable motion. The wavelength of the measurement laser is obtained by counting the interference fringes of the reference and measurement lasers. The frequency-stabilized reference laser produces a cosine interferogram whose frequency is proportional to the velocity of the moving motor. The interferogram of the reference laser is converted into a pulse signal, which is then subdivided by a factor of 16. To obtain the optical spectrum, the analog signal of the measurement channel must be sampled; the analog-to-digital converter (ADC) for the measurement channel is triggered by the 16-times-subdivided pulse signal of the reference laser. The sampling therefore depends only on the frequency of the reference laser and is independent of the motor velocity, which means the measurement-channel signal is sampled at uniform intervals. The optical spectrum of the measurement channel is then computed with the Fast Fourier Transform (FFT) on a DSP and displayed on an LCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terzić, Balša; Bassi, Gabriele
In this paper we discuss representations of charge particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with the property of reduced multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that could occur in the integer computation.
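For reference, the direct O(N^2) DCT-II that fast algorithms such as the one above factor into fewer multiplication stages can be written as follows (unnormalized scaling convention; a minimal sketch, not the paper's integer-friendly structure):

```python
import math

def dct_ii(x):
    """Direct O(N^2) DCT-II: X[k] = sum_n x[n] * cos(pi*(2n+1)*k / (2N)).
    Fast algorithms compute the same outputs with far fewer multiplications."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]
```

A constant input maps entirely onto the DC coefficient, which is the energy-compaction property that makes the DCT attractive for coding.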
Detecting double compression of audio signal
NASA Astrophysics Data System (ADS)
Yang, Rui; Shi, Yun Q.; Huang, Jiwu
2010-01-01
MP3 is the most widely used audio format in daily life; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files have higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.
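The first-digit feature extraction can be sketched as below: the feature vector is the normalized histogram of leading digits 1-9 of the nonzero quantized coefficients (singly compressed audio tends to follow a Benford-like law, and recompression disturbs it). Function names are my own; this is an illustrative sketch, not the authors' code.

```python
from collections import Counter

def first_digit_features(coeffs):
    """Normalized histogram of the leading digits (1-9) of nonzero coefficients."""
    digits = []
    for c in coeffs:
        c = abs(c)
        if c == 0:
            continue
        while c < 1:    # shift e.g. 0.12 -> 1.2
            c *= 10
        while c >= 10:  # shift e.g. 345 -> 3.45
            c /= 10
        digits.append(int(c))
    total = len(digits) or 1
    cnt = Counter(digits)
    # Benford's law predicts p(d) = log10(1 + 1/d) for naturally distributed data
    return [cnt.get(d, 0) / total for d in range(1, 10)]
```

An SVM is then trained on these 9-dimensional vectors to separate singly from doubly compressed files.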
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency remains the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to address this challenge. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node consisting of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) below 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945
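The PRD metric quoted above (below 6%) has a standard definition; a minimal sketch:

```python
import math

def prd(original, recovered):
    """Percentage root-mean-square difference between two equal-length signals:
    100 * sqrt( sum((o - r)^2) / sum(o^2) )."""
    num = sum((o - r) ** 2 for o, r in zip(original, recovered))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)
```

A PRD of 0% means perfect recovery; values under roughly 9% are commonly taken as clinically acceptable for ECG reconstruction.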
Vesicle sizing by static light scattering: a Fourier cosine transform approach
NASA Astrophysics Data System (ADS)
Wang, Jianhong; Hallett, F. Ross
1995-08-01
A Fourier cosine transform method, based on the Rayleigh-Gans-Debye thin-shell approximation, was developed to retrieve the vesicle size distribution directly from the angular dependence of scattered light intensity. Its feasibility for real vesicles was partially tested on scattering data generated by the exact Mie solutions for isotropic vesicles. The noise tolerance of the method in recovering unimodal and bimodal distributions was studied with the simulated data. Applicability of this approach to vesicles with weak anisotropy was examined using Mie theory for anisotropic hollow spheres. A primitive theory about the first four moments of the radius distribution about the origin, excluding the mean radius, was obtained as an alternative to the direct retrieval of size distributions.
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Zhang, Shaozhong; Chen, Fangni; Wu, Ming-Wei; Qiu, Weiwei
2017-11-01
A physical encryption scheme for orthogonal frequency-division multiplexing (OFDM) visible light communication (VLC) systems using a chaotic discrete cosine transform (DCT) is proposed. In the scheme, the rows of the DCT matrix are permuted by a scrambling sequence generated by a three-dimensional (3-D) Arnold chaos map. Furthermore, two scrambling sequences, also generated from a 3-D Arnold map, are employed to encrypt the real and imaginary parts of the transmitted OFDM signal before the chaotic DCT operation. The proposed scheme enhances physical layer security and improves the bit error rate (BER) performance of OFDM-based VLC. The simulation results confirm the effectiveness of the proposed encryption method. The experimental results show that the proposed security scheme not only protects image data from eavesdroppers but also maintains good BER and peak-to-average power ratio performance for image-based OFDM-VLC systems.
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Chen, Fangni; Qiu, Weiwei; Chen, Shoufa; Ren, Dongxiao
2018-03-01
In this paper, a two-layer image encryption scheme for a discrete cosine transform (DCT) precoded orthogonal frequency division multiplexing (OFDM) visible light communication (VLC) system is proposed. In the upper layer, the transmitted image is first encrypted by a chaos scrambling sequence generated from a hybrid 4-D hyper-chaotic and Arnold map. The encrypted image is then converted into a digital QAM signal, which is re-encrypted in the physical layer by a chaos scrambling sequence based on the Arnold map to further enhance the security of the transmitted image. Moreover, DCT precoding is employed to improve the BER performance of the proposed system and reduce the PAPR of the OFDM signal. The BER and PAPR performances of the proposed system are evaluated by simulation experiments. The results show that the proposed two-layer chaos scrambling scheme achieves secure image transmission for image-based OFDM VLC, and that DCT precoding reduces the PAPR and improves the BER performance of OFDM-based VLC.
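The Arnold-map scrambling used in these schemes is a lattice permutation. The papers above use 3-D and hyper-chaotic variants; the classic 2-D Arnold cat map below is an illustrative stand-in that shows the principle (a bijective, area-preserving shuffle that is periodic and therefore invertible):

```python
def arnold_cat_map(img, iterations=1):
    """Classic 2-D Arnold cat map on an N x N grid:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `iterations` times.
    Illustrative only; the cited schemes use 3-D / hyper-chaotic variants."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img
```

Because the map is periodic on any finite grid, iterating it enough times recovers the original image, which is how decryption works when the iteration count is part of the key.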
NASA Astrophysics Data System (ADS)
Liang, Ruiyu; Xi, Ji; Bao, Yongqiang
2017-07-01
To improve on gain compensation based on a three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. Firstly, a uniform cosine modulated filter bank was designed; adjacent channels with low or gradual slopes were then adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Secondly, the input speech was decomposed into sub-band signals and the SPL of each sub-band signal was computed. Meanwhile, the audible SPL range from 0 dB SPL to 120 dB SPL was divided equally into eight segments, and based on these segments a different prescription formula was designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed the proposed algorithm can effectively improve the speech recognition of six hearing-impaired persons.
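The eight-segment SPL mapping described above amounts to computing a sub-band SPL and bucketing it into one of eight equal 15-dB segments. A minimal sketch (function names and the 1.0 reference pressure are illustrative assumptions, not the paper's calibration):

```python
import math

def spl_db(samples, p_ref=1.0):
    """Sound pressure level of a frame, in dB relative to a reference pressure."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms / p_ref)

def spl_segment(spl, n_segments=8, max_spl=120.0):
    """Map an SPL in [0, 120] dB onto one of eight equal 15-dB segments;
    each segment would get its own prescription gain formula."""
    if spl <= 0.0:
        return 0
    if spl >= max_spl:
        return n_segments - 1
    return int(spl // (max_spl / n_segments))
```

A gain table indexed by `(channel, segment)` then yields the compensation gain for each sub-band frame.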
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Transform coding for space applications
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. Entertainment applications are bit-rate driven, with the goal of getting the best quality possible within a given bandwidth; science applications are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect-quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple integer transforms are presented, and the application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications also differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground), rather than vice versa. Energy compaction with the new transforms is compared with the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
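As a concrete illustration of a multiplication-free transform and of energy compaction, here is a minimal fast Walsh-Hadamard transform (unnormalized, power-of-two length assumed). For a smooth input, most of the energy collects in the low-order coefficients, which is what transform coding exploits:

```python
def wht(x):
    """Fast Walsh-Hadamard transform (unnormalized, in natural/Hadamard order).
    Uses only additions and subtractions, hence its appeal for simple encoders."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

With this scaling, Parseval's relation reads sum(X^2) = N * sum(x^2), and for a ramp input the first coefficient carries the bulk of the energy.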
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual image by the combination of discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of DCT coefficients, and it can be helpful and useful for image feature extraction. Some frequently-used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method can achieve good fusion effect, and it is more efficient than other conventional image fusion methods.
A method for the measurement and the statistical analysis of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Tavoularis, S. C.
1974-01-01
The instantaneous values of output voltages representing the wind velocity vector and the temperature at different elevations of the 250-foot meteorological tower located at NASA Wallops Flight Center are obtained with the three-dimensional split-film TSI Model 1080 anemometer system. The output voltages are sampled at a rate of one every 5 milliseconds, digitized and stored on digital magnetic tapes for a time period of approximately 40 minutes, with the use of a specially designed data acquisition system. A new calibration procedure permits the conversion of the digital voltages to the respective values of the temperature and the velocity components in a Cartesian coordinate system attached to the TSI probe with considerable accuracy. Power, cross, coincidence and quadrature spectra of the wind components and the temperature are obtained with the use of the fast Fourier transform. The cosine taper data window and ensemble and frequency smoothing techniques are used to provide smooth estimates of the spectral functions.
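The cosine taper data window mentioned above (a split-cosine-bell, Tukey-style window) ramps smoothly from 0 to 1 over a fraction of each end of the record and is flat in between. A minimal sketch, with the 10% taper fraction as an illustrative default:

```python
import math

def cosine_taper(n, taper_frac=0.1):
    """Split-cosine-bell (Tukey-style) data window of length n:
    raised-cosine ramps over taper_frac of each end, flat (1.0) in the middle."""
    m = int(taper_frac * n)
    w = [1.0] * n
    for i in range(m):
        ramp = 0.5 * (1.0 - math.cos(math.pi * i / m))
        w[i] = ramp
        w[n - 1 - i] = ramp
    return w
```

Tapering the record ends before the FFT reduces spectral leakage at the cost of a small loss in effective record length.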
Santos, Rui; Pombo, Nuno; Flórez-Revuelta, Francisco
2018-01-01
An increase in the accuracy of identification of Activities of Daily Living (ADL) is very important for different goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although environment identification is usually used to establish location, ADL recognition can be improved with the identification of the sound in that particular environment. This paper reviews audio fingerprinting techniques that can be used with acoustic data acquired from mobile devices. A comprehensive literature search was conducted to identify relevant English-language works aimed at identifying the environment of ADLs using data acquired with mobile devices, published between 2002 and 2017. In total, 40 studies were analyzed and selected from 115 citations. The results highlight several audio fingerprinting techniques, including the modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), Principal Component Analysis (PCA), the Fast Fourier Transform (FFT), Gaussian mixture models (GMM), likelihood estimation, the logarithmic modulated complex lapped transform (LMCLT), support vector machines (SVM), the constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and the discrete cosine transform (DCT). PMID:29315232
Qualitative and semiquantitative Fourier transformation using a noncoherent system.
Rogers, G L
1979-09-15
A number of authors have pointed out that a system of zone plates combined with a diffuse source, transparent input, lens, and focusing screen will display on the output screen the Fourier transform of the input. Strictly speaking, the transform normally displayed is the cosine transform, and the bipolar output is superimposed on a dc gray level to give a positive-only intensity variation. By phase-shifting one zone plate the sine transform is obtained. Temporal modulation is possible. It is also possible to redesign the system to accept a diffusely reflecting input at the cost of introducing a phase gradient in the output. Results are given of the sine and cosine transforms of a small circular aperture. As expected, the sine transform is a uniform gray. Both transforms show unwanted artifacts beyond 0.1 rad off-axis. An analysis shows this is due to unwanted circularly symmetrical moire patterns between the zone plates.
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on Loeffler's 16-point transform. Fast integer realizations of the 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required, our proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
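A key ingredient of Loeffler-style factorizations is the plane rotation computed with three multiplications instead of four. A minimal sketch of that building block (angles written as k*pi/n, as in DCT butterflies; this illustrates the generic trick, not the paper's exact 32-point flow graph):

```python
import math

def rotate3(x0, x1, k, n):
    """Plane rotation [c s; -s c] with c = cos(k*pi/n), s = sin(k*pi/n),
    computed with three multiplications:
      t  = c*(x0 + x1)
      y0 = t + (s - c)*x1   # = c*x0 + s*x1
      y1 = t - (s + c)*x0   # = c*x1 - s*x0
    """
    c = math.cos(k * math.pi / n)
    s = math.sin(k * math.pi / n)
    t = c * (x0 + x1)
    return t + (s - c) * x1, t - (s + c) * x0
```

In fixed-point realizations the three factors c, s-c, and s+c are scaled and rounded to integers, which is where the integer DCT variants come from.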
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Zuo, Chao; Idir, Mourad
2015-04-21
A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily shaped aperture is placed in the optical wavefield. Within this aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared with existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily shaped aperture, which makes TIE with a hard aperture a more flexible phase retrieval tool in practical measurements.
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.
Ye, Jun
2015-03-01
In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved measures were compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found from the cosine similarity measures between the symptoms and the considered diseases, both represented by SNSs. The method was then applied to two medical diagnosis problems to show its applicability and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures based on the cosine function can overcome the shortcomings of the existing vector-based measures in some cases. In the two medical diagnosis problems, the diagnoses obtained with the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method.
The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of the existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information.
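For contrast, here is the classic vector-space cosine similarity the paper improves upon, plus a hedged sketch of a cosine-function-based measure over (truth, indeterminacy, falsity) triples. The second function is an illustrative form only, not necessarily Ye's exact formula:

```python
import math

def cosine_similarity(u, v):
    """Classic vector-space cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sns_cosine_measure(a, b):
    """Hedged sketch of a cosine-function-based SNS measure: a and b are lists
    of (truth, indeterminacy, falsity) triples in [0, 1]; each element scores
    cos(pi/2 * max component difference), so identical triples score 1 and the
    score decays as the triples diverge. Illustrative form only."""
    return sum(math.cos(math.pi / 2.0 * max(abs(x - y) for x, y in zip(ta, tb)))
               for ta, tb in zip(a, b)) / len(a)
```

Note the cosine-function form remains well defined (and equals 1) when the two triples coincide, whereas vector-space cosine similarity degenerates for zero vectors, which is one of the drawbacks the improved measures address.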
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
method for base-station antenna radiation patterns. IEEE Antennas Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another
Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization
NASA Astrophysics Data System (ADS)
Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing
2015-05-01
Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
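The reweighted L1 idea above can be sketched with two generic building blocks: a reweighting rule that penalizes currently small coefficients more heavily, and the proximal (soft-threshold) step for a weighted L1 penalty. This is the generic Candes-Wakin-style rule; the paper derives its weights from DCT coefficient values, which this sketch does not reproduce:

```python
import math

def reweight(x, eps=1e-3):
    """Reweighting rule: coefficients that are currently small receive large
    L1 penalties on the next iteration; eps prevents division by zero."""
    return [1.0 / (abs(v) + eps) for v in x]

def weighted_soft_threshold(x, weights, t):
    """Proximal step for a weighted L1 penalty: shrink each coefficient
    toward zero by t * w, zeroing it if it falls below that amount."""
    return [math.copysign(max(abs(v) - t * w, 0.0), v)
            for v, w in zip(x, weights)]
```

Iterating reconstruction, reweighting, and thresholding drives noise-like small coefficients to zero while preserving the dominant ones, which is the noise-suppression mechanism the abstract describes.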
Integer cosine transform compression for Galileo at Jupiter: A preliminary look
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.; Cheung, K.-M.
1993-01-01
The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.
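The integer cosine transform replaces the irrational DCT basis entries with small integers while keeping the rows orthogonal, so the transform can be computed exactly in integer arithmetic. A minimal 4-point sketch with hypothetical integer entries (Galileo's actual ICT is order 8 with its own integer table, which is not reproduced here):

```python
def ict4(x):
    """Illustrative 4-point integer cosine transform. The odd basis rows use
    integers a, b chosen so all rows stay mutually orthogonal; the values
    (2, 1) are hypothetical stand-ins for a DCT-like basis."""
    a, b = 2, 1
    return [x[0] + x[1] + x[2] + x[3],              # DC row (1, 1, 1, 1)
            a * x[0] + b * x[1] - b * x[2] - a * x[3],   # (a, b, -b, -a)
            x[0] - x[1] - x[2] + x[3],              # (1, -1, -1, 1)
            b * x[0] - a * x[1] + a * x[2] - b * x[3]]   # (b, -a, a, -b)
```

Orthogonality of the integer rows means the inverse is again an integer-friendly matrix (up to per-row scaling folded into quantization), which is why the ICT suits a rate-constrained spacecraft encoder.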
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
NASA Astrophysics Data System (ADS)
Khaleghi, Morteza; Furlong, Cosme; Cheng, Jeffrey Tao; Rosowski, John J.
2014-07-01
The eardrum or Tympanic Membrane (TM) transfers acoustic energy from the ear canal (at the external ear) into mechanical motions of the ossicles (at the middle ear). The acousto-mechanical-transformer behavior of the TM is determined by its shape and mechanical properties. For a better understanding of hearing mysteries, full-field-of-view techniques are required to quantify shape, nanometer-scale sound-induced displacement, and mechanical properties of the TM in 3D. In this paper, full-field-of-view, three-dimensional shape and sound-induced displacement of the surface of the TM are obtained by the methods of multiple wavelengths and multiple sensitivity vectors with lensless digital holography. Using our developed digital holographic systems, unique 3D information such as shape (with micrometer resolution), 3D acoustically-induced displacement (with nanometer resolution), full strain tensor (with nano-strain resolution), 3D phase of motion, and 3D directional cosines of the displacement vectors can be obtained in full-field-of-view with a spatial resolution of about 3 million points on the surface of the TM and a temporal resolution of 15 Hz.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming to reduce the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of the interpolation and transient errors are derived in the form of non-parametric models. Window effects on these errors are analyzed accordingly, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be significantly reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window, with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression and better transient-error suppression when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4), while increasing the uncertainty only slightly (about 0.4 dB). One direction of a wind tunnel strain gauge balance, a high-order, lightly damped, non-minimum-phase system, is then employed as an example to verify the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, less time consumption, and a short data requirement; the calculation results on actual balance FRF data are consistent with the simulation results.
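A window whose DFT occupies only bins 0, +/-1 and +/-3 is, by construction, a DC term plus cosines at one and three cycles per record. A minimal sketch with illustrative placeholder coefficients (the paper's optimized coefficient values are not reproduced here), together with a direct DFT-bin check:

```python
import math

def dual_cosine_window(n, a0=0.5, a1=0.4, a3=0.1):
    """Window built from DC plus cosines at 1 and 3 cycles per record, so its
    DFT is non-zero only at bins 0, +/-1 and +/-3. Coefficients a0, a1, a3
    are illustrative placeholders, not the paper's optimized values."""
    return [a0 - a1 * math.cos(2.0 * math.pi * i / n)
               - a3 * math.cos(6.0 * math.pi * i / n)
            for i in range(n)]

def dft_bin(w, k):
    """Magnitude of DFT bin k of a real sequence w."""
    n = len(w)
    re = sum(w[i] * math.cos(2.0 * math.pi * k * i / n) for i in range(n))
    im = sum(-w[i] * math.sin(2.0 * math.pi * k * i / n) for i in range(n))
    return math.hypot(re, im)
```

Each cosine term of amplitude a contributes a spectral line of magnitude n*a/2 at its bin, while all other bins vanish exactly, which is the property the error analysis above relies on.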
NASA Astrophysics Data System (ADS)
Terzić, Balša; Bassi, Gabriele
2011-07-01
In this paper we discuss representations of charged-particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) a truncated fast cosine transform; and (ii) a thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a substantial upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
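The truncated fast cosine transform idea — bin the particles onto a grid, then keep only low-order DCT coefficients so the smooth density survives while the shot noise is discarded — can be sketched in one dimension. The grid size and truncation order are illustrative choices, and a NumPy-only orthonormal DCT matrix is used for self-containment:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: rows are the cosine basis vectors."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (n + 0.5) * k / N) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 100_000)          # mock 1D particle coordinates

# 1) Bin the particles onto a finite grid (density estimate with shot noise).
hist, edges = np.histogram(samples, bins=128, range=(-5.0, 5.0), density=True)

# 2) Truncated cosine transform: low-order coefficients carry the smooth beam
#    profile; the discarded high-order tail is mostly numerical noise.
C = dct_matrix(128)
coef = C @ hist
coef[16:] = 0.0                                  # truncation order (illustrative)
smooth = C.T @ coef                              # inverse of the orthonormal DCT

# The denoised profile deviates less from the true Gaussian than the raw histogram.
x = 0.5 * (edges[:-1] + edges[1:])
true = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
assert np.mean((smooth - true) ** 2) < np.mean((hist - true) ** 2)
```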
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szadkowski, Zbigniew
2015-07-01
The paper presents the first results from the trigger based on the Discrete Cosine Transform (DCT) operating in the new Front-End Boards with Cyclone V FPGA deployed in 8 test surface detectors in the Pierre Auger Engineering Array. The patterns of the ADC traces generated by very inclined showers were obtained from the Auger database and from the CORSIKA simulation package, supported by the Auger Offline reconstruction platform, which gives predicted digitized signal profiles. Simulations for many variants of the initial shower angle, the initialization depth in the atmosphere, the type of particle, and its initial energy gave a boundary on the DCT coefficients used next for the on-line pattern recognition in the FPGA. Preliminary results have validated the approach. We registered several showers triggered by the DCT for 120 MSps and 160 MSps.
NASA Astrophysics Data System (ADS)
Szadkowski, Zbigniew; Wiedeński, Michał
2017-06-01
We present the first results from a trigger based on the discrete cosine transform (DCT) operating in new front-end boards with a Cyclone V E field-programmable gate array (FPGA) deployed in seven test surface detectors in the Pierre Auger Test Array. The patterns of the ADC traces generated by very inclined showers (arriving at 70° to 90° from the vertical) were obtained from the Auger database and from the CORSIKA simulation package supported by the Auger OffLine event reconstruction platform that gives predicted digitized signal profiles. Simulations for many values of the initial cosmic ray angle of arrival, the shower initialization depth in the atmosphere, the type of particle, and its initial energy gave a boundary on the DCT coefficients used for the online pattern recognition in the FPGA. Preliminary results validated the approach used. We recorded several showers triggered by the DCT for 120 Msamples/s and 160 Msamples/s.
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is developed. Four main indicators, including total gray, the gray grade curve, the characteristic direction, and the gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
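The paper's loop cosine similarity extends the standard cosine similarity over cyclic shifts of the indicator curves; the sketch below shows only the standard cosine similarity, applied to two illustrative gray-grade curves (the data values are invented for illustration):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two feature curves sampled as vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Gray-grade curves of a real trace map and two simulated ones (illustrative data).
real = [0.9, 0.7, 0.4, 0.2, 0.1]
sim_a = [0.85, 0.72, 0.38, 0.22, 0.09]   # close match to the real map
sim_b = [0.1, 0.2, 0.4, 0.7, 0.9]        # poor match (reversed shape)
assert cosine_similarity(real, sim_a) > cosine_similarity(real, sim_b)
```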
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
2017-11-27
Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
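The correction factor's core quantity — the cosine of the angle between the true motion and the image gradients over a subregion — can be sketched as a mean squared cosine. The function name and the averaging choice are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def gradient_alignment(grad_x, grad_y, motion):
    """Mean squared cosine of the angle between an (assumed) true motion direction
    and the local image gradients over a subset. Values near 0 flag an ill-posed
    region where standard DIC uncertainty estimates should be inflated."""
    g = np.stack([grad_x.ravel(), grad_y.ravel()], axis=1)
    gn = np.linalg.norm(g, axis=1)
    m = np.asarray(motion, dtype=float)
    m = m / np.linalg.norm(m)
    cos = (g @ m) / np.where(gn > 0, gn, 1.0)
    return float(np.mean(cos[gn > 0] ** 2))

# Horizontal-only gradients resolve horizontal motion well, vertical motion not at all.
gx = np.ones((8, 8))
gy = np.zeros((8, 8))
assert gradient_alignment(gx, gy, [1.0, 0.0]) > 0.99
assert gradient_alignment(gx, gy, [0.0, 1.0]) < 0.01
```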
Map-invariant spectral analysis for the identification of DNA periodicities
2012-01-01
Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are, however, obtained using a very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented; it provides a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and derive a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternative fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
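Under the Voss representation referenced above, the DNA spectrum is the sum of squared DFT magnitudes of four binary indicator sequences, and a period-3 repeat appears as a spectral line at k = N/3. A minimal sketch on a synthetic sequence:

```python
import numpy as np

def voss_spectrum(seq):
    """DNA spectrum under the Voss map: sum of squared DFT magnitudes of the
    four binary indicator sequences (one per nucleotide)."""
    N = len(seq)
    S = np.zeros(N)
    for base in "ACGT":
        u = np.array([1.0 if s == base else 0.0 for s in seq])
        S += np.abs(np.fft.fft(u)) ** 2
    return S

seq = "ATG" * 60                      # synthetic sequence with an exact period-3 repeat
S = voss_spectrum(seq)
N = len(seq)
# The period-3 repeat shows up as a strong spectral line at k = N/3.
assert S[N // 3] > 10 * np.median(S[1:N // 2])
```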
A low-power and high-quality implementation of the discrete cosine transformation
NASA Astrophysics Data System (ADS)
Heyne, B.; Götze, J.
2007-06-01
In this paper a computationally efficient, high-quality-preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT based on the CORDIC algorithm. The computational complexity is reduced from 11 multiply and 29 add operations (Loeffler DCT) to 38 add and 16 shift operations (which is similar to the complexity of the binDCT). The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. Therefore, the proposed CORDIC-based Loeffler DCT is especially suited for low-power and high-quality CODECs in battery-powered systems.
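The shift-and-add substitution that CORDIC enables can be illustrated with a plain plane rotation: each iteration uses only additions and multiplications by powers of two (bit shifts in hardware). This is a generic floating-point CORDIC sketch, not the paper's fixed-point Loeffler datapath:

```python
import math

def cordic_rotate(x, y, angle, n_iter=16):
    """Rotate (x, y) by `angle` (|angle| < ~1.74 rad) using shift-and-add CORDIC
    iterations -- the mechanism that lets a Loeffler DCT trade multipliers for adders."""
    # Precompute the constant gain of n_iter micro-rotations.
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        # Micro-rotation: only adds and power-of-two scalings (shifts in hardware).
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K

xr, yr = cordic_rotate(1.0, 0.0, math.pi / 6)
assert abs(xr - math.cos(math.pi / 6)) < 1e-4
assert abs(yr - math.sin(math.pi / 6)) < 1e-4
```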
Determination of Fourier Transforms on an Instructional Analog Computer
ERIC Educational Resources Information Center
Anderson, Owen T.; Greenwood, Stephen R.
1974-01-01
An analog computer program to find and display the Fourier transform of some real, even functions is described. Oscilloscope traces are shown for Fourier transforms of a rectangular pulse, a Gaussian, a cosine wave, and a delayed narrow pulse. Instructional uses of the program are discussed briefly. (DT)
Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M
2016-05-01
Discrete transform techniques such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and the MFCC technique. A linear support vector machine is used as the classifier. Experimental results conclude that the proposed CAD system using the MFCC technique for AD recognition greatly improves the system performance with a small number of significant extracted features, as compared with CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data from several images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to join the source images into a single compact image that depicts the scene more accurately than any individual source image. In addition, the fused image preserves quality without distortion or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) each RGB source image is split into its R, G, and B channels. (2) The DCT is applied to each channel. (3) Variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Corresponding blocks of the source images are compared by variance, and the block with the maximum variance is selected as the block in the fused image; this process is repeated for all channels. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average of Q(abf), standard deviation, and peak signal-to-noise ratio (PSNR). The experimental results of the proposed technique are good compared with older techniques. © 2016 Wiley Periodicals, Inc.
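Steps (2)-(4) above amount to variance-based block selection in the DCT domain. A grayscale, two-source sketch (the flat second source and the block size are illustrative; the paper operates per RGB channel):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (NumPy-only)."""
    N = block.shape[0]
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (n + 0.5) * k / N) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def fuse(a, b, bs=8):
    """Variance-based block selection: for each bs x bs block keep whichever
    source has the larger DCT-coefficient variance (a proxy for sharper detail)."""
    out = np.empty_like(a)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            ba, bb = a[i:i+bs, j:j+bs], b[i:i+bs, j:j+bs]
            out[i:i+bs, j:j+bs] = ba if dct2(ba).var() >= dct2(bb).var() else bb
    return out

rng = np.random.default_rng(1)
sharp = rng.normal(size=(16, 16))   # detailed (in-focus) source
flat = np.zeros((16, 16))           # featureless (out-of-focus) source
# Every block of the detailed source wins the variance comparison.
assert np.allclose(fuse(sharp, flat), sharp)
```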
Schullcke, Benjamin; Gong, Bo; Krueger-Ziolek, Sabine; Soleimani, Manuchehr; Mueller-Lisse, Ullrich; Moeller, Knut
2016-05-16
Lung EIT is a functional imaging method that utilizes electrical currents to reconstruct images of conductivity changes inside the thorax. This technique is radiation free and applicable at the bedside, but lacks spatial resolution compared to morphological imaging methods such as X-ray computed tomography (CT). In this article we describe an approach for EIT image reconstruction using morphologic information obtained from other structural imaging modalities. This leads to reconstructed images of lung ventilation that can easily be superimposed with structural CT or MRI images, which facilitates image interpretation. The approach is based on a Discrete Cosine Transformation (DCT) of an image of the considered transversal thorax slice. The use of DCT enables reduction of the dimensionality of the reconstruction and ensures that only conductivity changes of the lungs are reconstructed and displayed. The DCT-based approach is well suited to fuse morphological image information with functional lung imaging at low computational costs. Results on simulated data indicate that this approach preserves the morphological structures of the lungs and avoids blurring of the solution. Images from patient measurements reveal the capabilities of the method and demonstrate benefits in possible applications. PMID:27181695
Digital slip frequency generator and method for determining the desired slip frequency
Klein, Frederick F.
1989-01-01
The output frequency of an electric power generator is kept constant with variable rotor speed by automatic adjustment of the excitation slip frequency. The invention features a digital slip frequency generator which provides sine and cosine waveforms from a look-up table, which are combined with real and reactive power output of the power generator.
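A sine/cosine look-up table of the kind described can be sketched as a phase accumulator indexing one precomputed sine cycle, with the cosine read a quarter table ahead. The table size and sample rate below are illustrative choices, not the patent's values:

```python
import math

# Precompute one full cycle of sine; cosine is read 90 degrees (a quarter table) ahead.
TABLE_SIZE = 1024
SIN = [math.sin(2 * math.pi * k / TABLE_SIZE) for k in range(TABLE_SIZE)]

def sin_cos(phase):
    """Return (sin, cos) for a phase accumulator value given in table counts."""
    k = int(phase) % TABLE_SIZE
    return SIN[k], SIN[(k + TABLE_SIZE // 4) % TABLE_SIZE]

# Generating a 2 Hz slip waveform sampled at 1 kHz: the accumulator advances by
# f * TABLE_SIZE / fs counts per sample.
fs, f_slip = 1000.0, 2.0
step = f_slip * TABLE_SIZE / fs
s, c = sin_cos(0.0)
assert abs(s - 0.0) < 1e-12 and abs(c - 1.0) < 1e-12
```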
NASA Astrophysics Data System (ADS)
Rais, Muhammad H.
2010-06-01
This paper presents a Field Programmable Gate Array (FPGA) implementation of standard and truncated multipliers using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The truncated multiplier is a good candidate for digital signal processing (DSP) applications such as finite impulse response (FIR) filtering and the discrete cosine transform (DCT). Remarkable reductions in FPGA resources, delay, and power can be achieved using truncated multipliers instead of standard parallel multipliers when the full precision of the standard multiplier is not required. The truncated multipliers show significant improvement as compared to standard multipliers. Results show that the average connection delay and maximum pin delay anomalies observed in the Spartan-3AN device are efficiently reduced in the Virtex-4 device.
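A truncated multiplier saves hardware by discarding the low-order partial-product bits that full precision would otherwise accumulate. A bit-level sketch (the column-cutoff policy is a simplification of real truncated-multiplier designs, which also add correction terms):

```python
def truncated_multiply(a, b, bits=8, keep=8):
    """Truncated unsigned multiplication: form the shifted partial products of a
    bits x bits multiplier, but discard partial-product bits below column
    (2*bits - keep). This saves adder cells at the cost of a small, bounded error."""
    cutoff = 2 * bits - keep
    acc = 0
    for i in range(bits):
        if (b >> i) & 1:
            pp = a << i
            acc += pp & ~((1 << cutoff) - 1)   # drop bits below the cutoff column
    return acc

a, b = 200, 180
exact = a * b
approx = truncated_multiply(a, b)
assert approx <= exact
# Each of the <= bits partial products loses under 2**cutoff, bounding the error.
assert exact - approx < 8 * 2 ** 8
```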
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Infrared images target detection based on background modeling in the discrete cosine domain
NASA Astrophysics Data System (ADS)
Ye, Han; Pei, Jihong
2018-02-01
Background modeling is a critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Background establishment becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between background and target are analyzed in depth in the discrete cosine transform (DCT) domain; on this basis, we propose a background modeling method. The proposed method models each frequency point as a single Gaussian to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform those of other background modeling methods in the spatial domain.
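Modeling each DCT frequency point as a single Gaussian can be sketched with running mean/variance estimates and a k-sigma test; the update rule and thresholds below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

class DCTGaussianBackground:
    """Single-Gaussian background model per DCT frequency point: exponentially
    weighted mean and variance of each coefficient. Coefficients within k sigma
    of the mean are treated as background and suppressed."""
    def __init__(self, alpha=0.05, k=3.0):
        self.alpha, self.k = alpha, k
        self.mean = None
        self.var = None

    def update(self, coeffs):
        c = np.asarray(coeffs, dtype=float)
        if self.mean is None:
            self.mean, self.var = c.copy(), np.ones_like(c)
        else:
            d = c - self.mean
            self.mean = self.mean + self.alpha * d
            self.var = self.var + self.alpha * (d * d - self.var)
        fg = np.abs(c - self.mean) > self.k * np.sqrt(self.var)
        return np.where(fg, c, 0.0)   # suppress background coefficients

bg = DCTGaussianBackground()
rng = np.random.default_rng(2)
for _ in range(200):                       # fluctuating background frames
    bg.update(rng.normal(0.0, 1.0, 64))
frame = rng.normal(0.0, 1.0, 64)
frame[10] += 20.0                          # a target at one frequency point
out = bg.update(frame)
assert out[10] != 0.0                      # target survives the suppression
assert np.count_nonzero(out) < 8           # most background is suppressed
```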
Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E
2007-01-01
To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating an infrared image with a visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes
2018-01-01
an example where the signal is non-sparse in the standard basis, but sparse in the discrete cosine basis. The top plot shows the signal from the... previous example, now used as sparse discrete cosine transform (DCT) coefficients. The next plot shows the non-sparse signal in the standard... Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207-1223. 3. Donoho DL
ERIC Educational Resources Information Center
Commission on Engineering Education, Washington, DC.
This report describes an undergraduate course in digital subsystems. The course is divided into two major parts. Part I is entitled Electronic Circuits and Functional Units. The material in this part of the course proceeds from simple understandings of circuits to the progressively more complex functional units. Early emphasis is placed on basic…
Discrete Cosine Transform Image Coding With Sliding Block Codes
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Pearlman, William A.
1989-11-01
A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm, and then encoding the clusters separately. Mandala ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandala ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256 × 256 image ('LENA'). The results are comparable to any existing scheme, and the visual quality of the image is enhanced considerably by the padding and clustering.
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and the DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as the summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust, like a conventional camera lens, while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real time, allowing adaptive optical signal processing.
Video Transmission for Third Generation Wireless Communication Systems
Gharavi, H.; Alamouti, S. M.
2001-01-01
This paper presents a twin-class, unequally protected video transmission system over wireless channels. Video partitioning based on a separation of the Variable Length Coded (VLC) Discrete Cosine Transform (DCT) coefficients within each block is considered for constant bitrate (CBR) transmission. In the splitting process, the fraction of bits assigned to each of the two partitions is adjusted according to the requirements of the unequal error protection scheme employed. Subsequently, partitioning is applied to the ITU-T H.263 coding standard. As a transport vehicle, we have considered one of the leading third generation cellular radio standards, known as WCDMA. A dual-priority transmission system is then invoked on the WCDMA system where the video data, after being broken into two streams, is unequally protected. We use a very simple error correction coding scheme for illustration and then propose more sophisticated forms of unequal protection of the digitized video signals. We show that this strategy results in a significantly higher quality of the reconstructed video data when it is transmitted over time-varying multipath fading channels. PMID:27500033
Analysis of the impact of digital watermarking on computer-aided diagnosis in medical imaging.
Garcia-Hernandez, Jose Juan; Gomez-Flores, Wilfrido; Rubio-Loyola, Javier
2016-01-01
Medical images (MI) are relevant sources of information for detecting and diagnosing a large number of illnesses and abnormalities. Due to their importance, this study is focused on breast ultrasound (BUS), which is the main adjunct for mammography to detect common breast lesions among women worldwide. On the other hand, aiming to enhance data security, image fidelity, authenticity, and content verification in e-health environments, MI watermarking has been widely used, whose main goal is to embed patient meta-data into MI so that the resulting image keeps its original quality. In this sense, this paper deals with the comparison of two watermarking approaches, namely spread spectrum based on the discrete cosine transform (SS-DCT) and the high-capacity data-hiding (HCDH) algorithm, so that the watermarked BUS images are guaranteed to be adequate for a computer-aided diagnosis (CADx) system, whose two principal outcomes are lesion segmentation and classification. Experimental results show that HCDH algorithm is highly recommended for watermarking medical images, maintaining the image quality and without introducing distortion into the output of CADx. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Development of a video image system for wireless capsule endoscopes based on DSP].
Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua
2008-02-01
A video image recorder to record video pictures from wireless capsule endoscopes was designed. The TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data is transported from a high-speed First-in First-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This design adopts the JPEG algorithm for image coding, and the compressed data in the DSP is stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation of the DSP and to reduce the executable code size. At the same time, a proper address is assigned to each memory, which operates at a different speed, and the memory structure is optimized. In addition, this system makes extensive use of Enhanced Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
NASA Astrophysics Data System (ADS)
Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef
2016-12-01
Homodyne detection with only a single detector represents a promising approach in interferometric applications, enabling a significant reduction of the optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation, and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA - Field Programmable Gate Array - or an SoC - System on Chip). The algorithmic methods incorporate (a) the single-detector signal (sine) scaling, filtering, demodulation, and mixing necessary for reconstructing the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow measuring displacements at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe counting interferometry closer to industrial applications due to their optical setup simplicity and robustness, computational stability, scalability, and cost-effectiveness.
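Once the cosine quadrature has been reconstructed from the single-detector sine signal, displacement recovery reduces to an arctangent followed by phase unwrapping. The sketch below assumes an ideal, already-reconstructed quadrature pair; the wavelength and travel range are illustrative:

```python
import numpy as np

lam = 633e-9                                   # He-Ne wavelength (assumed)
true_disp = np.linspace(0.0, 2e-6, 50_000)     # 2 um of travel, finely sampled
phase_true = 4 * np.pi * true_disp / lam       # double-pass interferometer geometry
sine, cosine = np.sin(phase_true), np.cos(phase_true)

# Fringe counting: arctangent of the quadrature pair, then phase unwrapping
# (valid while the phase step between samples stays below pi).
phase = np.unwrap(np.arctan2(sine, cosine))
disp = phase * lam / (4 * np.pi)
assert np.max(np.abs(disp - true_disp)) < 1e-12
```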
Improved method of step length estimation based on inverted pendulum model.
Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui
2017-04-01
Step length estimation is an important issue in areas such as gait analysis, sport training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device record body movement from the trunk instead of the extremities. Two signal-processing techniques are applied in our algorithm design. The direction cosine matrix transforms vertical acceleration from the device coordinates to the topocentric coordinates. Empirical mode decomposition is used to remove the zero- and first-order skew effects resulting from the integration process. Our experimental results show that our algorithm performs well in step length estimation; the error of the direction cosine matrix algorithm increased from 1.69% to 3.56% as the walking speed increased.
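The core device-to-topocentric transformation can be sketched as a rotation by a direction cosine matrix. This is a minimal illustration assuming a single known rotation about the vertical axis; the paper's actual DCM would be built continuously from the eButton's gyroscope and accelerometer data.

```python
import numpy as np

def dcm_z(yaw):
    """Direction cosine matrix for a rotation about the z-axis (illustrative)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def to_topocentric(acc_device, dcm):
    """Rotate a device-frame acceleration vector into the topocentric frame."""
    return dcm @ acc_device

# A device-frame measurement rotated by 90 degrees about the vertical axis:
acc = np.array([1.0, 0.0, 9.81])
acc_topo = to_topocentric(acc, dcm_z(np.pi / 2))
```

Once the vertical component is expressed in the topocentric frame, it can be double-integrated (with the drift removal the abstract describes) to estimate vertical excursion and, from the inverted pendulum model, step length.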
Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan
2004-06-05
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
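The spatial-domain variant of this idea can be sketched with LSB interleaving: writing payload bits into pixel least-significant bits changes each 8-bit intensity by at most 1 part in 256, which is why picture quality is preserved. This is a generic illustration, not the paper's full DPCM-encrypted pipeline.

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Interleave data bits into pixel LSBs; each pixel changes by at most 1/256."""
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

def extract_lsb(pixels, n):
    """Recover the first n interleaved bits from the pixel LSBs."""
    return pixels[:n] & 1

host = np.array([200, 201, 202, 203], dtype=np.uint8)   # stand-in image pixels
payload = np.array([1, 0, 1, 1], dtype=np.uint8)        # stand-in patient bits
stego = embed_lsb(host, payload)
```

Extraction is the mirror operation, and the maximum per-pixel brightness change is 1 grey level out of 256.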
Optimization of self-acting step thrust bearings for load capacity and stiffness.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1972-01-01
A linearized analysis of a finite-width rectangular step thrust bearing is presented. The dimensionless load capacity and stiffness are expressed in terms of a Fourier cosine series and were found to be functions of the dimensionless bearing number, the pad length-to-width ratio, the film thickness ratio, the step location parameter, and the feed groove parameter. The equations obtained in the analysis were verified, and the assumptions imposed were substantiated, by comparing the results with an existing exact solution for the infinite-width bearing. A digital computer program was developed which determines the optimal bearing configuration for maximum load capacity or stiffness. Simple design curves are presented. Results are shown for both compressible and incompressible lubrication. Through a parameter transformation the results are directly usable in designing optimal step sector thrust bearings.
Digital intermediate frequency QAM modulator using parallel processing
Pao, Hsueh-Yuan [Livermore, CA; Tran, Binh-Nien [San Ramon, CA
2008-05-27
The digital intermediate frequency (IF) modulator applies to various modulation types and offers a simple, low-cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up tables (LUTs). The high-speed input data stream is parallel-processed using the corresponding LUTs, which reduces the required main processing speed and allows the use of low-cost FPGAs.
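The multiplier-free idea can be sketched as follows: for each possible symbol amplitude, the amplitude-scaled carrier samples are pre-computed and stored, so the runtime path is only table lookups and one addition. The carrier length, sample rate and 16-QAM levels below are assumptions for illustration, not the patent's parameters.

```python
import numpy as np

FS = 8                         # carrier samples per symbol (assumed)
n = np.arange(FS)
cos_lut = np.cos(2 * np.pi * n / FS)
sin_lut = np.sin(2 * np.pi * n / FS)

levels = (-3, -1, 1, 3)        # 16-QAM amplitude levels per rail (assumed)
i_tab = {a: a * cos_lut for a in levels}    # pre-modulated cosine carriers
q_tab = {a: -a * sin_lut for a in levels}   # pre-modulated (negated) sine carriers

def modulate(i_sym, q_sym):
    """IF samples for one symbol: pure table lookups plus an add, no multipliers."""
    return i_tab[i_sym] + q_tab[q_sym]

burst = modulate(3, -1)        # one symbol's worth of IF output
```

In hardware the dictionaries become ROM blocks addressed by the symbol bits, and several such lookups run in parallel on the demultiplexed input stream.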
Distributed Digital Subarray Antennas
2013-12-01
Subarray geometries: linear, planar, or volumetric (subarrays in space); periodic, aperiodic, or random; with rotation and tilt relative to a global reference. Subarray m has N_m elements and coordinates (x_s(m), y_s(m), z_s(m)) in the global system. The subarrays can be rotated and tilted with respect to the global origin. In the global system, for a direction (theta, phi), the direction cosines are u = sin(theta)cos(phi), v = sin(theta)sin(phi), w = cos(theta). (1) The scan ...
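Equation (1) above is the standard direction-cosine parameterization of a scan direction; a minimal sketch, with theta measured from the z-axis and phi the azimuth:

```python
import numpy as np

def direction_cosines(theta, phi):
    """Direction cosines (u, v, w) of a scan direction:
    theta from the z-axis, phi the azimuth in the x-y plane."""
    u = np.sin(theta) * np.cos(phi)
    v = np.sin(theta) * np.sin(phi)
    w = np.cos(theta)
    return u, v, w

u, v, w = direction_cosines(np.deg2rad(30.0), np.deg2rad(45.0))
```

Because (u, v, w) is a unit vector, u^2 + v^2 + w^2 = 1 for any scan angle, which is the usual sanity check on the parameterization.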
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandford, M.T. II; Bradley, J.N.; Handel, T.G.
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C-programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
NASA Astrophysics Data System (ADS)
Sandford, Maxwell T., II; Bradley, Jonathan N.; Handel, Theodore G.
1996-01-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C-programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51 written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in MicrosoftTM bitmap (BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed `steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or `lossy' compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one-hundred bytes, and it is derived from the original host data by an analysis algorithm.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM
NASA Astrophysics Data System (ADS)
Shima, Yoshihiro
2018-04-01
Neural networks are a powerful means of classifying object images. The proposed image-category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor. Alex-Net is pre-trained on the large-scale object-image dataset ImageNet and is used without further training. An SVM is used as the trainable classifier; the feature vectors are passed from Alex-Net to the SVM. The STL-10 dataset, with ten classes and clearly split training and test samples, is used as the object images. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern transformation method based on the cosine function, and we also apply other augmentation methods such as rotation, skewing and elastic distortion. By using the cosine function, the original patterns were left-justified, right-justified, top-justified, or bottom-justified; patterns were also center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation. Error rates are increased by the other augmentation methods (rotation, skewing and elastic distortion) compared with no augmentation. The number of augmented samples is 30 times that of the original STL-10 5K training samples. The experimental test error rate for the 8K STL-10 test images was 15.620%, which shows that image augmentation is effective for image-category classification.
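The cosine-based justification transform can be sketched as a monotone resampling of pixel coordinates. The exact mapping used in the paper is not given, so the remap below (a cosine-shaped source index for each output column) is an assumed illustration of how a cosine function can shift pattern content toward one side of the frame.

```python
import numpy as np

def cosine_warp(img):
    """Resample image columns with a monotone cosine mapping, pushing
    pattern content toward one side (assumed form of the paper's transform)."""
    h, w = img.shape
    j = np.arange(w)
    # Source column for each output column: dense sampling near column 0.
    src = np.round((w - 1) * (1 - np.cos(np.pi * j / (2 * (w - 1))))).astype(int)
    return img[:, src]

img = np.zeros((4, 8))
img[:, 4] = 1.0               # a vertical stripe as a stand-in pattern
warped = cosine_warp(img)     # stripe content shifts one column sideways
```

Applying the analogous mapping to rows (or reversing the index) would produce the other justification directions the abstract lists.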
Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series
Zhang, Zhihua
2014-01-01
Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
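The quantization step the model feeds into can be sketched as dividing 8x8 DCT coefficients by a quantization matrix and rounding. The uniform matrix below is a made-up placeholder; in the paper each entry would instead be derived from the luminance-model visibility threshold of the corresponding DCT basis function.

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix (rows are basis functions)."""
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def quantize(block, Q):
    """2-D DCT of an 8x8 block followed by matrix quantization."""
    C = dct_matrix()
    coeffs = C @ block @ C.T
    return np.round(coeffs / Q)

Q = np.full((8, 8), 16.0)                      # placeholder, not model-derived
block = np.outer(np.arange(8.0), np.ones(8)) * 16   # a vertical luminance ramp
q = quantize(block, Q)
```

Designing Q from visibility thresholds, as the paper does, keeps the quantization error of each basis function just below what the display conditions make visible.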
NASA Astrophysics Data System (ADS)
Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy
Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages like the mixture stage, it seems to have many desirable properties. Recognition invariance under shifted, rotated and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even if extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph induces an expensive algorithmic cost. In this article we discuss ways to reduce computing time. An alternative solution based on image compression concepts is provided and evaluated: the model no longer operates in the image space but in a compact space, namely the Discrete Cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results obtained on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with acceptable computing time.
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images, which mix textual, graphical, and pictorial contents. In this paper, we present a comparison of two transform-based block classification methods for compound images, using metrics such as classification speed, precision, and recall rate. Block-based classification approaches normally divide the compound image into fixed-size, non-overlapping blocks. A frequency transform, either the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is then applied over each block. The mean and standard deviation are computed for each 8 × 8 block and used as the feature set to classify blocks into text/graphics or picture/background. The classification accuracy of the block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth and complex backgrounds, containing text of varying size, colour and orientation, are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time, for both smooth and complex background images.
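The block classification step can be sketched directly from the abstract: split the image into 8x8 blocks, compute mean and standard deviation per block, and threshold. The threshold value and the use of raw-pixel statistics (rather than transform-coefficient statistics) are simplifying assumptions for illustration.

```python
import numpy as np

def classify_blocks(img, thresh_std=20.0, block=8):
    """Label each 8x8 block as 'text' (high contrast) or 'picture' (smooth).
    The threshold is an illustrative assumption, not the paper's value."""
    h, w = img.shape
    labels = {}
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            b = img[r:r + block, c:c + block]
            mean, std = b.mean(), b.std()          # the two features
            labels[(r, c)] = "text" if std > thresh_std else "picture"
    return labels

smooth = np.full((8, 8), 128.0)                    # flat background block
busy = np.zeros((8, 8)); busy[:, ::2] = 255.0      # high-contrast text-like block
img = np.hstack([smooth, busy])
labels = classify_blocks(img)
```

In the paper the same mean/std features are computed over DCT or DWT coefficients of each block, which is what the precision/recall comparison evaluates.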
2013-10-01
Correct group assignment of samples was confirmed by unsupervised hierarchical clustering using the Unweighted Pair-Group Method using Arithmetic averages (UPGMA), based on ... centering of log2-transformed MAS5.0 signal values; probe-set clustering was performed by the UPGMA method using cosine correlation as the similarity metric. (A) The 108 differentially-regulated genes identified were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with ...
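The cosine-correlation similarity used as the UPGMA metric can be sketched as follows; the expression values below are invented stand-ins, and the preprocessing details elided in the record are not reproduced. UPGMA (average linkage) then repeatedly merges the pair of clusters with the highest average pairwise similarity.

```python
import numpy as np

def cosine_correlation(x, y):
    """Cosine similarity between two expression profiles (UPGMA metric sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Three log2-transformed stand-in profiles: a and b share a pattern, c is reversed.
a = np.log2([4.0, 8.0, 16.0])
b = np.log2([4.2, 7.9, 16.5])
c = np.log2([16.0, 8.0, 4.0])

sim_ab = cosine_correlation(a, b)   # near 1: same regulation pattern
sim_ac = cosine_correlation(a, c)   # lower: opposite pattern
```

Ranking pairs by this metric is the elementary step the hierarchical clustering repeats as it builds the dendrogram.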
Simultaneous storage of medical images in the spatial and frequency domain: A comparative study
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan
2004-01-01
Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect the picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than the DCT and DWT domains. Conclusion The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899
NASA Astrophysics Data System (ADS)
Ruigrok, Elmer; Wapenaar, Kees
2014-05-01
In various application areas, e.g., seismology, astronomy and geodesy, arrays of sensors are used to characterize incoming wavefields due to distant sources. Beamforming is a general term for phase-adjusted summations over the different array elements, for untangling the directionality and elevation angle of the incoming waves. For characterizing noise sources, beamforming is conventionally applied with a temporal Fourier and a 2D spatial Fourier transform, possibly with additional weights. These transforms become aliased for higher frequencies and sparser array-element distributions. As a partial remedy, we derive a kernel for beamforming crosscorrelated data and call it cosine beamforming (CBF). By applying beamforming not directly to the data but to crosscorrelated data, the sampling is effectively increased. We show that CBF, due to this better sampling, suffers less from aliasing and yields higher resolution than conventional beamforming. As the flip side of the coin, the CBF output shows more smearing for spherical waves than conventional beamforming.
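The conventional beamforming that CBF improves on can be sketched as narrowband delay-and-sum on a uniform linear array: phase-adjust each element for a trial direction, sum, and pick the direction of maximum power. The array geometry, wave speed and source angle below are invented for illustration; the paper's CBF kernel instead operates on crosscorrelated data.

```python
import numpy as np

c0, f = 343.0, 1000.0                  # wave speed (m/s) and frequency (Hz), assumed
d = 0.15                               # element spacing (m), below half a wavelength
m = np.arange(8)                       # 8-element uniform linear array
theta_true = np.deg2rad(25.0)          # true arrival angle

k = 2 * np.pi * f / c0
x = np.exp(1j * k * d * m * np.sin(theta_true))     # noise-free array snapshot

# Scan candidate directions; the steering vector undoes the per-element phase.
scan = np.deg2rad(np.arange(-90.0, 91.0))
steer = np.exp(-1j * k * d * np.sin(scan)[:, None] * m[None, :])
power = np.abs(steer @ x) ** 2
theta_est = float(np.rad2deg(scan[np.argmax(power)]))
```

CBF applies the same scanning idea, but to all element-pair crosscorrelations, which is what yields the denser effective sampling described in the abstract.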
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and the discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and to support quantizations from 2 to 16 bits.
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.
1982-09-17
F_K * I_PK (2). The convolution of two transforms in the time domain is the inverse transform of the product in the frequency domain. Thus R_p(s) = F_gc(s) I_pg(s) (3). ... and is related to its inverse transform by R(tau) = (1/(2 pi)) integral of R(omega) e^{i omega tau} d omega (5). In order to make use of a very accurate numerical method to compute the Fourier sine and cosine transforms ... When the inverse transform is taken by using Eq. (15), the cosine transform is used because it converges faster than the sine transform ...
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms offer high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
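The chaotic logistic-map half of the cipher can be sketched as a keystream generator whose bytes are XORed with the significant coefficients. The seed, control parameter and byte extraction rule below are illustrative assumptions, and the A5 stage the paper combines with it is omitted.

```python
import numpy as np

def logistic_keystream(n, x0=0.600001, r=3.99):
    """Keystream bytes from the chaotic logistic map x <- r*x*(1-x).
    Parameters are illustrative; the paper also mixes in an A5 cipher."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return np.array(out, dtype=np.uint8)

def xor_crypt(data, key):
    """XOR stream cipher: the same operation encrypts and decrypts."""
    return data ^ key

coeffs = np.array([12, 250, 7, 99], dtype=np.uint8)   # stand-in coefficients
ks = logistic_keystream(len(coeffs))
cipher = xor_crypt(coeffs, ks)
plain = xor_crypt(cipher, ks)                         # round-trips to the input
```

Because the map is highly sensitive to x0 and r, those values act as the secret key: a receiver with slightly different parameters produces an unrelated keystream.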
Liu, Yongsuo; Meng, Qinghua; Jiang, Shumin; Hu, Yuzhu
2005-03-01
The similarity evaluation of fingerprints is one of the most important problems in the quality control of traditional Chinese medicine (TCM). Similarity measures used to evaluate the similarity of the common peaks in the chromatogram of TCM are discussed. Comparative studies were carried out among the correlation coefficient, the cosine of the angle, and an improved extent similarity method, using simulated and experimental data. The correlation coefficient and the cosine of the angle are not sensitive to differences between the data sets, even after normalization. According to similarity system theory, an improved extent similarity method was proposed. The improved extent similarity is more sensitive to the differences of the data sets than the correlation coefficient and the cosine of the angle, and the character of the data sets need not be changed, in contrast to log-transformation. The improved extent similarity can be used to evaluate the similarity of the chromatographic fingerprints of TCM.
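The insensitivity the study criticizes is easy to demonstrate: cosine similarity compares only the direction of peak-area vectors, so two fingerprints that differ by a constant scale factor are scored as identical. The peak values below are simulated stand-ins.

```python
import numpy as np

def cosine_sim(x, y):
    """Cosine of the angle between two peak-area vectors."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Two simulated fingerprints with the same peak pattern but 3x the content:
peaks_a = np.array([10.0, 25.0, 5.0, 40.0])
peaks_b = 3.0 * peaks_a

sim = cosine_sim(peaks_a, peaks_b)   # exactly 1: the 3x difference is invisible
```

An extent-style measure, by contrast, compares magnitudes as well as direction, which is why it can separate samples that cosine and correlation cannot.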
Natural convection heat transfer in an oscillating vertical cylinder
Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang
2018-01-01
This paper studies the heat transfer caused by free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for the temperature distribution and the velocity field corresponding to cosine and sine oscillations are obtained. The solutions obtained for velocity are presented as sums of transient and post-transient parts, and they satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used to study the effects of the Prandtl and Grashof numbers on velocity and temperature for various times. The transient solutions for both cosine and sine oscillations are also tabulated. It is found that the transient solutions are of considerable interest up to times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments the transient solutions can be neglected and the fluid moves according to the post-transient solutions. PMID:29304161
Natural convection heat transfer in an oscillating vertical cylinder.
Khan, Ilyas; Ali Shah, Nehad; Tassaddiq, Asifa; Mustapha, Norzieha; Kechil, Seripah Awang
2018-01-01
This paper studies the heat transfer caused by free convection in a vertically oscillating cylinder. Exact solutions are determined by applying the Laplace and finite Hankel transforms. Expressions for the temperature distribution and the velocity field corresponding to cosine and sine oscillations are obtained. The solutions obtained for velocity are presented as sums of transient and post-transient parts, and they satisfy both the governing differential equation and all imposed initial and boundary conditions. Numerical computations and graphical illustrations are used to study the effects of the Prandtl and Grashof numbers on velocity and temperature for various times. The transient solutions for both cosine and sine oscillations are also tabulated. It is found that the transient solutions are of considerable interest up to times t = 15 for cosine oscillations and t = 1.75 for sine oscillations. After these moments the transient solutions can be neglected and the fluid moves according to the post-transient solutions.
Hardware Implementation of 32-Bit High-Speed Direct Digital Frequency Synthesizer
Ibrahim, Salah Hasan; Ali, Sawal Hamid Md.; Islam, Md. Shabiul
2014-01-01
The design and implementation of a high-speed direct digital frequency synthesizer are presented. A modified Brent-Kung parallel adder is combined with a pipelining technique to improve the speed of the system. A gated-clock technique is proposed to reduce the number of registers in the phase accumulator design. The quarter-wave symmetry technique is used to store only one quarter of the sine wave. The ROM lookup table (LUT) is partitioned into three 4-bit sub-ROMs based on an angular decomposition technique and a trigonometric identity. Exploiting the advantages of sine-cosine symmetry together with XOR logic gates, one sub-ROM block can be removed from the design. These techniques compress the ROM to 368 bits, a compression ratio of 534.2:1, with only two adders, two multipliers, and XOR gates, and a high frequency resolution of 0.029 Hz. These techniques make the direct digital frequency synthesizer an attractive candidate for wireless communication applications. PMID:24991635
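The quarter-wave symmetry trick can be sketched in software: store only the first quarter-period of the sine wave and reconstruct the other three quadrants by mirroring and sign flips. The table length is an arbitrary choice here (the table stores N/4 + 1 entries for simplicity, one more than a strict quarter), and the address-decoding logic that hardware implements with the phase MSBs is written out as branches.

```python
import numpy as np

N = 64                                   # samples per full period (assumed)
M = N // 4
quarter = np.sin(2 * np.pi * np.arange(M + 1) / N)   # quarter-wave LUT

def sine_from_quarter(n):
    """Full-period sine from the quarter-wave table via symmetry."""
    n = n % N
    if n <= M:                           # first quadrant: direct lookup
        return quarter[n]
    if n <= 2 * M:                       # second quadrant: mirror the index
        return quarter[2 * M - n]
    return -sine_from_quarter(n - 2 * M) # second half-period: negate

full = np.array([sine_from_quarter(n) for n in range(N)])
```

In the DDFS, the two top phase bits select the quadrant (the negation is where the XOR gates come in) and the remaining bits address the compressed sub-ROMs.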
Zamli, Kamal Z.; Din, Fakhrud; Bures, Miroslav
2018-01-01
The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. PMID:29771918
Zamli, Kamal Z; Din, Fakhrud; Ahmed, Bestoun S; Bures, Miroslav
2018-01-01
The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.
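The baseline SCA position update that the QLSCA builds on can be sketched as follows: each candidate moves toward the best-so-far solution by a sine- or cosine-shaped step, chosen with the fixed 0.5 switch probability the abstract criticizes (QLSCA replaces that switch with Q-learning). This is a sketch of the standard SCA update, not of the QLSCA itself.

```python
import numpy as np

def sca_step(X, dest, t, T, rng):
    """One SCA iteration: move population X toward best solution `dest`.
    r1 decays over iterations (exploration -> exploitation); r2, r3, r4
    are the random parameters mentioned in the abstract."""
    r1 = 2.0 * (1.0 - t / T)                       # adaptive amplitude
    r2 = rng.uniform(0.0, 2.0 * np.pi, X.shape)
    r3 = rng.uniform(0.0, 2.0, X.shape)
    r4 = rng.uniform(0.0, 1.0, X.shape)            # the fixed 0.5 switch
    step = np.abs(r3 * dest - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,    # sine branch
                    X + r1 * np.cos(r2) * step)    # cosine branch

rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(10, 3))           # 10 candidates in 3-D
dest = np.zeros(3)                                 # stand-in best solution
X_next = sca_step(X, dest, t=1, T=100, rng=rng)
```

Because sin and cos are bounded in [-1, 1] and r1 shrinks, late-iteration steps stay small, which is exactly the local-trap risk the QLSCA's Lévy flight and crossover operations are designed to escape.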
Area and power efficient DCT architecture for image compression
NASA Astrophysics Data System (ADS)
Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan
2014-12-01
The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect; the limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones and requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.
A simplified Integer Cosine Transform and its application in image compression
NASA Technical Reports Server (NTRS)
Costa, M.; Tong, K.
1994-01-01
A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
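A minimal sketch of the core idea, with Python integer semantics standing in for the hardware: when the combined normalization/quantization factor is restricted to a power of two, the per-coefficient integer division reduces to a binary shift.

```python
# Illustrative sketch (not the paper's exact factors): restricting the
# combined normalization/quantization factor to a power of two turns the
# per-coefficient integer division into an arithmetic right shift.
def quantize_shift(coeff, shift):
    """Quantize by 2**shift using a shift instead of a division."""
    return coeff >> shift

def quantize_div(coeff, q):
    """Conventional quantization: one integer division per coefficient."""
    return coeff // q

# In Python both >> and // round toward -inf, so the results agree exactly;
# hardware rounding conventions may differ.
assert quantize_shift(1237, 4) == quantize_div(1237, 16)
assert quantize_shift(-100, 3) == quantize_div(-100, 8)
```

As the abstract notes, the residual error of approximating arbitrary factors by powers of two is then compensated in the floating-point inverse ICT.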
Methods for performing fast discrete curvelet transforms of data
Candes, Emmanuel; Donoho, David; Demanet, Laurent
2010-11-23
Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n² log n) flops for n by n Cartesian arrays or about O(N log N) flops for Cartesian arrays of size N = n³; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.
Digital plus analog output encoder
NASA Technical Reports Server (NTRS)
Hafle, R. S. (Inventor)
1976-01-01
The disclosed encoder is adapted to produce both digital and analog output signals corresponding to the angular position of a rotary shaft, or the position of any other movable member. The digital signals comprise a series of binary signals constituting a multidigit code word which defines the angular position of the shaft with a degree of resolution which depends upon the number of digits in the code word. The basic binary signals are produced by photocells actuated by a series of binary tracks on a code disc or member. The analog signals are in the form of a series of ramp signals which are related in length to the least significant bit of the digital code word. The analog signals are derived from sine and cosine tracks on the code disc.
Haldar, Justin P.; Leahy, Richard M.
2013-01-01
This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
Subjective evaluations of integer cosine transform compressed Galileo solid state imagery
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry
1994-01-01
This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.
ASIC implementation of recursive scaled discrete cosine transform algorithm
NASA Astrophysics Data System (ADS)
On, Bill N.; Narasimhan, Sam; Huang, Victor K.
1994-05-01
A program to implement the Recursive Scaled Discrete Cosine Transform (DCT) algorithm proposed by H. S. Hou has been undertaken at the Institute of Microelectronics. The design was implemented using a top-down methodology, with VHDL (VHSIC Hardware Description Language) for chip modeling. Once the VHDL simulation was satisfactorily completed, the design was synthesized into gates using a synthesis tool. The architecture consists of two processing units together with a memory module for data storage and transpose. Each processing unit is composed of four pipelined stages, which allow the internal clock to run at one-eighth (1/8) the speed of the pixel clock. Each stage operates on eight pixels in parallel. As the data flow through each stage, various adders and multipliers transform them into the desired coefficients. The Scaled IDCT was implemented in a similar fashion, with the adders and multipliers rearranged to perform the inverse DCT algorithm. The chip has been verified using Field Programmable Gate Array devices, and the design is operational. The combination of fewer required multiplications and a pipelined architecture gives Hou's Recursive Scaled DCT good potential for achieving high performance at low cost in a Very Large Scale Integration implementation.
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
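The quadrature-weighting step can be sketched as follows; the frame rate and parameter names are assumptions for illustration, not the original display system's values:

```python
import math

def frame_weights(contrast, temp_freq_hz, frame, frame_rate_hz=60.0):
    """Per-frame lookup-table weights for the sine- and cosine-phase planes.

    Counterphase modulating the two spatial-quadrature components in
    temporal quadrature sums to a single drifting grating:
      c*sin(wt)*sin(kx) + c*cos(wt)*cos(kx) = c*cos(kx - wt)
    """
    wt = 2.0 * math.pi * temp_freq_hz * frame / frame_rate_hz
    return contrast * math.sin(wt), contrast * math.cos(wt)

# Frame 0: the cosine-phase bit plane carries all of the contrast.
w_sin, w_cos = frame_weights(contrast=0.5, temp_freq_hz=4.0, frame=0)
```

Because each grating's pair of weights is set independently in every frame, several bit-plane pairs can be driven at different temporal frequencies and contrasts to compose plaid motions from a single stored image, as the abstract describes.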
Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM.
Sharma, Shubhi; Khanna, Pritee
2015-02-01
This work is directed toward the development of a computer-aided diagnosis (CAD) system to detect abnormalities or suspicious areas in digital mammograms and classify them as malignant or nonmalignant. Original mammogram is preprocessed to separate the breast region from its background. To work on the suspicious area of the breast, region of interest (ROI) patches of a fixed size of 128×128 are extracted from the original large-sized digital mammograms. For training, patches are extracted manually from a preprocessed mammogram. For testing, patches are extracted from a highly dense area identified by clustering technique. For all extracted patches corresponding to a mammogram, Zernike moments of different orders are computed and stored as a feature vector. A support vector machine (SVM) is used to classify extracted ROI patches. The experimental study shows that the use of Zernike moments with order 20 and SVM classifier gives better results among other studies. The proposed system is tested on Image Retrieval In Medical Application (IRMA) reference dataset and Digital Database for Screening Mammography (DDSM) mammogram database. On IRMA reference dataset, it attains 99% sensitivity and 99% specificity, and on DDSM mammogram database, it obtained 97% sensitivity and 96% specificity. To verify the applicability of Zernike moments as a fitting texture descriptor, the performance of the proposed CAD system is compared with the other well-known texture descriptors namely gray-level co-occurrence matrix (GLCM) and discrete cosine transform (DCT).
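As a hedged sketch of the feature computation, the radial polynomial R_nm at the heart of Zernike moments can be evaluated directly from its factorial definition (the full moment additionally integrates the image against exp(-i*m*theta) over the unit disk; this is not the paper's implementation):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho); requires n >= |m| and n - |m| even."""
    m = abs(m)
    assert n >= m and (n - m) % 2 == 0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Low-order sanity checks: R_00 = 1, R_20 = 2*rho**2 - 1, R_22 = rho**2.
assert zernike_radial(2, 0, 0.5) == 2 * 0.5**2 - 1
```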
Content fragile watermarking for H.264/AVC video authentication
NASA Astrophysics Data System (ADS)
Ait Sadi, K.; Guessoum, A.; Bouridane, A.; Khelifi, F.
2017-04-01
The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content-fragile video watermarking method with independent authentication of each group of pictures (GOP) within the video. The discrete cosine transform is exploited to generate the authentication data, which are treated as a fragile watermark and embedded in the motion vectors. The technique uses robust visual features extracted from the video, pertaining to the set of selected macroblocks (MBs) which hold the best partition mode in a tree-structured motion compensation process. An additional degree of security is offered by using the keyed hash function HMAC-SHA-256 and by randomly choosing candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves accurate authentication with high fragility and fidelity whilst maintaining the original bitrate and perceptual quality. Furthermore, its ability to detect tampered frames under spatial, temporal and colour manipulations is confirmed.
Personal recognition using hand shape and texture.
Kumar, Ajay; Zhang, David
2006-08-01
This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be obtained directly from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has paid attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate identification model. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.
Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H
2014-01-01
A wide interest has been observed in finding a low-power and area-efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE)-based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to a reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimum logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak signal-to-noise ratio (PSNR) requirements. Furthermore, the proposed technique improves on recent well-known methods in power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively, while maintaining accuracy.
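The CSD recoding that enables multiplier-less constant multiplication can be sketched as the standard nonadjacent-form computation; each nonzero digit then costs one shift-and-add (or subtract) in hardware. This is an illustrative sketch, not the paper's architecture:

```python
def csd_digits(n):
    """Canonical signed-digit (nonadjacent form) of a positive integer,
    least-significant digit first; digits are in {-1, 0, 1} and no two
    adjacent digits are both nonzero."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)       # 1 if n % 4 == 1, else -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

# 7 = 8 - 1 -> digits [-1, 0, 0, 1]: one subtraction instead of two additions.
assert csd_digits(7) == [-1, 0, 0, 1]
# The recoding is exact: the signed digits reconstruct the constant.
assert sum(d * 2**i for i, d in enumerate(csd_digits(173))) == 173
```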
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT) underlying the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors may compete with DCT-based image compression hardware.
Personalized Medicine in Veterans with Traumatic Brain Injuries
2013-05-01
Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row mean centered log2 signal values; this was the top 50%-tile...clustering was performed by the UPGMA method using Cosine correlation as the similarity metric. For comparative purposes, clustered heat maps included...non-mTBI cases were subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity
Personalized Medicine in Veterans with Traumatic Brain Injuries
2014-07-01
9 control cases are subjected to unsupervised hierarchical clustering analysis using the UPGMA algorithm with cosine correlation as the similarity...in unsupervised hierarchical clustering by the Unweighted Pair-Group Method using Arithmetic averages (UPGMA) based on cosine correlation of row...of log2 transformed MAS5.0 signal values; probe set clustering was performed by the UPGMA method using Cosine correlation as the similarity
A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru
Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
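A minimal sketch of the described feature pipeline, assuming an orthonormal type-II DCT and a hypothetical coefficient count (the exact subset size the authors select is not restated here):

```python
import numpy as np

def dct2_matrix(n=8):
    """Orthonormal type-II DCT matrix (rows index frequency)."""
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] /= np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def zigzag_indices(n=8):
    """JPEG-style zigzag scan order over an n x n block."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def block_features(block, n_coeffs=16):
    """Keep the first n_coeffs low-to-medium frequency DCT coefficients."""
    d = dct2_matrix(block.shape[0])
    coeffs = d @ block @ d.T          # separable 2-D DCT-II
    order = zigzag_indices(block.shape[0])
    return np.array([coeffs[i, j] for i, j in order[:n_coeffs]])
```

Concatenating `block_features` over all 8×8 (or 16×16) blocks of the segmented palm region yields the compact feature vector the abstract describes.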
Initial performance of the COSINE-100 experiment
NASA Astrophysics Data System (ADS)
Adhikari, G.; Adhikari, P.; de Souza, E. Barbosa; Carlin, N.; Choi, S.; Choi, W. Q.; Djamal, M.; Ezeribe, A. C.; Ha, C.; Hahn, I. S.; Hubbard, A. J. F.; Jeon, E. J.; Jo, J. H.; Joo, H. W.; Kang, W. G.; Kang, W.; Kauer, M.; Kim, B. H.; Kim, H.; Kim, H. J.; Kim, K. W.; Kim, M. C.; Kim, N. Y.; Kim, S. K.; Kim, Y. D.; Kim, Y. H.; Kudryavtsev, V. A.; Lee, H. S.; Lee, J.; Lee, J. Y.; Lee, M. H.; Leonard, D. S.; Lim, K. E.; Lynch, W. A.; Maruyama, R. H.; Mouton, F.; Olsen, S. L.; Park, H. K.; Park, H. S.; Park, J. S.; Park, K. S.; Pettus, W.; Pierpoint, Z. P.; Prihtiadi, H.; Ra, S.; Rogers, F. R.; Rott, C.; Scarff, A.; Spooner, N. J. C.; Thompson, W. G.; Yang, L.; Yong, S. H.
2018-02-01
COSINE is a dark matter search experiment based on an array of low background NaI(Tl) crystals located at the Yangyang underground laboratory. The assembly of COSINE-100 was completed in the summer of 2016 and the detector is currently collecting physics quality data aimed at reproducing the DAMA/LIBRA experiment that reported an annual modulation signal. Stable operation has been achieved and will continue for at least 2 years. Here, we describe the design of COSINE-100, including the shielding arrangement, the configuration of the NaI(Tl) crystal detection elements, the veto systems, and the associated operational systems, and we show the current performance of the experiment.
Constructing and Deriving Reciprocal Trigonometric Relations: A Functional Analytic Approach
ERIC Educational Resources Information Center
Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K.; McGinty, Jennifer
2009-01-01
Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed…
Stability of strongly nonlinear normal modes
NASA Astrophysics Data System (ADS)
Recktenwald, Geoffrey; Rand, Richard
2007-10-01
It is shown that a transformation of time can allow the periodic solution of a strongly nonlinear oscillator to be written as a simple cosine function. This enables the stability of strongly nonlinear normal modes in multidegree of freedom systems to be investigated by standard procedures such as harmonic balance.
A 16×16 Discrete Cosine Transform Chip
NASA Astrophysics Data System (ADS)
Sun, M. T.; Chen, T. C.; Gottlieb, A.; Wu, L.; Liou, M. L.
1987-10-01
Among various transform coding techniques for image compression, the Discrete Cosine Transform (DCT) is considered to be the most effective method and has been widely used in the laboratory as well as in the marketplace. The DCT is computationally intensive. For video application at a 14.3 MHz sample rate, a direct implementation of a 16x16 DCT requires a throughput rate of approximately half a billion multiplications per second. In order to reduce the cost of hardware implementation, a single-chip DCT implementation is highly desirable. In this paper, the implementation of a 16x16 DCT chip using a concurrent architecture is presented. The chip is designed for real-time processing of 14.3 MHz sampled video data. It uses row-column decomposition to implement the two-dimensional transform. Distributed arithmetic combined with bit-serial and bit-parallel structures is used to implement the required vector inner products concurrently. Several schemes are utilized to reduce the size of the required memory. The resulting circuit uses only memory, shift registers, and adders; no multipliers are required. It achieves high-speed performance with a very regular and efficient integrated circuit realization. The chip accepts 9-bit input and produces 14-bit DCT coefficients; 12 bits are maintained after the first one-dimensional transform. The circuit has been laid out using a 2-μm CMOS technology with the symbolic design tool MULGA. The core contains approximately 73,000 transistors in an area of 7.2 x 7.0
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.
Blood perfusion construction for infrared face recognition based on bio-heat transfer.
Xie, Zhihua; Liu, Guodong
2014-01-01
To improve the performance of infrared face recognition for time-lapse data, a new construction of blood perfusion is proposed based on bio-heat transfer. First, by quantifying the blood perfusion based on the Pennes equation, the thermal information is converted into a blood perfusion rate, which is a stable biological feature of the face image. Then, the separability discriminant criterion in the Discrete Cosine Transform (DCT) domain is applied to extract the discriminative features of the blood perfusion information. Experimental results demonstrate that the features of blood perfusion are more concentrated and discriminative for recognition than those of thermal information. The infrared face recognition based on the proposed blood perfusion is robust and can achieve better recognition performance compared with other state-of-the-art approaches.
Flexible All-Digital Receiver for Bandwidth Efficient Modulations
NASA Technical Reports Server (NTRS)
Gray, Andrew; Srinivasan, Meera; Simon, Marvin; Yan, Tsun-Yee
2000-01-01
An all-digital high data rate parallel receiver architecture developed jointly by Goddard Space Flight Center and the Jet Propulsion Laboratory is presented. This receiver utilizes only a small number of high speed components along with a majority of lower speed components operating in a parallel frequency domain structure implementable in CMOS, and can currently process up to 600 Mbps with standard QPSK modulation. Performance results for this receiver for bandwidth efficient QPSK modulation schemes such as square-root raised cosine pulse shaped QPSK and Feher's patented QPSK are presented, demonstrating the flexibility of the receiver architecture.
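A hedged sketch of the square-root raised-cosine pulse shaping mentioned above, using the textbook closed form with its two singular points handled explicitly; the roll-off, oversampling, and span values are illustrative, not the receiver's actual parameters:

```python
import numpy as np

def srrc_taps(beta, sps, span):
    """Square-root raised-cosine filter taps, unit symbol period T = 1.

    beta: roll-off factor, sps: samples per symbol, span: half-length in symbols.
    """
    t = np.arange(-span * sps, span * sps + 1) / sps
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if abs(ti) < 1e-12:                       # singularity at t = 0
            h[i] = 1.0 - beta + 4.0 * beta / np.pi
        elif abs(abs(ti) - 1.0 / (4.0 * beta)) < 1e-12:   # t = +-1/(4*beta)
            h[i] = (beta / np.sqrt(2.0)) * (
                (1.0 + 2.0 / np.pi) * np.sin(np.pi / (4.0 * beta))
                + (1.0 - 2.0 / np.pi) * np.cos(np.pi / (4.0 * beta)))
        else:
            h[i] = (np.sin(np.pi * ti * (1.0 - beta))
                    + 4.0 * beta * ti * np.cos(np.pi * ti * (1.0 + beta))) / (
                   np.pi * ti * (1.0 - (4.0 * beta * ti) ** 2))
    return h / np.sqrt(np.sum(h * h))             # normalize to unit energy

taps = srrc_taps(beta=0.35, sps=8, span=6)
```

Matched SRRC filters at the transmitter and receiver compose to a full raised-cosine response, giving the bandwidth-efficient, intersymbol-interference-free pulse the abstract refers to.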
Coherent time-stretch transformation for real-time capture of wideband signals.
Buckley, Brandon W; Madni, Asad M; Jalali, Bahram
2013-09-09
Time stretch transformation of wideband waveforms boosts the performance of analog-to-digital converters and digital signal processors by slowing down analog electrical signals before digitization. The transform is based on dispersive Fourier transformation implemented in the optical domain. A coherent receiver would be ideal for capturing the time-stretched optical signal. Coherent receivers offer improved sensitivity, allow for digital cancellation of dispersion-induced impairments and optical nonlinearities, and enable decoding of phase-modulated optical data formats. Because time-stretch uses a chirped broadband (>1 THz) optical carrier, a new coherent detection technique is required. In this paper, we introduce and demonstrate coherent time stretch transformation; a technique that combines dispersive Fourier transform with optically broadband coherent detection.
Digital interface of electronic transformers based on embedded system
NASA Astrophysics Data System (ADS)
Shang, Qiufeng; Qi, Yincheng
2008-10-01
With a digital interface for electronic transformers, information sharing and system integration in the substation can be realized. An embedded-system-based digital output scheme for electronic transformers is proposed. The digital interface is designed with the S3C44B0X 32-bit RISC microprocessor as the hardware platform. The μClinux operating system (OS) is ported to the ARM7 (S3C44B0X). Applying Ethernet technology as the communication mode in the substation automation system is a new trend, so the network interface chip RTL8019AS is adopted. Data transmission is realized through the built-in TCP/IP protocol stack of the μClinux embedded OS. The application results and performance analysis show that the design can meet the real-time and reliability requirements of the IEC 60044-7/8 electronic voltage/current instrument transformer standards.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
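The claim that a DPCM pre-transformation boosts lossless coders can be sketched as below; zlib's DEFLATE stands in here for the LZW and Huffman coders discussed, which is an assumption for illustration:

```python
import zlib

def dpcm_encode(pixels):
    """Differential transform: store each pixel as the difference from its
    predecessor (modulo 256, so the result stays one byte per pixel)."""
    prev, out = 0, bytearray()
    for p in pixels:
        out.append((p - prev) & 0xFF)
        prev = p
    return bytes(out)

# A smooth ramp compresses far better after the DPCM step, because the
# differences are small and highly repetitive.
raw = bytes(i % 256 for i in range(4096))
assert len(zlib.compress(dpcm_encode(raw))) < len(zlib.compress(raw))
```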
Infrared face recognition based on LBP histogram and KW feature selection
NASA Astrophysics Data System (ADS)
Xie, Zhihua
2014-07-01
Conventional LBP-based features, as represented by the local binary pattern (LBP) histogram, still have room for performance improvement. This paper focuses on dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to retain the LBP patterns that are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, discrete cosine transform (DCT) or principal component analysis (PCA).
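The LBP micro-pattern computation underlying the histogram representation can be sketched as follows (a plain 3x3, 8-neighbour variant; the patch values are invented for illustration):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of the centre pixel of a 3x3 patch:
    each neighbour >= centre contributes one bit, clockwise from top-left."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
code = lbp_code(patch)   # an integer in 0..255
```

A face image is then divided into sub-blocks and each sub-block is summarized by the histogram of its codes; KW feature selection keeps only the histogram bins that discriminate between subjects.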
Embedding multiple watermarks in the DFT domain using low- and high-frequency bands
NASA Astrophysics Data System (ADS)
Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.
2005-03-01
Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
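The conjugate-symmetry property of the DFT of a real image, which underlies the symmetric spectrum mentioned above, can be checked directly (a small numpy sketch with an arbitrary random image):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # any real-valued image
F = np.fft.fft2(img)

# Conjugate symmetry of the DFT of a real image:
# F[u, v] == conj(F[-u mod M, -v mod N])
M, N = F.shape
for u in range(M):
    for v in range(N):
        assert np.isclose(F[u, v], np.conj(F[(-u) % M, (-v) % N]))
```

This symmetry is why a watermark embedded at one DFT location must also be embedded at the mirrored location, naturally producing circularly symmetric embedding patterns.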
Proposed data compression schemes for the Galileo S-band contingency mission
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Tong, Kevin
1993-01-01
The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. In case the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGAs) on an S-band (2.3 GHz) carrier. Much effort has been, and will continue to be, devoted to attempts to open the HGA. In parallel, various options for improving Galileo's telemetry downlink performance are being evaluated in the event that the HGA does not open by Jupiter arrival. Among the viable options, the most promising and powerful is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight reprogramming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
Partially supervised speaker clustering.
Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S
2012-05-01
Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm—linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. 
Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
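The advocated cosine distance can be contrasted with the Euclidean distance in a toy sketch (here `x` merely stands in for a GMM mean supervector of an utterance):

```python
import numpy as np

def cosine_distance(x, y):
    """1 - cosine similarity: depends only on the directions of x and y."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# `x` stands in for a GMM mean supervector
x = np.array([1.0, 2.0, 3.0])

# Rescaling a supervector leaves the cosine distance at zero,
# while the Euclidean distance grows with the magnitude change
assert np.isclose(cosine_distance(x, 5.0 * x), 0.0)
assert np.linalg.norm(x - 5.0 * x) > 10.0
```

This magnitude invariance is the practical payoff of the directional-scattering observation: utterances of one speaker scatter mainly in direction, not length, so a direction-only metric clusters them more reliably.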
NASA Technical Reports Server (NTRS)
Marcin, Martin; Abramovici, Alexander
2008-01-01
The software of a commercially available digital radio receiver has been modified to make the receiver function as a two-channel low-noise phase meter. This phase meter is a prototype in the continuing development of a phase meter for a system in which radiofrequency (RF) signals in the two channels would be outputs of a spaceborne heterodyne laser interferometer for detecting gravitational waves. The frequencies of the signals could include a common Doppler-shift component of as much as 15 MHz. The phase meter is required to measure the relative phases of the signals in the two channels at a sampling rate of 10 Hz at a root power spectral density <5 microcycles/Hz^(1/2) and to be capable of determining the power spectral density of the phase difference over the frequency range from 1 mHz to 1 Hz. Such a phase meter could also be used on Earth to perform similar measurements in laser metrology of moving bodies. To illustrate part of the principle of operation of the phase meter, the figure includes a simplified block diagram of a basic single-channel digital receiver. The input RF signal is first fed to the input terminal of an analog-to-digital converter (ADC). To prevent aliasing errors in the ADC, the sampling rate must be at least twice the input signal frequency. The sampling rate of the ADC is governed by a sampling clock, which also drives a digital local oscillator (DLO), which is a direct digital frequency synthesizer. The DLO produces samples of sine and cosine signals at a programmed tuning frequency. The sine and cosine samples are mixed with (that is, multiplied by) the samples from the ADC, then low-pass filtered to obtain in-phase (I) and quadrature (Q) signal components. A digital signal processor (DSP) computes the ratio between the Q and I components, computes the phase of the RF signal (relative to that of the DLO signal) as the arctangent of this ratio, and then averages successive such phase values over a time interval specified by the user.
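The mix, low-pass filter, arctangent chain described above can be sketched numerically (the sampling rate, tuning frequency, and phase below are made-up values, and a simple mean stands in for the low-pass filter):

```python
import numpy as np

fs = 1_000_000          # assumed ADC sampling rate, Hz
f = 12_345              # assumed RF / DLO tuning frequency, Hz
phase_true = 0.7        # phase to recover, radians
n = np.arange(4096)
rf = np.cos(2 * np.pi * f * n / fs + phase_true)   # ADC samples

# Digital local oscillator: sine and cosine at the tuning frequency
lo_cos = np.cos(2 * np.pi * f * n / fs)
lo_sin = np.sin(2 * np.pi * f * n / fs)

# Mix (multiply) and low-pass (here: a plain mean) to get I and Q
I = np.mean(rf * lo_cos)       # ~ 0.5 * cos(phase_true)
Q = np.mean(rf * -lo_sin)      # ~ 0.5 * sin(phase_true)
phase = np.arctan2(Q, I)       # recovered RF phase relative to the DLO
```

Using `arctan2` rather than the plain arctangent of Q/I keeps the recovered phase unambiguous over the full circle.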
Influence of video compression on the measurement error of the television system
NASA Astrophysics Data System (ADS)
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
2015-05-01
Video data require a very large memory capacity. Finding a video encoding method with an optimal quality/volume ratio is a pressing problem, given the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream; video compression effectively reduces the stream required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. The measurement error of an object parameter is the main characteristic of television measuring systems: accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system are one source of error in television system measurements; the method of processing the received video signal is another. With compression at a constant data stream rate, errors lead to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other; entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that, for typical images, most of the matrix coefficients will be almost zero.
Excluding these zero coefficients reduces the digital stream further. The discrete cosine transformation is the most widely used among the possible orthogonal transformations. Errors of television measuring systems and data compression protocols are analyzed in this paper. The main characteristics of measuring systems are described, the sources of their errors are detected, and the most effective methods of video compression are determined. The influence of video compression error on television measuring systems was researched; the obtained results will increase the accuracy of such measuring systems. In a television measuring system, image quality is reduced both by distortions identical to those in analog systems and by specific distortions resulting from the coding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-frame prediction of individual fragments. The process of encoding/decoding an image is nonlinear in space and time, because the playback quality of a received sequence depends on its random pre- and post-history, i.e., on the preceding and succeeding frames, which can lead to inadequate distortion of a sub-picture and of the corresponding measuring signal.
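The intra-coding idea above, an orthogonal transform concentrating a smooth block's energy into a few coefficients that survive quantization, can be sketched with an orthonormal DCT-II matrix (the block contents and quantization step are illustrative):

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: row k is the k-th cosine basis vector
k = np.arange(N)
C = np.sqrt(2 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1 / N)

# A smooth 8x8 block: a gentle vertical gradient, as in typical images
block = np.outer(np.linspace(50, 60, N), np.ones(N))
coeffs = C @ block @ C.T            # 2D DCT: energy piles into few entries

q = 10.0
quantized = np.round(coeffs / q)    # quantization zeroes the small coefficients
```

For this block only a couple of the 64 quantized coefficients are nonzero, which is exactly what makes the subsequent entropy-coding stage effective.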
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated to the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate if the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce the image smoothness in the reconstruction. In this study, the ℓ1-norm of 2D DCT wavelet decomposition was used as a regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized likelihood (PL) reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using expectation maximization (EM) algorithm with Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction using the PAPA method.
Kowalik, William S.; Marsh, Stuart E.; Lyon, Ronald J. P.
1982-01-01
A method for estimating the reflectance of ground sites from satellite radiance data is proposed and tested. The method uses the known ground reflectance from several sites and satellite data gathered over a wide range of solar zenith angles. The method was tested on each of 10 different Landsat images using 10 small sites in the Walker Lake, Nevada area. Plots of raw Landsat digital numbers (DNs) versus the cosine of the solar zenith angle (cos Z) for the test areas are linear, and the average correlation coefficients of the data for Landsat bands 4, 5, 6, and 7 are 0.94, 0.93, 0.94, and 0.94, respectively. Ground reflectance values for the 10 sites are proportional to the slope of the DN versus cos Z relation at each site. The slope of the DN versus cos Z relation for seven additional sites in Nevada and California were used to estimate the ground reflectances of those sites. The estimates for nearby sites are in error by an average of 1.2% and more distant sites are in error by 5.1%. The method can successfully estimate the reflectance of sites outside the original scene, but extrapolation of the reflectance estimation equations to other areas may violate assumptions of atmospheric homogeneity.
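The DN-versus-cos Z relation described above amounts to a simple least-squares line whose slope is proportional to ground reflectance; a sketch with made-up readings for one site:

```python
import numpy as np

# Hypothetical data: raw digital numbers for one site over several
# Landsat passes at different solar zenith angles Z (values invented)
cos_z = np.array([0.35, 0.48, 0.61, 0.74, 0.86])
dn    = np.array([21.0, 28.5, 36.2, 43.8, 50.9])

# Least-squares fit DN = slope * cos(Z) + offset;
# reflectance is proportional to the slope
slope, offset = np.polyfit(cos_z, dn, 1)
r = np.corrcoef(cos_z, dn)[0, 1]      # correlation coefficient of the fit
```

Calibrating the proportionality constant against sites with known reflectance then lets the fitted slope at a new site be converted into a reflectance estimate.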
Image splitting and remapping method for radiological image compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Levine, H.; Ogilvie, P.
1975-01-01
Engineering programming information is presented for the STARS-2P (Shell Theory Automated for Rotational Structures - 2P (Plasticity)) digital computer program; FORTRAN IV was used in writing the various subroutines. Execution of this program requires the use of thirteen temporary storage units. The program was initially written and debugged on the IBM 370-165 computer and converted to the UNIVAC 1108 computer, where it utilizes approximately 60,000 words of core. Only basic FORTRAN library routines are required by the program: sine, cosine, absolute value, and square root.
Digital Sound Encryption with Logistic Map and Number Theoretic Transform
NASA Astrophysics Data System (ADS)
Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT
2018-03-01
Digital sound encryption in the frequency domain has limitations; a Number Theoretic Transform over the field GF(2^521 - 1) improves on and solves that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the Number Theoretic Transform. The chaos function used in this paper is the Logistic Map. Trials and simulations were conducted using 5 different digital sound files in WAV format as test data, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, more than 10^469. The processing speed of the encryption algorithm is only slightly affected by the Number Theoretic Transform.
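A logistic-map keystream of the kind used here can be sketched as follows (the byte quantization and XOR mixing are illustrative simplifications; the paper's actual scheme also involves the Number Theoretic Transform):

```python
def logistic_keystream(x0, r=3.99, n=16):
    """Generate n key bytes from the logistic map x -> r*x*(1-x),
    iterated from secret seed x0 in the chaotic regime (r near 4)."""
    x = x0
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize the state to a byte
    return bytes(out)

ks = logistic_keystream(0.3141592)
cipher = bytes(b ^ k for b, k in zip(b"sound data bytes", ks))
plain  = bytes(b ^ k for b, k in zip(cipher, ks))
assert plain == b"sound data bytes"      # XOR keystream is its own inverse
```

Sensitivity to the seed is what makes the map useful as a key generator: a different x0 yields an entirely different byte stream.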
Discrete shearlet transform: faithful digitization concept and its applications
NASA Astrophysics Data System (ADS)
Lim, Wang-Q.
2011-09-01
Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features such as edges in images have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common shortcomings of these frameworks is the lack of providing a unified treatment of the continuum and digital world, i.e., allowing a digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realm. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.
Decomposition of ECG by linear filtering.
Murthy, I S; Niranjan, U C
1992-01-01
A simple method is developed for the delineation of a given electrocardiogram (ECG) signal into its component waves. The properties of discrete cosine transform (DCT) are exploited for the purpose. The transformed signal is convolved with appropriate filters and the component waves are obtained by computing the inverse transform (IDCT) of the filtered signals. The filters are derived from the time signal itself. Analysis of continuous strips of ECG signals with various arrhythmias showed that the performance of the method is satisfactory both qualitatively and quantitatively. The small amplitude P wave usually had a high percentage rms difference (PRD) compared to the other large component waves.
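The DCT-domain filtering idea, transform, suppress selected coefficients, invert, can be sketched on a toy signal (the cutoff and the "ECG-like" signal are made up for illustration):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
    k = np.arange(N)
    C = np.sqrt(2 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1 / N)
    return C

# Toy "ECG-like" signal: a slow wave plus a sharp spike at sample 20
N = 64
t = np.arange(N)
signal = np.sin(2 * np.pi * t / N) + (t == 20) * 2.0

C = dct_matrix(N)
coeffs = C @ signal

# Low-pass "filter" in the DCT domain: keep only the first 8 coefficients
low = coeffs.copy()
low[8:] = 0.0
slow_wave = C.T @ low            # inverse DCT of the filtered coefficients
residual = signal - slow_wave    # the residual isolates the sharp component
```

Separating the signal this way mirrors the paper's delineation: the smooth component captures slow waves while the residual isolates sharp deflections such as the QRS complex.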
Memmolo, P; Finizio, A; Paturzo, M; Ferraro, P; Javidi, B
2012-05-01
A method based on spatial transformations of multiwavelength digital holograms and the correlation matching of their numerical reconstructions is proposed, with the aim of improving the superimposition of different color reconstructed images. This method is based on an adaptive affine transform of the hologram that permits management of the physical parameters of numerical reconstruction. In addition, we present a procedure to synthesize a single digital hologram in which three different colors are multiplexed. The optical reconstruction of the synthetic hologram by a spatial light modulator at one wavelength allows us to display all color features of the object, avoiding loss of details.
[A digital micromirror device-based Hadamard transform near infrared spectrometer].
Liu, Jia; Chen, Fen-Fei; Liao, Cheng-Sheng; Xu, Qian; Zeng, Li-Bo; Wu, Qiong-Shui
2011-10-01
A Hadamard transform near infrared spectrometer based on a digital micromirror device was constructed. The optical signal was collected by optical fiber, a grating was used for light diffraction, a digital micromirror device (DMD) was applied instead of traditional mechanical Hadamard masks for optical modulation, and an InGaAs near infrared detector was used as the optical sensor. The original spectrum was recovered by fast Hadamard transform algorithms. The advantages of the spectrometer, such as high resolution, signal-to-noise ratio, stability, sensitivity and response speed, were proved by experiments, which indicated that it is very suitable for oil and food-safety applications.
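Spectrum recovery by a fast Hadamard transform can be sketched with the standard in-place butterfly (natural ordering; mask design and detector details are omitted):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform (natural order) of a sequence
    whose length is a power of two; returns a new array."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired elements
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
y = fwht(x)
# The transform is its own inverse up to a factor of the length n
assert np.allclose(fwht(y), 8 * x)
```

In a Hadamard spectrometer the detector records one multiplexed sum per DMD mask pattern; applying the (self-inverse, up to scale) transform to those readings recovers the spectrum with a multiplex signal-to-noise advantage over scanning one wavelength at a time.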
Experimental Observation and Theoretical Description of Multisoliton Fission in Shallow Water
NASA Astrophysics Data System (ADS)
Trillo, S.; Deng, G.; Biondini, G.; Klein, M.; Clauss, G. F.; Chabchoub, A.; Onorato, M.
2016-09-01
We observe the dispersive breaking of cosine-type long waves [Phys. Rev. Lett. 15, 240 (1965)] in shallow water, characterizing the highly nonlinear "multisoliton" fission over variable conditions. We provide new insight into the interpretation of the results by analyzing the data in terms of the periodic inverse scattering transform for the Korteweg-de Vries equation. In a wide range of dispersion and nonlinearity, the data compare favorably with our analytical estimate, based on a rigorous WKB approach, of the number of emerging solitons. We are also able to observe experimentally the universal Fermi-Pasta-Ulam recurrence in the regime of moderately weak dispersion.
The theory of the gravitational potential applied to orbit prediction
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
A complete derivation of the geopotential function and its gradient is presented. Also included is the transformation of Laplace's equation from Cartesian to spherical coordinates. The analytic solution to Laplace's equation is obtained from the transformed version, in the classical manner of separating the variables. A cursory introduction to the method devised by Pines, using direction cosines to express the orientation of a point in space, is presented together with sample computer program listings for computing the geopotential function and the components of its gradient. The use of the geopotential function is illustrated.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and by contrast, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
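Per-coefficient quantization by a matrix, as described above, can be sketched as follows (the coefficient block and matrix entries are invented; a perceptually derived matrix would come from the luminance/contrast masking model):

```python
import numpy as np

# Hypothetical 4x4 DCT coefficient block and a quantization matrix whose
# entries grow with frequency (coarser steps where the eye is less sensitive)
coeffs = np.array([[220.0, -31.0, 12.0, -4.0],
                   [-27.0,  18.0, -6.0,  2.0],
                   [  9.0,  -5.0,  3.0, -1.0],
                   [ -3.0,   2.0, -1.0,  0.5]])
qmat = np.array([[ 8, 12, 20, 32],
                 [12, 16, 26, 40],
                 [20, 26, 36, 52],
                 [32, 40, 52, 64]], dtype=float)

quantized = np.round(coeffs / qmat)   # integer indices sent to the entropy coder
dequantized = quantized * qmat        # decoder-side reconstruction

# The reconstruction error per coefficient is bounded by half a step
assert np.all(np.abs(dequantized - coeffs) <= qmat / 2)
```

Choosing each matrix entry so that its half-step error sits just below the visibility threshold at that frequency is what yields the minimum-perceptual-error property claimed above.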
An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform
NASA Astrophysics Data System (ADS)
Khurana, Mehak; Singh, Hukum
2017-09-01
To enhance the security of the system and protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem random phase masks are used as encryption keys, the phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and the cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification
Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
Scott, Ian A; Sullivan, Clair; Staib, Andrew
2018-05-24
Objective In an era of rapid digitisation of Australian hospitals, practical guidance is needed in how to successfully implement electronic medical records (EMRs) as both a technical innovation and a major transformative change in clinical care. The aim of the present study was to develop a checklist that clearly and comprehensively defines the steps that best prepare hospitals for EMR implementation and digital transformation. Methods The checklist was developed using a formal methodological framework comprised of: literature reviews of relevant issues; an interactive workshop involving a multidisciplinary group of digital leads from Queensland hospitals; a draft document based on literature and workshop proceedings; and a review and feedback from senior clinical leads. Results The final checklist comprised 19 questions, 13 related to EMR implementation and six to digital transformation. Questions related to the former included organisational considerations (leadership, governance, change leaders, implementation plan), technical considerations (vendor choice, information technology and project management teams, system and hardware alignment with clinician workflows, interoperability with legacy systems) and training (user training, post-go-live contingency plans, roll-out sequence, staff support at point of care). Questions related to digital transformation included cultural considerations (clinically focused vision statement and communication strategy, readiness for change surveys), management of digital disruption syndromes and plans for further improvement in patient care (post-go-live optimisation of digital system, quality and benefit evaluation, ongoing digital innovation). Conclusion This evidence-based, field-tested checklist provides guidance to hospitals planning EMR implementation and separates readiness for EMR from readiness for digital transformation. What is known about the topic? 
Many hospitals throughout Australia have implemented, or are planning to implement, hospital wide electronic medical records (EMRs) with varying degrees of functionality. Few hospitals have implemented a complete end-to-end digital system with the ability to bring about major transformation in clinical care. Although the many challenges in implementing EMRs have been well documented, they have not been incorporated into an evidence-based, field-tested checklist that can practically assist hospitals in preparing for EMR implementation as both a technical innovation and a vehicle for major digital transformation of care. What does this paper add? This paper outlines a 19-question checklist that was developed using a formal methodological framework comprising literature review of relevant issues, proceedings from an interactive workshop involving a multidisciplinary group of digital leads from hospitals throughout Queensland, including three hospitals undertaking EMR implementation and one hospital with complete end-to-end EMR, and review of a draft checklist by senior clinical leads within a statewide digital healthcare improvement network. The checklist distinguishes between issues pertaining to EMR as a technical innovation and EMR as a vehicle for digital transformation of patient care. What are the implications for practitioners? Successful implementation of a hospital-wide EMR requires senior managers, clinical leads, information technology teams and project management teams to fully address key operational and strategic issues. Using an issues checklist may help prevent any one issue being inadvertently overlooked or underemphasised in the planning and implementation stages, and ensure the EMR is fully adopted and optimally used by clinician users in an ongoing digital transformation of care.
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features to identify the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is mandatory for establishing ownership of an image. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D DWT transforms the selected layer, yielding four subbands, of which one is selected. Block-based 2D DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag scanning and range-based pixel selection. A delta parameter replacing pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters to be optimized by a Genetic Algorithm (GA) are the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that GA is able to determine parameters achieving optimum imperceptibility and robustness for any watermarked image condition, whether attacked or not. The DWT stage in DCT-based image watermarking optimized by GA improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise and additive noise with variance 0.01), the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality measured by PSNR is about 5 dB higher than that of the previous method.
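The embedding rule can be sketched end to end, with a one-level orthonormal Haar DWT standing in for the paper's DWT stage and sign-based coefficient replacement for the ±delta rule. The block size, the choice of the LL subband, and the fixed AC position are illustrative assumptions, not the GA-optimized values from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

SQ2 = np.sqrt(2.0)

def dwt2_haar(x):
    # One-level orthonormal Haar DWT: rows, then columns.
    lo = (x[:, 0::2] + x[:, 1::2]) / SQ2
    hi = (x[:, 0::2] - x[:, 1::2]) / SQ2
    ll = (lo[0::2] + lo[1::2]) / SQ2
    lh = (lo[0::2] - lo[1::2]) / SQ2
    hl = (hi[0::2] + hi[1::2]) / SQ2
    hh = (hi[0::2] - hi[1::2]) / SQ2
    return ll, lh, hl, hh

def idwt2_haar(ll, lh, hl, hh):
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / SQ2, (ll - lh) / SQ2
    hi[0::2], hi[1::2] = (hl + hh) / SQ2, (hl - hh) / SQ2
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / SQ2, (lo - hi) / SQ2
    return x

def embed(img, bits, delta=10.0, pos=(2, 1), bs=8):
    # Embed one bit per 8x8 DCT block of the LL subband: the sign of
    # one AC coefficient carries the bit.
    ll, lh, hl, hh = dwt2_haar(img.astype(float))
    k = 0
    for i in range(0, ll.shape[0], bs):
        for j in range(0, ll.shape[1], bs):
            blk = dctn(ll[i:i+bs, j:j+bs], norm='ortho')
            blk[pos] = delta if bits[k] else -delta
            ll[i:i+bs, j:j+bs] = idctn(blk, norm='ortho')
            k += 1
    return idwt2_haar(ll, lh, hl, hh)

def extract(img, nbits, pos=(2, 1), bs=8):
    ll = dwt2_haar(img.astype(float))[0]
    out = []
    for i in range(0, ll.shape[0], bs):
        for j in range(0, ll.shape[1], bs):
            out.append(int(dctn(ll[i:i+bs, j:j+bs], norm='ortho')[pos] > 0))
    return out[:nbits]
```

Because both transforms are orthonormal and exactly invertible, the extracted bits match the embedded ones in the attack-free case; robustness under attacks is what the GA tunes in the paper.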
Sines and Cosines. Part 1 of 3
NASA Technical Reports Server (NTRS)
Apostol, Tom M. (Editor)
1992-01-01
Applying the concept of similarities, the mathematical principles of circular motion and sine and cosine waves are presented utilizing both film footage and computer animation in this 'Project Mathematics' series video. Concepts presented include: the symmetry of sine waves; the cosine (complementary sine) and cosine waves; the use of sines and cosines on coordinate systems; the relationship they have to each other; the definitions and uses of periodic waves, square waves, sawtooth waves; the Gibbs phenomena; the use of sines and cosines as ratios; and the terminology related to sines and cosines (frequency, overtone, octave, intensity, and amplitude).
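The Gibbs phenomenon mentioned above can be demonstrated numerically: partial sums of the Fourier series of a square wave overshoot near the jump by roughly 9% of the jump size, no matter how many terms are kept. A small illustrative sketch (not from the video itself):

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    # Fourier series of a unit square wave: (4/pi) * sum sin((2k+1)x)/(2k+1).
    s = np.zeros_like(x)
    for k in range(n_terms):
        m = 2 * k + 1
        s += np.sin(m * x) / m
    return 4.0 / np.pi * s

# Sample densely near the jump at x = 0, where the overshoot peak sits.
x = np.linspace(1e-4, np.pi / 2, 20000)
peak = square_wave_partial_sum(x, 200).max()
```

The peak tends to (2/π)·Si(π) ≈ 1.179 for a wave of amplitude 1, i.e. about a 9% overshoot of the jump from −1 to +1, and it does not shrink as more terms are added; it only moves closer to the discontinuity.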
A minimax technique for time-domain design of preset digital equalizers using linear programming
NASA Technical Reports Server (NTRS)
Vaughn, G. L.; Houts, R. C.
1975-01-01
A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
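The minimax criterion maps naturally onto a linear program: minimize t subject to |received − desired| ≤ t at every sample instant. A minimal transversal-equalizer sketch along those lines (not the authors' exact formulation; the raised-cosine target is replaced here by a simple delayed-impulse target for brevity):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_equalizer(h, n_taps, d):
    # Design FIR taps w minimizing max |h * w - d| (convolution),
    # where d is the desired waveform of length len(h) + n_taps - 1.
    m = len(h) + n_taps - 1
    A = np.zeros((m, n_taps))            # Toeplitz convolution matrix
    for j in range(n_taps):
        A[j:j + len(h), j] = h
    # Variables: [w_0 .. w_{n-1}, t]; minimize t s.t. |A w - d| <= t.
    c = np.zeros(n_taps + 1)
    c[-1] = 1.0
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([A, -ones]), np.hstack([-A, -ones])])
    b_ub = np.concatenate([d, -d])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_taps + [(0, None)])
    return res.x[:n_taps], res.x[-1]
```

For a simple channel such as h = [1, 0.5], an 8-tap equalizer drives the residual ISI down to the truncation level of the channel's infinite inverse.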
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this paired-film observer study was designed to test whether observers could detect the induced error. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer that can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers, and indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.
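The full-frame DCT compression idea can be sketched as follows, with coefficient thresholding standing in for the actual quantizer used in the study (an illustrative simplification): keeping fewer coefficients raises the reconstruction error, which is the error-versus-ratio trade-off the observers were asked to detect.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(img, keep_frac):
    # Full-frame 2D DCT; keep only the largest-magnitude coefficients.
    c = dctn(img, norm='ortho')
    k = max(1, int(keep_frac * c.size))
    thresh = np.sort(np.abs(c).ravel())[-k]
    c[np.abs(c) < thresh] = 0.0
    return idctn(c, norm='ortho')

def rms_error(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Because the DCT is orthonormal, keeping the largest-magnitude coefficients is the error-optimal choice for a given budget, so the error can only grow as the kept fraction shrinks.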
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.
Kalathil, Shaeen; Elias, Elizabeth
2015-11-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs have a simple and efficient design approach: a non-uniform decomposition can easily be obtained by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank's performance, which is then recovered using suitably modified meta-heuristic algorithms: the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic Algorithm. These result in filter banks with lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFBs.
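CSD represents each coefficient with digits in {−1, 0, 1} such that no two nonzero digits are adjacent, which minimizes the number of adders needed to implement a constant multiplication in hardware. A sketch of the conversion for non-negative integers (the standard non-adjacent-form recurrence, not code from the paper):

```python
def to_csd(n):
    """Canonic signed digit (non-adjacent form) digits of a non-negative
    integer, least significant first, each digit in {-1, 0, 1}."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 or -1, chosen so the next digit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    # Reconstruct the integer: sum of d_i * 2^i.
    return sum(d << i for i, d in enumerate(digits))
```

For example, 7 = 8 − 1 becomes [−1, 0, 0, 1] rather than the three-nonzero binary form 111, saving an adder.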
Digital Transformation and Disruption of the Health Care Sector: Internet-Based Observational Study.
Herrmann, Maximilian; Boehme, Philip; Mondritzki, Thomas; Ehlers, Jan P; Kavadias, Stylianos; Truebel, Hubert
2018-03-27
Digital innovation, introduced across many industries, is a strong force of transformation. Some industries have seen faster transformation, whereas the health care sector only recently came into focus. A context where digital corporations move into health care, payers strive to keep rising costs at bay, and longer-living patients desire continuously improved quality of care points to a digital and value-based transformation with drastic implications for the health care sector. We tried to operationalize the discussion within the health care sector around digital and disruptive innovation to identify what type of technological enablers, business models, and value networks seem to be emerging from different groups of innovators with respect to their digital transformational efforts. From the Forbes 2000 and CBinsights databases, we identified 100 leading technology, life science, and start-up companies active in the health care sector. Further analysis identified projects from these companies within a digital context that were subsequently evaluated using the following criteria: delivery of patient value, presence of a comprehensive and distinctive underlying business model, solutions provided, and customer needs addressed. Our methodological approach recorded more than 400 projects and collaborations. We identified patterns that show established corporations rely more on incremental innovation that supports their current business models, while start-ups engage their flexibility to explore new market segments with notable transformations of established business models. Thereby, start-ups offer higher promises of disruptive innovation. Additionally, start-ups offer more diversified value propositions addressing broader areas of the health care sector. Digital transformation is an opportunity to accelerate health care performance by lowering cost and improving quality of care. 
At an economic scale, business models can be strengthened and disruptive innovation models enabled. Corporations should look for collaborations with start-up companies to keep investment costs at bay and off the balance sheet. At the same time, the regulatory knowledge of established corporations might help start-ups to kick off digital disruption in the health care sector. ©Maximilian Herrmann, Philip Boehme, Thomas Mondritzki, Jan P Ehlers, Stylianos Kavadias, Hubert Truebel. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.03.2018.
Digital disruption 'syndromes'.
Sullivan, Clair; Staib, Andrew
2017-05-18
The digital transformation of hospitals in Australia is occurring rapidly in order to facilitate innovation and improve efficiency. Rapid transformation can cause temporary disruption of hospital workflows and staff as processes are adapted to the new digital workflows. The aim of this paper is to outline various types of digital disruption and some strategies for effective management. A large tertiary university hospital recently underwent a rapid, successful roll-out of an integrated electronic medical record (EMR). We observed this transformation and propose several digital disruption "syndromes" to assist with understanding and management during digital transformation: digital deceleration, digital transparency, digital hypervigilance, data discordance, digital churn and post-digital 'depression'. These 'syndromes' are defined and discussed in detail. Successful management of this temporary digital disruption is important to ensure a successful transition to a digital platform. What is known about this topic? Digital disruption is defined as the changes facilitated by digital technologies that occur at a pace and magnitude that disrupt established ways of value creation, social interactions, doing business and more generally our thinking. Increasing numbers of Australian hospitals are implementing digital solutions to replace traditional paper-based systems for patient care in order to create opportunities for improved care and efficiencies. Such large scale change has the potential to create transient disruption to workflows and staff. Managing this temporary disruption effectively is an important factor in the successful implementation of an EMR. What does this paper add? A large tertiary university hospital recently underwent a successful rapid roll-out of an integrated electronic medical record (EMR) to become Australia's largest digital hospital over a 3-week period. 
We observed and assisted with the management of several cultural, behavioural and operational forms of digital disruption, which led us to propose some digital disruption 'syndromes'. The definition and management of these 'syndromes' are discussed in detail. What are the implications for practitioners? Minimising the temporary effects of digital disruption in hospitals requires an understanding that these digital 'syndromes' are to be expected and actively managed during large-scale transformation.
Analytic expressions for the black-sky and white-sky albedos of the cosine lobe model.
Goodin, Christopher
2013-05-01
The cosine lobe model is a bidirectional reflectance distribution function (BRDF) that is commonly used in computer graphics to model specular reflections. The model is both simple and physically plausible, but physical quantities such as albedo have not been related to the parameterization of the model. In this paper, analytic expressions for calculating the black-sky and white-sky albedos from the cosine lobe BRDF model with integer exponents will be derived, to the author's knowledge for the first time. These expressions for albedo can be used to place constraints on physics-based simulations of radiative transfer such as high-fidelity ray-tracing simulations.
Matsushima, Kyoji
2008-07-01
Rotational transformation based on coordinate rotation in Fourier space is a useful technique for simulating wave field propagation between nonparallel planes. This technique is characterized by fast computation because the transformation only requires executing a fast Fourier transform twice and a single interpolation. It is proved that the formula of the rotational transformation mathematically satisfies the Helmholtz equation. Moreover, to verify the formulation and its usefulness in wave optics, it is also demonstrated that the transformation makes it possible to reconstruct an image on arbitrarily tilted planes from a wave field captured experimentally by using digital holography.
An iris recognition algorithm based on DCT and GLCM
NASA Astrophysics Data System (ADS)
Feng, G.; Wu, Ye-qing
2008-04-01
With the expanding scope of human activity, personal identity verification is becoming more and more important, and many techniques have been proposed for this practical purpose. Conventional identity methods such as passwords and identification cards are not always reliable, and a wide variety of biometrics has been developed to address this challenge. Among biometric characteristics, the iris pattern has gained increasing attention for its stability, reliability, uniqueness, noninvasiveness and resistance to counterfeiting. These distinct merits give the iris high reliability for personal identification, and iris identification has become a hot research topic in recent years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT). To obtain more representative iris features, features are extracted from both the spatial domain and the DCT transform domain: GLCM and DCT are both applied to the iris image to form the feature sequence. Their combination makes the iris features more distinctive, as the extracted eigenvector reflects both spatial and frequency characteristics. Experimental results show that the algorithm is effective and feasible for iris recognition.
Yuan, Soe-Tsyr; Sun, Jerry
2005-10-01
Development of algorithms for automated text categorization in massive text document sets is an important research area of data mining and knowledge discovery. Most text-clustering methods are grounded in term-based measures of distance or similarity, ignoring the structure of the documents. In this paper, we present a novel method named structured cosine similarity (SCS) that furnishes document clustering with a new way of modeling document summarization, considering the structure of the documents so as to improve the performance of document clustering in terms of quality, stability, and efficiency. This study was motivated by the problem of clustering speech documents (which lack rich document features) obtained from wireless oral experience-sharing conducted by the mobile workforce of enterprises, fulfilling audio-based knowledge management. In other words, the problem aims to facilitate knowledge acquisition and sharing by speech. The evaluations show fairly promising results for our method of structured cosine similarity.
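For contrast with SCS, the plain term-based cosine similarity that the paper takes as its baseline can be sketched as follows (illustrative only; the vocabulary and documents are made up):

```python
import numpy as np

def term_vector(doc, vocab):
    # Bag-of-words term-count vector over a fixed vocabulary.
    words = doc.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (||a|| ||b||): 1 for parallel vectors, 0 for orthogonal.
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0
```

SCS extends this by weighting the comparison with the documents' summarized structure rather than treating each document as a flat bag of terms.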
A Fourier transform method for Vsin i estimations under nonlinear Limb-Darkening laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levenhagen, R. S., E-mail: ronaldo.levenhagen@gmail.com
Star rotation offers us a large horizon for the study of many important physical issues pertaining to stellar evolution. Currently, four methods are widely used to infer rotation velocities: those based on line-width calibrations, on the fitting of synthetic spectra, on interferometry, and on Fourier transforms (FTs) of line profiles. Almost all estimations of stellar projected rotation velocities using the Fourier method in the literature have been addressed with linear limb-darkening (LD) approximations during the evaluation of rotation profiles and their cosine FTs, which in certain cases lead to discrepant velocity estimates. In this work, we introduce new mathematical expressions for rotation profiles and their Fourier cosine transforms assuming three nonlinear LD laws (quadratic, square-root, and logarithmic) and study their applications with and without gravity-darkening (GD) and geometrical-flattening (GF) effects. Through an analysis of He I models in the visible range accounting for both limb and gravity darkening, we find that, for classical models without rotationally driven effects, all the Vsin i values are very close to each other. On the other hand, taking GD and GF into account, the linear law results in Vsin i values that are systematically smaller than those obtained with the other laws. Finally, we apply these expressions in the FT method to evaluate the projected rotation velocity of the emission B-type star Achernar (α Eri).
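The FT method locates the first zero σ₁ of the transform of the rotation profile; for the classical linear limb-darkening law with ε = 0.6, σ₁·V sin i ≈ 0.66 (in cycles per km/s times km/s). A numerical sketch under the linear law only (the paper's nonlinear-law expressions are not reproduced here):

```python
import numpy as np

def first_zero_sigma(vsini, eps=0.6, dv=0.05, nfft=1 << 20):
    # Linear limb-darkening rotational broadening profile (Gray's form).
    v = np.arange(-vsini, vsini + dv, dv)
    u = 1.0 - (v / vsini) ** 2
    u[u < 0] = 0.0
    g = (2 * (1 - eps) * np.sqrt(u) + 0.5 * np.pi * eps * u) / \
        (np.pi * vsini * (1 - eps / 3))
    G = np.abs(np.fft.rfft(g, n=nfft))       # zero-padded for fine sigma grid
    sigma = np.fft.rfftfreq(nfft, d=dv)      # cycles per (km/s)
    i = 1
    while G[i] <= G[i - 1]:                  # walk down to the first minimum
        i += 1
    return sigma[i - 1]
```

Inverting the relation, V sin i ≈ 0.66/σ₁; the paper's point is that this proportionality constant shifts under nonlinear LD laws and gravity darkening.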
Digital Platforms as Factor Transforming Management Models in Businesses and Industries
NASA Astrophysics Data System (ADS)
Dimitrakiev, D.; Molodchik, A. V.
2018-05-01
Increasingly, digital platforms are built into the value chain, acting as intermediaries between the manufacturer and the consumer. The paper presents the tendencies and features of business model transformation connected with the management of new digital technologies. The limitations of traditional business models and the capabilities of business models based on digital platforms and self-organization are revealed. The study confirms the viability of the new business model for the dental industry and offers a new concept of a branch-level self-organizing control system based on an information platform, blockchain, cryptocurrency and rewards for the target consumer, including mechanisms that make the model attractive for both the consumer and the service provider.
A Synthetic Quadrature Phase Detector/Demodulator for Fourier Transform Spectrometers
NASA Technical Reports Server (NTRS)
Campbell, Joel
2008-01-01
A method is developed to demodulate (velocity correct) Fourier transform spectrometer (FTS) data taken with an analog-to-digital converter that digitizes at equal spacing in time. This method makes it possible to use simple, low-cost, high-resolution audio digitizers to record high-quality data without the need for an event timer or quadrature laser hardware, and makes it possible to use a metrology laser of any wavelength. The reduced parts count and simple implementation make it an attractive alternative in space-based applications when compared to previous methods such as the Brault algorithm.
NASA Astrophysics Data System (ADS)
Li, Xiangyu; Huang, Zhanhua; Zhu, Meng; He, Jin; Zhang, Hao
2014-12-01
Hilbert transform (HT) is widely used in temporal speckle pattern interferometry, but errors from low modulations might propagate and corrupt the calculated phase. A spatio-temporal method for phase retrieval using temporal HT and spatial phase unwrapping is presented. In time domain, the wrapped phase difference between the initial and current states is directly determined by using HT. To avoid the influence of the low modulation intensity, the phase information between the two states is ignored. As a result, the phase unwrapping is shifted from time domain to space domain. A phase unwrapping algorithm based on discrete cosine transform is adopted by taking advantage of the information in adjacent pixels. An experiment is carried out with a Michelson-type interferometer to study the out-of-plane deformation field. High quality whole-field phase distribution maps with different fringe densities are obtained. Under the experimental conditions, the maximum number of fringes resolvable in a 416×416 frame is 30, which indicates a 15λ deformation along the direction of loading.
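The DCT-based spatial unwrapping step referred to above is commonly implemented as the Ghiglia-Romero least-squares algorithm, which solves a discrete Poisson equation with Neumann boundary conditions via the DCT. A sketch of that standard formulation (not necessarily the authors' exact variant):

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_dct(psi):
    """Least-squares 2D phase unwrapping (Ghiglia-Romero) via a DCT
    Poisson solve with Neumann boundary conditions."""
    M, N = psi.shape
    dy = wrap(np.diff(psi, axis=0))          # wrapped phase gradients
    dx = wrap(np.diff(psi, axis=1))
    rho = np.zeros((M, N))                   # divergence of the gradient field
    rho[:-1, :] += dy
    rho[1:, :] -= dy
    rho[:, :-1] += dx
    rho[:, 1:] -= dx
    c = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2 * (np.cos(np.pi * i / M) - 1) + 2 * (np.cos(np.pi * j / N) - 1)
    denom[0, 0] = 1.0                        # avoid 0/0 at the DC term
    c /= denom
    c[0, 0] = 0.0                            # the mean phase is unrecoverable
    return idctn(c, norm='ortho')
```

When the true per-pixel phase differences stay below π, the wrapped gradients equal the true gradients and the solve recovers the phase exactly up to an additive constant, which is why the method pairs well with the temporal HT step described above.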
NASA Astrophysics Data System (ADS)
Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan
2010-08-01
Collaborative filtering is one of the most successful recommendation techniques; it can effectively predict the possible future likes of users based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach uses the correlation between the ratings that two users give to a set of objects, such as the Cosine index or the Pearson correlation coefficient. However, the cost of computing this kind of index is relatively high, making it impractical to apply in huge systems. To solve this problem, in this paper we introduce six local-structure-based similarity indices and compare their performances with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data is dense, the structure-based indices perform competitively with the Cosine index, at lower computational complexity. Furthermore, when the data is sparse, the structure-based indices give even better results than the Cosine index.
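The abstract does not name the six local-structure indices, but a representative pair — the Cosine index on rating rows versus a common-neighbors count on the user-object bipartite graph — can be sketched as follows (the common-neighbors index is a stand-in example of a local-structure index, not necessarily one of the paper's six):

```python
import numpy as np

def cosine_index(R, u, v):
    # Cosine similarity between two users' rating rows of matrix R.
    a, b = R[u], R[v]
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

def common_neighbors_index(R, u, v):
    # Local-structure index: the number of objects both users have rated.
    return int(np.sum((R[u] > 0) & (R[v] > 0)))
```

The structural index only touches the (typically sparse) nonzero entries, which is the source of the computational saving the paper reports.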
NASA Technical Reports Server (NTRS)
Johnson, Dennis A. (Inventor)
1996-01-01
A laser doppler velocimeter uses frequency shifting of a laser beam to provide signal information for each velocity component. A composite electrical signal generated by a light detector is digitized and a processor produces a discrete Fourier transform based on the digitized electrical signal. The transform includes two peak frequencies corresponding to the two velocity components.
Artificial intelligence systems based on texture descriptors for vaccine development.
Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra
2011-02-01
The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine these descriptors with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.
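The local binary pattern (LBP) descriptor mentioned above thresholds each pixel's neighbors at the center value and packs the results into a code; histograms of these codes form the texture feature. A minimal 3x3 sketch (basic LBP, not the specific variants used in the paper; the clockwise bit order is an illustrative convention):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors at the
    center value and read them as an 8-bit code, clockwise from top-left."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i, j] >= c:
            code |= 1 << bit
    return code
```

A uniform patch maps to code 255 (all neighbors tie the center), while a strict central peak maps to 0; intermediate codes capture local texture micro-patterns.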
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anaya, O.; Moreno, G.E.L.; Madrigal, M.M.
1999-11-01
In recent years, several definitions of power have been proposed for more accurate measurement of electrical quantities in the presence of harmonic pollution on power lines. Nevertheless, only a few instruments have been constructed according to these definitions. This paper describes a new microcontroller-based digital instrument that implements definitions based on the Hartley transform. The algorithms are fully processed using the Fast Hartley Transform (FHT) on a 16-bit microcontroller platform. The constructed prototype was compared with a commercial harmonics analyzer.
NASA Astrophysics Data System (ADS)
Cintra, Renato J.; Bayer, Fábio M.
2017-12-01
In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.
Digital signal processing based on inverse scattering transform.
Turitsyna, Elena G; Turitsyn, Sergei K
2013-10-15
Through numerical modeling, we illustrate the possibility of a new approach to digital signal processing in coherent optical communications based on the application of the so-called inverse scattering transform. Considering without loss of generality a fiber link with normal dispersion and quadrature phase shift keying signal modulation, we demonstrate how an initial information pattern can be recovered (without direct backward propagation) through the calculation of nonlinear spectral data of the received optical signal.
Topographic correction realization based on the CBERS-02B image
NASA Astrophysics Data System (ADS)
Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua
2011-08-01
The special topography of mountain terrain induces retrieval distortion within the same land-cover species and in surface spectral lines. To improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model (DEM) data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30 m spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images such as QuickBird and Ikonos, but there is little related research on topographic correction of CBERS-02B images. In this study, Liaoning mountain terrain was taken as the study area. The original 15 m digital elevation model data were interpolated to 2.36 m resolution. The C, SCS+C, Minnaert and Ekstrand-r corrections were applied to correct the topographic effect, and the corrected results were compared. Scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard variance, slope of the scatter diagram, and separation factor were statistically calculated. The analysis shows that shadows are weakened in the corrected images relative to the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is diminished. The Minnaert correction method gives the most effective result. These findings demonstrate that the former correction methods can be successfully adapted to CBERS-02B images. 
The DEM data can be interpolated step by step to get the corresponding spatial resolution approximately for the condition that high spatial resolution elevation data is hard to get.
The digital transformation of oral health care. Teledentistry and electronic commerce.
Bauer, J C; Brown, W T
2001-02-01
Health care is being changed dramatically by the marriage of computers and telecommunications. Implications for hospitals and physicians already have received extensive media attention, but comparatively little has been said about the impact of information technology on dentistry. This article illustrates how the digital transformation will likely affect dentists and their patients. Based on recent experiences of hospitals and medical practices, dentists can expect to encounter revolutionary changes as a result of the digital transformation. The Internet, the World Wide Web and other developments of the information revolution will redefine patient care, referral relationships, practice management, quality, professional organizations and competition. To respond proactively to the digital transformation of oral health care, dentists must become familiar with its technologies and concepts. They must learn what new information technology can do for them and their patients and then develop creative applications that promote the profession and their approaches to care.
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
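The idea of reading a noise level off the high-frequency DCT coefficients of a low-variation patch can be sketched as follows. This is a minimal numpy illustration, not the paper's NLF recovery model: the block size, the frequency cutoff, and the MAD-based robust estimator are assumptions chosen for the sketch.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def estimate_noise_std(patch, min_freq=8):
    """Robust noise estimate from high-frequency 2-D DCT coefficients
    of a low-variation patch (median absolute deviation / 0.6745)."""
    n = patch.shape[0]
    C = dct_matrix(n)
    coef = C @ patch @ C.T                       # 2-D DCT-II
    freq = np.add.outer(np.arange(n), np.arange(n))
    hf = coef[freq >= min_freq]                  # high-frequency band only
    return float(np.median(np.abs(hf)) / 0.6745)

rng = np.random.default_rng(0)
patch = 100.0 + rng.normal(0.0, 10.0, (8, 8))    # flat patch + sigma = 10 noise
sigma_hat = estimate_noise_std(patch)
```

Because the DCT is orthonormal, white noise keeps its standard deviation in the transform domain, so the high-frequency coefficients of a nearly flat patch are essentially pure noise samples.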
Algebraic signal processing theory: 2-D spatial hexagonal lattice.
Püschel, Markus; Rötteler, Martin
2007-06-01
We develop the framework for signal processing on a spatial, or undirected, 2-D hexagonal lattice for both an infinite and a finite array of signal samples. This framework includes the proper notions of z-transform, boundary conditions, filtering or convolution, spectrum, frequency response, and Fourier transform. In the finite case, the Fourier transform is called discrete triangle transform. Like the hexagonal lattice, this transform is nonseparable. The derivation of the framework makes it a natural extension of the algebraic signal processing theory that we recently introduced. Namely, we construct the proper signal models, given by polynomial algebras, bottom-up from a suitable definition of hexagonal space shifts using a procedure provided by the algebraic theory. These signal models, in turn, then provide all the basic signal processing concepts. The framework developed in this paper is related to Mersereau's early work on hexagonal lattices in the same way as the discrete cosine and sine transforms are related to the discrete Fourier transform-a fact that will be made rigorous in this paper.
Flow to a well in a water-table aquifer: An improved laplace transform solution
Moench, A.F.
1996-01-01
An alternative Laplace transform solution for the problem, originally solved by Neuman, of constant discharge from a partially penetrating well in a water-table aquifer was obtained. The solution differs from existing solutions in that it is simpler in form and can be numerically inverted without the need for time-consuming numerical integration. The derivation involves the use of the Laplace transform and a finite Fourier cosine series and avoids the Hankel transform used in prior derivations. The solution allows for water in the overlying unsaturated zone to be released either instantaneously in response to a declining water table, as assumed by Neuman, or gradually, as approximated by Boulton's convolution integral. Numerical evaluation yields results identical with those obtained by previously published methods with the advantage, under most well-aquifer configurations, of much reduced computation time.
Making Room for the Transformation of Literacy Instruction in the Digital Classroom
ERIC Educational Resources Information Center
Sofkova Hashemi, Sylvana; Cederlund, Katarina
2017-01-01
Education is in the process of transforming traditional print-based instruction into digital formats. This multi-case study sheds light on the challenge of coping with the old and new in literacy teaching in the context of technology-mediated instruction in the early years of schooling (7-8 years old children). By investigating the relation…
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
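The quantization step described above can be sketched in a few lines. This is a generic JPEG-style illustration, not the patent's perceptually optimized matrix: the step sizes in Q below are hypothetical placeholders, chosen only to show how larger entries discard less visible detail.

```python
import numpy as np

# Hypothetical quantization matrix: a finer step for the DC term,
# coarser steps elsewhere (NOT the patent's luminance/contrast-masked matrix).
Q = np.full((8, 8), 16.0)
Q[0, 0] = 10.0

def quantize(dct_coef, Q):
    """Divide each DCT coefficient by its matrix entry and round to an integer."""
    return np.round(dct_coef / Q).astype(int)

def dequantize(q, Q):
    """Approximate reconstruction of the coefficients from the quantized values."""
    return q * Q

# Toy coefficient block: a large DC term, one mid AC term, one tiny AC term.
coef = np.zeros((8, 8))
coef[0, 0], coef[0, 1], coef[7, 7] = 200.0, 35.0, 5.0
q = quantize(coef, Q)
```

With these steps, the small high-frequency coefficient at (7, 7) quantizes to zero and is dropped, which is exactly how the quantization matrix trades bit rate against perceived quality.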
A Kinect based sign language recognition system using spatio-temporal features
NASA Astrophysics Data System (ADS)
Memiş, Abbas; Albayrak, Songül
2013-12-01
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, which is an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images and the temporal-domain features are transformed into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. The performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system shows promising success rates.
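The zigzag scanning step used above to pick up DCT coefficients can be sketched as follows; the JPEG-style anti-diagonal ordering shown here is a standard convention, assumed (not stated) to match the paper's exact scan.

```python
import numpy as np

def zigzag_indices(n=8):
    """JPEG-style zigzag order over an n x n block of DCT coefficients."""
    order = []
    for s in range(2 * n - 1):                 # anti-diagonals where i + j == s
        ij = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        # alternate the traversal direction on successive diagonals
        order.extend(ij if s % 2 else reversed(ij))
    return order

def zigzag_scan(block, k):
    """First k coefficients of the block in zigzag order (low to high frequency)."""
    return [int(block[i, j]) for i, j in zigzag_indices(block.shape[0])[:k]]

block = np.arange(64).reshape(8, 8)            # each value equals its row-major index
print(zigzag_scan(block, 10))                  # -> [0, 1, 8, 16, 9, 2, 3, 10, 17, 24]
```

Scanning the block this way puts the low-frequency coefficients, which carry most of the gesture energy, at the front of the feature vector.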
NASA Astrophysics Data System (ADS)
Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.
2015-03-01
Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
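The hard-thresholding DCT denoising mechanism referred to above can be sketched in a few lines of numpy. This is an illustrative single-block sketch under assumed settings (8x8 block, threshold beta = 2.7 times the noise sigma); the conventional DCT filter evaluated in the paper operates on overlapping blocks of a full image.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def dct_denoise(block, sigma, beta=2.7):
    """Hard-thresholding DCT denoising: zero AC coefficients below beta*sigma."""
    n = block.shape[0]
    C = dct_matrix(n)
    coef = C @ block @ C.T               # forward 2-D DCT
    mask = np.abs(coef) >= beta * sigma  # keep only significant coefficients
    mask[0, 0] = True                    # always keep the DC coefficient
    return C.T @ (coef * mask) @ C       # inverse 2-D DCT

rng = np.random.default_rng(1)
clean = np.full((8, 8), 100.0)
noisy = clean + rng.normal(0.0, 20.0, (8, 8))
denoised = dct_denoise(noisy, sigma=20.0)
```

For a smooth block, almost all AC coefficients fall below the threshold and are zeroed, so the reconstruction error drops well below that of the noisy input.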
A face and palmprint recognition approach based on discriminant DCT feature extraction.
Jing, Xiao-Yuan; Zhang, David
2004-12-01
In the field of image processing and recognition, the discrete cosine transform (DCT) and linear discrimination are two widely used techniques. Based on them, we present a new face and palmprint recognition approach in this paper. It first uses a two-dimensional separability judgment to select the DCT frequency bands with favorable linear separability. Then, from the selected bands, it extracts linear discriminative features by an improved Fisherface method and performs classification with the nearest neighbor classifier. We analyze the theoretical advantages of our approach to feature extraction in detail. Experiments on face databases and a palmprint database demonstrate that, compared to state-of-the-art linear discrimination methods, our approach obtains better classification performance. It can significantly improve the recognition rates for face and palmprint data and effectively reduce the dimension of the feature space.
Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features
NASA Astrophysics Data System (ADS)
Dixit, Rahul; Naskar, Ruchira
2018-03-01
In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescale) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary brings about such additional transforms on the duplicated regions. In this work, we utilize Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which help us to achieve invariance to all the above forms of attacks in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
New approaches to digital transformation of petrochemical production
NASA Astrophysics Data System (ADS)
Andieva, E. Y.; Kapelyuhovskaya, A. A.
2017-08-01
The newest concepts of the reference architecture for digital industrial transformation are considered, and the problems of applying them to enterprises whose life cycle includes oil-product processing and marketing are revealed. A reference-architecture concept is proposed that provides a systematic representation of the fundamental changes in production-management approaches based on the automation of production process control.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
Collaborative Wideband Compressed Signal Detection in Interplanetary Internet
NASA Astrophysics Data System (ADS)
Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei
2014-07-01
With the development of autonomous radio in deep space networks, it has become possible to establish communication between explorers, aircraft, rovers and satellites, e.g. from different countries, adopting different signal modes. The first task of such an autonomous radio is to detect explorer signals autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of the IPN Internet communication signal, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a fusion rule to gain spatial diversity. A pair of novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of the proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, the DCT and WHT based methods reduce computational complexity, decrease processing time, save energy and enhance the probability of detection.
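The Walsh-Hadamard transform mentioned above, attractive here because it needs only additions and subtractions, can be sketched with the standard in-place butterfly recursion. This is a generic fast-WHT illustration, not the paper's detection pipeline; the test signal is a made-up square-wave-like sequence.

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform (input length must be a power of two).
    Uses only additions/subtractions, plus one final orthonormal scaling."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):          # butterfly over blocks of size 2h
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)                      # orthonormal scaling

# A square-wave-like signal is sparse in the WHT domain:
sig = np.array([1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0])
spectrum = wht(sig)                            # energy lands in one coefficient
```

That sparsity in the transform domain is precisely what compressed spectrum detection exploits.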
Feng, Ziang; Gao, Zhan; Zhang, Xiaoqiong; Wang, Shengjia; Yang, Dong; Yuan, Hao; Qin, Jie
2015-09-01
Digital shearing speckle pattern interferometry (DSSPI) has been recognized as a practical tool for strain testing. A DSSPI system based on temporal analysis is attractive because of its ability to measure strain dynamically. In this paper, such a DSSPI system with a Wollaston prism has been built. The principles and system arrangement are described, and the preliminary experimental result of a displacement-derivative test on an aluminum plate is shown using the wavelet transform method and the Fourier transform method. Simulations have been conducted with the finite element method. The comparison of the results shows that quantitative measurement of the displacement derivative has been realized.
Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
NASA Astrophysics Data System (ADS)
Wong, M. L. Dennis; Goh, Antionette W.-T.; Chua, Hong Siang
Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed to provide a mechanism for securely authenticating digital medical images. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead of calculating coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show authentication capability. Possible improvements to the proposed scheme are also presented before the conclusions.
2017-03-20
sub-array, which is based on all-pass filters (APFs), is realized using 130 nm CMOS technology. Approximate-discrete Fourier transform (a-DFT...fixed beams are directed at known directions [9]. The proposed approximate-discrete Fourier transform (a-DFT) based multi-beamformer [9] yields L...to digital conversion daughter board. occurs in the discrete time domain (in ROACH-2 FPGA platform) following signal digitization (see Figs. 1(d) and
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. 
The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
On the application of under-decimated filter banks
NASA Astrophysics Data System (ADS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-11-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are performed to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of images reconstructed by the FGI algorithm decreases slowly, while the PSNRs of images reconstructed by the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
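The measure-then-pseudo-invert pipeline described above can be sketched in one dimension. This is an illustrative toy, not the paper's imaging system: the DCT matrix stands in for a generic deterministic cosine-type pattern set, and the "object" is a hypothetical sparse reflectivity profile.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; each row is one cosine illumination pattern."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

n = 16
A = dct_matrix(n)                  # full sampling: n deterministic patterns, n pixels
x = np.zeros(n)
x[3], x[9] = 1.0, 0.5              # hypothetical 1-D "object"
y = A @ x                          # one bucket-detector reading per pattern
x_hat = np.linalg.pinv(A) @ y      # pseudo-inverse reconstruction
```

With full sampling the measurement matrix has full rank, so the pseudo-inverse recovers the object exactly; the interesting regime studied in the abstract is what happens as the number of measurements drops below the pixel count.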
FPGA design of correlation-based pattern recognition
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2017-05-01
Optical/digital pattern recognition and tracking based on optical/digital correlation are well-known techniques for detecting, identifying and localizing a target object in a scene. Despite the limited number of operations required by the correlation scheme, computational time and resources are relatively high. The most computationally intensive operation required by the correlation is the transformation from the spatial to the spectral domain and then from the spectral to the spatial domain. Furthermore, these transformations are used in optical/digital encryption schemes such as the double random phase encryption (DRPE). In this paper, we present a VLSI architecture for the correlation scheme based on the fast Fourier transform (FFT). One interesting feature of the proposed scheme is its ability to stream image processing in order to perform correlation for video sequences. A trade-off between hardware consumption and the robustness of the correlation can be made in order to understand the limitations of implementing correlation on reconfigurable and portable platforms. Experimental results obtained from HDL simulations and an FPGA prototype demonstrate the advantages of the proposed scheme.
Electronic voltage and current transformers testing device.
Pan, Feng; Chen, Ruimin; Xiao, Yong; Sun, Weiming
2012-01-01
A method for testing electronic instrument transformers is described, including electronic voltage and current transformers (EVTs, ECTs) with both analog and digital outputs. A testing device prototype is developed. It is based on digital signal processing of the signals that are measured at the secondary outputs of the tested transformer and the reference transformer when the same excitation signal is fed to their primaries. The test that estimates the performance of the prototype has been carried out at the National Centre for High Voltage Measurement and the prototype is approved for testing transformers with precision class up to 0.2 at the industrial frequency (50 Hz or 60 Hz). The device is suitable for on-site testing due to its high accuracy, simple structure and low-cost hardware.
Digital differential confocal microscopy based on spatial shift transformation.
Liu, J; Wang, Y; Liu, C; Wilson, T; Wang, H; Tan, J
2014-11-01
Differential confocal microscopy is a particularly powerful surface profilometry technique in industrial metrology due to its high axial sensitivity and insensitivity to noise. However, the practical implementation of the technique requires the accurate positioning of point detectors in three dimensions. We describe a simple alternative based on spatial transformation of a through-focus series of images obtained from a homemade beam-scanning confocal microscope. This digital differential confocal microscopy approach is described and compared with the traditional differential confocal microscopy approach. The ease of use of the digital differential confocal microscopy system is illustrated by performing measurements on a 3D standard specimen. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
MEMS based digital transform spectrometers
NASA Astrophysics Data System (ADS)
Geller, Yariv; Ramani, Mouli
2005-09-01
Earlier this year, a new breed of Spectrometers based on Micro-Electro-Mechanical-System (MEMS) engines has been introduced to the commercial market. The use of these engines combined with transform mathematics, produces powerful spectrometers at unprecedented low cost in various spectral regions.
Microcomputer-Based Digital Signal Processing Laboratory Experiments.
ERIC Educational Resources Information Center
Tinari, Jr., Rocco; Rao, S. Sathyanarayan
1985-01-01
Describes a system (Apple II microcomputer interfaced to flexible, custom-designed digital hardware) which can provide: (1) Fast Fourier Transform (FFT) computation on real-time data with a video display of spectrum; (2) frequency synthesis experiments using the inverse FFT; and (3) real-time digital filtering experiments. (JN)
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing relies on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with a conventional selection of compressive measurements is described. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method estimates image sparsity effectively and provides a sound basis for selecting the number of compressive observations. They also show that, since the number of observations is chosen from the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
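The sparsity estimate described above, counting how many sorted DCT coefficients are needed to reach an energy threshold, can be sketched as follows. The 16x16 block size and the 0.99 threshold are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of order n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def dct_sparsity(img, energy_frac=0.99):
    """Smallest number of 2-D DCT coefficients holding energy_frac of the energy."""
    n = img.shape[0]
    C = dct_matrix(n)
    coef = (C @ img @ C.T).ravel()
    e = np.sort(coef ** 2)[::-1]              # coefficient energies, descending
    cum = np.cumsum(e) / e.sum()              # normalized cumulative energy
    return int(np.searchsorted(cum, energy_frac) + 1)

n = 16
C = dct_matrix(n)
img = np.outer(C[2], C[5])                    # image equal to one DCT basis function
print(dct_sparsity(img))                      # -> 1
```

A one-basis-function image is maximally sparse (one coefficient suffices), and the count grows with the number of active basis functions, which is what drives the adaptive choice of measurement count.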
A Classroom Note on Generating Examples for the Laws of Sines and Cosines from Pythagorean Triangles
ERIC Educational Resources Information Center
Sher, Lawrence; Sher, David
2007-01-01
By selecting certain special triangles, students can learn about the laws of sines and cosines without wrestling with long decimal representations or irrational numbers. Since the law of cosines requires only one of the three angles of a triangle, there are many examples of triangles with integral sides and a cosine that can be represented exactly…
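One such special triangle is the 13-14-15 triangle, formed by gluing the 5-12-13 and 9-12-15 right triangles along their common leg of length 12: all three of its cosines come out as exact rationals. A short sketch with Python's exact rational arithmetic:

```python
from fractions import Fraction

def cos_opposite(a, b, c):
    """Law of cosines: cosine of the angle opposite side c, exact in rationals."""
    return Fraction(a * a + b * b - c * c, 2 * a * b)

# The 13-14-15 triangle built from the 5-12-13 and 9-12-15 Pythagorean triangles:
print(cos_opposite(13, 14, 15))   # -> 5/13
print(cos_opposite(14, 15, 13))   # -> 3/5
print(cos_opposite(13, 15, 14))   # -> 33/65
```

No irrational numbers or long decimals appear at any point, which is exactly the pedagogical advantage the note describes.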
A Thin Lens Model for Charged-Particle RF Accelerating Gaps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
Presented is a thin-lens model for an RF accelerating gap that considers general axial fields without energy dependence or other a priori assumptions. Both the cosine and sine transit time factors (i.e., Fourier transforms) are required, plus two additional functions: the Hilbert transforms of the transit-time factors. The combination yields a complex-valued Hamiltonian rotating in the complex plane with the synchronous phase. Using Hamiltonians, the phase and energy gains are computed independently in the pre-gap and post-gap regions and then aligned using the asymptotic values of the wave number. Derivations of these results are outlined, examples are shown, and simulations with the model are presented.
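The transit-time factors referred to above are conventionally defined as normalized Fourier cosine and sine transforms of the on-axis field; a standard form (an assumed convention here, as the report's exact normalization is not quoted) is

```latex
T(k) = \frac{\displaystyle\int_{-\infty}^{\infty} E_z(0,z)\,\cos(kz)\,\mathrm{d}z}
            {\displaystyle\int_{-\infty}^{\infty} E_z(0,z)\,\mathrm{d}z},
\qquad
S(k) = \frac{\displaystyle\int_{-\infty}^{\infty} E_z(0,z)\,\sin(kz)\,\mathrm{d}z}
            {\displaystyle\int_{-\infty}^{\infty} E_z(0,z)\,\mathrm{d}z},
```

with the two additional functions of the abstract being the Hilbert transforms of T and S evaluated at the same wave number k.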
Steganographic embedding in containers-images
NASA Astrophysics Data System (ADS)
Nikishova, A. V.; Omelchenko, T. A.; Makedonskij, S. A.
2018-05-01
Steganography is one of the approaches to protecting information transmitted over a network, but the steganographic method should vary depending on the container used. According to statistics, the most widely used containers are images, and the most common image format is JPEG. The authors propose a method of embedding data into the frequency domain of images in the JPEG 2000 format. It is proposed to use the method of Benham-Memon-Yeo-Yeung, in which the discrete wavelet transform is used instead of the discrete cosine transform. Two requirements for images are formulated. Structural similarity is chosen as the quality assessment of data embedding. Experiments confirm that satisfying the requirements allows achieving a high quality assessment of data embedding.
NASA Astrophysics Data System (ADS)
Tsai, Ko-Fan; Chu, Shu-Chun
2018-03-01
This study proposes a complete and unified method for the selective excitation of any specified nearly nondiffracting Helmholtz-Gauss (HzG) beam in end-pumped solid-state digital lasers. Four types of HzG beams: cosine-Gauss beams, Bessel-Gauss beams, Mathieu-Gauss beams, and, in particular, parabolic-Gauss beams are successfully demonstrated to be generated with the proposed method. To the best of the authors' knowledge, parabolic-Gauss beams have not yet been directly generated from any kind of laser system. The numerical results of this study show that one can achieve any lasing HzG beam directly from a solid-state digital laser with only added control of the transverse position of the laser gain provided by off-axis end pumping. This study also presents a practical digital laser set-up for easily manipulating off-axis pumping in order to achieve control of the transverse gain position in digital lasers. The reported results advance digital lasers in dynamically generating nondiffracting beams. The control of the digital laser cavity gain position creates the possibility of achieving real-time selection of more laser modes in digital lasers, and it is worth further investigation in the future.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for High Efficiency Video Coding (HEVC) hardware-friendly implementation, where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of order 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed by using a pseudo-entropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
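The butterfly structure described above can be sketched for a 4-point WHT. This is an illustrative decomposition (the function name and staging are not the authors' exact design), showing how the transform needs only additions and subtractions:

```python
import numpy as np

def wht4(x):
    """4-point Walsh-Hadamard transform via a two-stage butterfly.

    Only additions and subtractions are needed, which is the
    hardware-friendly property exploited for distortion estimation.
    """
    # Stage 1: combine elements two apart
    a = x[0] + x[2]
    b = x[1] + x[3]
    c = x[0] - x[2]
    d = x[1] - x[3]
    # Stage 2: combine the partial sums
    return np.array([a + b, a - b, c + d, c - d])

# Equivalent to multiplying by the 4x4 Hadamard matrix:
x = np.array([1, 2, 3, 4])
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])
y = wht4(x)
```

The butterfly uses 8 additions/subtractions instead of the 16 multiply-accumulates of a direct matrix product, which is why such structures suit low-power hardware.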
Personalized Medicine in Veterans with Traumatic Brain Injuries
2012-05-01
In the DA top 50th percentile, probe sets were selected based on cosine correlation of row-mean-centered log2 signal values. Probe set clustering was performed in GeneMaths XT, following row mean centering of log2-transformed MAS5.0 signal values, by hierarchical clustering with the UPGMA algorithm using cosine correlation as the similarity metric. Results are presented as a heat map.
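The clustering step described can be sketched with SciPy, where UPGMA corresponds to average linkage and cosine correlation on row-mean-centered rows maps to the `cosine` distance (synthetic data and cluster count are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy expression matrix: rows are probe sets, columns are samples
# (synthetic values standing in for row-mean-centered log2 signals).
rng = np.random.default_rng(0)
data = rng.normal(size=(10, 6))
centered = data - data.mean(axis=1, keepdims=True)  # row mean centering

# UPGMA = average-linkage hierarchical clustering; cosine distance on
# mean-centered rows corresponds to 1 - (cosine) correlation.
dist = pdist(centered, metric='cosine')
tree = linkage(dist, method='average')
labels = fcluster(tree, t=2, criterion='maxclust')
```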
NASA Astrophysics Data System (ADS)
Kawabata, Kiyoshi
2016-12-01
This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering with at least 15-digit accuracy by making use of the double exponential formula (DE formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011), while simultaneously taking precautionary measures to minimize the effects of loss of significant digits, particularly in cases of near-conservative scattering, and of errors in values returned by library functions supplied by the compilers in use. The results of our calculations are presented for 18 selected values of the single-scattering albedo π0 and 22 values of an angular variable μ, the cosine of the zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
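A minimal sketch of the DE (tanh-sinh) quadrature idea, with illustrative truncation parameters rather than the authors' production scheme: the substitution x = tanh((π/2) sinh t) makes the trapezoidal rule converge double-exponentially fast.

```python
import numpy as np

def de_quadrature(f, n=40, h=0.1):
    """Tanh-sinh (double exponential) quadrature on (-1, 1).

    Sketch of the Takahashi-Mori DE formula: apply the trapezoidal
    rule to the transformed integrand. n and h are illustrative.
    """
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = np.tanh(0.5 * np.pi * np.sinh(t))
        # Weight = dx/dt for the DE change of variables
        w = 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
        total += f(x) * w
    return h * total

# Example: integral of exp(x) over (-1, 1) equals e - 1/e.
approx = de_quadrature(np.exp)
```

Even with this modest number of nodes, the result agrees with the exact value to well beyond 9 digits, which illustrates why the DE formula supports the paper's 15-digit goal.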
Latent semantic analysis cosines as a cognitive similarity measure: Evidence from priming studies.
Günther, Fritz; Dudschig, Carolin; Kaup, Barbara
2016-01-01
In distributional semantics models (DSMs) such as latent semantic analysis (LSA), words are represented as vectors in a high-dimensional vector space. This allows for computing word similarities as the cosine of the angle between two such vectors. In two experiments, we investigated whether LSA cosine similarities predict priming effects, in that higher cosine similarities are associated with shorter reaction times (RTs). Critically, we applied a pseudo-random procedure in generating the item material to ensure that we directly manipulated LSA cosines as an independent variable. We employed two lexical priming experiments with lexical decision tasks (LDTs). In Experiment 1 we presented participants with 200 different prime words, each paired with one unique target. We found a significant effect of cosine similarities on RTs. The same was true for Experiment 2, where we reversed the prime-target order (primes of Experiment 1 were targets in Experiment 2, and vice versa). The results of these experiments confirm that LSA cosine similarities can predict priming effects, supporting the view that they are psychologically relevant. The present study thereby provides evidence for qualifying LSA cosine similarities not only as a linguistic measure, but also as a cognitive similarity measure. However, it is also shown that other DSMs can outperform LSA as a predictor of priming effects.
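The cosine similarity manipulated as the independent variable above is just the normalized dot product of two word vectors. The vectors below are toy illustrations, not real LSA output:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical 5-dimensional "semantic" vectors.
doctor = np.array([0.8, 0.1, 0.4, 0.0, 0.2])
nurse  = np.array([0.7, 0.2, 0.5, 0.1, 0.1])
apple  = np.array([0.0, 0.9, 0.1, 0.6, 0.0])

sim_related = cosine_similarity(doctor, nurse)     # semantically close pair
sim_unrelated = cosine_similarity(doctor, apple)   # distant pair
```

Under the priming account tested in the experiments, a prime-target pair like (doctor, nurse), with the higher cosine, should yield shorter reaction times than a pair like (doctor, apple).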
Vibration Power Flow In A Constrained Layer Damping Cylindrical Shell
NASA Astrophysics Data System (ADS)
Wang, Yun; Zheng, Gangtie
2012-07-01
In this paper, the vibration power flow in a constrained layer damping (CLD) cylindrical shell is investigated using a wave propagation approach. The dynamic equations of the shell are derived with the Hamilton principle in conjunction with the Donnell shell assumption. With these equations, the dynamic responses of the system under a line circumferential cosine harmonic exciting force are obtained by employing the Fourier transform and the residue theorem. The vibration power flows input to the system and transmitted along the shell axial direction are both studied. The results show that the input power flow varies with driving frequency and circumferential mode order, and that the constrained damping layer can markedly restrict the exciting force from inputting power flow into the base shell, especially for a thicker viscoelastic layer, a thicker or stiffer constraining layer (CL), or a higher circumferential mode order, and can rapidly attenuate the vibration power flow transmitted along the axial direction of the base shell.
Resolution enhancement of low-quality videos using a high-resolution frame
NASA Astrophysics Data System (ADS)
Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.
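The compactness the reduced search dimensionality relies on can be seen in a small sketch (hypothetical 8x8 patch; SciPy's `dctn` provides the 2-D DCT):

```python
import numpy as np
from scipy.fft import dctn, idctn

# An 8x8 image patch (synthetic gradient, standing in for a video block).
patch = np.add.outer(np.arange(8), np.arange(8)).astype(float)

# Orthonormal 2-D DCT: energy concentrates in a few low-frequency
# coefficients, so patch matching can search a much smaller subspace.
coeffs = dctn(patch, norm='ortho')
reconstructed = idctn(coeffs, norm='ortho')
```

For natural image patches most of the energy sits in the top-left (low-frequency) corner of `coeffs`, so nearest-neighbor search over truncated DCT coefficients is far cheaper than search over raw pixels.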
A review on "A Novel Technique for Image Steganography Based on Block-DCT and Huffman Encoding"
NASA Astrophysics Data System (ADS)
Das, Rig; Tuithung, Themrichon
2013-03-01
This paper reviews the embedding and extraction algorithm proposed by A. Nag, S. Biswas, D. Sarkar and P. P. Sarkar in "A Novel Technique for Image Steganography based on Block-DCT and Huffman Encoding" (International Journal of Computer Science and Information Technology, Volume 2, Number 3, June 2010) [3] and shows that extraction of the secret image is not possible with the algorithm proposed in [3]. An 8-bit cover image is divided into disjoint blocks, and a two-dimensional Discrete Cosine Transformation (2-D DCT) is performed on each of the blocks. Huffman encoding is performed on an 8-bit secret image, and each bit of the Huffman-encoded bit stream is embedded in the frequency domain by altering the LSB of the DCT coefficients of the cover image blocks. The Huffman Encoded Bit Stream and Huffman Table
Alternate forms of the associated Legendre functions for use in geomagnetic modeling.
Alldredge, L.R.; Benton, E.R.
1986-01-01
An inconvenience attending the traditional use of associated Legendre functions in global modeling is that the functions are not separable with respect to the two indices (order and degree). In 1973 Merilees suggested a way to avoid the problem by showing that associated Legendre functions of order m and degree m+k can be expressed in terms of elementary functions. This note calls attention to some possible gains in time savings and accuracy in geomagnetic modeling based upon this form. For this purpose, expansions of associated Legendre polynomials in terms of sines and cosines of multiple angles are displayed up to degree and order 10. Examples are also given explaining how some surface spherical harmonics can be transformed into true Fourier series for selected polar great circle paths.
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to one-tenth of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
On the Use of Quartic Force Fields in Variational Calculations
NASA Technical Reports Server (NTRS)
Fortenberry, Ryan C.; Huang, Xinchuan; Yachmenev, Andrey; Thiel, Walter; Lee, Timothy J.
2013-01-01
The use of quartic force fields (QFFs) has been shown to be one of the most effective ways to efficiently compute vibrational frequencies for small molecules. In this paper we outline and discuss how the simple-internal or bond-length bond-angle (BLBA) coordinates can be transformed into Morse-cosine(-sine) coordinates, which produce potential energy surfaces from QFFs that possess proper limiting behavior and can effectively describe the vibrational (or rovibrational) energy levels of an arbitrary molecular system. We investigate parameter scaling in the Morse coordinate, symmetry considerations, and examples of transformed QFFs making use of the MULTIMODE, TROVE, and VTET variational vibrational methods. Cases are referenced where variational computations coupled with transformed QFFs produce fundamental frequencies accurate, relative to experiment, to on the order of 5 cm^-1 and often as good as 1 cm^-1.
Coordinate transformation by minimizing correlations between parameters
NASA Technical Reports Server (NTRS)
Kumar, M.
1972-01-01
This investigation determined the transformation parameters (three rotations, three translations and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well-separated transformation parameters with reduced mutual correlations, a problem especially relevant when the sets of coordinates are not well distributed. This objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters in a solution in which the rotations and the scale factor enter as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
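The fact exploited in the preliminary step, that chord distances between common points do not change under translation, can be checked numerically (synthetic points and translation, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(5, 3))            # coordinates in system A
translation = np.array([100.0, -50.0, 7.0])
shifted = points + translation              # same points, translated origin

def chord_distances(p):
    """Matrix of pairwise chord distances between common points."""
    diffs = p[:, None, :] - p[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# Chord distances (and hence a scale factor estimated from their
# ratios) are unaffected by the three translation parameters.
d_a = chord_distances(points)
d_b = chord_distances(shifted)
```

This is why the scale factor and rotations can be solved first, decoupled from the translations, before the full seven-parameter constrained solution.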
Electronic Voltage and Current Transformers Testing Device
Pan, Feng; Chen, Ruimin; Xiao, Yong; Sun, Weiming
2012-01-01
A method for testing electronic instrument transformers is described, including electronic voltage and current transformers (EVTs, ECTs) with both analog and digital outputs. A testing device prototype is developed. It is based on digital signal processing of the signals measured at the secondary outputs of the tested transformer and the reference transformer when the same excitation signal is fed to their primaries. Tests estimating the performance of the prototype have been carried out at the National Centre for High Voltage Measurement, and the prototype is approved for testing transformers with precision class up to 0.2 at the industrial frequency (50 Hz or 60 Hz). The device is suitable for on-site testing due to its high accuracy, simple structure and low-cost hardware. PMID:22368510
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-03-01
The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.
The use of LANDSAT digital data to detect and monitor vegetation water deficiencies. [South Dakota
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A.
1977-01-01
A technique devised using a vector transformation of LANDSAT digital data to indicate when vegetation is undergoing moisture stress is described. A relation established between the remote sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index) is discussed.
A new DWT/MC/DPCM video compression framework based on EBCOT
NASA Astrophysics Data System (ADS)
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM approach is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This extends the application of EBCOT from still images to video. Secondly, the framework offers a good interface for a Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), in which the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Preliminary results are reported and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that, under the specified conditions, the proposed coder outperforms the benchmarks in terms of rate versus distortion.
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for brain radiosurgery. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second generation curvelet transform and a novel content selection strategy. The proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
Perceptual Optimization of DCT Color Quantization Matrices
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Statler, Irving C. (Technical Monitor)
1994-01-01
Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
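Uniform quantization with a quantization matrix, the operation whose design the paper optimizes, can be sketched as follows. The matrix values here are illustrative placeholders, not a perceptually optimized result:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A toy 8x8 quantization matrix: coarser quantization at higher
# frequencies (real perceptual matrices come from a visibility model).
Q = 16 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

coeffs = dctn(block, norm='ortho')
quantized = np.round(coeffs / Q)            # uniform quantization
dequantized = quantized * Q                 # decoder-side reconstruction
recon = idctn(dequantized, norm='ortho')

distortion = np.mean((recon - block) ** 2)  # per-block MSE
```

Because the orthonormal DCT preserves energy, the spatial-domain MSE equals the transform-domain MSE, and each coefficient error is bounded by half its quantization step; the optimization in the paper trades this distortion against the bit rate implied by the quantized coefficients.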
NASA Astrophysics Data System (ADS)
Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun
2018-03-01
Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high frequency noises and power frequency noises (50 Hz), which could considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low pass filters, which are ineffective when seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results have shown that the EMD-cosine function denoising can effectively estimate high frequency and power frequency noises and can be easily adapted to perform on signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the AIC picker, and the comparisons also show more reliable source localization results when the ECD-AIC picker is applied, demonstrating the potential of this combined P-phase picking technique.
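The AIC stage can be sketched with the standard variance-partition form of the picker. This is a generic formulation; the ECD-AIC preprocessing with EMD and cosine-function denoising is omitted:

```python
import numpy as np

def aic_pick(x):
    """Simple AIC-based onset picker.

    AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:]))
    The arrival is taken at the global minimum of AIC, where the
    split into "noise" and "signal" segments fits best.
    """
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: low-amplitude noise, then a higher-amplitude
# "arrival" starting at sample 200.
rng = np.random.default_rng(0)
trace = np.concatenate([0.05 * rng.normal(size=200),
                        1.0 * rng.normal(size=200)])
pick = aic_pick(trace)
```

On clean variance changes like this the AIC minimum lands within a few samples of the true onset; the denoising stages in the paper exist precisely to restore such a clean change when real noise is present.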
NASA Astrophysics Data System (ADS)
Jiang, Fan; Rossi, Mathieu; Parent, Guillaume
2018-05-01
Accurately modeling the anisotropic behavior of electrical steel is mandatory in order to perform good end simulations. Several approaches for that purpose can be found in the literature, but those methods are often unable to deal with grain-oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain-oriented laminations. In particular, two solutions are proposed to increase the accuracy of the results. The first consists in increasing the number of terms in the cosine series decomposition on which the method is based. The second consists in modifying the method used to determine the terms of this cosine series.
Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.
Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang
2017-07-01
It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer
NASA Technical Reports Server (NTRS)
Jamot, Robert F.; Monroe, Ryan M.
2012-01-01
With present concern for ecological sustainability ever increasing, it is desirable to model the composition of Earth's upper atmosphere accurately with regard to certain helpful and harmful chemicals, such as greenhouse gases and ozone. The microwave limb sounder (MLS) is an instrument designed to map the global day-to-day concentrations of key atmospheric constituents continuously. One important component of MLS is the spectrometer, which processes the raw data provided by the receivers into frequency-domain information that can not only be transmitted more efficiently but also be processed directly once received. The present-generation spectrometer is fully analog; the goal is to include a fully digital spectrometer in the next-generation sensor. In a digital spectrometer, incoming analog data must be converted into a digital format, processed through a Fourier transform, and finally accumulated to reduce the impact of input noise. While the final design will be placed on an application-specific integrated circuit (ASIC), fabricating these chips is prohibitively expensive, so this design was constructed on a field-programmable gate array (FPGA). A family of state-of-the-art digital Fourier transform spectrometers has been developed, with a combination of high bandwidth and fine resolution. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved analog-to-digital converters (ADCs). This 6-Gsps (gigasamples per second) digital representation of the analog signal is then processed through an FPGA-based streaming fast Fourier transform (FFT). Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers.
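The core spectrometer pipeline (transform, square-law detect, accumulate) can be sketched in NumPy. Parameters and the input signal are illustrative; the flight design is a streaming FPGA FFT rather than batch software:

```python
import numpy as np

def power_spectrum(samples, fft_len=1024, n_avg=64):
    """Digital FFT spectrometer sketch: window, transform, square,
    and accumulate many spectra to beat down input noise.
    """
    window = np.hanning(fft_len)
    acc = np.zeros(fft_len // 2 + 1)
    for i in range(n_avg):
        frame = samples[i * fft_len:(i + 1) * fft_len]
        spec = np.fft.rfft(frame * window)
        acc += np.abs(spec) ** 2
    return acc / n_avg

# Synthetic input: a weak tone buried in noise. Averaging 64 spectra
# makes the tone stand out clearly at its frequency bin.
fs, f0 = 1.0e6, 125e3   # hypothetical sample rate and tone frequency
t = np.arange(1024 * 64) / fs
rng = np.random.default_rng(0)
x = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(size=t.size)
spec = power_spectrum(x)
peak_bin = int(np.argmax(spec[1:]) + 1)  # skip the DC bin
```

Accumulation reduces the variance of each spectral estimate roughly as 1/n_avg, which is the role the hardware accumulator plays after the streaming FFT.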
Discrete Walsh Hadamard transform based visible watermarking technique for digital color images
NASA Astrophysics Data System (ADS)
Santhi, V.; Thangavelu, Arunkumar
2011-10-01
As the size of the Internet grows enormously, illegal manipulation of digital multimedia data has become very easy with the advancement of technology tools. To protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper a new Discrete Walsh-Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiled manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks, and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min
2018-02-01
Digital holography is a promising measurement method in the fields of bio-medicine and micro-electronics, but the captured holograms are severely polluted by speckle noise caused by optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. This algorithm is shift-invariant and capable of obtaining sparse representations of the diffracted signals of salient features; thus it is well suited for multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. This method can effectively remove the speckle noise without destroying the salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration and Fresnel wavelet transform with compressive sensing methods to validate its remarkable superiority in topography reconstruction and speckle removal.
The use of Landsat digital data to detect and monitor vegetation water deficiencies
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A.
1977-01-01
In the Large Area Crop Inventory Experiment a technique was devised using a vector transformation of Landsat digital data to indicate when vegetation is undergoing moisture stress. A relation was established between the remote-sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index).
ERIC Educational Resources Information Center
Sharp, Laurie A.
2018-01-01
Technology has transformed learning at the postsecondary level and significantly increased the prevalence of digital learning environments. As adult educators approach instructional design, they must consider how to apply research-based practices that preserve the quality of instruction and provide adult learners with technology-based instruction…
An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution
NASA Astrophysics Data System (ADS)
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-09-01
In order to improve data hiding in multimedia formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. The method proves to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for those bit counts, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n<=4, because of the better PSNR achieved by this technique. The final results obtained after the steganography process do not reveal the presence of any hidden message, thus satisfying the criterion of message imperceptibility.
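A minimal LSB-substitution sketch for images follows; the paper's noise-based secret key and the audio DCT/DWT variants are omitted, and all names and values are illustrative:

```python
import numpy as np

def embed_lsb(cover, bits, n=1):
    """Replace the n least significant bits of each cover byte with
    message bits (bare LSB substitution, without the secret key)."""
    stego = cover.copy()
    mask = np.uint8((0xFF << n) & 0xFF)   # clears the n low bits
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & mask) | b
    return stego

def extract_lsb(stego, count, n=1):
    """Read back the n low bits of the first `count` stego bytes."""
    return [int(stego[i] & ((1 << n) - 1)) for i in range(count)]

cover = np.array([200, 13, 97, 54, 128, 255, 0, 66], dtype=np.uint8)
message = [1, 0, 1, 1, 0, 1, 0, 0]        # one bit per byte (n = 1)
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

With n = 1 each pixel changes by at most 1 gray level, which is why the stego-image stays visually indistinguishable; raising n to 4, as in the paper's tests, trades capacity against PSNR.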
Sayago, Ana; Asuero, Agustin G
2006-09-14
A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take the linearizing transformation into account. The method may be considered a useful alternative to methods in which one variable appears on both sides of the basic equation (i.e., those of Heller and Schwarzenbach, Likussar, and Adsul and Ramanathan). Classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are: it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log β11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error at a minimum in both cases.
A method for optimizing the cosine response of solar UV diffusers
NASA Astrophysics Data System (ADS)
Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki
2013-07-01
Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with an angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication, as compared to purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors of these two detectors, which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance, were calculated to be f2 = 1.4% and 0.66%, respectively.
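The integrated cosine error can be evaluated numerically from a known angular response. This sketch uses a common definition weighted by isotropic sky radiance (the authors' exact formula may differ), applied to a hypothetical diffuser response:

```python
import numpy as np

def integrated_cosine_error(response, n=2000):
    """Integrated cosine error f2: relative measurement error of a
    detector with angular response `response(theta)` under isotropic
    sky radiance. A common definition; normalizations vary."""
    # Midpoint grid over zenith angles (0, pi/2), avoiding the endpoints
    theta = (np.arange(n) + 0.5) * (np.pi / 2) / n
    ideal = np.cos(theta)
    weight = np.cos(theta) * np.sin(theta)   # isotropic sky weighting
    err = np.abs(response(theta) / ideal - 1.0)
    return float(np.sum(err * weight) / np.sum(weight))

# A hypothetical diffuser whose response falls off slightly faster
# than the ideal cosine:
f2 = integrated_cosine_error(lambda th: np.cos(th) ** 1.05)
f2_ideal = integrated_cosine_error(np.cos)   # perfect cosine response
```

A perfectly cosine-shaped response gives f2 = 0; the simulated and fabricated diffusers in the paper are judged by how small this weighted deviation is.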
Will the digital computer transform classical mathematics?
Rotman, Brian
2003-08-15
Mathematics and machines have influenced each other for millennia. The advent of the digital computer introduced a powerfully new element that promises to transform the relation between them. This paper outlines the thesis that the effect of the digital computer on mathematics, already widespread, is likely to be radical and far-reaching. To articulate this claim, an abstract model of doing mathematics is introduced based on a triad of actors of which one, the 'agent', corresponds to the function performed by the computer. The model is used to frame two sorts of transformation. The first is pragmatic and involves the alterations and progressive colonization of the content and methods of enquiry of various mathematical fields brought about by digital methods. The second is conceptual and concerns a fundamental antagonism between the infinity enshrined in classical mathematics and physics (continuity, real numbers, asymptotic definitions) and the inherently real and material limit of processes associated with digital computation. An example which lies in the intersection of classical mathematics and computer science, the P=NP problem, is analysed in the light of this latter issue.
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered on the Harris corners extracted from the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, moving artifacts were removed with sub-pixel precision, and the time consumed is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
Digital Transformation of Words in Learning Processes: A Critical View.
ERIC Educational Resources Information Center
Saga, Hiroo
1999-01-01
Presents some negative aspects of society's dependence on digital transformation of words by referring to works by Walter Ong and Martin Heidegger. Discusses orality, literacy and digital literacy and describes three aspects of the digital transformation of words. Compares/contrasts art with technology and discusses implications for education.…
A closed form solution for constant flux pumping in a well under partial penetration condition
NASA Astrophysics Data System (ADS)
Yang, Shaw-Yang; Yeh, Hund-Der; Chiu, Pin-Yuan
2006-05-01
An analytical model for the constant flux pumping test is developed in a radial confined aquifer system with a partially penetrating well. The Laplace domain solution is derived by the application of the Laplace transforms with respect to time and the finite Fourier cosine transforms with respect to the vertical coordinates. A time domain solution is obtained using the inverse Laplace transforms, convolution theorem, and Bromwich integral method. The effect of partial penetration is apparent if the test well is completed with a short screen. An aquifer thickness 100 times larger than the screen length of the well can be considered as infinite. This solution can be used to investigate the effects of screen length and location on the drawdown distribution in a radial confined aquifer system and to produce type curves for the estimation of aquifer parameters with field pumping drawdown data.
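The finite Fourier cosine transform applied to the vertical coordinate has the standard pair (textbook notation for an aquifer of thickness b, not necessarily the paper's symbols):

```latex
F_c(n) = \int_0^b f(z)\,\cos\!\left(\frac{n\pi z}{b}\right)\mathrm{d}z, \qquad n = 0, 1, 2, \ldots
f(z) = \frac{1}{b}\,F_c(0) + \frac{2}{b}\sum_{n=1}^{\infty} F_c(n)\,\cos\!\left(\frac{n\pi z}{b}\right)
```

The cosine kernel matches no-flow (Neumann) conditions at the top and bottom of the aquifer, which is why it pairs naturally with the Laplace transform in time for this boundary-value problem.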
Gray-scale transform and evaluation for digital x-ray chest images on CRT monitor
NASA Astrophysics Data System (ADS)
Furukawa, Isao; Suzuki, Junji; Ono, Sadayasu; Kitamura, Masayuki; Ando, Yutaka
1997-04-01
In this paper, an experimental evaluation of a super high definition (SHD) imaging system for digital x-ray chest images is presented. The SHD imaging system is proposed as a platform for integrating conventional image media. We are involved in the use of SHD images in the total digitizing of medical records that include chest x-rays and pathological microscopic images, both of which demand the highest level of quality among the various types of medical images. SHD images use progressive scanning and have a spatial resolution of 2000 by 2000 pixels or more and a temporal resolution (frame rate) of 60 frames/sec or more. For displaying medical x-ray images on a CRT, we derived gray scale transform characteristics based on radiologists' comments during the experiment, and elucidated the relationship between that gray scale transform and the linearization transform for maintaining the linear relationship with the luminance of film on a light box (luminance linear transform). We then carried out viewing experiments based on a five-stage evaluation. Nine radiologists participated in our experiment, and the ten cases evaluated included pulmonary fibrosis, lung cancer, and pneumonia. The experimental results indicated that conventional film images and those on super high definition CRT monitors have nearly the same quality. They also show that the gray scale transform for CRT images decided according to radiologists' comments agrees with the luminance linear transform in the high luminance region. In the low luminance region, it was found that the gray scale transform had level-expansion characteristics, increasing the number of levels that can be expressed.
Digital transceiver implementation for wavelet packet modulation
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.; Dill, Jeffrey C.
1998-03-01
Current transceiver designs for wavelet-based communication systems typically rely on analog waveform synthesis; however, digital processing is an important part of the eventual success of these techniques. In this paper, a transceiver implementation is presented for the recently introduced wavelet packet modulation scheme, moving the analog processing as far as possible toward the antenna. The transceiver is based on the discrete wavelet packet transform, which incorporates level and node parameters for generalized computation of wavelet packets. In this transform, no particular structure is imposed on the filter bank save dyadic branching and a maximum level, which is specified a priori and depends mainly on speed and/or cost considerations. The transmitter/receiver structure takes a binary sequence as input and, based on the desired time-frequency partitioning, processes the signal through demultiplexing, synthesis, analysis, multiplexing, and data determination completely in the digital domain, with the exception of conversion in and out of the analog domain for transmission.
Poston, Brach; Danna-Dos Santos, Alessander; Jesunathadas, Mark; Hamm, Thomas M; Santello, Marco
2010-08-01
The ability to modulate digit forces during grasping relies on the coordination of multiple hand muscles. Because many muscles innervate each digit, the CNS can potentially choose from a large number of muscle coordination patterns to generate a given digit force. Studies of single-digit force production tasks have revealed that the electromyographic (EMG) activity scales uniformly across all muscles as a function of digit force. However, the extent to which this finding applies to the coordination of forces across multiple digits is unknown. We addressed this question by asking subjects (n = 8) to exert isometric forces using a three-digit grip (thumb, index, and middle fingers) that allowed for the quantification of hand muscle coordination within and across digits as a function of grasp force (5, 20, 40, 60, and 80% maximal voluntary force). We recorded EMG from 12 muscles (6 extrinsic and 6 intrinsic) of the three digits. Hand muscle coordination patterns were quantified in the amplitude and frequency domains (EMG-EMG coherence). EMG amplitude scaled uniformly across all hand muscles as a function of grasp force (muscle x force interaction: P = 0.997; cosines of angle between muscle activation pattern vector pairs: 0.897-0.997). Similarly, EMG-EMG coherence was not significantly affected by force (P = 0.324). However, coherence was stronger across extrinsic than that across intrinsic muscle pairs (P = 0.0039). These findings indicate that the distribution of neural drive to multiple hand muscles is force independent and may reflect the anatomical properties or functional roles of hand muscle groups.
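The vector-cosine measure used to compare activation patterns across force levels can be sketched in a few lines of numpy; the EMG amplitudes below are hypothetical, chosen only to show that uniform scaling across muscles yields a cosine near 1:

```python
import numpy as np

def activation_cosine(v1, v2):
    """Cosine of the angle between two muscle-activation pattern vectors.

    Values near 1 indicate that the relative balance of activity across
    muscles is preserved (uniform scaling with force)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# hypothetical EMG amplitudes for 12 muscles at two grasp-force levels
emg_low = np.array([0.1, 0.3, 0.2, 0.15, 0.25, 0.1,
                    0.2, 0.3, 0.1, 0.2, 0.15, 0.25])
emg_high = 3.0 * emg_low        # perfectly uniform scaling with force
print(activation_cosine(emg_low, emg_high))   # ≈ 1.0
```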
Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel
2014-10-01
An automated diagnosis system that uses complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification purposes is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of phase angles found in each one-dimensional signal at each level of CWT decomposition are relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments and the results from the SVM show that the proposed approach gives better results than those obtained by other methods based on the correct classification rate, sensitivity and specificity.
Interaction phenomenon to dimensionally reduced p-gBKP equation
NASA Astrophysics Data System (ADS)
Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing
2018-02-01
Based on searching for combinations of a quadratic function and an exponential (or hyperbolic cosine) function in the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of these solutions, is observed in three-dimensional plots and density plots for particular choices of the parameters coupling the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interference between the two solitary waves is inelastic.
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
Moore, J A; Nemat-Gorgani, M; Madison, A C; Sandahl, M A; Punnamaraju, S; Eckhardt, A E; Pollack, M G; Vigneault, F; Church, G M; Fair, R B; Horowitz, M A; Griffin, P B
2017-01-01
This paper reports on the use of a digital microfluidic platform to perform multiplex automated genetic engineering (MAGE) cycles on droplets containing Escherichia coli cells. Bioactivated magnetic beads were employed for cell binding, washing, and media exchange in the preparation of electrocompetent cells in the electrowetting-on-dieletric (EWoD) platform. On-cartridge electroporation was used to deliver oligonucleotides into the cells. In addition to the optimization of a magnetic bead-based benchtop protocol for generating and transforming electrocompetent E. coli cells, we report on the implementation of this protocol in a fully automated digital microfluidic platform. Bead-based media exchange and electroporation pulse conditions were optimized on benchtop for transformation frequency to provide initial parameters for microfluidic device trials. Benchtop experiments comparing electrotransformation of free and bead-bound cells are presented. Our results suggest that dielectric shielding intrinsic to bead-bound cells significantly reduces electroporation field exposure efficiency. However, high transformation frequency can be maintained in the presence of magnetic beads through the application of more intense electroporation pulses. As a proof of concept, MAGE cycles were successfully performed on a commercial EWoD cartridge using variations of the optimal magnetic bead-based preparation procedure and pulse conditions determined by the benchtop results. Transformation frequencies up to 22% were achieved on benchtop; this frequency was matched within 1% (21%) by MAGE cycles on the microfluidic device. However, typical frequencies on the device remain lower, averaging 9% with a standard deviation of 9%. The presented results demonstrate the potential of digital microfluidics to perform complex and automated genetic engineering protocols.
Fourier analysis and signal processing by use of the Moebius inversion formula
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.
1990-01-01
A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
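The number-theoretic identity underlying the method is Möbius inversion of divisor sums: if a(n) = Σ_{d|n} b(d), then b(n) = Σ_{d|n} μ(n/d) a(d). A self-contained sketch of that identity (not the authors' Fourier-coefficient algorithm itself):

```python
def mobius(n):
    """Moebius function: 0 if n has a squared prime factor,
    otherwise (-1) to the number of distinct prime factors."""
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if n > 1:                      # one remaining prime factor
        result = -result
    return result

def divisor_sum(b, n):
    """a(n) = sum of b(d) over the divisors d of n."""
    return sum(b[d] for d in range(1, n + 1) if n % d == 0)

def mobius_invert(a, n):
    """Recover b(n) = sum of mu(n/d) * a(d) over the divisors d of n."""
    return sum(mobius(n // d) * a[d] for d in range(1, n + 1) if n % d == 0)

# demo: a divisor sum is exactly undone by Moebius inversion
b = {n: n * n + 1 for n in range(1, 13)}
a = {n: divisor_sum(b, n) for n in range(1, 13)}
print(all(mobius_invert(a, n) == b[n] for n in range(1, 13)))   # → True
```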
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.
1981-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
A calibration target is designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation algorithm is proposed. This method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments prove that this method is reliable.
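The SVD step of the classical, distortion-free DLT can be sketched as follows; the camera matrix, point set, and function names are illustrative, and the paper's added distortion model and iteration are omitted:

```python
import numpy as np

def dlt_calibrate(X, x):
    """Estimate the 3x4 camera matrix P (up to scale) from 3D points X
    and their 2D projections x via the Direct Linear Transformation.
    Needs at least six non-coplanar point correspondences."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    # solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points through camera matrix P (homogeneous divide)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    uvw = Xh @ P.T
    return uvw[:, :2] / uvw[:, 2:]
```

With noise-free correspondences the recovered matrix reproduces the projections exactly up to numerical precision; in practice the distortion model and iterative refinement described above are layered on top of this linear solution.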
Optimization of Trade-offs in Error-free Image Transmission
NASA Astrophysics Data System (ADS)
Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.
1989-05-01
The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves an error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted with a coding rate of between 0.25 and 0.75 b/p while error-free versions can be transmitted with an overall coding rate between 4.5 and 6.5 b/p.
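The two-stage scheme, an approximate image from quantized DCT coefficients followed by an integer error image that makes the result exact, can be sketched as follows (hypothetical quantization step q, not the codec of ISO Working Paper N800):

```python
import numpy as np
from scipy.fft import dctn, idctn

def progressive_encode(img, q=16):
    """Stage 1: coarsely quantized DCT coefficients (fast approximate image).
    Stage 2: integer error image that makes the transmission lossless."""
    coeffs = np.round(dctn(img.astype(float), norm="ortho") / q).astype(np.int32)
    approx = np.clip(np.round(idctn(coeffs * float(q), norm="ortho")),
                     0, 255).astype(np.int16)
    error = img.astype(np.int16) - approx
    return coeffs, error

def progressive_decode(coeffs, error, q=16):
    """Rebuild the approximate image from the coefficients, then add the
    error image to recover the original exactly."""
    approx = np.clip(np.round(idctn(coeffs * float(q), norm="ortho")),
                     0, 255).astype(np.int16)
    return (approx + error).astype(np.uint8)
```

Because the decoder recomputes the same approximation deterministically, adding the transmitted error image yields a bit-exact copy of the original, while the coefficient stage alone delivers the fast approximate view.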
Networking Hospital ePrescribing: A Systemic View of Digitalization of Medicines' Use in England.
Lichtner, Valentina; Hibberd, Ralph; Cornford, Tony
2016-01-01
Medicine management is at the core of hospital care and digitalization of prescribing and administration of medicines is often the focus of attention of health IT programs. This may be conveyed to the public in terms of the elimination of paper-based drug charts and increased readability of doctors' prescriptions. Based on analysis of documents about hospital medicines supply and use (including systems' implementation) in the UK, in this conceptual paper electronic prescribing and administration are repositioned as only one aspect of an important wider transformation in medicine management in hospital settings, involving, for example, procurement, dispensing, auditing, waste management, research and safety vigilance. Approaching digitalization from a systemic perspective has the potential to uncover the wider implications of this transformation for patients, the organization and the wider health care system.
NASA Technical Reports Server (NTRS)
Monroe, Ryan M.
2011-01-01
A family of state-of-the-art digital Fourier transform spectrometers has been developed, with a combination of high bandwidth and fine resolution unavailable elsewhere. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved Analog-to-Digital Converters (ADC). This 6 Gsps (giga-sample per second) digital representation of the analog signal is then processed through an FPGA-based streaming Fast Fourier Transform (FFT), the key development described below. Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers. The implementation, results, and underlying math for this spectrometer, as well as the potential for future extension to even higher bandwidth, resolution, and channel orthogonality, needed to support proposed future advanced atmospheric science and radioastronomy, are discussed.
Wideband Spectroscopy: The Design and Implementation of a 3 GHz, 2048 Channel Digital Spectrometer
NASA Technical Reports Server (NTRS)
Monroe, Ryan M.
2011-01-01
A state-of-the-art digital Fourier Transform spectrometer has been developed, with a combination of high bandwidth and fine resolution unavailable elsewhere. Analog signals consisting of radiation emitted by constituents in planetary atmospheres or galactic sources are downconverted and subsequently digitized by a pair of interleaved Analog-to-Digital Converters (ADC). This 6 Gsps (giga sample per second) digital representation of the analog signal is then processed through an FPGA-based streaming Fast Fourier Transform (FFT), the key development described below. Digital spectrometers have many advantages over previously used analog spectrometers, especially in terms of accuracy and resolution, both of which are particularly important for the type of scientific questions to be addressed with next-generation radiometers. The implementation, results and underlying math for this spectrometer, as well as potential for future extension to even higher bandwidth, resolution and channel orthogonality, needed to support proposed future advanced atmospheric science and radioastronomy, are discussed.
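The core of such a spectrometer, frame-wise FFT followed by power accumulation, can be emulated in software; a sketch with an on-bin test tone (illustrative sample rate and noise level, not the FPGA implementation):

```python
import numpy as np

def accumulate_spectrum(samples, fft_len=2048):
    """Average the windowed power spectrum over consecutive FFT frames,
    a software stand-in for the FPGA streaming FFT."""
    n_frames = len(samples) // fft_len
    frames = np.reshape(samples[:n_frames * fft_len], (n_frames, fft_len))
    window = np.hanning(fft_len)
    power = np.abs(np.fft.rfft(frames * window, axis=1)) ** 2
    return power.mean(axis=0)

# illustrative digitized stream: a tone placed exactly on bin 300
fs, fft_len = 6e9, 2048          # 6 Gsps, 2048-point FFT
t = np.arange(16 * fft_len) / fs
noise = 0.1 * np.random.default_rng(0).normal(size=t.size)
tone = np.cos(2 * np.pi * (300 * fs / fft_len) * t)
spectrum = accumulate_spectrum(tone + noise, fft_len)
print(int(np.argmax(spectrum)))   # → 300
```

Averaging over frames lowers the variance of the noise floor, which is what makes long integrations on weak astronomical lines practical.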
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
NASA Technical Reports Server (NTRS)
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
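The pipeline, subband split then quantization then a Lempel-Ziv-based coder, can be sketched with a one-level Haar split and zlib standing in for the coder (an illustrative stand-in, not the patented coder):

```python
import zlib
import numpy as np

def haar_split(signal):
    """One-level Haar analysis: pairwise averages (low subband)
    and pairwise differences (high subband). Even length assumed."""
    s = np.asarray(signal, float)
    low = (s[0::2] + s[1::2]) / 2.0
    high = (s[0::2] - s[1::2]) / 2.0
    return low, high

def subband_lz_encode(signal):
    """Quantize the subbands to integers and feed them to zlib,
    standing in here for a Lempel-Ziv-based coder."""
    low, high = haar_split(signal)
    packed = np.round(np.concatenate([low, high])).astype(np.int16).tobytes()
    return zlib.compress(packed)

# a smooth signal leaves little energy in the high subband, so the
# coded stream is much smaller than 2 bytes per raw sample
signal = 100.0 * np.sin(np.linspace(0.0, 10.0, 1024))
print(len(subband_lz_encode(signal)) < 2 * signal.size)   # → True
```

The separation matters because the high subband of typical video is sparse and repetitive, exactly the kind of input on which dictionary (Lempel-Ziv) coders do well.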
Motion detection using extended fractional Fourier transform and digital speckle photography.
Bhaduri, Basanta; Tay, C J; Quan, C; Sheppard, Colin J R
2010-05-24
Digital speckle photography is a useful tool for measuring the motion of optically rough surfaces from the speckle shift that takes place at the recording plane. A simple correlation-based digital speckle photographic system is proposed that implements two simultaneous optical extended fractional Fourier transforms (EFRTs) of different orders, using only a single lens and detector, to simultaneously detect both the magnitude and direction of translation and tilt by capturing only two frames: one before and another after the object motion. The dynamic range and sensitivity of the measurement can be varied readily by altering the position of the mirror(s) used in the optical setup. Theoretical analysis and experimental results are presented.
Digital health and the challenge of health systems transformation.
Alami, Hassane; Gagnon, Marie-Pierre; Fortin, Jean-Paul
2017-01-01
Information and communication technologies have transformed all sectors of society, and the health sector is no exception to this trend. In light of "digital health", we see multiplying numbers of web platforms and mobile health applications, often brought by new, unconventional players who produce and offer services in non-linear and non-hierarchical ways, thereby multiplying the access points to services for people. Some speak of an "uberization" of healthcare. New realities and challenges have emerged from this paradigm, which question the ability of health systems to cope with new business and economic models, governance of data, and regulation. Countries must provide adequate responses so that digital health, based increasingly on disruptive technologies, can benefit all.
Wavelet transforms with discrete-time continuous-dilation wavelets
NASA Astrophysics Data System (ADS)
Zhao, Wei; Rao, Raghuveer M.
1999-03-01
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
ERIC Educational Resources Information Center
Kim, Do Kyun; Dinu, Lucian F.; Chung, Wonjon
2013-01-01
Currently, the South Korean government is in the process of transforming school textbooks from a paper-based platform to a computer-based digital platform. Along with this effort, interactive online educational games (edu-games) have been examined as a potential component of the digital textbooks. Based on the theory of diffusion of innovations,…
Tweens' Characterization of Digital Technologies
ERIC Educational Resources Information Center
Brito, Pedro Quelhas
2012-01-01
The tweens are a transitional age group undergoing deep physical and psychological transformations. Based on a thirteen-focus group research design involving 103 students, and applying a tweens-centered approach, the characteristics of SMS, IM, Internet, digital photos, electronic games, and email were analyzed. Categories such as moral issues,…
Constructing and deriving reciprocal trigonometric relations: a functional analytic approach
Ninness, Chris; Dixon, Mark; Barnes-Holmes, Dermot; Rehfeldt, Ruth Anne; Rumph, Robin; McCuller, Glen; Holland, James; Smith, Ronald; Ninness, Sharon K; McGinty, Jennifer
2009-01-01
Participants were pretrained and tested on mutually entailed trigonometric relations and combinatorially entailed relations as they pertained to positive and negative forms of sine, cosine, secant, and cosecant. Experiment 1 focused on training and testing transformations of these mathematical functions in terms of amplitude and frequency followed by tests of novel relations. Experiment 2 addressed training in accordance with frames of coordination (same as) and frames of opposition (reciprocal of) followed by more tests of novel relations. All assessments of derived and novel formula-to-graph relations, including reciprocal functions with diversified amplitude and frequency transformations, indicated that all 4 participants demonstrated substantial improvement in their ability to identify increasingly complex trigonometric formula-to-graph relations pertaining to same as and reciprocal of to establish mathematically complex repertoires. PMID:19949509
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
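The explicitly zero-padded convolution that the third optimization replaces computes an aperiodic (free-space) convolution with cyclic FFTs by doubling the grid; a minimal 2-D sketch (illustrative only, without the integrated Green's function itself):

```python
import numpy as np

def zero_padded_convolve(rho, G):
    """Aperiodic convolution of a density rho with a kernel G by zero
    padding to double size, so the cyclic FFT product has no wrap-around."""
    nx, ny = rho.shape
    shape = (2 * nx, 2 * ny)
    phi = np.fft.irfft2(np.fft.rfft2(rho, shape) * np.fft.rfft2(G, shape), shape)
    return phi[:nx, :ny]

def direct_convolve(rho, G):
    """Reference O(N^4) sum: phi[i,j] = sum_{k<=i, l<=j} rho[k,l] G[i-k, j-l]."""
    nx, ny = rho.shape
    phi = np.zeros((nx, ny))
    for i in range(nx):
        for j in range(ny):
            for k in range(i + 1):
                for l in range(j + 1):
                    phi[i, j] += rho[k, l] * G[i - k, j - l]
    return phi
```

The padded FFT route costs O(N^2 log N) instead of O(N^4), which is why refinements of this convolution step dominate the performance of space-charge solvers.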
Comparison Of The Performance Of Hybrid Coders Under Different Configurations
NASA Astrophysics Data System (ADS)
Gunasekaran, S.; Raina J., P.
1983-10-01
Picture bandwidth reduction employing DPCM and Orthogonal Transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics, and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive, retaining the special advantages of both. In recent times interest has shifted to hybrid coding and, in the absence of a report on the relative performance of hybrid coders at different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine, and Haar transforms have been considered for the present work.
Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features
NASA Astrophysics Data System (ADS)
Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng
Face recognition is one of the most active research areas in pattern recognition, not only because the face is a key biometric characteristic of human beings but also because face recognition has many potential applications, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features vector. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory, and can overcome the retraining problem.
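A minimal sketch of building such a hybrid feature vector, with a one-level 2-D Haar DWT and low-frequency DCT coefficients; the coefficient counts and statistics below are hypothetical choices, not the paper's tuned configuration:

```python
import numpy as np
from scipy.fft import dctn

def haar2d(img):
    """One-level 2-D Haar DWT: LL, LH, HL, HH subbands (even dims assumed)."""
    a = np.asarray(img, float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise low band
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise high band
    return ((lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0,
            (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0)

def dominant_frequency_features(face):
    """Hypothetical hybrid feature vector: 16 low-frequency DCT
    coefficients plus mean/std statistics of each DWT subband."""
    coeffs = dctn(np.asarray(face, float), norm="ortho")
    feats = [coeffs[:4, :4].ravel()]       # 16 dominant DCT coefficients
    for band in haar2d(face):
        feats.append(np.array([band.mean(), band.std()]))
    return np.concatenate(feats)           # length 16 + 4*2 = 24
```

Because every step is linear or a simple statistic, the feature vector stays tiny compared to the raw image, which is the memory and compute advantage the paper emphasizes over holistic methods such as Eigenfaces.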
Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin
2007-04-01
This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.
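The verification step compares binary iris codes by Hamming distance; a minimal sketch of the fractional Hamming distance and a thresholded accept/reject decision (without the paper's product-of-sum weighting or bit masking, and with an illustrative threshold) is:

```python
def fractional_hamming(code_a, code_b):
    # fraction of positions at which two equal-length bit sequences differ
    assert len(code_a) == len(code_b)
    diffs = sum(a != b for a, b in zip(code_a, code_b))
    return diffs / len(code_a)

def verify(code_a, code_b, threshold=0.32):
    # accept the claimed identity if the distance falls below the threshold;
    # the threshold value here is illustrative, not the paper's operating point
    return fractional_hamming(code_a, code_b) < threshold
```

Sweeping the threshold traces out the ROC curve from which FAR, FRR, and the EER are read off.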
Digital SAR processing using a fast polynomial transform
NASA Technical Reports Server (NTRS)
Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.
1984-01-01
A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295
Porter-O'Grady, Tim
2014-01-01
Health reform and transformation now call for the creation of a new landscape for nursing practice based on the intentional translation and application of value-driven measures of service, quality, and price. Nursing is a central driver in the effective recalibration of health care within the rubric of health transformation under the aegis of the Patient Protection and Affordable Care Act. Increasingly relying on a growing digital infrastructure, the nursing profession must now reframe both its practice foundations and patterns of practice to reflect emerging value-driven, health-grounded service requisites. Specific nursing responses are suggested, which position nursing to best coordinate, integrate, and facilitate health delivery in the emerging value-driven service environment.
NASA Astrophysics Data System (ADS)
Gamil, A. M.; Gilani, S. I.; Al-Kayiem, H. H.
2013-06-01
Solar energy is the most available, clean, and inexpensive source of energy among the renewable sources. Malaysia is an encouraging location for the development of solar energy systems due to abundant sunshine (10 hours daily, with average solar energy received between 1400 and 1900 kWh/m²). In this paper the design of a heliostat field of three dual-axis heliostat units located in Ipoh, Malaysia is introduced. A mathematical model was developed to estimate the sun position and calculate the cosine losses in the field. The study includes calculating the incident solar power to a fixed target on the tower by analysing the tower height and the ground distance between the heliostat and the tower base. The cosine efficiency was found for each heliostat according to the sun movement. TRNSYS software was used to simulate the cosine efficiencies and the hourly incident solar power input to the fixed target. The results show the heliostat field parameters and the total incident solar input to the receiver.
Social Work in a Digital Age: Ethical and Risk Management Challenges
ERIC Educational Resources Information Center
Reamer, Frederic G.
2013-01-01
Digital, online, and other electronic technology has transformed the nature of social work practice. Contemporary social workers can provide services to clients by using online counseling, telephone counseling, video counseling, cybertherapy (avatar therapy), self-guided Web-based interventions, electronic social networks, e-mail, and text…
Review of "Teachers in the Age of Digital Instruction"
ERIC Educational Resources Information Center
Huerta, Luis A.
2012-01-01
The Fordham Institute's "Teachers in the Age of Digital Instruction" is an advocacy document outlining a vision for how technology might transform the teaching profession. The report's rationale is based on claims that the current education system lacks the capacity to support the revolutionary changes needed to unleash the technological…
NASA Technical Reports Server (NTRS)
Jackson, Deborah J. (Inventor)
1998-01-01
An analog optical encryption system based on phase scrambling of two-dimensional optical images and holographic transformation for achieving large encryption keys and high encryption speed. An enciphering interface uses a spatial light modulator for converting a digital data stream into a two dimensional optical image. The optical image is further transformed into a hologram with a random phase distribution. The hologram is converted into digital form for transmission over a shared information channel. A respective deciphering interface at a receiver reverses the encrypting process by using a phase conjugate reconstruction of the phase scrambled hologram.
3-D discrete shearlet transform and video processing.
Negi, Pooran Singh; Labate, Demetrio
2012-06-01
In this paper, we introduce a digital implementation of the 3-D shearlet transform and illustrate its application to problems of video denoising and enhancement. The shearlet representation is a multiscale pyramid of well-localized waveforms defined at various locations and orientations, which was introduced to overcome the limitations of traditional multiscale systems in dealing with multidimensional data. While the shearlet approach shares the general philosophy of curvelets and surfacelets, it is based on a very different mathematical framework, which is derived from the theory of affine systems and uses shearing matrices rather than rotations. This allows a natural transition from the continuous setting to the digital setting and a more flexible mathematical structure. The 3-D digital shearlet transform algorithm presented in this paper consists of a cascade of a multiscale decomposition and a directional filtering stage. The filters employed in this decomposition are implemented as finite-length filters, and this ensures that the transform is local and numerically efficient. To illustrate its performance, the 3-D discrete shearlet transform is applied to problems of video denoising and enhancement, and compared against other state-of-the-art multiscale techniques, including curvelets and surfacelets.
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Khan, Gufran S.; Shakher, Chandra
2015-08-01
In the present work, digital speckle pattern interferometry (DSPI) was applied to the measurement of the mechanical/elastic and thermal properties of fibre reinforced plastics (FRP). The digital speckle pattern interferometric technique was used to characterize the material constants (Poisson's ratio and Young's modulus) of the composite material. Poisson's ratio, based on plate bending, and Young's modulus, based on plate vibration, are measured using DSPI. In addition, the coefficient of thermal expansion of the composite material is also measured. For thermal strain analysis, a single DSPI fringe pattern is used to extract the phase information by using the Riesz transform and the monogenic signal. Phase extraction from a single DSPI fringe pattern by using the Riesz transform does not require a phase-shifting system or a spatial carrier. The elastic and thermal parameters obtained from DSPI are in close agreement with the theoretical predictions available in the literature.
Analysis of Science Attitudes for K2 Planet Hunter Mission
2015-03-01
Outline fragment: 1. International Astronomical Union; 2. IAU Planet Definition; 3. Planet Definition Relevant to Kepler Mission; B. STAR...; a. Definition Based on Direction Cosine Matrix; b. Definition Based...
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
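The standard's pipeline, a wavelet decomposition followed by uniform scalar quantization, can be sketched in miniature. The single-level 1-D Haar transform and fixed step size below are simplifying assumptions for illustration, not the FBI standard's actual filter bank or bit allocation.

```python
import numpy as np

def haar_dwt_1d(signal):
    # one level of the orthonormal Haar wavelet transform
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass (average) band
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass (difference) band
    return approx, detail

def uniform_quantize(coeffs, step=0.5):
    # uniform scalar quantization: snap each coefficient to a multiple of step
    return np.round(np.asarray(coeffs) / step) * step

a, d = haar_dwt_1d([2.0, 4.0, 6.0, 8.0])
aq, dq = uniform_quantize(a), uniform_quantize(d)
```

Compression comes from the detail coefficients quantizing to few distinct (often zero) values, which then entropy-code compactly.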
Technological innovations in mental healthcare: harnessing the digital revolution.
Hollis, Chris; Morriss, Richard; Martin, Jennifer; Amani, Sarah; Cotton, Rebecca; Denis, Mike; Lewis, Shôn
2015-04-01
Digital technology has the potential to transform mental healthcare by connecting patients, services and health data in new ways. Digital online and mobile applications can offer patients greater access to information and services and enhance clinical management and early intervention through access to real-time patient data. However, substantial gaps exist in the evidence base underlying these technologies. Greater patient and clinician involvement is needed to evaluate digital technologies and ensure they target unmet needs, maintain public trust and improve clinical outcomes. Royal College of Psychiatrists.
NASA Astrophysics Data System (ADS)
Lee, Sungman; Kim, Jongyul; Moon, Myung Kook; Lee, Kye Hong; Lee, Seung Wook; Ino, Takashi; Skoy, Vadim R.; Lee, Manwoo; Kim, Guinyun
2013-02-01
For use as a neutron spin polarizer or analyzer in the neutron beam lines of the HANARO (High-flux Advanced Neutron Application ReactOr) nuclear research reactor, a 3He polarizer was designed based on both a compact solenoid coil and a VBG (volume Bragg grating) diode laser with a narrow spectral linewidth of 25 GHz. The nuclear magnetic resonance (NMR) signal was measured and analyzed using both a built-in cosine radio-frequency (RF) coil and a pick-up coil. Using a neutron transmission measurement, we estimated the polarization ratio of the 3He cell as 18% for an optical pumping time of 8 hours.
A VLSI implementation of DCT using pass transistor technology
NASA Technical Reports Server (NTRS)
Kamath, S.; Lynn, Douglas; Whitaker, Sterling
1992-01-01
A VLSI design for performing the Discrete Cosine Transform (DCT) operation on image blocks of size 16 x 16 in a real-time fashion, operating at 34 MHz (worst case), is presented. The process used was Hewlett-Packard's CMOS26, a 3-metal CMOS process with a minimum feature size of 0.75 micron. The design is based on Multiply-Accumulate (MAC) cells which make use of a modified Booth recoding algorithm for performing multiplication. The design of these cells is straightforward, and the layouts are regular with no complex routing. Two versions of these MAC cells were designed and their layouts completed. Both versions were simulated using SPICE to estimate their performance. One version is slightly faster at the cost of larger silicon area and higher power consumption. An improvement in speed of almost 20 percent was achieved after several iterations of simulation and re-sizing.
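The modified (radix-4) Booth recoding used in the MAC cells halves the number of partial products by rewriting the multiplier in signed digits from {-2, -1, 0, 1, 2}. A behavioural sketch of the recoding, not the VLSI circuit itself:

```python
def booth_recode(n, bits=8):
    # radix-4 modified Booth recoding of a two's-complement integer:
    # examine overlapping bit triplets (b[2i+1], b[2i], b[2i-1]) with b[-1] = 0
    b = [(n >> i) & 1 for i in range(bits)]   # arithmetic shift sign-extends
    b = [0] + b                               # prepend the implicit b[-1]
    digits = []
    for i in range(0, bits, 2):
        d = -2 * b[i + 2] + b[i + 1] + b[i]   # digit in {-2,-1,0,1,2}
        digits.append(d)
    return digits                             # least significant digit first

def booth_value(digits):
    # reconstruct the integer: sum of digit_i * 4^i
    return sum(d * 4**i for i, d in enumerate(digits))
```

In hardware, each nonzero digit selects a shifted copy (or negation) of the multiplicand, so a 16-bit multiply needs only 8 partial-product rows.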
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), based on characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis, the structure matrix was also calculated by using a factorization method. To further optimize the non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is proposed, implementing the non-rigid structure estimation by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, and the initial structure matrix calculated by the PTA method is optimized. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.
Biswas, Sujoy Kumar; Milanfar, Peyman
2016-03-01
One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve the global structure of feature space, we instead seek a linear approximation to the Laplacian eigenmap that permits a locality-preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of the Fourier transform combined with integral images to achieve superior runtime efficiency, which allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on standard data sets confirm the efficacy of our model. Moreover, the low computation cost of the proposed (codebook-free) object detector facilitates straightforward query detection in large data sets, including movie videos.
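The decision rule is matrix cosine similarity, i.e. a normalized Frobenius inner product between feature matrices. The naive form that the paper accelerates with FFTs and integral images can be written as:

```python
import numpy as np

def matrix_cosine_similarity(a, b):
    # normalized Frobenius inner product of two equally shaped feature matrices:
    # <A, B>_F / (||A||_F * ||B||_F), in [-1, 1]
    num = float(np.sum(a * b))
    den = np.linalg.norm(a) * np.linalg.norm(b)
    return num / den
```

Evaluating this at every sliding-window position is what the FFT/integral-image machinery avoids recomputing from scratch.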
2009-05-01
information technology revolution. The architecture of the Nation's digital infrastructure, based largely upon the Internet, is not secure or resilient... thriving digital infrastructure. In addition, differing national and regional laws and practices—such as laws concerning the investigation and... technology has transformed the global economy and connected people and markets in ways never imagined. To realize the full benefits of the digital
Digital health and the challenge of health systems transformation
Gagnon, Marie-Pierre; Fortin, Jean-Paul
2017-01-01
Information and communication technologies have transformed all sectors of society. The health sector is no exception to this trend. In light of "digital health", we see multiplying numbers of web platforms and mobile health applications, often brought by new unconventional players who produce and offer services in non-linear and non-hierarchical ways, thereby multiplying access points to services for people. Some speak of an "uberization" of healthcare. New realities and challenges have emerged from this paradigm, which question the ability of health systems to cope with new business and economic models, governance of data, and regulation. Countries must provide adequate responses so that digital health, based increasingly on disruptive technologies, can benefit all. PMID:28894741
The application of digital signal processing techniques to a teleoperator radar system
NASA Technical Reports Server (NTRS)
Pujol, A.
1982-01-01
A digital signal processing system was studied for the determination of the spectral frequency distribution of echo signals from a teleoperator radar system. The system consisted of a sample and hold circuit, an analog to digital converter, a digital filter, and a Fast Fourier Transform. The system is interfaced to a 16-bit microprocessor. The microprocessor is programmed to control the complete digital signal processing chain. The digital filtering and Fast Fourier Transform functions are implemented by an S2815 digital filter/utility peripheral chip and an S2814A Fast Fourier Transform chip. The S2815 initially synthesizes a low-pass Butterworth filter, with later expansion to complete filter circuits (bandpass and highpass).
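The behaviour of a digital low-pass stage can be illustrated with the simplest possible IIR filter; this one-pole smoother is only a stand-in for the Butterworth response actually synthesized by the S2815 chip.

```python
def first_order_lowpass(samples, alpha=0.2):
    # one-pole IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
    # smaller alpha -> lower cutoff frequency, slower response
    y = []
    prev = 0.0
    for x in samples:
        prev = alpha * x + (1.0 - alpha) * prev
        y.append(prev)
    return y
```

A step input settles exponentially toward the input level, the discrete-time analogue of an RC low-pass response.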
2D non-separable linear canonical transform (2D-NS-LCT) based cryptography
NASA Astrophysics Data System (ADS)
Zhao, Liang; Muniraj, Inbarasan; Healy, John J.; Malallah, Ra'ed; Cui, Xiao-Guang; Ryle, James P.; Sheridan, John T.
2017-05-01
The 2D non-separable linear canonical transform (2D-NS-LCT) can describe a variety of paraxial optical systems. Digital algorithms to numerically evaluate the 2D-NS-LCTs are not only important in modeling the light field propagations but also of interest in various signal processing based applications, for instance optical encryption. Therefore, in this paper, for the first time, a 2D-NS-LCT based optical Double-Random-Phase-Encryption (DRPE) system is proposed which offers encrypting information in multiple degrees of freedom. Compared with the traditional systems, i.e. (i) Fourier transform (FT); (ii) Fresnel transform (FST); (iii) Fractional Fourier transform (FRT); and (iv) Linear Canonical transform (LCT), based DRPE systems, the proposed system is more secure and robust as it encrypts the data with more degrees of freedom with an augmented key-space.
ERIC Educational Resources Information Center
Guerrero, Shannon; Baumgartel, Drew; Zobott, Maren
2013-01-01
Screencasting, or digital recordings of computer screen outputs, can be used to promote pedagogical transformation in the mathematics classroom by moving explicit, procedural-based instruction to the online environment, thus freeing classroom time for more student-centered investigations, problem solving, communication, and collaboration. This…
Cultivating Critical Game Makers in Digital Game-Based Learning: Learning from the Arts
ERIC Educational Resources Information Center
Denham, André R.; Guyotte, Kelly W.
2018-01-01
Digital games have the potential of being a transformative tool for applying constructionist principles to learning within formal and informal learning settings. Unfortunately, most recent attention has focused on instructionist games. Connected gaming provides a tantalizing alternative approach by calling for the development of games that are…
Authentication Based on Pole-zero Models of Signature Velocity
Rashidi, Saeid; Fallah, Ali; Towhidkhah, Farzad
2013-01-01
With the increase of communication and financial transactions through the internet, on-line signature verification is an accepted biometric technology for access control and plays a significant role in authentication and authorization in modernized society. Therefore, fast and precise algorithms for signature verification are very attractive. The goal of this paper is the modeling of the velocity signal, whose pattern and properties are stable for each person. Using pole-zero models based on the discrete cosine transform, a precise method is proposed for modeling, and features are then extracted from strokes. Using linear, Parzen-window, and support vector machine classifiers, the signature verification technique was tested with a large number of authentic and forged signatures and demonstrated good potential. The signatures were collected from three different databases: a proprietary database and the SVC2004 and Sabanci University signature (SUSIG) benchmark databases. Experimental results based on the Persian, SVC2004, and SUSIG databases show that our method achieves equal error rates of 5.91%, 5.62%, and 3.91% on skilled forgeries, respectively. PMID:24696797
Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2004-01-01
The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.
A novel method of the image processing on irregular triangular meshes
NASA Astrophysics Data System (ADS)
Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta
2018-04-01
The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content; least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of a simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
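Triangular numbers T(n) = n(n+1)/2 give a direct linear index for pixels stored row by row inside a triangular raster. A sketch of this indexing (the paper's exact barycentric scheme is not reproduced here):

```python
def tri_number(n):
    # n-th triangular number: total pixels in the first n rows of the raster
    return n * (n + 1) // 2

def tri_index(row, col):
    # linear index of pixel (row, col), with 0 <= col <= row,
    # when rows of a triangular raster are stored consecutively
    assert 0 <= col <= row
    return tri_number(row) + col
```

This lets nested "for" loops over (row, col) address a flat pixel array without any per-pixel lookup table.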
NASA Astrophysics Data System (ADS)
Chang, Chien-Chieh; Chen, Chia-Shyun
2002-06-01
A flowing partially penetrating well with infinitesimal well skin is a mixed boundary because a Cauchy condition is prescribed along the screen length and a Neumann condition of no flux is stipulated over the remaining unscreened part. An analytical approach based on the integral transform technique is developed to determine the Laplace domain solution for such a mixed boundary problem in a confined aquifer of finite thickness. First, the mixed boundary is changed into a homogeneous Neumann boundary by substituting the Cauchy condition with a Neumann condition in terms of well bore flux that varies along the screen length and is time dependent. Despite the well bore flux being unknown a priori, the modified model containing this homogeneous Neumann boundary can be solved with the Laplace and the finite Fourier cosine transforms. To determine well bore flux, screen length is discretized into a finite number of segments, to which the Cauchy condition is reinstated. This reinstatement also restores the relation between the original model and the solutions obtained. For a given time, the numerical inversion of the Laplace domain solution yields the drawdown distributions, well bore flux, and the well discharge. This analytical approach provides an alternative for dealing with the mixed boundary problems, especially when aquifer thickness is assumed to be finite.
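The final step, numerical inversion of the Laplace-domain solution, is commonly performed with the Gaver-Stehfest algorithm. The sketch below is a generic Stehfest inversion routine under that assumption, not the authors' specific implementation; it recovers f(t) from samples of F(s) along the real axis.

```python
from math import factorial, log, exp

def stehfest_coefficients(n):
    # Gaver-Stehfest weights V_k for an even number of terms n
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j)) / (
                factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        v.append((-1) ** (k + n // 2) * s)
    return v

def stehfest_invert(F, t, n=12):
    # approximate f(t) = L^{-1}[F](t) from n real-axis samples of F(s)
    v = stehfest_coefficients(n)
    a = log(2.0) / t
    return a * sum(v[k - 1] * F(k * a) for k in range(1, n + 1))
```

For smooth drawdown-type solutions a modest n (10 to 16) usually suffices; the method degrades for oscillatory f(t).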
Digital Signal Processing and Generation for a DC Current Transformer for Particle Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zorzetti, Silvia
2013-01-01
The thesis topic, digital signal processing and generation for a DC current transformer, focuses on the most fundamental beam diagnostic in the field of particle accelerators: the measurement of the beam intensity, or beam current. The technology of a DC current transformer (DCCT) is well known and used in many areas, including particle accelerator beam instrumentation, as a non-invasive (shunt-free) method to monitor the DC current in a conducting wire, or in our case, the current of charged particles travelling inside an evacuated metal pipe. So far, custom and commercial DCCTs have been entirely based on analog technologies and signal processing, which makes them inflexible, sensitive to component aging, and difficult to maintain and calibrate.
NASA Astrophysics Data System (ADS)
Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng
2015-02-01
Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step by using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves by acquiring the average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by the bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, and 19.21% and 1.87% based on the bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and the PRD values of the bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve can achieve better performance than the bilinear transformation. The semi-quantitative analysis of the rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue.
On the other hand, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias and the newly proposed calibration curve built by ANN could achieve better results with acceptable accuracy.
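For reference, a generic bilinear energy-mapping curve has the piecewise-linear form sketched below; the water value and bone-segment slope are illustrative textbook-style numbers, not the calibration used in this study.

```python
def bilinear_energy_map(hu):
    # piecewise-linear ("bilinear") mapping from a CT number (HU) to the
    # linear attenuation coefficient at 511 keV, in cm^-1
    mu_water_511 = 0.096       # illustrative value for water at 511 keV
    if hu <= 0:
        # air-to-water segment: scales linearly down to mu = 0 at -1000 HU
        return mu_water_511 * (1.0 + hu / 1000.0)
    # water-to-bone segment: shallower slope (illustrative)
    return mu_water_511 + 5.64e-5 * hu
```

An ANN-fitted curve replaces this fixed two-segment shape with a smooth function learned from the phantom measurements.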
Phase unwrapping in digital holography based on non-subsampled contourlet transform
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-01-01
In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, thus reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms first apply denoising operations to obtain noise-free phases and then conduct phase unwrapping pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of the NSCT enhances the boundaries between adjacent phase levels, hence the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of a whole segment instead of individual pixels, to avoid unwrapping errors caused by local spikes. This algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.
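The pixel-by-pixel baseline that the NSCT approach improves upon is the classic Itoh method: integrate wrapped phase differences while removing any 2π jumps. A 1-D sketch:

```python
import numpy as np

def unwrap_1d(phases):
    # Itoh 1-D phase unwrapping: accumulate neighbour differences after
    # wrapping each difference into (-pi, pi]; fails if a true step exceeds pi
    out = [float(phases[0])]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2.0 * np.pi * np.round(d / (2.0 * np.pi))   # remove 2*pi jumps
        out.append(out[-1] + d)
    return np.array(out)
```

Noise spikes larger than π corrupt every subsequent sample in this scheme, which is exactly the failure mode the segment-wise NSCT method avoids.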
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
According to information optics theory, a novel watermarking method based on Fourier-transformed digital holography and singular value decomposition (SVD) is proposed in this paper. First, a watermark image is converted to a digital hologram using the Fourier transform. After that, the original image is divided into many non-overlapping blocks. All the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are then embedded into the singular value components of each block using an addition principle. Finally, inverse SVD transformation is carried out on the blocks and hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is extracted first. After that, an averaging operation is carried out on the extracted information to generate the final watermark information. Finally, the algorithm is simulated. Furthermore, to test the watermarked image's resistance against attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cutting, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
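The additive embedding of watermark singular values into a block's singular values can be sketched as follows; the strength alpha is an illustrative assumption, the holographic step is omitted, and the extraction shown is non-blind (it needs the original singular values).

```python
import numpy as np

def embed_singular_values(block, wm_vals, alpha=0.05):
    # addition principle: s' = s + alpha * s_w, then rebuild the block
    u, s, vt = np.linalg.svd(block)
    s_marked = s + alpha * np.asarray(wm_vals, dtype=float)[: len(s)]
    return u @ np.diag(s_marked) @ vt, s

def extract_singular_values(marked, s_orig, alpha=0.05):
    # recover the embedded values by differencing against the originals
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return (s_marked - s_orig) / alpha
```

Singular values are robust to small geometric and intensity distortions, which is the usual rationale for SVD-domain embedding.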
Geometric processing of digital images of the planets
NASA Technical Reports Server (NTRS)
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
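The Sinusoidal Equal-Area projection used as the target of these transformations has a particularly simple closed form; a minimal sketch (hypothetical function name; angles in radians):

```python
import numpy as np

def sinusoidal_project(lon, lat, lon0=0.0, radius=1.0):
    """Sinusoidal Equal-Area projection; lon/lat in radians."""
    x = radius * (lon - lon0) * np.cos(lat)  # meridian spacing shrinks with latitude
    y = radius * lat                         # parallels are equally spaced
    return x, y
```

Because cos(lat) scales the x coordinate, equal areas on the sphere map to equal areas on the plane, which is why mosaicked databases in this projection can be analyzed and reprojected without area bias.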
Reflection and emission models for deserts derived from Nimbus-7 ERB scanner measurements
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Suttles, J. T.
1986-01-01
Broadband shortwave and longwave radiance measurements obtained from the Nimbus-7 Earth Radiation Budget scanner were used to develop reflectance and emittance models for the Sahara-Arabian, Gibson, and Saudi Deserts. The models were established by fitting the satellite measurements to analytic functions. For the shortwave, the model function is based on an approximate solution to the radiative transfer equation. The bidirectional-reflectance function was obtained from a single-scattering approximation with a Rayleigh-like phase function. The directional-reflectance model followed from integration of the bidirectional model and is a function of the sum and product of cosine solar and viewing zenith angles, thus satisfying reciprocity between these angles. The emittance model was based on a simple power-law of cosine viewing zenith angle.
Reconstruction-Based Digital Dental Occlusion of the Partially Edentulous Dentition.
Zhang, Jian; Xia, James J; Li, Jianfu; Zhou, Xiaobo
2017-01-01
Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for the partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth existing on the left and right dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is performed by digital dental occlusion reestablishment algorithm with the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with the synthetic and real partially edentulous models.
Design of efficient circularly symmetric two-dimensional variable digital FIR filters.
Bindima, Thayyil; Elias, Elizabeth
2016-05-01
Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses, without redesigning the filter, has become a crucial topic of interest due to its significance in low-cost applications. Recently, design using fixed word length coefficients has gained importance because multipliers can be replaced by shifters and adders, which reduces hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) prototype filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow-structure-based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low-complexity, circularly symmetric 2D VDFs with real-time tunability.
Abel inversion using fast Fourier transforms.
Kalal, M; Nugent, K A
1988-05-15
A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.
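The link between the Abel, Fourier, and Hankel transforms that such methods exploit (the projection-slice relation) can be written out explicitly; this is the standard Fourier-Hankel inversion route, shown as background rather than the paper's exact discretization. For a radially symmetric field f(r) with measured projection P(y):

```latex
\hat{P}(q) = \int_{-\infty}^{\infty} P(y)\, e^{-2\pi i q y}\, dy,
\qquad
f(r) = 2\pi \int_{0}^{\infty} \hat{P}(q)\, J_0(2\pi q r)\, q\, dq
```

so the inverse Abel transform reduces to a 1-D FFT of the projection followed by a zeroth-order Hankel transform.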
CD-Based Indices for Link Prediction in Complex Network.
Wang, Tao; Wang, Hongjue; Wang, Xiaoxia
2016-01-01
Many similarity-based algorithms have been designed to deal with the problem of link prediction over the past decade. To improve prediction accuracy, a novel cosine similarity index CD, based on the distance between nodes and the cosine value between vectors, is proposed in this paper. First, a node coordinate matrix is obtained from the node distances (as distinct from the distance matrix), and the row vectors of this matrix are regarded as node coordinates. The cosine value between node coordinates is then used as their similarity index. A local community density index LD is also proposed. A series of CD-based indices, including CD-LD-k, CD*LD-k, CD-k and CDI, are then presented and applied to ten real networks. Experimental results demonstrate the effectiveness of the CD-based indices. The effects of the network clustering coefficient and assortativity coefficient on the prediction accuracy of the indices are analyzed. CD-LD-k and CD*LD-k can improve prediction accuracy regardless of whether the network's assortativity coefficient is negative or positive. According to the analysis of the relative precision of each method on each network, the CD-LD-k and CD*LD-k indices have excellent average performance and robustness. The CD and CD-k indices perform better on positively assortative networks than on negatively assortative networks. For negatively assortative networks, we improve and refine the CD index, referred to as the CDI index, combining the advantages of the CD index with the evolutionary mechanism of the BA network model. Experimental results reveal that the CDI index can increase the prediction accuracy of CD on negatively assortative networks.
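The core of the CD index, cosine similarity between node coordinate vectors, can be sketched as follows. The construction of the coordinate matrix from node distances is paper-specific, so this minimal numpy version (hypothetical function name) starts from an already-built coordinate matrix:

```python
import numpy as np

def cosine_similarity_matrix(coords):
    # coords: one row of coordinates per node
    norms = np.linalg.norm(coords, axis=1, keepdims=True)
    unit = coords / np.clip(norms, 1e-12, None)  # guard against zero rows
    return unit @ unit.T                         # S[i, j] = cos(angle between nodes i and j)
```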
ERIC Educational Resources Information Center
Goh, Debbie; Kale, Ugur
2015-01-01
The project-based learning (PBL) approach closely reflects the tenets of journalism and provides a potential pedagogical guide for transforming traditional journalism education. This study operationalizes and applies a PBL framework in digitizing a print journalism course. The findings illustrate how the presence of seven key elements of PBL…
Modified signed-digit trinary addition using synthetic wavelet filter
NASA Astrophysics Data System (ADS)
Iftekharuddin, K. M.; Razzaque, M. A.
2000-09-01
The modified signed-digit (MSD) number system has been a topic of interest because it allows parallel, carry-free addition of two numbers for digital optical computing. In this paper, a harmonic wavelet joint transform (HWJT)-based correlation technique is introduced for the optical implementation of an MSD trinary adder. The realization of carry-propagation-free addition of MSD trinary numerals is demonstrated using a synthetic HWJT correlator model. It is also shown that the proposed synthetic-wavelet-filter-based correlator achieves high performance in logic processing. Simulation results are presented to validate the performance of the proposed technique.
Knowledge Resources - A Knowledge Management Approach for Digital Ecosystems
NASA Astrophysics Data System (ADS)
Kurz, Thomas; Eder, Raimund; Heistracher, Thomas
This paper presents an innovative approach to the conception and implementation of knowledge management in Digital Ecosystems. Based on a reflection on Digital Ecosystem research of the past years, an architecture is outlined that utilizes Knowledge Resources as the central and simplest entities of knowledge transfer. After a discussion of the related conception, the results of a first prototypical implementation are described, which support the transformation of implicit knowledge into explicit knowledge for wide use.
NASA Astrophysics Data System (ADS)
di Lauro, C.
2018-03-01
Transformations of vector or tensor properties from a space-fixed to a molecule-fixed axis system are often required in the study of rotating molecules. The spherical components λ_{μ,ν} of a first-rank irreducible tensor can be obtained from the direction cosines between the two axis systems, and a second-rank tensor with spherical components λ^(2)_{μ,ν} can be built from the direct product λ × λ. It is shown that the treatment of the interaction between molecular rotation and the electric quadrupole of a nucleus is greatly simplified if the coefficients in the axis-system transformation of the gradient of the electric field of the outer charges at the coupled nucleus are arranged as spherical components λ^(2)_{μ,ν}. Then the reduced matrix elements of the field-gradient operators in a symmetric top eigenfunction basis, including their dependence on the molecule-fixed z-angular momentum component k, can be determined from the knowledge of those of λ^(2). The hyperfine structure Hamiltonian Hq is expressed as the sum of terms, each characterized by a value of the molecule-fixed index ν, whose matrix elements obey the rule Δk = ν. Some of these terms may vanish because of molecular symmetry, and the specific cases of linear and symmetric top molecules, orthorhombic molecules, and molecules with symmetry lower than orthorhombic are considered. Each ν-term consists of a contraction of the rotational tensor λ^(2) and the nuclear quadrupole tensor in the space-fixed frame, and its matrix elements in the rotation-nuclear spin coupled representation can be determined by standard spherical tensor methods.
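The direct-product construction mentioned above can be written schematically with Clebsch-Gordan coefficients coupling the rank-1 components in both the space-fixed (μ) and molecule-fixed (ν) indices; this is the standard spherical-tensor coupling, shown here as an illustration rather than the paper's exact expression:

```latex
\lambda^{(2)}_{\mu,\nu}
= \sum_{\mu_1,\mu_2}\sum_{\nu_1,\nu_2}
\langle 1\,\mu_1;\,1\,\mu_2 \mid 2\,\mu\rangle\,
\langle 1\,\nu_1;\,1\,\nu_2 \mid 2\,\nu\rangle\,
\lambda_{\mu_1,\nu_1}\,\lambda_{\mu_2,\nu_2}
```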
Geometric comparison of popular mixture-model distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Scott A.
2010-09-01
Statistical Latent Dirichlet Analysis produces mixture model data that are geometrically equivalent to points lying on a regular simplex in moderate to high dimensions. Numerous other statistical models and techniques also produce data in this geometric category, even though the meaning of the axes and coordinate values differs significantly. A distance function is used to further analyze these points, for example to cluster them. Several different distance functions are popular amongst statisticians; which distance function is chosen is usually driven by the historical preference of the application domain, information-theoretic considerations, or the desirability of the clustering results. Relatively little consideration is usually given to how distance functions geometrically transform data, or to the distances' algebraic properties. Here we take a look at these issues, in the hope of providing complementary insight and inspiring further geometric thought. Several popular distances, χ², Jensen-Shannon divergence, and the square of the Hellinger distance, are shown to be nearly equivalent: in terms of functional forms after transformations, factorizations, and series expansions, and in terms of the shape and proximity of constant-value contours. This is somewhat surprising given that their original functional forms look quite different. Cosine similarity is closely related to the square of the Euclidean distance, and a similar geometric relationship is shown with Hellinger and another cosine. We suggest a geodesic variation of Hellinger. The square-root projection that arises in the Hellinger distance is briefly compared to standard normalization for the Euclidean distance. We include detailed derivations of some ratio and difference bounds for illustrative purposes. We provide some constructions that nearly achieve the worst-case ratios, relevant for contours.
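The three distances compared in the abstract are easy to state concretely. A minimal numpy sketch for discrete probability vectors (function names are ours, and the χ² form shown is the common symmetric variant):

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between discrete distributions p and q
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetrized KL against the midpoint m
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def chi2(p, q):
    # Symmetric chi-squared distance
    return np.sum((p - q) ** 2 / (p + q))
```

All three vanish exactly when p = q and are symmetric in their arguments, which is part of why their contour shapes can be compared directly.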
Harrison, John A
2008-09-04
RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X (1) summation g (+) are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.
Multiresolution image registration in digital x-ray angiography with intensity variation modeling.
Nejati, Mansour; Pourghassem, Hossein
2014-02-01
Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motions, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for digital X-ray angiography images, particularly of the coronary region, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local searches in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework that allows us to capture both large- and small-scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays to the equivalent of 1.25 FTs of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved by the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
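The SM synthesis step, multiply each recorded hologram by a tilted plane-wave carrier and sum, can be sketched as follows; the carrier frequencies and function names are illustrative, not taken from the paper:

```python
import numpy as np

def spatially_multiplex(holograms, carriers):
    # holograms: list of N x N real arrays; carriers: list of (fx, fy) in FFT bins
    N = holograms[0].shape[0]
    y, x = np.indices((N, N))
    sm = np.zeros((N, N), dtype=complex)
    for h, (fx, fy) in zip(holograms, carriers):
        tilt = np.exp(2j * np.pi * (fx * x + fy * y) / N)  # tilted plane wave
        sm += h * tilt
    return sm
```

Each hologram's spectrum lands at its own carrier offset in the Fourier plane of the summed function, so a single FFT of the SM function separates all of them.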
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Increases to Biogenic Secondary Organic Aerosols from SO2 and NOx in the Southeastern US
NASA Astrophysics Data System (ADS)
Russell, L. M.; Liu, J.; Ruggeri, G.; Takahama, S.; Claflin, M. S.; Ziemann, P. J.; Lee, A.; Murphy, B.; Pye, H. O. T.; Ng, N. L.; McKinney, K. A.; Surratt, J. D.
2017-12-01
During the 2013 Southern Oxidant and Aerosol Study, Fourier Transform Infrared Spectroscopy (FTIR) and Aerosol Mass Spectrometer (AMS) measurements of submicron mass were collected at Look Rock, Tennessee, and Centreville, Alabama. The low NOx, low wind, little rain, and increased daytime isoprene emissions led to multi-day stagnation events at Look Rock that provided clear evidence of particle-phase sulfate enhancing biogenic secondary organic aerosol (bSOA) by selective uptake. Organic mass (OM) sources were apportioned as 42% "vehicle-related" and 54% bSOA, with the latter including "sulfate-related bSOA" that correlated to sulfate (r = 0.72) and "nitrate-related bSOA" that correlated to nitrate (r = 0.65). Single-particle mass spectra showed three composition types that corresponded to the mass-based factors, with a cosine similarity between spectra of 0.93 and time series correlations of r > 0.4. The vehicle-related OM with m/z 44 was correlated to black carbon, "sulfate-related bSOA" was on particles with high sulfate, and "nitrate-related bSOA" was on all particles. The similarity of the m/z spectra (cosine similarity = 0.97) and the time series correlation (r = 0.80) of the "sulfate-related bSOA" to the sulfate-containing single-particle type provide evidence for particle composition contributing to selective uptake of isoprene oxidation products onto particles that contain sulfate from power plants. Since Look Rock had much less NOx than Centreville, comparing the bSOA at the two sites provides an evaluation of the role of NOx for bSOA. CO and submicron sulfate and OM concentrations were 15-60% higher at Centreville than at Look Rock, but their time series had moderate correlations of r = 0.51, 0.54, and 0.47, respectively. However, NOx had no correlation (r = 0.08) between the two sites. OM correlated with the higher NOx levels at Centreville but with O3 at Look Rock.
Positive Matrix Factorization of the FTIR measurements identified three very similar OM factors at both sites, one of which was biological organic aerosol. The FTIR spectrum for this factor is similar (cosine similarity > 0.6) to that of lab-generated particle mass from both isoprene and monoterpene under high-NOx conditions in chamber experiments, providing verification of the reactions relevant to atmospheric conditions.
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; and 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
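A 3-D DCT over small spatiotemporal blocks is the backbone of such features. A self-contained numpy sketch of a separable orthonormal 3-D DCT (the statistical features built on top of it in the paper are not reproduced here):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix: C[k, i] = sqrt(2/n) * cos(pi * (2i+1) * k / (2n))
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)  # DC row scaling for orthonormality
    return C

def dct3(block):
    # Separable 3-D DCT: apply the 1-D transform along each axis of a cubic block
    C = dct_matrix(block.shape[0])
    return np.einsum('at,bu,cv,tuv->abc', C, C, C, block)
```

For a constant block, all the energy collapses into the single DC coefficient, which is the sanity check usually run on such transforms.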
Digital image transformation and rectification of spacecraft and radar images
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1985-01-01
The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.
An accurate system for onsite calibration of electronic transformers with digital output.
Zhi, Zhang; Li, Hong-Bin
2012-06-01
Calibration systems with digital output are used to replace conventional calibration systems because of the diversity of operating principles and the digital output of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors influencing the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method, using a two-order Hanning convolution window, provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach precision class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.
Generation of optical OFDM signals using 21.4 GS/s real time digital signal processing.
Benlachtar, Yannis; Watts, Philip M; Bouziane, Rachid; Milder, Peter; Rangaraj, Deepak; Cartolano, Anthony; Koutsoyannis, Robert; Hoe, James C; Püschel, Markus; Glick, Madeleine; Killey, Robert I
2009-09-28
We demonstrate a field programmable gate array (FPGA) based optical orthogonal frequency division multiplexing (OFDM) transmitter implementing real-time digital signal processing at a sample rate of 21.4 GS/s. The QPSK-OFDM signal is generated using an 8-bit, 128-point inverse fast Fourier transform (IFFT) core performing one transform per clock cycle at a clock speed of 167.2 MHz, and can be deployed with either a direct-detection or a coherent receiver. The hardware design and the main digital signal processing functions are described, and we show that the main performance limitation is due to the low (4-bit) resolution of the digital-to-analog converter (DAC) and the 8-bit resolution of the IFFT core used. We analyze the back-to-back performance of the transmitter generating an 8.36 Gb/s optical single sideband (SSB) OFDM signal using digital up-conversion, suitable for direct detection. Additionally, we use the device to transmit 8.36 Gb/s SSB OFDM signals over 200 km of uncompensated standard single-mode fiber, achieving an overall BER < 10^-3.
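The core signal-generation step of such a transmitter can be sketched in NumPy. The subcarrier mapping and normalization below are illustrative assumptions, not the actual FPGA design described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128  # IFFT size, matching the 128-point core mentioned above

# Random bits mapped to QPSK symbols on the subcarriers
# (hypothetical allocation; the real transmitter's mapping is not given here)
bits = rng.integers(0, 2, size=(N, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# One OFDM symbol: inverse FFT of the frequency-domain QPSK symbols
time_symbol = np.fft.ifft(qpsk, n=N)

# A receiver-side FFT recovers the subcarrier symbols exactly (back-to-back)
recovered = np.fft.fft(time_symbol, n=N)
```

In hardware, the IFFT output would additionally be quantized to the DAC's 4-bit resolution, which the paper identifies as the main performance limitation.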
Skin image retrieval using Gabor wavelet texture feature.
Ou, X; Pan, W; Zhang, X; Xiao, P
2016-12-01
Skin imaging plays a key role in many clinical studies. We have used many skin imaging techniques, including the recently developed capacitive contact skin imaging based on fingerprint sensors. The aim of this study was to develop an effective skin image retrieval technique using the Gabor wavelet transform, which can be used on different types of skin images, but with a special focus on skin capacitive contact images. Content-based image retrieval (CBIR) is a useful technology to retrieve stored images from a database by supplying query images. In a typical CBIR, images are retrieved based on colour, shape, texture, etc. In this study, texture features are used for retrieving skin images, and the Gabor wavelet transform is used for texture feature description and extraction. The results show that Gabor wavelet texture features can work efficiently on different types of skin images. Although the Gabor wavelet transform is slower compared with other image retrieval techniques, such as principal component analysis (PCA) and the grey-level co-occurrence matrix (GLCM), it is the best for retrieving skin capacitive contact images and facial images with different orientations. The Gabor wavelet transform can also work well on facial images with different expressions and on skin cancer/disease images. We have developed an effective skin image retrieval method based on the Gabor wavelet transform, which is useful for retrieving different types of images, namely digital colour face images, digital colour skin cancer and skin disease images, and particularly greyscale skin capacitive contact images. The Gabor wavelet transform can also be potentially useful for face recognition (with different orientations and expressions) and skin cancer/disease diagnosis. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
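Gabor-texture retrieval of the kind described can be sketched as follows: filter the image with a bank of Gabor kernels at several orientations and wavelengths, take mean/standard-deviation statistics of the responses as the feature vector, and rank database images by feature distance. The kernel sizes, orientations, and wavelengths below are illustrative assumptions, not the paper's actual filter-bank parameters:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a 2-D Gabor filter: cosine carrier under a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), lams=(4, 8)):
    """Mean/std of Gabor filter responses as a texture feature vector."""
    feats = []
    for th in thetas:
        for lam in lams:
            k = gabor_kernel(15, 3.0, th, lam)
            # FFT-based (circular) convolution, adequate for this sketch
            resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)

def rank(query_feat, db_feats):
    """Rank database images by Euclidean distance to the query features."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)
```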
A Robust Zero-Watermarking Algorithm for Audio
NASA Astrophysics Data System (ADS)
Chen, Ning; Zhu, Jie
2007-12-01
In traditional watermarking algorithms, the insertion of the watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. Zero-watermarking techniques can solve these problems successfully. Instead of embedding a watermark, the zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images, and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of the higher-order cumulant are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
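A much-simplified sketch of the feature-extraction idea: low-pass the audio with a DWT approximation (a Haar stage here), compact the energy with a DCT, and binarize low-frequency coefficient signs into a feature-bit string. This omits the higher-order cumulant stage the paper uses, and the two-level Haar decomposition is an assumption:

```python
import numpy as np
from scipy.fft import dct

def haar_approx(x):
    """One-level Haar DWT: approximation (low-pass) coefficients only."""
    x = x[:len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2)

def audio_feature_bits(x, n_bits=64):
    """Zero-watermark feature sketch: two Haar low-pass stages, then a DCT,
    then the signs of low-frequency coefficients as the feature bits."""
    a = haar_approx(haar_approx(np.asarray(x, float)))
    c = dct(a, norm='ortho')
    return (c[1:n_bits + 1] > 0).astype(int)  # skip the DC term
```

At detection time, the bits extracted from a test signal would be compared against the registered feature bits; no watermark is ever embedded in the host audio.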
A visual detection model for DCT coefficient quantization
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1993-01-01
The discrete cosine transform (DCT) is widely used in image compression, and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are determined by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. Our approach is to set the quantization level for each coefficient so that the quantization error is at the threshold of visibility. Here we combine results from our previous work to form our current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial-frequency-related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color.
A Fourier transform with speed improvements for microprocessor applications
NASA Technical Reports Server (NTRS)
Lokerson, D. C.; Rochelle, R.
1980-01-01
A fast Fourier transform algorithm for the RCA 1802 microprocessor was developed for spacecraft instrument applications. The computations were tailored to the restrictions an eight-bit machine imposes. The algorithm incorporates some aspects of Walsh function sequency to improve operational speed. This method uses a register to add a value proportional to the period of the band being processed before each computation is considered. If the result overflows into the DF register, the data sample is used in the computation; otherwise the computation is skipped. This operation is repeated for each of the 64 data samples. This technique is used for both the sine and cosine portions of the computation. The processing uses eight-bit data, but because the many computations can increase the size of the coefficients, floating-point form is used. A method to reduce the aliasing problem in the lower bands is also described.
Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.
Gorthi, Sai Siva; Rastogi, Pramod
2009-08-15
Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. S.; Cai, F.; Xu, W. M.
2011-09-28
The ship motion equation with a cosine wave excitation force describes ship rolling in regular waves. A new wave excitation force model, expressed as a sum of cosine functions, was proposed to describe ship rolling in irregular waves. Ship rolling time series were obtained by solving the ship motion equation with the fourth-order Runge-Kutta method. These rolling time series were analyzed with phase-space tracks, power spectra, principal component analysis, and the largest Lyapunov exponent. Simulation results show that ship rolling presents chaotic characteristics when the wave excitation force is applied as a sum of cosine functions. The result explains the mechanism of chaotic ship rolling and is useful for ship hydrodynamic studies.
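The simulation described above can be sketched numerically: integrate a damped, nonlinear rolling equation driven by a sum of cosines with a hand-written fourth-order Runge-Kutta stepper. The equation form, damping, restoring, and forcing coefficients below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustrative rolling equation (all coefficients hypothetical):
#   phi'' + 2*mu*phi' + w0^2*phi + a*phi^3 = sum_i F_i * cos(w_i*t + e_i)
mu, w0, a = 0.05, 1.0, 0.2
F = np.array([0.3, 0.2, 0.1])   # forcing amplitudes
w = np.array([0.9, 1.1, 1.3])   # forcing frequencies
e = np.zeros(3)                 # forcing phases

def deriv(t, y):
    phi, v = y
    force = np.sum(F * np.cos(w * t + e))
    return np.array([v, force - 2 * mu * v - w0**2 * phi - a * phi**3])

def rk4(y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration of the rolling ODE."""
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, float)
    out = [y.copy()]
    for _ in range(n):
        k1 = deriv(t, y)
        k2 = deriv(t + h / 2, y + h / 2 * k1)
        k3 = deriv(t + h / 2, y + h / 2 * k2)
        k4 = deriv(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append(y.copy())
    return np.array(out)

# Rolling time series (angle, angular velocity) for later spectral analysis
series = rk4([0.1, 0.0], 0.0, 200.0, 20000)
```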
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
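The basic (univariate) form of inverse transform sampling from an empirical distribution can be sketched as follows; the report's multi-variate extension is not reproduced here:

```python
import numpy as np

def inverse_transform_sample(data, n, rng):
    """Draw n samples from the empirical distribution of `data` by
    inverting its empirical CDF: a uniform variate u picks the
    (u * len)-th order statistic of the observations."""
    xs = np.sort(data)
    u = rng.random(n)  # uniforms in [0, 1)
    idx = np.minimum((u * len(xs)).astype(int), len(xs) - 1)
    return xs[idx]

rng = np.random.default_rng(1)
data = rng.normal(5.0, 2.0, size=10000)  # stand-in for empirical observations
samples = inverse_transform_sample(data, 5000, rng)
```

Because every output is an observed value, the sample statistics track the empirical distribution closely, which is the fidelity property the report measures.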
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement attacks than the traditional QIM algorithm.
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Shuhong; Liu, Zhen; Wei, Xizhang
2017-07-01
Localization of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in recently proposed phase-based algorithms. In this paper, by using the centro-symmetry of a fixed uniform circular array (UCA) with an even number of sensors, the source's angles and range can be decoupled, and a novel ambiguity-resolving approach is presented for phase-based algorithms of 3-D source localization (azimuth angle, elevation angle, and range). In the proposed method, by using the cosine property of unambiguous phase differences, ambiguity searching and actual-value matching are first employed to obtain the actual phase differences and the corresponding source angles. Then, the unambiguous angles are utilized to estimate the source's range based on a one-dimensional multiple signal classification (1-D MUSIC) estimator. Finally, simulation experiments investigate the influence of the search step size and SNR on the performance of ambiguity resolution, and demonstrate the satisfactory estimation performance of the proposed method.
Reconstruction-based Digital Dental Occlusion of the Partially Edentulous Dentition
Zhang, Jian; Xia, James J.; Li, Jianfu; Zhou, Xiaobo
2016-01-01
Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for the partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth existing on the left and right dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is performed by digital dental occlusion reestablishment algorithm with the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with the synthetic and real partially edentulous models. PMID:26584502
Ultrafast Dephasing and Incoherent Light Photon Echoes in Organic Amorphous Systems
NASA Astrophysics Data System (ADS)
Yano, Ryuzi; Matsumoto, Yoshinori; Tani, Toshiro; Nakatsuka, Hiroki
1989-10-01
Incoherent light photon echoes were observed in organic amorphous systems (cresyl violet in polyvinyl alcohol and 1,4-dihydroxyanthraquinone in polymethacrylic acid) by using temporally-incoherent nanosecond laser pulses. It was found that an echo decay curve of an organic amorphous system is composed of a sharp peak which decays very rapidly and a slowly decaying wing at the tail. We show that the persistent hole burning (PHB) spectra were reproduced by the Fourier-cosine transforms of the echo decay curves. We claim that in general, we must take into account the multi-level feature of the system in order to explain ultrafast dephasing at very low temperatures.
NASA Astrophysics Data System (ADS)
Kuai, Xiao-yan; Sun, Hai-xin; Qi, Jie; Cheng, En; Xu, Xiao-ka; Guo, Yu-hui; Chen, You-gan
2014-06-01
In this paper, we investigate the performance of an adaptive modulation (AM) orthogonal frequency division multiplexing (OFDM) system in underwater acoustic (UWA) communications. The aim is to reduce the large feedback overhead of channel state information (CSI) for every subcarrier. A novel CSI feedback scheme is proposed based on the theory of compressed sensing (CS), in which the receiver feeds back only the sparse channel parameters. Additionally, the channel state is predicted every few symbols to realize AM in practice; we describe a linear channel prediction algorithm used in adaptive transmission. This system has been tested in a real underwater acoustic channel. The linear channel prediction makes AM transmission techniques more feasible for acoustic channel communications. The simulation and experiment show that significant improvements can be obtained in both bit error rate (BER) and throughput with the AM scheme compared with the fixed Quadrature Phase Shift Keying (QPSK) modulation scheme. Moreover, the performance with standard CS outperforms the Discrete Cosine Transform (DCT) method.
Digital Storytelling for Transformative Global Citizenship Education
ERIC Educational Resources Information Center
Truong-White, Hoa; McLean, Lorna
2015-01-01
This article explores how digital storytelling offers the potential to support transformative global citizenship education (TGCE) through a case study of the Bridges to Understanding program that connected middle and high school students globally using digital storytelling. Drawing on a TGCE framework, this research project probed the curriculum…
Atmospheric Science Data Center
2014-09-25
Solar Noon (GMT time): the time when the sun is due south in the northern hemisphere or due north in the southern hemisphere. Cosine solar zenith angle: the average cosine of the angle between the sun and directly overhead during daylight hours.
Noncoherent parallel optical processor for discrete two-dimensional linear transformations.
Glaser, I
1980-10-01
We describe a parallel optical processor, based on a lenslet array, that provides general linear two-dimensional transformations using noncoherent light. Such a processor could become useful in image- and signal-processing applications in which the throughput requirements cannot be adequately satisfied by state-of-the-art digital processors. Experimental results that illustrate the feasibility of the processor by demonstrating its use in parallel optical computation of the two-dimensional Walsh-Hadamard transformation are presented.
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
Digital Learning in Schools: Conceptualizing the Challenges and Influences on Teacher Practice
ERIC Educational Resources Information Center
Blundell, Christopher; Lee, Kar-Tin; Nykvist, Shaun
2016-01-01
Digital technologies are an important requirement for curriculum expectations, including general ICT capability and STEM education. These technologies are also positioned as mechanisms for educational reform via transformation of teacher practice. It seems, however, that wide-scale transformation of teacher practice and digital learning remain…
An effective detection algorithm for region duplication forgery in digital images
NASA Astrophysics Data System (ADS)
Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin
2016-04-01
Powerful image editing tools are very common and easy to use these days, which makes it easy to forge digital images by adding or removing content. In order to detect forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are sorted lexicographically, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
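The block-matching pipeline described above (fixed-size blocks, a small feature vector per block, lexicographic sorting, neighbour comparison) can be sketched as follows. The quadrant-mean features below are a deliberate simplification standing in for the paper's DWT/Fourier circle-region features:

```python
import numpy as np

def detect_duplicates(img, bs=8, tol=1e-6):
    """Sketch of region-duplication detection: extract a 4-element feature
    per fixed-size block, sort features lexicographically, and report
    adjacent (near-)identical feature vectors as duplicate block pairs."""
    h, w = img.shape
    feats, coords = [], []
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            b = img[i:i + bs, j:j + bs].astype(float)
            # four quadrant means as a crude block signature (illustrative)
            q = [b[:bs//2, :bs//2].mean(), b[:bs//2, bs//2:].mean(),
                 b[bs//2:, :bs//2].mean(), b[bs//2:, bs//2:].mean()]
            feats.append(q)
            coords.append((i, j))
    feats, coords = np.array(feats), np.array(coords)
    order = np.lexsort(feats.T[::-1])        # lexicographic sort of features
    pairs = []
    for a, b in zip(order[:-1], order[1:]):  # duplicates end up adjacent
        if np.allclose(feats[a], feats[b], atol=tol):
            pairs.append((tuple(coords[a]), tuple(coords[b])))
    return pairs
```

Sorting makes the search near-linear in the number of blocks instead of comparing all block pairs, which is the efficiency argument behind this family of methods.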
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation
Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm. PMID:28886048
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm.
An accurate surface topography restoration algorithm for white light interferometry
NASA Astrophysics Data System (ADS)
Yuan, He; Zhang, Xiangchao; Xu, Min
2017-10-01
As an important measuring technique, white light interferometry enables fast, non-contact measurement and is now widely used in the field of ultra-precision engineering. However, the traditional algorithms for recovering surface topographies have drawbacks and limitations. In this paper, we propose a new algorithm to solve these problems: a combination of the Fourier transform and an improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with scanning position. The interference signal is first processed by the Fourier transform; then the positive-frequency part is selected and moved back to the center of the amplitude-frequency curve. In order to restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after the inverse Fourier transform and obtain the corresponding topography information. The new method is then compared with the traditional algorithms. It is shown that the aforementioned drawbacks can be effectively overcome; the relative error is less than 0.8%.
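The Fourier-domain envelope-recovery step described above can be sketched on a synthetic correlogram: keep only the positive-frequency lobe, shift it to baseband, and take the magnitude of the inverse transform as the envelope, whose peak marks the surface height. The signal parameters below are illustrative, and the paper's polynomial-fitting refinement is omitted:

```python
import numpy as np

# Synthetic white-light correlogram: Gaussian-modulated cosine fringe signal
z = np.linspace(-10, 10, 2048)        # scan position (illustrative units)
z0, lam, sig = 1.5, 0.6, 2.0          # true envelope peak, fringe period scale, width
sig_i = np.exp(-((z - z0) / sig)**2) * np.cos(4 * np.pi * (z - z0) / lam)

# Fourier-domain processing: select the positive-frequency part and move it
# back toward the center of the spectrum, as described in the abstract
S = np.fft.fft(sig_i)
f = np.fft.fftfreq(z.size, d=z[1] - z[0])
S[f < 0] = 0                          # keep only positive frequencies
S = np.roll(S, -np.argmax(np.abs(S))) # shift the carrier lobe to baseband

# Magnitude of the inverse transform approximates the fringe envelope
env = 2 * np.abs(np.fft.ifft(S))
z_peak = z[np.argmax(env)]            # envelope peak ~ surface height
```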
Blind technique using blocking artifacts and entropy of histograms for image tampering detection
NASA Astrophysics Data System (ADS)
Manu, V. T.; Mehtre, B. M.
2017-06-01
The tremendous technological advancements in recent times has enabled people to create, edit and circulate images easily than ever before. As a result of this, ensuring the integrity and authenticity of the images has become challenging. Malicious editing of images to deceive the viewer is referred to as image tampering. A widely used image tampering technique is image splicing or compositing, in which regions from different images are copied and pasted. In this paper, we propose a tamper detection method utilizing the blocking and blur artifacts which are the footprints of splicing. The classification of images as tampered or not, is done based on the standard deviations of the entropy histograms and block discrete cosine transformations. We can detect the exact boundaries of the tampered area in the image, if the image is classified as tampered. Experimental results on publicly available image tampering datasets show that the proposed method outperforms the existing methods in terms of accuracy.
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to the advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that totally generates all the random entries of measurement matrix, our scheme owns efficiency superiority while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has a good security performance and has robustness against noise and occlusion.
A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.
Mohades Deylami, Ali; Mohammadzadeh Asl, Babak
2017-06-01
The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions than in the standard BS. This is because the beams in the proposed BS have symmetric beampatterns, in contrast to the asymmetric ones in the standard BS. This allows the dimension of the data to be reduced to two, so that a highly complex algorithm such as the MVB can be applied faster in this BS. The results indicate that, keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB), with only 0.44% of the required flops. This beamformer is also more robust against sound-speed estimation errors than the SMVB.
Zuo, Chao; Chen, Qian; Li, Hongru; Qu, Weijuan; Asundi, Anand
2014-07-28
Boundary conditions play a crucial role in the solution of the transport of intensity equation (TIE). If not appropriately handled, they can create significant boundary artifacts across the reconstruction result. In a previous paper [Opt. Express 22, 9220 (2014)], we presented a new boundary-artifact-free TIE phase retrieval method using the discrete cosine transform (DCT). Here we report its experimental investigation, with applications to micro-optics characterization. The experimental setup is based on a tunable-lens-based 4f system attached to a non-modified inverted bright-field microscope. We establish inhomogeneous Neumann boundary values by placing a rectangular aperture in the intermediate image plane of the microscope. The boundary values are then applied to solve the TIE with our DCT-based TIE solver. Experimental results on microlenses highlight the importance of boundary conditions that are often overlooked in simplified models, and confirm that our approach effectively avoids boundary errors even when objects are located at the image borders. It is further demonstrated that our technique is non-interferometric, accurate, fast, full-field, and flexible, rendering it a promising metrological tool for micro-optics inspection.
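At its core, a DCT-based TIE solver reduces to solving a Poisson-type equation in the DCT domain, where the cosine basis naturally encodes Neumann boundary conditions. A minimal sketch of that core step, simplified to homogeneous Neumann boundaries (the paper's method handles the inhomogeneous case via the aperture-defined boundary values):

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_neumann_dct(g, dx=1.0):
    """Solve laplacian(phi) = g with homogeneous Neumann boundary conditions
    by dividing DCT-II coefficients by the discrete Laplacian eigenvalues."""
    M, N = g.shape
    G = dctn(g, type=2, norm='ortho')
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    # eigenvalues of the 5-point Laplacian under reflective (Neumann) boundaries
    lam = (2 * np.cos(np.pi * m / M) - 2 + 2 * np.cos(np.pi * n / N) - 2) / dx**2
    lam[0, 0] = 1.0          # zero mode: phi is only defined up to a constant
    Phi = G / lam
    Phi[0, 0] = 0.0          # fix the free constant to zero
    return idctn(Phi, type=2, norm='ortho')
```

In a full TIE solver, `g` would be built from the measured axial intensity derivative, and this solve would be applied (twice, with intermediate gradients) to recover the phase.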
Orczyk, C; Rusinek, H; Rosenkrantz, A B; Mikheev, A; Deng, F-M; Melamed, J; Taneja, S S
2013-12-01
To assess a novel method of three-dimensional (3D) co-registration of prostate cancer digital histology and in-vivo multiparametric magnetic resonance imaging (mpMRI) image sets for clinical usefulness. A software platform was developed to achieve 3D co-registration, and was prospectively applied to three patients who underwent radical prostatectomy. Data comprised in-vivo mpMRI [T2-weighted, dynamic contrast-enhanced weighted images (DCE); apparent diffusion coefficient (ADC)], ex-vivo T2-weighted imaging, the 3D-rebuilt pathological specimen, and digital histology. Internal landmarks from zonal anatomy served as reference points for assessing co-registration accuracy and precision. Applying a method of deformable transformation based on 22 internal landmarks, a 1.6 mm accuracy was reached in aligning T2-weighted images and the 3D-rebuilt pathological specimen, an improvement of 32% over rigid transformation (p = 0.003). The 22 zonal anatomy landmarks were more accurately mapped using deformable transformation than rigid transformation (p = 0.0008). An automatic method based on mutual information enabled automation of the process and the inclusion of perfusion and diffusion MRI images. Evaluation of co-registration accuracy using the volume overlap index (Dice index) met clinically relevant requirements, ranging from 0.81 to 0.96 for the sequences tested. Ex-vivo images of the specimen did not significantly improve co-registration accuracy. This preliminary analysis suggests that deformable transformation based on zonal anatomy landmarks is accurate in the co-registration of mpMRI and histology. Including diffusion and perfusion sequences in the same 3D space as histology provides essential further clinical information. The ability to localize cancer in 3D space may improve targeting for image-guided biopsy, focal therapy, and disease quantification in surveillance protocols. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Time-Frequency-Wavenumber Analysis of Surface Waves Using the Continuous Wavelet Transform
NASA Astrophysics Data System (ADS)
Poggi, V.; Fäh, D.; Giardini, D.
2013-03-01
A modified approach to surface wave dispersion analysis using active sources is proposed. The method is based on continuous recordings and uses the continuous wavelet transform to analyze the phase velocity dispersion of surface waves. This makes it possible to accurately localize the phase information in time and to isolate the most significant contribution of the surface waves. To extract the dispersion information, a hybrid technique is then applied to the narrowband-filtered seismic recordings. The technique combines the flexibility of the slant stack method in identifying waves that propagate in space and time with the resolution of f-k approaches. This is particularly beneficial for higher-mode identification in cases of high noise levels. To process the continuous wavelet transform, a new mother wavelet is presented and compared to the classical and widely used Morlet type. The proposed wavelet is obtained from a raised-cosine envelope function (Hanning type). The proposed approach is particularly suitable when using continuous recordings (e.g., from seismological-like equipment) since it does not require any hardware-based source triggering, which can be done subsequently with the proposed method. Estimation of the surface wave phase delay is performed in the frequency domain by means of a covariance matrix averaging procedure over successive wave field excitations. Thus, no record stacking is necessary in the time domain and a large number of consecutive shots can be used. This leads to a certain simplification of the field procedures. To demonstrate the effectiveness of the method, we tested it on synthetics as well as on real field data. For the real case we also combine dispersion curves from ambient vibrations and active measurements.
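A raised-cosine (Hanning) envelope mother wavelet of the kind described can be sketched as follows; the carrier frequency `omega0` and the support length are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def hanning_wavelet(n=512, omega0=6.0):
    """Complex mother wavelet with a raised-cosine (Hanning) envelope.
    omega0 and the support are illustrative choices."""
    t = np.linspace(-np.pi, np.pi, n)
    envelope = 0.5 * (1.0 + np.cos(t))           # Hanning window on [-pi, pi]
    psi = envelope * np.exp(1j * omega0 * t)     # complex carrier at omega0
    psi -= psi.mean()                            # enforce (near) zero mean
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))     # unit energy
    return t, psi

def morlet_wavelet(n=512, omega0=6.0):
    """Classical Morlet wavelet, for comparison with the proposed type."""
    t = np.linspace(-np.pi, np.pi, n)
    psi = np.exp(1j * omega0 * t) * np.exp(-t ** 2 / 2.0)
    psi -= psi.mean()
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2))
    return t, psi

t, psi_h = hanning_wavelet()
_, psi_m = morlet_wavelet()
```

Both wavelets are normalized the same way, so their time-frequency localization can be compared directly by convolving scaled copies against a recording.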
Digital tool for detecting diabetic retinopathy in retinography image using gabor transform
NASA Astrophysics Data System (ADS)
Morales, Y.; Nuñez, R.; Suarez, J.; Torres, C.
2017-01-01
Diabetic retinopathy is a chronic disease and a leading cause of blindness in the population. The fundamental problem is that diabetic retinopathy is usually asymptomatic in its early stage and becomes incurable in advanced stages, hence the importance of early detection. To detect diabetic retinopathy, the ophthalmologist examines the fundus by ophthalmoscopy and then sends the patient for retinography. Sometimes these retinographies are not of good quality. This paper shows the implementation of a digital tool that helps the ophthalmologist provide a better diagnosis for patients suffering from diabetic retinopathy, indicating which type of retinopathy is present and its degree of severity. The tool implements an algorithm in Matlab based on the Gabor transform and on the application of digital filters to provide retinographies of better and higher quality. The performance of the algorithm has been compared with conventional methods, and the resulting filtered images show better contrast and higher quality.
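A minimal sketch of the Gabor filtering that such a tool relies on; the kernel size, frequency, orientation, and test gratings below are illustrative, and the paper's Matlab pipeline is not reproduced:

```python
import numpy as np

def gabor_kernel(size=31, f=0.125, theta=0.0, sigma=4.0):
    """2-D Gabor kernel: a Gaussian envelope modulating a plane wave.
    All parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    kern = env * np.cos(2.0 * np.pi * f * xr)
    return kern - kern.mean()                    # zero mean: flat regions give no response

def filter_image(img, kern):
    """Circular convolution via the FFT (adequate for a demonstration)."""
    K = np.fft.fft2(kern, img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

# A grating matching the kernel's frequency and orientation responds strongly;
# the orthogonal grating does not.
x = np.arange(128)
grating_x = np.cos(2 * np.pi * 0.125 * x)[None, :] * np.ones((128, 1))
grating_y = grating_x.T
kern = gabor_kernel()
resp_aligned = np.sum(filter_image(grating_x, kern) ** 2)
resp_orthogonal = np.sum(filter_image(grating_y, kern) ** 2)
```

The orientation selectivity shown here is what makes Gabor filter banks useful for enhancing vessel-like structures in retinal images.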
On the Symmetry of Molecular Flows Through the Pipe of an Arbitrary Shape (I) Diffusive Reflection
NASA Astrophysics Data System (ADS)
Kusumoto, Yoshiro
Molecular gas flow through a pipe of arbitrary shape is mathematically considered based on a diffusive reflection model. To avoid a perpetual motion, the magnitude of the molecular flow rate must remain invariant under the exchange of inlet and outlet pressures. For this flow symmetry, the cosine-law reflection at the pipe wall was found to be necessary and sufficient, on the assumption that the molecular flux is conserved in a collision with the wall. It was also shown that a spontaneous flow occurs in a hemispherical apparatus if the reflection obeys the n-th power of the cosine law with n other than unity. This apparatus could work as a molecular pump with no moving parts.
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information about the target from multiple angles. This paper presents a generalized and precise autostereoscopic three-dimensional (3D) digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method is highly efficient in performing direct full 3D digital model construction based on a tomography-like operation upon every depth plane, with the exclusion of defocused information. With the focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
Weighted optimization of irradiance for photodynamic therapy of port wine stains
NASA Astrophysics Data System (ADS)
He, Linhuan; Zhou, Ya; Hu, Xiaoming
2016-10-01
Planning of irradiance distribution (PID) is one of the foremost factors in on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. Firstly, the point clouds of the lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Secondly, the parameters of each triangular patch needed for optimization, such as area, normal vector and orthocenter, were calculated, and the weighted coefficients were determined by the erythema indexes and areas of the patches. Then, the optimization initial point was calculated based on the normal vectors and orthocenters to optimize the light direction. Finally, the irradiation was optimized according to the cosine values of the irradiance angles and the weighted coefficients. Comparing the irradiance distribution before and after optimization, the proposed weighted optimization method makes the irradiance distribution match the characteristics of the lesions better, and it has the potential to improve therapeutic efficacy.
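The per-patch cosine weighting can be illustrated numerically; the normals, areas and erythema indexes below are made-up values, not measurements from the instrument described above:

```python
import numpy as np

# Illustrative triangular-patch data (not from the described instrument).
normals = np.array([[0.0, 0.0, 1.0],       # patch facing the light source
                    [0.0, 0.707, 0.707],   # patch tilted 45 degrees
                    [0.0, 1.0, 0.0]])      # patch edge-on to the source
areas = np.array([1.0, 1.2, 0.8])          # patch areas (arbitrary units)
erythema = np.array([0.9, 0.5, 0.7])       # erythema index per patch

light_dir = np.array([0.0, 0.0, 1.0])      # unit vector toward the source

# Cosine of the irradiance angle per patch, clamped for back-facing patches.
cosines = np.clip(normals @ light_dir, 0.0, None)
weights = erythema * areas                 # weighting by severity and area
score = np.sum(weights * cosines)          # weighted irradiance figure of merit
```

Optimizing the light direction then amounts to maximizing this weighted score over candidate source positions.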
Bhattacharyya, Parthasarathi; Mondal, Ashok; Dey, Rana; Saha, Dipanjan; Saha, Goutam
2015-05-01
Auscultation is an important part of the clinical examination of different lung diseases. Objective analysis of lung sounds based on underlying characteristics, and its subsequent automatic interpretation, may help clinical practice. We collected breath sounds from 8 normal subjects and 20 diffuse parenchymal lung disease (DPLD) patients using a newly developed instrument and then filtered off the heart sounds using a novel technology. The collected sounds were thereafter analysed digitally for several characteristics, such as dynamical complexity, texture information and regularity index, to find and define unique digital signatures differentiating normality and abnormality. For convenience of testing, these characteristic signatures of normal and DPLD lung sounds were transformed into coloured visual representations. The predictive power of these images has been validated by six independent observers, including three physicians. The proposed method gives a classification accuracy of 100% for composite features for both normal lung sound signals and those from DPLD patients. When tested by independent observers on the visually transformed images, the positive predictive value for diagnosing normality and DPLD remained 100%. The lung sounds from the normal and DPLD subjects could be differentiated and expressed according to their digital signatures. On visual transformation to coloured images, they retain 100% predictive power. This technique may assist physicians in diagnosing DPLD from visual images bearing the digital signature of the condition.
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
Electro-mechanical sine/cosine generator
NASA Technical Reports Server (NTRS)
Flagge, B. (Inventor)
1972-01-01
An electromechanical device for generating both sine and cosine functions is described. A motor rotates a cylinder about an axis parallel to, and a slight distance from, the central axis of the cylinder. Two noncontacting displacement-sensing devices are placed ninety degrees apart, at equal distances from the axis of rotation of the cylinder and short distances above the surface of the cylinder. Each of these sensing devices produces an electrical signal proportional to its distance from the cylinder. Consequently, as the cylinder is rotated, the outputs of the two sensing devices are the sine and cosine functions.
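The geometry can be sketched numerically: with eccentricity `e` between the rotation axis and the cylinder axis, each sensor's gap varies as the cosine of the rotation angle relative to that sensor's position. The gap `g0` and eccentricity are illustrative values:

```python
import numpy as np

g0 = 1.0                       # nominal sensor-to-surface gap (illustrative)
e = 0.1                        # axis eccentricity, e << g0 (illustrative)
theta = np.linspace(0.0, 2.0 * np.pi, 1000)   # rotation angle

gap_a = g0 - e * np.cos(theta)                # sensor at 0 degrees
gap_b = g0 - e * np.cos(theta - np.pi / 2)    # sensor at 90 degrees

# Removing the constant offset leaves signals proportional to cos and sin.
sig_a = (g0 - gap_a) / e       # equals cos(theta)
sig_b = (g0 - gap_b) / e       # equals sin(theta)
```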
Cosine-Gauss plasmon beam: a localized long-range nondiffracting surface wave.
Lin, Jiao; Dellinger, Jean; Genevet, Patrice; Cluzel, Benoit; de Fornel, Frederique; Capasso, Federico
2012-08-31
A new surface wave is introduced, the cosine-Gauss beam, which does not diffract while propagating in a straight line, remaining tightly bound to the metallic surface for distances up to 80 μm. The generation of this highly localized wave is shown to be straightforward and highly controllable, with varying degrees of transverse confinement and directionality, by fabricating a plasmon launcher consisting of intersecting metallic gratings. Cosine-Gauss beams have potential for applications in plasmonics, notably for efficient coupling to nanophotonic devices, opening up new design possibilities for next-generation optical interconnects.
Significance of clustering and classification applications in digital and physical libraries
NASA Astrophysics Data System (ADS)
Triantafyllou, Ioannis; Koulouris, Alexandros; Zervos, Spiros; Dendrinos, Markos; Giannakopoulos, Georgios
2015-02-01
Applications of clustering and classification techniques can prove very significant in both digital and physical (paper-based) libraries. The most essential application, document classification and clustering, is crucial for the content that is produced and maintained in digital libraries, repositories, databases, social media, blogs etc., based on various tags and ontology elements, transcending the traditional library-oriented classification schemes. Other applications with a very useful and beneficial role in the new digital library environment involve document routing, summarization and query expansion. Paper-based libraries can benefit as well, since classification combined with advanced material characterization techniques such as FTIR (Fourier Transform InfraRed spectroscopy) can be vital for the study and prevention of material deterioration. An improved two-level self-organizing clustering architecture is proposed in order to enhance the discrimination capacity of the learning space prior to classification, yielding promising results when applied to the above-mentioned library tasks.
Apparatus for direct-to-digital spatially-heterodyned holography
Thomas, Clarence E.; Hanson, Gregory R.
2006-12-12
An apparatus operable to record a spatially low-frequency heterodyne hologram including spatially heterodyne fringes for Fourier analysis includes: a laser; a beamsplitter optically coupled to the laser; an object optically coupled to the beamsplitter; a focusing lens optically coupled to both the beamsplitter and the object; a digital recorder optically coupled to the focusing lens; and a computer that performs a Fourier transform, applies a digital filter, and performs an inverse Fourier transform. A reference beam and an object beam are focused by the focusing lens at a focal plane of the digital recorder to form a spatially low-frequency heterodyne hologram including spatially heterodyne fringes for Fourier analysis which is recorded by the digital recorder, and the computer transforms the recorded spatially low-frequency heterodyne hologram including spatially heterodyne fringes and shifts axes in Fourier space to sit on top of a heterodyne carrier frequency defined by an angle between the reference beam and the object beam and cuts off signals around an original origin before performing the inverse Fourier transform.
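A one-dimensional sketch of the Fourier-space processing the apparatus performs: transform the fringe pattern, window the lobe at the heterodyne carrier, shift the axes onto the carrier, and inverse-transform to recover the object phase. The carrier frequency, window width and object phase below are illustrative, not the patent's optical parameters:

```python
import numpy as np

n = 1024
x = np.arange(n)
carrier = 128                                   # fringes per record (integer bin)
phase = 0.3 * np.sin(2 * np.pi * 3 * x / n)     # slowly varying object phase

# Spatially heterodyne "hologram": fringes modulated by the object phase.
hologram = 1.0 + np.cos(2 * np.pi * carrier * x / n + phase)

spectrum = np.fft.fft(hologram)
mask = np.zeros(n)
mask[carrier - 32:carrier + 32] = 1.0           # keep only the +carrier lobe
shifted = np.roll(spectrum * mask, -carrier)    # shift axes onto the carrier
recovered = np.angle(np.fft.ifft(shifted))      # phase of 0.5 * exp(i * phase)
```

Because the DC term and the conjugate lobe are excluded by the window, the inverse transform isolates the complex object wave, whose argument is the recovered phase.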
Coupling Damage-Sensing Particles to the Digital Twin Concept
NASA Technical Reports Server (NTRS)
Hochhalter, Jacob; Leser, William P.; Newman, John A.; Gupta, Vipul K.; Yamakov, Vesselin; Cornell, Stephen R.; Willard, Scott A.; Heber, Gerd
2014-01-01
The research presented herein is a first step toward integrating two emerging structural health management paradigms: digital twin and sensory materials. Digital twin is an emerging life management and certification paradigm whereby models and simulations consist of as-built vehicle state, as-experienced loads and environments, and other vehicle-specific history to enable high-fidelity modeling of individual aerospace vehicles throughout their service lives. The digital twin concept spans many disciplines, and an extensive study on the full domain is out of the scope of this study. Therefore, as it pertains to the digital twin, this research focused on one major concept: modeling specifically the as-manufactured geometry of a component and its microstructure (to the degree possible). The second aspect of this research was to develop the concept of sensory materials such that they can be employed within the digital twin framework. Sensory materials are shape-memory alloys that undergo an audible phase transformation while experiencing sufficient strain. Upon embedding sensory materials with a structural alloy, this audible transformation helps improve the reliability of crack detection especially at the early stages of crack growth. By combining these two early-stage technologies, an automated approach to evidence-based inspection and maintenance of aerospace vehicles is sought.
Kusakawa, Shinji; Yasuda, Satoshi; Kuroda, Takuya; Kawamata, Shin; Sato, Yoji
2015-12-08
Contamination with tumorigenic cellular impurities is one of the most pressing concerns for human cell-processed therapeutic products (hCTPs). The soft agar colony formation (SACF) assay, which is a well-known in vitro assay for the detection of malignant transformed cells, is applicable for the quality assessment of hCTPs. Here we established an image-based screening system for the SACF assay using a high-content cell analyzer termed the digital SACF assay. Dual fluorescence staining of formed colonies and the dissolution of soft agar led to accurate detection of transformed cells with the imaging cytometer. Partitioning a cell sample into multiple wells of culture plates enabled digital readout of the presence of colonies and elevated the sensitivity for their detection. In practice, the digital SACF assay detected impurity levels as low as 0.00001% of the hCTPs, i.e. only one HeLa cell contained in 10,000,000 human mesenchymal stem cells, within 30 days. The digital SACF assay saves time, is more sensitive than in vivo tumorigenicity tests, and would be useful for the quality control of hCTPs in the manufacturing process.
A digitally implemented preambleless demodulator for maritime and mobile data communications
NASA Astrophysics Data System (ADS)
Chalmers, Harvey; Shenoy, Ajit; Verahrami, Farhad B.
The hardware design and software algorithms for a low-bit-rate, low-cost, all-digital preambleless demodulator are described. The demodulator operates under severe high-noise conditions, fast Doppler frequency shifts, large frequency offsets, and multipath fading. Sophisticated algorithms, including a fast Fourier transform (FFT)-based burst acquisition algorithm, a cycle-slip resistant carrier phase tracker, an innovative Doppler tracker, and a fast acquisition symbol synchronizer, were developed and extensively simulated for reliable burst reception. The compact digital signal processor (DSP)-based demodulator hardware uses a unique personal computer test interface for downloading test data files. The demodulator test results demonstrate a near-ideal performance within 0.2 dB of theory.
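The FFT-based burst acquisition step can be sketched as a coarse carrier-offset search over the burst's spectrum; the sample rate, offset, and noise level are illustrative assumptions, not the demodulator's actual parameters:

```python
import numpy as np

def estimate_offset(burst, fs):
    """Coarse carrier-offset estimate: peak of the burst's FFT magnitude.
    A simplified sketch, not the described demodulator algorithm."""
    spectrum = np.fft.fft(burst)
    freqs = np.fft.fftfreq(len(burst), d=1.0 / fs)
    return freqs[np.argmax(np.abs(spectrum))]

rng = np.random.default_rng(0)
fs = 8000.0                    # sample rate in Hz (illustrative)
f_offset = 412.0               # true carrier offset to recover
n = np.arange(1024)
burst = np.exp(2j * np.pi * f_offset * n / fs)
burst += 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))

f_hat = estimate_offset(burst, fs)
```

The estimate is quantized to the FFT bin spacing fs/N; a real acquisition chain would refine it with a phase tracker, as the abstract describes.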
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity comparable to F5 for certain cover images.
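A much-simplified sketch of embedding one bit in a quantized DCT coefficient: the uniform quantization step, coefficient position, and LSB rule below are illustrative; the article's switching technique and coefficient reordering are not reproduced here:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix for an n x n block."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

D = dct_matrix()
rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)   # hypothetical pixel block
q = 16.0                                             # single quantization step (illustrative)

coeffs = np.round(D @ block @ D.T / q).astype(int)   # quantized DCT coefficients

bit = 1
r, c = 2, 3                                # an AC coefficient position (illustrative)
coeffs[r, c] = (coeffs[r, c] & ~1) | bit   # embed the bit in the LSB

recovered = coeffs[r, c] & 1               # extraction is just reading the LSB
```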
Digital Doctorates? An Exploratory Study of PhD Candidates' Use of Online Tools
ERIC Educational Resources Information Center
Dowling, Robyn; Wilson, Michael
2017-01-01
Online environments are transforming learning, including doctoral education. Yet the ways in which the PhD experience is shaped and transformed through these digital modes of engagement is seldom addressed, and not systematically understood. In this article, we explore PhD students' perceptions and use of digital tools. Drawing on the results of…
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
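A single Haar DWT level with perfect reconstruction, as a minimal stand-in for the multi-level DWT stage above; the Haar filters and single decomposition level are illustrative assumptions, since the paper's exact wavelet is not specified here:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT, returning (LL, (LH, HL, HH))."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (the "DC" band)
    hl = (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(2)
img = rng.random((16, 16))
ll, bands = haar_dwt2(img)
rec = haar_idwt2(ll, bands)
```

In the scheme described above, a DCT would then be applied to the low-frequency (LL) band before quantization and coding, while the detail bands go to the Minimize-Matrix-Size step.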
Compensation based on linearized analysis for a six degree of freedom motion simulator
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Martin, D. J., Jr.; Copeland, J. L.
1973-01-01
The inertial response characteristics of a synergistic, six-degree-of-freedom motion base are presented in terms of amplitude ratio and phase lag as functions of frequency data for the frequency range of interest (0 to 2 Hz) in real time, digital, flight simulators. The notch filters which smooth the digital-drive signals to continuous drive signals are presented, and appropriate compensation, based on the inertial response data, is suggested. The existence of an inverse transformation that converts actuator extensions into inertial positions makes it possible to gather the response data in the inertial axis system.
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data such as 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice at a fixed position in the third dimension, the position in the previous slice from which it has moved. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, whole-image transforms like wavelet decomposition, as well as intra-slice prediction methods, can particularly benefit from this approach. We show that with the discrete cosine transform as well as with the Karhunen-Loève transform, the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
Long-life electromechanical sine-cosine generator
NASA Technical Reports Server (NTRS)
Flagge, B.
1971-01-01
A sine-cosine generator with no sliding parts is capable of withstanding a 20 Hz oscillation for more than 14 hours. Tests show that the generator is electrically equivalent to a potentiometer and that it has excellent dynamic characteristics. The generator shows promise for higher-speed applications than were previously possible.
Detecting Disease in Radiographs with Intuitive Confidence
2015-01-01
This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays. PMID:26495433
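The sine/cosine confidence can be sketched as follows; the linear mapping from a classifier score in [0, 1] to an angle in [0°, 90°] is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

def confidence(score):
    """Map a classifier score in [0, 1] to an angle in [0, 90] degrees and
    return the (sine, cosine) confidence pair. At 45 degrees sine equals
    cosine: the equilibrium ("Yin equals Yang") point described above."""
    angle = np.deg2rad(90.0 * score)
    return np.sin(angle), np.cos(angle)

s_eq, c_eq = confidence(0.5)   # 45 degrees: equilibrium, neither class favored
s_ab, c_ab = confidence(0.9)   # leaning strongly toward "abnormal"
```

Because the pair lies on the unit circle, the two components always satisfy sin² + cos² = 1, so the confidence trade-off is explicit.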
Aguilar, Juan C; Misawa, Masaki; Matsuda, Kiyofumi; Suzuki, Yoshio; Takeuchi, Akihisa; Yasumoto, Masato
2018-05-01
In this work, the application of an undecimated wavelet transformation together with digital interferometric contrast to improve the resulting reconstructions in a digital hard X-ray Gabor holographic microscope is shown. Specifically, the starlet transform is used together with digital Zernike contrast. With this contrast, the results show that only a small set of scales from the hologram are, in effect, useful, and it is possible to enhance the details of the reconstruction.
Topology-Preserving Rigid Transformation of 2D Digital Images.
Ngo, Phuc; Passat, Nicolas; Kenmochi, Yukiko; Talbot, Hugues
2014-02-01
We provide conditions under which 2D digital images preserve their topological properties under rigid transformations. We consider the two most common digital topology models, namely dual adjacency and well-composedness. This paper leads to the proposal of optimal preprocessing strategies that ensure the topological invariance of images under arbitrary rigid transformations. These results and methods are proved to be valid for various kinds of images (binary, gray-level, label), thus providing generic and efficient tools, which can be used in particular in the context of image registration and warping.
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
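The one-dimensional periodic analogue of these sampling functions is the Dirichlet kernel, which reconstructs a trigonometric polynomial of degree N exactly from 2N + 1 equally spaced samples; the test signal below is an illustrative choice:

```python
import numpy as np

n_max = 4                               # highest harmonic in the signal
m = 2 * n_max + 1                       # number of samples needed
t_k = 2.0 * np.pi * np.arange(m) / m    # equally spaced sample points

def f(t):
    """An illustrative trigonometric polynomial of degree <= n_max."""
    return 1.0 + np.cos(2 * t) - 0.5 * np.sin(4 * t)

def dirichlet(t):
    """Periodic sampling kernel: 1 at t = 0, 0 at the other sample points."""
    return np.sin(m * t / 2.0) / (m * np.sin(t / 2.0))

# Reconstruct f at off-grid points from its m samples alone.
t_eval = t_k + 0.37                     # strictly between the sample points
recon = sum(f(tk) * dirichlet(t_eval - tk) for tk in t_k)
```

The spherical sampling functions of the abstract play the same role on the sphere that this kernel plays on the circle, with spherical harmonics in place of sines and cosines.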
SAR data compression: Application, requirements, and designs
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.
Emitter signal separation method based on multi-level digital channelization
NASA Astrophysics Data System (ADS)
Han, Xun; Ping, Yifan; Wang, Sujun; Feng, Ying; Kuang, Yin; Yang, Xinquan
2018-02-01
To solve the problem of emitter separation in a complex electromagnetic environment, a signal separation method based on multi-level digital channelization is proposed in this paper. A two-level structure which divides the signal into different channels is designed first; after that, the peaks of the different channels are tracked using the tracking filter, and signals coincident in the time domain are separated in the time-frequency domain. Finally, the time-domain waveforms of the different signals are acquired by reverse transformation. The validity of the proposed method is proved by experiment.
Digital image transformation and rectification of spacecraft and radar images
Wu, S.S.C.
1985-01-01
Digital image transformation and rectification can be described in three categories: (1) digital rectification of spacecraft pictures on workable stereoplotters; (2) digital correction of radar image geometry; and (3) digital reconstruction of shaded relief maps and perspective views including stereograms. Digital rectification can make high-oblique pictures workable on stereoplotters that would otherwise not accommodate such extreme tilt angles. It also enables panoramic line-scan geometry to be used to compile contour maps with photogrammetric plotters. Rectifications were digitally processed on both Viking Orbiter and Lander pictures of Mars as well as radar images taken by various radar systems. By merging digital terrain data with image data, perspective and three-dimensional views of Olympus Mons and Tithonium Chasma, also of Mars, are reconstructed through digital image processing. © 1985.
An Efficient, Highly Flexible Multi-Channel Digital Downconverter Architecture
NASA Technical Reports Server (NTRS)
Goodhart, Charles E.; Soriano, Melissa A.; Navarro, Robert; Trinh, Joseph T.; Sigman, Elliott H.
2013-01-01
In this innovation, a digital downconverter has been created that produces a large (16 or greater) number of output channels of smaller bandwidths. Additionally, this design has the flexibility to tune each channel independently to anywhere in the input bandwidth to cover a wide range of output bandwidths (from 32 MHz down to 1 kHz). Both the flexibility in channel frequency selection and the more than four orders of magnitude range in output bandwidths (decimation rates from 32 to 640,000) presented significant challenges to be solved. The solution involved breaking the digital downconversion process into a two-stage process. The first stage is a 2× oversampled filter bank that divides the whole input bandwidth as a real input signal into seven overlapping, contiguous channels represented with complex samples. Using the symmetry of the sine and cosine functions in a similar way to that of an FFT (fast Fourier transform), this downconversion is very efficient and gives seven channels fixed in frequency. An arbitrary number of smaller bandwidth channels can be formed from second-stage downconverters placed after the first stage of downconversion. Because of the overlapping of the first stage, there is no gap in coverage of the entire input bandwidth. The input to any of the second-stage downconverting channels has a multiplexer that chooses one of the seven wideband channels from the first stage. These second-stage downconverters take up fewer resources because they operate at lower bandwidths than doing the entire downconversion process from the input bandwidth for each independent channel. These second-stage downconverters are each independent with fine frequency control tuning, providing extreme flexibility in positioning the center frequency of a downconverted channel.
Finally, these second-stage downconverters have flexible decimation factors over four orders of magnitude. The algorithm was developed to run in an FPGA (field programmable gate array) at input data sampling rates of up to 1,280 MHz. The current implementation takes a 1,280-MHz real input and first breaks it up into seven 160-MHz complex channels, each spaced 80 MHz apart. The eighth channel at baseband was not required for this implementation, which allowed further optimization. Afterwards, 16 second-stage narrowband channels with independently tunable center frequencies and bandwidth settings are implemented. A future implementation in a larger Xilinx FPGA will hold up to 32 independent second-stage channels.
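As a rough illustration (not the JPL FPGA design), the second-stage operation described above — mix a selected channel to baseband, anti-alias filter, then decimate — can be sketched in NumPy; the tone frequency, filter length, and decimation factor below are arbitrary choices for the example:

```python
import numpy as np

def lowpass_taps(num_taps, cutoff, fs):
    """Windowed-sinc FIR lowpass (Hamming window), normalized to unity DC gain."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(num_taps)
    return h / h.sum()

def downconvert(x, fs, f_center, decim, num_taps=129):
    """Second-stage channel: complex mix to baseband, lowpass, decimate."""
    n = np.arange(len(x))
    mixed = x * np.exp(-2j * np.pi * f_center * n / fs)   # shift channel to DC
    h = lowpass_taps(num_taps, fs / (2 * decim), fs)      # anti-alias filter
    filtered = np.convolve(mixed, h, mode="same")
    return filtered[::decim]                              # reduce the sample rate

# Example: a 200 Hz tone sampled at 1 kHz, tuned to baseband, decimated by 10.
fs = 1000.0
t = np.arange(2000) / fs
x = np.cos(2 * np.pi * 200.0 * t)
y = downconvert(x, fs, 200.0, 10)
```

A real cosine mixed against a complex exponential leaves a DC term of amplitude 0.5 plus a double-frequency term that the lowpass removes, so the decimated output magnitude settles near 0.5.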
GEOMETRIC PROCESSING OF DIGITAL IMAGES OF THE PLANETS.
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformations of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases.
Improved Remapping Processor For Digital Imagery
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1991-01-01
Proposed digital image processor is improved version of Programmable Remapper, which performs geometric and radiometric transformations on digital images. Features include overlapping and variably sized preimages. Overcomes some limitations of image-warping circuit boards that implement only those geometric transformations expressible as polynomials of limited order. Also overcomes limitations of existing Programmable Remapper and performs transformations at video rate.
ERIC Educational Resources Information Center
Soete, George J.
The problem of preserving digital information and the strategies that are and might be employed to address it are the focus of this fifth issue of "Transforming Libraries." Twenty-one individuals involved at the technical or policy level in developing strategies for preserving digital information were interviewed. There is consensus on a…
ERIC Educational Resources Information Center
Smirnova, Lyudmila; Lazarevic, Bojan; Malloy, Veronica
2018-01-01
This paper explores how pedagogy is being influenced by fast developing digital technologies. Results are presented from exploratory research conducted in 2016. The findings are addressed in terms of the transformation of learning and education, including the move from the measured to the engaged classroom. Emerging technology creates a natural…
Personalized Medicine in Veterans with Traumatic Brain Injuries
2011-05-01
UPGMA algorithm with cosine correlation as the similarity metric. Results are presented as heat maps (left panel) demonstrating that the panel of 18… demonstrating the efficacy of using all 13
Novel structures for Discrete Hartley Transform based on first-order moments
NASA Astrophysics Data System (ADS)
Xiong, Jun; Zheng, Wenjuan; Wang, Hao; Liu, Jianguo
2018-03-01
Discrete Hartley Transform (DHT) is an important tool in digital signal processing. In the present paper, the DHT is firstly transformed into the first-order moments-based form, then a new fast algorithm is proposed to calculate the first-order moments without multiplication. Based on the algorithm theory, the corresponding hardware architecture for DHT is proposed, which only contains shift operations and additions with no need for multipliers and large memory. To verify the availability and effectiveness, the proposed design is implemented with hardware description language and synthesized by Synopsys Design Compiler with 0.18-μm SMIC library. A series of experiments have proved that the proposed architecture has better performance in terms of the product of the hardware consumption and computation time.
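For reference (this is the textbook definition, not the paper's first-order-moments fast algorithm), the DHT can be written directly from its defining kernel, and it is an involution up to a factor of N:

```python
import numpy as np

def dht(x):
    """Direct O(N^2) Discrete Hartley Transform:
    H[k] = sum_n x[n] * cas(2*pi*n*k/N), where cas(t) = cos(t) + sin(t)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    arg = 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
    return (np.cos(arg) + np.sin(arg)) @ x

# Applying the DHT twice and dividing by N recovers the input.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_rec = dht(dht(x)) / len(x)
```

The self-inverse property is what makes the DHT attractive for hardware: the same datapath serves the forward and inverse transforms.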
Principle and analysis of a rotational motion Fourier transform infrared spectrometer
NASA Astrophysics Data System (ADS)
Cai, Qisheng; Min, Huang; Han, Wei; Liu, Yixuan; Qian, Lulu; Lu, Xiangning
2017-09-01
Fourier transform infrared spectroscopy is an important technique in studying molecular energy levels, analyzing material compositions, and detecting environmental pollutants. A novel rotational-motion Fourier transform infrared spectrometer with high stability and ultra-rapid scanning characteristics is proposed in this paper. The basic principle, the optical path difference (OPD) calculations, and some tolerance analyses are elaborated. The OPD of this spectrometer is obtained by the continuous rotational motion of a pair of parallel mirrors instead of the translational motion in a traditional Michelson interferometer. Because of the rotational motion, it avoids the tilt problems that occur in the translational-motion Michelson interferometer. There is a cosine-function relationship between the OPD and the rotation angle of the parallel mirrors. An optical model is set up in the non-sequential mode of the ZEMAX software, and the interferogram of a monochromatic light is simulated using a ray-tracing method. The simulated interferogram is consistent with the theoretically calculated interferogram. As the rotating mirrors are the only moving elements in this spectrometer, the parallelism of the rotating mirrors and the vibration during the scan are analyzed. The vibration of the parallel mirrors is the main error source during rotation. This high-stability, ultra-rapid-scanning Fourier transform infrared spectrometer is a suitable candidate for airborne and space-borne remote sensing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-06-22
The Linac Coherent Light Source (LCLS) is required to deliver a high quality electron beam for producing coherent X-rays. As a result, high resolution beam position monitoring is required. The Beam Position Monitor (BPM) digitizer acquires analog signals from the beam line and digitizes them to obtain beam position data. Although Matlab is currently being used to test the BPM digitizer's functions and capability, the Controls Department at SLAC prefers to use the Experimental Physics and Industrial Control System (EPICS). This paper discusses the transition to providing similar and enhanced functionality, relative to that offered by Matlab, for testing the digitizer. Altogether, the improved test stand development system can perform mathematical and statistical calculations on the waveform signals acquired from the digitizer and compute the fast Fourier transform (FFT) of the signals. Finally, logging of meaningful data into files has been added.
Consequences of "going digital" for pathology professionals - entering the cloud.
Laurinavicius, Arvydas; Raslavicus, Paul
2012-01-01
New opportunities and the adoption of digital technologies will transform the way pathology professionals and services work. Many areas of our daily life, as well as other medical professions, have already experienced this change, which has resulted in a paradigm shift in many activities. Pathology is an image-based discipline; therefore, the arrival of digital imaging into this domain promises a major shift in our work and the mentality it requires. Recognizing the physical and digital duality of the pathology workflow, we can prepare for the imminent increase of the digital component, synergize with it, and enjoy its benefits. Development of a new generation of laboratory information systems, along with seamless integration of digital imaging, decision support, and knowledge databases, will enable pathologists to work in a distributed environment. The paradigm of "cloud pathology" is proposed as an ultimate vision of digital pathology workstations plugged into integrated multidisciplinary patient care systems.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method for constructing a two-body potential model for ionic materials from a Fourier (cosine) series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force, and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential provides virtually the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
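The core of the coefficient-determination step is linear least squares over a cosine basis. A minimal sketch (the reference curve, cutoff radius, and truncation order below are hypothetical stand-ins, not the paper's magnesium-oxide data):

```python
import numpy as np

# Fit V(r) ~= sum_k c_k * cos(k*pi*r / r_cut) to tabulated reference energies.
r = np.linspace(0.5, 2.0, 200)
v_ref = 1.0 / r**2 - 1.0 / r            # hypothetical reference pair energy
r_cut, K = 2.0, 12                      # cutoff radius and truncation order

basis = np.cos(np.outer(r, np.arange(K)) * np.pi / r_cut)   # shape (200, K)
coef, *_ = np.linalg.lstsq(basis, v_ref, rcond=None)        # linear least squares
rms = np.sqrt(np.mean((basis @ coef - v_ref) ** 2))
```

Because the fit is linear in the coefficients, adding force and stress residuals (as the paper does) only appends rows to the design matrix; the solve stays a single least-squares problem.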
Assessment of low-contrast detectability for compressed digital chest images
NASA Astrophysics Data System (ADS)
Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.
1994-04-01
The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer. From that data, simulated lesions were introduced into 100 normal CR chest films, and forced-choice observer performance studies were performed. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, where the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for SF and uncompressed CR systems were statistically equivalent. The slope of the CD curve for each was -1.0, as predicted by the Rose model. There was a significant degradation in detection found for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs. Detectability was reduced for all compressed CR radiographs.
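The full-frame DCT underlying the FDCT scheme can be sketched with an orthonormal DCT-II matrix applied to the whole image (rather than 8×8 blocks); this shows only the transform pair, not the paper's four-zone quantization by cumulative energy:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C @ C.T = I (row 0 rescaled by 1/sqrt(2))."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def fdct2(img):
    """Full-frame 2-D DCT: transform rows and columns of the whole image."""
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

def ifdct2(F):
    """Inverse full-frame 2-D DCT (transpose, since the matrix is orthonormal)."""
    return dct_matrix(F.shape[0]).T @ F @ dct_matrix(F.shape[1])
```

Compression then amounts to quantizing or discarding coefficients of `fdct2(img)` before applying `ifdct2`; keeping all coefficients reconstructs the image exactly.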
Magnified reconstruction of digitally recorded holograms by Fresnel-Bluestein transform.
Restrepo, John F; Garcia-Sucerquia, Jorge
2010-11-20
A method for numerical reconstruction of digitally recorded holograms with variable magnification is presented. The proposed strategy allows for smaller, equal, or larger magnification than that achieved with Fresnel transform by introducing the Bluestein substitution into the Fresnel kernel. The magnification is obtained independent of distance, wavelength, and number of pixels, which enables the method to be applied in color digital holography and metrological applications. The approach is supported by experimental and simulation results in digital holography of objects of comparable dimensions with the recording device and in the reconstruction of holograms from digital in-line holographic microscopy.
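For context, the standard single-FFT Fresnel reconstruction that the Fresnel-Bluestein method generalizes can be sketched as follows (1-D, constant phase factors omitted); note how the output pixel pitch is locked to wavelength, distance, and pixel count, which is exactly the constraint the Bluestein substitution removes:

```python
import numpy as np

def fresnel_fft(u0, wl, z, dx):
    """1-D single-FFT Fresnel reconstruction (overall constant phase omitted).
    The output pitch wl*z/(N*dx) depends on wavelength and distance --
    the limitation the Fresnel-Bluestein approach lifts."""
    N = len(u0)
    x = (np.arange(N) - N // 2) * dx
    dxo = wl * z / (N * dx)                 # fixed output pixel pitch
    xo = (np.arange(N) - N // 2) * dxo
    chirp_in = np.exp(1j * np.pi * x**2 / (wl * z))
    chirp_out = np.exp(1j * np.pi * xo**2 / (wl * z))
    return chirp_out * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u0 * chirp_in)))
```

The chirp factors have unit modulus, so by Parseval's relation the (unnormalized) FFT scales total energy by exactly N, a convenient numerical sanity check.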
The Law of Cosines for an "n"-Dimensional Simplex
ERIC Educational Resources Information Center
Ding, Yiren
2008-01-01
Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.
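For readers unfamiliar with the result, the generalization typically takes the following form (stated here from the standard facet-projection argument, not copied from the paper):

```latex
% Law of cosines for an n-simplex: F_0, \dots, F_n are the facet
% (hyperface) volumes and \theta_{ij} is the dihedral angle between
% facets i and j.  For n = 2 this reduces to the familiar
% a^2 = b^2 + c^2 - 2bc\cos A for a triangle.
F_0^2 \;=\; \sum_{i=1}^{n} F_i^2 \;-\; 2\sum_{1 \le i < j \le n} F_i F_j \cos\theta_{ij}
```

When all dihedral angles at the facets meeting F_0 are right angles, the cosine terms vanish and the formula collapses to the n-dimensional Pythagorean theorem mentioned above.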
An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision
ERIC Educational Resources Information Center
Johansson, B. Tomas
2018-01-01
Evaluation of the cosine function is done via a simple Cordic-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented with a discussion around errors and implementation.
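A simple stand-in for the scheme described above (argument halving, a short Taylor series on the small argument, then the double-angle identity cos(2y) = 2cos²y − 1 to undo the halvings) can be written with Python's arbitrary-precision `decimal` module instead of Matlab; the thresholds and guard digits are pragmatic choices, not values from the paper:

```python
from decimal import Decimal, getcontext

def cos_ap(x, digits=50):
    """Cosine to ~`digits` decimals: halve the argument, sum a short Taylor
    series, then apply cos(2y) = 2*cos(y)**2 - 1 once per halving."""
    getcontext().prec = digits + 10          # guard digits for the recursion
    y = Decimal(x)
    halvings = 0
    while abs(y) > Decimal("0.01"):          # shrink argument until tiny
        y /= 2
        halvings += 1
    term, total, k = Decimal(1), Decimal(1), 0
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        k += 2
        term *= -y * y / (k * (k - 1))       # next Taylor term of cos
        total += term
    for _ in range(halvings):                # undo the halvings
        total = 2 * total * total - 1
    getcontext().prec = digits
    return +total                            # round to the target precision
```

Each squaring in the doubling step can amplify rounding error, which is why a handful of guard digits are carried through the computation.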
1992-08-01
4.4 Building 379. The Building 379 installation consisted of removing three existing 167 kVA PCB-filled, single-phase, polemount transformers that were connected in a three-phase bank and replacing them with a single 300 kVA Square D Company VPI dry-type transformer. This task also involved
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance is handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified by experiments on handwritten digit and color image classification.
Compressed/reconstructed test images for CRAF/Cassini
NASA Technical Reports Server (NTRS)
Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.
1991-01-01
A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
Low complexity 1D IDCT for 16-bit parallel architectures
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2007-09-01
This paper shows that, using the Loeffler, Ligtenberg, and Moschytz factorization of the 8-point one-dimensional (1-D) IDCT [2] as a fast approximation of the Discrete Cosine Transform (DCT) and using only 16-bit numbers, it is possible to create an IEEE 1180-1990-compliant, multiplierless algorithm with low computational complexity. The structure of this algorithm allows efficient implementation on parallel high-performance architectures, and its low complexity makes it suitable for a wide range of other architectures. An additional constraint on this work was the requirement of compliance with the existing MPEG standards. Hardware implementation complexity and low resource usage were also part of the design criteria for this algorithm. The implementation is also compliant with the precision requirements described in the MPEG IDCT precision specification ISO/IEC 23002-1. Complexity analysis is performed as an extension to the simple measure of shifts and adds for the multiplierless algorithm; additional operations are included in the complexity measure to better describe the actual implementation complexity of the transform.
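The multiplierless idea is that each irrational transform coefficient is approximated by a dyadic rational so a "multiplication" becomes a few shifts and adds. A toy illustration (the dyadic approximation 181/256 for cos(π/4) is an assumption chosen for this sketch, not the paper's coefficient set):

```python
import math

def mul_cos45(x):
    """Multiply an integer by cos(pi/4) ~= 181/256 using shifts and adds only.
    181 = 128 + 32 + 16 + 4 + 1, so x*181 costs four additions, no multiplier."""
    p = (x << 7) + (x << 5) + (x << 4) + (x << 2) + x    # x * 181
    return p >> 8                                        # divide by 256 (floor)

approx = mul_cos45(1000)                # ~ 1000 * 0.70710678
exact = 1000 * math.cos(math.pi / 4)
```

Counting the shifts and adds in such decompositions is exactly the kind of complexity measure the abstract describes extending.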
Animal-Free Chemical Safety Assessment
Loizou, George D.
2016-01-01
The exponential growth of the Internet of Things and the global popularity and remarkable decline in cost of the mobile phone is driving the digital transformation of medical practice. The rapidly maturing digital, non-medical world of mobile (wireless) devices, cloud computing and social networking is coalescing with the emerging digital medical world of omics data, biosensors and advanced imaging which offers the increasingly realistic prospect of personalized medicine. Described as a potential “seismic” shift from the current “healthcare” model to a “wellness” paradigm that is predictive, preventative, personalized and participatory, this change is based on the development of increasingly sophisticated biosensors which can track and measure key biochemical variables in people. Additional key drivers in this shift are metabolomic and proteomic signatures, which are increasingly being reported as pre-symptomatic, diagnostic and prognostic of toxicity and disease. These advancements also have profound implications for toxicological evaluation and safety assessment of pharmaceuticals and environmental chemicals. An approach based primarily on human in vivo and high-throughput in vitro human cell-line data is a distinct possibility. This would transform current chemical safety assessment practice which operates in a human “data poor” to a human “data rich” environment. This could also lead to a seismic shift from the current animal-based to an animal-free chemical safety assessment paradigm. PMID:27493630
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any source image. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from various source images and then to attain a fused image. This process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over the state-of-the-art multi-resolution transforms (MRT), such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT), using the maximum-selection fusion rule.
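The general shape of DWT-domain fusion can be sketched with a one-level orthonormal Haar transform and the maximum-selection rule mentioned above; this omits the paper's HVS weighting and uses plain NumPy rather than a wavelet library:

```python
import numpy as np

def haar2(img):
    """One-level 2-D orthonormal Haar DWT -> (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # row averages
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (exact, since the transform is orthonormal)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2); a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2); d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :] = (a + d) / np.sqrt(2); img[1::2, :] = (a - d) / np.sqrt(2)
    return img

def fuse(img1, img2):
    """Average approximation bands; keep larger-magnitude detail coefficients."""
    b1, b2 = haar2(img1), haar2(img2)
    ll = (b1[0] + b2[0]) / 2
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2(ll, *details)
```

HVS weighting would replace the raw magnitude comparison with a perceptually weighted one; the sub-band bookkeeping stays the same.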
Algorithm to calculate proportional area transformation factors for digital geographic databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, R.
1983-01-01
A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
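The idea of proportionate-area factors can be shown on a deliberately simplified case: axis-aligned rectangles standing in for the general polygons of the real databases (the zone coordinates below are invented for the example):

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def transfer_factors(src_zones, dst_zones):
    """factors[i][j] = fraction of source zone i's area lying in target zone j,
    used to re-aggregate thematic data from one areal basis to another."""
    return [[overlap_area(s, d) / ((s[2] - s[0]) * (s[3] - s[1]))
             for d in dst_zones] for s in src_zones]

# One source zone split evenly between two target zones.
factors = transfer_factors([(0, 0, 2, 2)], [(0, 0, 1, 2), (1, 0, 2, 2)])
```

Thematic data on the source basis is then transferred by multiplying each source value by its row of factors; the described algorithm achieves the same factors for arbitrary polygon boundaries with near-linear cost.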
Exploring of PST-TBPM in Monitoring Bridge Dynamic Deflection in Vibration
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Liu, Shengzhen; Zhao, Tonglong; Yu, Chengxin
2018-01-01
This study adopts digital photography to monitor bridge dynamic deflection in vibration. The digital photography used in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). First, a digital camera is used to image the static bridge as a zero image. Then, the digital camera images the vibrating bridge every three seconds as the successive images. Based on the reference system, PST-TBPM is used to calculate the images to obtain the bridge dynamic deflection in vibration. Results show that the average measurement accuracies are 0.615 pixels and 0.79 pixels in the X and Z directions. The maximal deflection of the bridge is 7.14 pixels. PST-TBPM is effective in solving the problem of the photographing direction not being perpendicular to the bridge. Digital photography as used in this study can assess bridge health by monitoring the bridge dynamic deflection in vibration. The deformation trend curves depicted over time can also warn of possible dangers.
Digitization and its discontents: future shock in predictive oncology.
Epstein, Richard J
2010-02-01
Clinical cancer care is being transformed by a high-technology informatics revolution fought out between the forces of personalized (biomarker-guided) and depersonalized (bureaucracy-controlled) medicine. Factors triggering this conflict include the online proliferation of treatment algorithms, rising prices of biological drug therapies, increasing sophistication of genomic-based predictive tools, and the growing entrepreneurialism of offshore treatment facilities. The resulting Napster-like forces unleashed within the oncology marketplace will deliver incremental improvements in cost-efficacy to global healthcare consumers. There will also be a price to pay, however, as the rising wave of digitization encourages third-party payers to make more use of biomarkers for tightening reimbursement criteria. Hence, as in other digitally transformed industries, a new paradigm of professional service delivery, less centered on doctor-patient relationships than in the past and more dependent on pricing and marketing for standardized biomarker-defined indications, seems set to emerge as the unpredicted deliverable from this brave new world of predictive oncology. Copyright 2010 Elsevier Inc. All rights reserved.
2017-03-01
It does so by using an optical lens to perform an inverse spatial Fourier Transform on the up-converted RF signals, thereby rendering a real-time... simultaneous beams or other engineered beam patterns. There are two general approaches to array-based beam forming: digital and analog. In digital beam...of significantly limiting the number of beams that can be formed simultaneously and narrowing the operational bandwidth. An alternate approach that
A text zero-watermarking method based on keyword dense interval
NASA Astrophysics Data System (ADS)
Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin
2017-07-01
Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, previous methods rarely focused on the key content of the digital carrier. The idea of protecting key content is more targeted and can be applied to different kinds of digital information, including text, image, and video. In this paper, we use text as the research object and propose a text zero-watermarking method that uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving a method for KDI extraction. Second, we design a detection model that includes secondary generation of the zero-watermark and a similarity computing method for the keyword distribution. In addition, experiments are carried out, and the results show that the proposed method performs better than other available methods, especially against sentence transformation and synonym substitution attacks.
NASA Astrophysics Data System (ADS)
Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis
1997-11-01
Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while maintaining constant the word length when the scale increases.
A New Class of Pulse Compression Codes and Techniques.
1980-03-26
[Figure text, partially recoverable: both the transform and the inverse transform drive the same digital filter network (Frank code); autocorrelation function from the circuit of Fig. 1 with N = 9.]
NASA Technical Reports Server (NTRS)
Nagle, H. T., Jr.
1972-01-01
A three-part survey is made of the state of the art in digital filtering. Part one presents background material, including sampled-data transformations and the discrete Fourier transform. Part two, digital filter theory, gives in-depth coverage of filter categories, transfer function synthesis, quantization and other nonlinear errors, filter structures, and computer-aided design. Part three presents hardware mechanization techniques. Implementations by general-purpose, mini-, and special-purpose computers are presented.
Performance evaluation of digital phase-locked loops for advanced deep space transponders
NASA Technical Reports Server (NTRS)
Nguyen, T. M.; Hinedi, S. M.; Yeh, H.-G.; Kyriacou, C.
1994-01-01
The performance of the digital phase-locked loops (DPLLs) for the advanced deep-space transponders (ADTs) is investigated. The DPLLs considered in this article are derived from the analog phase-locked loop currently employed by the NASA standard deep space transponder, using S-domain to Z-domain mapping techniques. Three mappings are used to develop digital approximations of the standard deep space analog phase-locked loop, namely the bilinear transformation (BT), impulse invariant transformation (IIT), and step invariant transformation (SIT) techniques. The performance in terms of the closed-loop phase and magnitude responses, carrier tracking jitter, and response of the loop to the phase offset (the difference between the incoming phase and the reference phase) is evaluated for each digital approximation. Theoretical results of the carrier tracking jitter for the command-on and command-off cases are then validated by computer simulation. Both theoretical and computer simulation results show that at high sampling frequency, the DPLLs approximated by all three transformations have the same tracking jitter. However, at low sampling frequency, the digital approximation using BT outperforms the others. The minimum sampling frequency for adequate tracking performance is determined for each digital approximation of the analog loop. In addition, computer simulation shows that the DPLL developed by BT provides faster response to the phase offset than IIT and SIT.
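The bilinear-transformation step can be illustrated on a generic second-order-loop filter; the filter form H(s) = (1 + sτ₂)/(sτ₁) and the time-constant values below are textbook assumptions, not the transponder's actual loop parameters:

```python
import numpy as np

# Map an analog PLL loop filter H(s) = (1 + s*tau2) / (s*tau1) to a digital
# filter via the bilinear transform s -> (2/T) * (1 - z^-1) / (1 + z^-1).
tau1, tau2, T = 1e-2, 1e-3, 1e-4         # assumed loop constants, sample period

# Substituting and collecting powers of z^-1 gives
#   H(z) = (b0 + b1*z^-1) / (1 - z^-1)
b0 = (T + 2 * tau2) / (2 * tau1)
b1 = (T - 2 * tau2) / (2 * tau1)

# Sanity check: at frequencies well below 1/T, the digital frequency response
# matches the analog one (frequency warping is negligible there).
w = 2 * np.pi * 10.0                     # 10 Hz << 1/T = 10 kHz
z = np.exp(1j * w * T)
H_d = (b0 + b1 / z) / (1 - 1 / z)
H_a = (1 + 1j * w * tau2) / (1j * w * tau1)
```

The bilinear transform preserves stability and matches the analog response up to the tan(ωT/2) frequency warping, which is consistent with the abstract's finding that BT holds up best at low sampling frequencies.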
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments is carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
Analysis of Digital Communication Signals and Extraction of Parameters.
1994-12-01
Fast Fourier Transform (FFT). The correlation methods utilize modified time-frequency distributions, one of which is based on the Wigner-Ville Distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
NASA Astrophysics Data System (ADS)
Topolsky, D. V.; Gonenko, T. V.; Khatsevskiy, V. F.
2017-10-01
The present paper discusses ways to solve the problem of enhancing operating efficiency of automated electric power supply control systems of mining companies. According to the authors, one of the ways to solve this problem is intellectualization of the electric power supply control system equipment. To enhance efficiency of electric power supply control and electricity metering, it is proposed to use specially designed digital combined instrument current and voltage transformers. This equipment conforms to IEC 61850 international standard and is adapted for integration into the digital substation structure. Tests were performed to check conformity of an experimental prototype of the digital combined instrument current and voltage transformer with IEC 61850 standard. The test results have shown that the considered equipment meets the requirements of the standard.
Cartographic services contract...for everything geographic
2003-01-01
The U.S. Geological Survey's (USGS) Cartographic Services Contract (CSC) is used to award work for photogrammetric and mapping services under the umbrella of Architect-Engineer (A&E) contracting. The A&E contract is broad in scope and can accommodate any activity related to standard, nonstandard, graphic, and digital cartographic products. Services provided may include, but are not limited to, photogrammetric mapping and aerotriangulation; orthophotography; thematic mapping (for example, land characterization); analog and digital imagery applications; geographic information systems development; surveying and control acquisition, including ground-based and airborne Global Positioning System; analog and digital image manipulation, analysis, and interpretation; raster and vector map digitizing; data manipulations (for example, transformations, conversions, generalization, integration, and conflation); primary and ancillary data acquisition (for example, aerial photography, satellite imagery, multispectral, multitemporal, and hyperspectral data); image scanning and processing; metadata production, revision, and creation; and production or revision of standard USGS products defined by formal and informal specification and standards, such as those for digital line graphs, digital elevation models, digital orthophoto quadrangles, and digital raster graphics.
NASA Astrophysics Data System (ADS)
Qin, Zhang-jian; Chen, Chuan; Luo, Jun-song; Xie, Xing-hong; Ge, Liang-quan; Wu, Qi-fan
2018-04-01
A usual practice for improving spectrum quality in the development of nuclear spectroscopy is to design a good shaping filter that improves the signal-to-noise ratio. Another method is proposed in this paper, based on discriminating pulse shape and discarding bad pulses whose shapes are distorted by abnormal noise, unusual ballistic deficit, or severe pulse pile-up. An Exponentially Decaying Pulse (EDP) generated in nuclear particle detectors can be transformed into a Mexican Hat Wavelet Pulse (MHWP), and the derivation of the transform is given. After the transform is performed, the baseline drift is removed in the new MHWP. Moreover, the MHWP shape can be discriminated with three parameters: the time difference between the two minima of the MHWP, and the two ratios of the amplitudes of the two minima to the amplitude of the maximum of the MHWP. A new type of nuclear spectroscopy was implemented based on the new digital shaping filter, and gamma-ray spectra were acquired with a variety of pulse-shape discrimination levels. The results showed that both the energy resolution and the peak-to-Compton ratio improved after the pulse-shape discrimination method was used.
Numerical methods for comparing fresh and weathered oils by their FTIR spectra.
Li, Jianfeng; Hibbert, D Brynn; Fuller, Stephen
2007-08-01
Four comparison statistics ('similarity indices') for identifying the source of a petroleum oil spill, based on the ASTM standard test method D3414, were investigated: (1) first difference correlation coefficient squared, (2) correlation coefficient squared, (3) first difference Euclidean cosine squared, and (4) Euclidean cosine squared. For numerical comparison, an FTIR spectrum is divided into three regions: fingerprint (900-700 cm(-1)), generic (1350-900 cm(-1)), and supplementary (1770-1685 cm(-1)), the same three major regions recommended by the ASTM standard. For fresh oil samples, each similarity index was able to distinguish between replicate independent spectra of the same sample and between different samples. In general, the two first difference-based indices worked better than their parent indices. To provide samples that reveal relationships between weathered and fresh oils, a simple artificial weathering procedure was carried out. Euclidean cosine and correlation coefficients both worked well to maintain identification of a match in the fingerprint region, and the two first difference indices were better in the generic region. Receiver operating characteristic curves (true positive rate versus false positive rate) for decisions on matching using the fingerprint region showed two samples could be matched when the difference in weathering time was up to 7 days. Beyond this time the true positive rate falls and samples cannot be reliably matched. However, artificial weathering of a fresh source sample can aid the matching of a weathered sample to its real source from a pool of very similar candidates.
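The four indices can be sketched directly from their definitions. A minimal illustration, assuming spectra are sampled as absorbance vectors on a common wavenumber grid (function and variable names are illustrative, not taken from the ASTM method):

```python
import math

def euclidean_cosine_sq(x, y):
    """Squared cosine of the angle between two spectra (index 4)."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot * dot / (sum(a * a for a in x) * sum(b * b for b in y))

def corr_coeff_sq(x, y):
    """Squared correlation coefficient (index 2): cosine of mean-centred spectra."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return euclidean_cosine_sq([a - mx for a in x], [b - my for b in y])

def first_diff(x):
    """First differences, giving the 'first difference' variants (indices 1 and 3)."""
    return [b - a for a, b in zip(x, x[1:])]

# Identical spectra match perfectly under every index:
s = [0.1, 0.5, 0.9, 0.4, 0.2]
assert abs(euclidean_cosine_sq(s, s) - 1.0) < 1e-12
assert abs(corr_coeff_sq(first_diff(s), first_diff(s)) - 1.0) < 1e-12
```

The first-difference variants compare spectral shape changes rather than raw intensities, which is one plausible reason they track weathering-induced baseline drift better.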
Digital Transformation Canvas - Keeping an Overview and Shaping Fields of Action
NASA Astrophysics Data System (ADS)
Köster, Michael; Mache, Tobias
The contribution "Digital Transformation Canvas - Keeping an Overview and Shaping Fields of Action" first outlines the essential challenges that accompany increasing digitalization. Selected concepts of business transformation management are then presented, which address the fundamental further development of organizations that digitalization demands. A detailed introduction to the methodology of the Business Transformation Canvas, which covers the diverse design fields of transformation and provides a framework for transformation projects, rounds off the contribution. It closes with a conclusion and outlook.
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Mathematics Education Graduate Students' Understanding of Trigonometric Ratios
ERIC Educational Resources Information Center
Yigit Koyunkaya, Melike
2016-01-01
This study describes mathematics education graduate students' understanding of relationships between sine and cosine of two base angles in a right triangle. To explore students' understanding of these relationships, an elaboration of Skemp's views of instrumental and relational understanding using Tall and Vinner's concept image and concept…
NASA Technical Reports Server (NTRS)
Cornell, Stephen R.; Leser, William P.; Hochhalter, Jacob D.; Newman, John A.; Hartl, Darren J.
2014-01-01
A method for detecting fatigue cracks has been explored at NASA Langley Research Center. Microscopic NiTi shape memory alloy (sensory) particles were embedded in a 7050 aluminum alloy matrix to detect the presence of fatigue cracks. Cracks exhibit an elevated stress field near their tip inducing a martensitic phase transformation in nearby sensory particles. Detectable levels of acoustic energy are emitted upon particle phase transformation such that the existence and location of fatigue cracks can be detected. To test this concept, a fatigue crack was grown in a mode-I single-edge notch fatigue crack growth specimen containing sensory particles. As the crack approached the sensory particles, measurements of particle strain, matrix-particle debonding, and phase transformation behavior of the sensory particles were performed. Full-field deformation measurements were performed using a novel multi-scale optical 3D digital image correlation (DIC) system. This information will be used in a finite element-based study to determine optimal sensory material behavior and density.
Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.
Mini, M G; Devassia, V P; Thomas, Tessamma
2004-12-01
Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation and evaluates the performance of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. The detection is achieved from the parent-child relationship between the zero-crossings [Marr-Hildreth (M-H) detector] / local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray level thresholding. Both detectors produce 95% detection sensitivity, even though there are more false positives for the M-H detector. The M-H detector preserves the shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Transparency of the ab Planes of Bi2Sr2CaCu2O8+δ to Magnetic Fields
NASA Astrophysics Data System (ADS)
Kossler, W. J.; Dai, Y.; Petzinger, K. G.; Greer, A. J.; Williams, D. Ll.; Koster, E.; Harshman, D. R.; Mitzi, D. B.
1998-01-01
A sample composed of many Bi2Sr2CaCu2O8+δ single crystals was cooled to 2 K in a magnetic field of 100 G at 45° from the c axis. Muon-spin-rotation measurements were made for which the polarization was initially approximately in the ab plane. The time dependent polarization components along this initial direction and along the c axis were obtained. Cosine transforms of these and subsequent measurements were made. Upon removing the applied field, still at 2 K, only the c axis component of the field remained in the sample, thus providing microscopic evidence for extreme 2D behavior for the vortices even at this temperature.
Performance of customized DCT quantization tables on scientific data
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh; Livny, Miron
1994-01-01
We show that it is desirable to use data-specific or customized quantization tables for scaling the spatial frequency coefficients obtained using the Discrete Cosine Transform (DCT). DCT is widely used for image and video compression (MP89, PM93), but applications typically use default quantization matrices. Using actual scientific data gathered from diverse sources such as spacecraft and electron microscopes, we show that the default compression/quality tradeoffs can be significantly improved upon by using customized tables. We also show that significant improvements are possible for the standard test images Lena and Baboon. This work is part of an effort to develop a practical scheme for optimizing quantization matrices for any given image or video stream, under any given quality or compression constraints.
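The quantization step that the paper optimizes can be illustrated with the standard 8x8 DCT-II used in JPEG-style coders. A minimal sketch (the flat test block and uniform table are illustrative; the paper's customized tables would replace Q):

```python
import math

N = 8  # JPEG-style 8x8 block size

def dct2d(block):
    """8x8 forward DCT-II of a block of pixel values."""
    def c(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, q_table):
    """Scale each frequency coefficient by its table entry -- the step customized per image."""
    return [[round(coeffs[u][v] / q_table[u][v]) for v in range(N)] for u in range(N)]

# A flat block concentrates all its energy in the DC coefficient:
flat = [[128] * N for _ in range(N)]
Q = [[16] * N for _ in range(N)]  # illustrative uniform table, not a default JPEG table
coeffs = dct2d(flat)
quant = quantize(coeffs, Q)
# coeffs[0][0] is 1024 (within rounding); every AC coefficient quantizes to 0
```

Larger table entries at a given frequency coarsen that coefficient, trading distortion for rate; a customized table tunes this per-frequency trade-off to the statistics of the actual data.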
Some Cosine Relations and the Regular Heptagon
ERIC Educational Resources Information Center
Osler, Thomas J.; Heng, Phongthong
2007-01-01
The ancient Greek mathematicians sought to construct, by use of straight edge and compass only, all regular polygons. They had no difficulty with regular polygons having 3, 4, 5 and 6 sides, but the 7-sided heptagon eluded all their attempts. In this article, the authors discuss some cosine relations and the regular heptagon. (Contains 1 figure.)
Similarity Measures in Scientometric Research: The Jaccard Index versus Salton's Cosine Formula.
ERIC Educational Resources Information Center
Hamers, Lieve; And Others
1989-01-01
Describes two similarity measures used in citation and co-citation analysis--the Jaccard index and Salton's cosine formula--and investigates the relationship between the two measures. It is shown that Salton's formula yields a numerical value that is twice Jaccard's index in most cases, and an explanation is offered. (13 references) (CLB)
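The roughly factor-of-two relationship between the two measures is easy to reproduce for binary (presence/absence) citation data, where both reduce to set operations. A minimal sketch with illustrative sets:

```python
import math

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def salton_cosine(a, b):
    """Salton's cosine for binary data: |A ∩ B| / sqrt(|A| * |B|)."""
    return len(a & b) / math.sqrt(len(a) * len(b))

# Two equally sized sets with a small overlap:
A = set(range(10))       # |A| = 10
B = set(range(9, 19))    # |B| = 10, overlap = {9}
ratio = salton_cosine(A, B) / jaccard(A, B)  # 1.9 here
```

With |A| = |B| = n and overlap m, the ratio is exactly 2 - m/n, which approaches 2 as the overlap becomes small relative to the set sizes, matching the "twice Jaccard's index in most cases" observation.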
A Double-function Digital Watermarking Algorithm Based on Chaotic System and LWT
NASA Astrophysics Data System (ADS)
Yuxia, Zhao; Jingbo, Fan
A double-function digital watermarking technology is studied and a double-function digital watermarking algorithm for color images is presented, based on a chaotic system and the lifting wavelet transform (LWT). The algorithm realizes the double aims of copyright protection and integrity authentication of image content. Making use of features of the human visual system (HVS), the watermark image is embedded into the color image's low-frequency component and middle-frequency components by different means. The algorithm achieves strong security by using two kinds of chaotic mappings together with Arnold scrambling of the watermark image. The algorithm is efficient because it uses the LWT. Simulation experiments indicate that the algorithm offers good efficiency and security, and that the concealment effect is good.
Sriraam, N.
2012-01-01
Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high performance lossless EEG compression using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, Lempel-Ziv-arithmetic encoder. Also a new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
Spectral Target Detection using Schroedinger Eigenmaps
NASA Astrophysics Data System (ADS)
Dorado-Munoz, Leidy P.
Applications of optical remote sensing processes include environmental monitoring, military monitoring, meteorology, mapping, surveillance, etc. Many of these tasks include the detection of specific objects or materials, usually few or small, which are surrounded by other materials that clutter the scene and hide the relevant information. This target detection process has been boosted lately by the use of hyperspectral imagery (HSI), since its high spectral dimension provides more detailed spectral information that is desirable in data exploitation. Typical spectral target detectors rely on statistical or geometric models to characterize the spectral variability of the data. However, in many cases these parametric models do not fit HSI data well, which impacts the detection performance. On the other hand, non-linear transformation methods, mainly based on manifold learning algorithms, have shown potential for HSI transformation, dimensionality reduction, and classification. In target detection, non-linear transformation algorithms are used as preprocessing techniques that transform the data to a more suitable lower dimensional space, where the statistical or geometric detectors are applied. One of these non-linear manifold methods is the Schroedinger Eigenmaps (SE) algorithm, which has been introduced as a technique for semi-supervised classification. The core tool of the SE algorithm is the Schroedinger operator, which includes a potential term that encodes prior information about the materials present in a scene and enables the embedding to be steered in convenient directions in order to cluster similar pixels together. A completely novel target detection methodology based on the SE algorithm is proposed for the first time in this thesis. The proposed methodology does not just include the transformation of the data to a lower dimensional space but also includes the definition of a detector that capitalizes on the theory behind SE.
The fact that target pixels and those similar pixels are clustered in a predictable region of the low-dimensional representation is used to define a decision rule that allows one to identify target pixels over the rest of pixels in a given image. In addition, a knowledge propagation scheme is used to combine spectral and spatial information as a means to propagate the "potential constraints" to nearby points. The propagation scheme is introduced to reinforce weak connections and improve the separability between most of the target pixels and the background. Experiments using different HSI data sets are carried out in order to test the proposed methodology. The assessment is performed from a quantitative and qualitative point of view, and by comparing the SE-based methodology against two other detection methodologies that use linear/non-linear algorithms as transformations and the well-known Adaptive Coherence/Cosine Estimator (ACE) detector. Overall results show that the SE-based detector outperforms the other two detection methodologies, which indicates the usefulness of the SE transformation in spectral target detection problems.
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.
Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick
2017-10-01
In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems basically acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), where noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown itself to have higher efficiency if there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origin. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
NASA Astrophysics Data System (ADS)
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality of Service (QoS) guarantees in real-time communication for multimedia applications are significantly important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes the Moving Pictures Expert Group (MPEG) approach, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as bounded end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.
Video and accelerometer-based motion analysis for automated surgical skills assessment.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan
2018-03-01
Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contains video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features: approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing Sequential Motion Texture, Discrete Cosine Transform, and Discrete Fourier Transform methods for surgical skills assessment. We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
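Approximate entropy, one of the "entropy-based" features introduced above, quantifies the regularity of fluctuations in a time series: it is low for predictable motion and higher for irregular motion. A minimal pure-Python sketch (the parameter values and test signals are illustrative, not those used in the study):

```python
import math

def approx_entropy(series, m, r):
    """Approximate entropy ApEn(m, r): low for regular series, higher for irregular ones."""
    def phi(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        logs = []
        for t1 in templates:
            # Fraction of templates within Chebyshev distance r of t1 (self-match included).
            matches = sum(1 for t2 in templates
                          if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            logs.append(math.log(matches / len(templates)))
        return sum(logs) / len(templates)
    return phi(m) - phi(m + 1)

periodic = [0.0, 1.0] * 50                 # perfectly regular motion
x, chaotic = 0.3, []
for _ in range(100):                       # logistic map: deterministic but irregular
    x = 3.9 * x * (1.0 - x)
    chaotic.append(x)
# Regular motion yields a much lower ApEn than irregular motion.
```

The intuition for skill assessment is that expert motion tends to be smooth and regular (low entropy), while novice motion shows more unpredictable fluctuations.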
Learning to predict where human gaze is using quaternion DCT based regional saliency detection
NASA Astrophysics Data System (ADS)
Li, Ting; Xu, Yi; Zhang, Chongyang
2014-09-01
Many current visual attention approaches used semantic features to accurately capture human gaze. However, these approaches demand high computational cost and can hardly be applied to daily use. Recently, some quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform), QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet real-time requirement of human gaze tracking tasks. However, current saliency detection methods used global PQFT and QDCT to locate jump edges of the input, which can hardly detect the object boundaries accurately. To address the problem, we improved QDCT-based saliency detection model by introducing superpixel-wised regional saliency detection mechanism. The local smoothness of saliency value distribution is emphasized to distinguish noises of background from salient regions. Our algorithm called saliency confidence can distinguish the patches belonging to the salient object and those of the background. It decides whether the image patches belong to the same region. When an image patch belongs to a region consisting of other salient patches, this patch should be salient as well. Therefore, we use saliency confidence map to get background weight and foreground weight to do the optimization on saliency map obtained by QDCT. The optimization is accomplished by least square method. The optimization approach we proposed unifies local and global saliency by combination of QDCT and measuring the similarity between each image superpixel. We evaluate our model on four commonly-used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall curves (PR curves), the mean absolute error (MAE) and area under curve (AUC) measures. In comparison with most state-of-art models, our approach can achieve higher consistency with human perception without training. It can get accurate human gaze even in cluttered background. Furthermore, it achieves better compromise between speed and accuracy.
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhan, Hongbin; Zhang, You-Kuan; Schilling, Keith
2017-09-01
Unsaturated flow is an important process in base flow recessions, yet its effect is rarely investigated. A mathematical model for coupled unsaturated-saturated flow in a horizontally unconfined aquifer with time-dependent infiltration is presented. The effects of the lateral discharge of the unsaturated zone and aquifer compressibility are specifically taken into consideration. Semianalytical solutions for hydraulic heads and discharges are derived using the Laplace and cosine transforms. The solutions are compared with solutions of the linearized Boussinesq equation (LB solution) and the linearized Laplace equation (LL solution), respectively. A larger dimensionless constitutive exponent κD (a smaller retention capacity) of the unsaturated zone leads to a smaller discharge during the infiltration period and a larger discharge after the infiltration. The lateral discharge of the unsaturated zone is significant when κD ≤ 1, and becomes negligible when κD ≥ 100. The compressibility of the aquifer has a nonnegligible impact on the discharge at early times. For late times, the power index b of the recession curve -dQ/dt ∼ aQ^b is 1 and independent of κD, where Q is the base flow and a is a constant lumped aquifer parameter. For early times, b is approximately equal to 3 but approaches infinity as t → 0. The present solution is applied to synthetic and field cases. It matched the synthetic data better than both the LL and LB solutions, with a minimum relative error of 16% for the estimate of hydraulic conductivity. The present solution was also applied to observed streamflow discharge in Iowa, and the estimated values of the aquifer parameters were reasonable.
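The late-time recession law -dQ/dt ∼ aQ^b with b = 1 implies exponential decay of base flow. A quick way to check this is to recover b by regressing log(-dQ/dt) against log(Q) on a hydrograph; the sketch below does so on synthetic data (the values of a and Q0 are hypothetical, chosen only for illustration).

```python
import numpy as np

# Synthetic late-time recession: -dQ/dt = a*Q (i.e., b = 1) gives
# exponential decay Q(t) = Q0 * exp(-a*t).  We recover b by fitting
# log(-dQ/dt) = log(a) + b*log(Q) to the "observed" hydrograph.
a, Q0 = 0.05, 100.0                 # hypothetical lumped parameter and initial flow
t = np.linspace(0.0, 100.0, 1001)
Q = Q0 * np.exp(-a * t)

dQdt = np.gradient(Q, t)            # numerical dQ/dt from the hydrograph
b, log_a = np.polyfit(np.log(Q), np.log(-dQdt), 1)
# b should be close to 1 and exp(log_a) close to the lumped parameter a
```

On field data the same regression, restricted to late-time segments, gives an empirical estimate of the recession exponent.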
Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan
2016-08-01
A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, with either a windowed or a global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are unwrapped by a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with numerically estimated values. It is shown that, while processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.
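The DCT-based phase unwrapping mentioned here is usually the least-squares (Ghiglia-Romero) method: a discrete Poisson equation built from wrapped phase differences is solved with a 2-D type-II DCT under Neumann boundary conditions. A minimal sketch, assuming SciPy is available (the abstract does not specify the exact variant used):

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def unwrap_dct(psi):
    """Least-squares phase unwrapping via a 2-D DCT (Ghiglia-Romero style).

    Builds the Poisson right-hand side from wrapped phase differences
    (zero-padded gradients give Neumann boundary conditions) and inverts
    the discrete Laplacian in the DCT domain.
    """
    M, N = psi.shape
    gx = wrap(np.diff(psi, axis=0))                       # wrapped row differences
    gy = wrap(np.diff(psi, axis=1))                       # wrapped column differences
    gx = np.vstack([np.zeros((1, N)), gx, np.zeros((1, N))])
    gy = np.hstack([np.zeros((M, 1)), gy, np.zeros((M, 1))])
    rho = np.diff(gx, axis=0) + np.diff(gy, axis=1)       # discrete divergence

    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                                     # mean is undetermined
    coef = dctn(rho, norm='ortho') / denom
    coef[0, 0] = 0.0                                      # pin the mean to zero
    return idctn(coef, norm='ortho')
```

When the wrapped gradients are consistent (no residues), this recovers the phase exactly up to an additive constant.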
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
Computer applications in diagnostic imaging.
Horii, S C
1991-03-01
This article has introduced the nature, generation, use, and future of digital imaging. As digital technology has transformed other aspects of our lives (has the reader tried to buy a conventional record album recently? Almost all music store stock is now compact discs), it is sure to continue to transform medicine as well. Whether that transformation will be to our liking as physicians or a source of frustration and disappointment depends on understanding the issues involved.
ERIC Educational Resources Information Center
Huang, Chung-Kai; Lin, Chun-Yu
2017-01-01
With the globalization of macro-economic environments, it is important to think about how to use instructional design and web-based digital technologies to enhance students' self-paced learning, stir up learning motivation and enjoyment, build up knowledge-sharing channels, and enhance individual learning. This study experimented with the flipped…
Digital diffractive optics: Have diffractive optics entered mainstream industry yet?
NASA Astrophysics Data System (ADS)
Kress, Bernard; Hejmadi, Vic
2010-05-01
When a new technology is integrated into industry commodity products and consumer electronic devices, and sold worldwide in retail stores, it is usually understood that this technology has entered the realm of mainstream technology and therefore mainstream industry. Such a leap, however, does not come cheap, as it has a double-edged-sword effect: the technology becomes democratized and thus massively developed by numerous companies for various applications, but it also becomes a commodity, and thus comes under tremendous pressure to cut its production and integration costs without sacrificing performance. We will show, based on numerous examples extracted from recent industry history, that the field of diffractive optics is about to undergo such a major transformation. Such a move has many impacts on all facets of digital diffractive optics technology, from the optical design houses to the micro-optics foundries (for both mastering and volume replication), to the final product integrators or contract manufacturers. The main causes of such a transformation are, as they have been for many other technologies in industry, successive technological bubbles which have carried and lifted up diffractive optics technology within the last decades. These various technological bubbles have been triggered either by real industry needs or by virtual investment hype. Both of these causes will be discussed in the paper. The adjective "digital" in "digital diffractive optics" does not refer only, as it does in digital electronics, to the digital functionality of the element (digital signal processing), but rather to the digital way the elements are designed (by a digital computer) and fabricated (as wafer-level optics using digital masking techniques). However, we can still trace a very strong similarity between the emergence of micro-electronics from analog electronics half a century ago, and the emergence of digital optics from conventional optics today.
Video multiple watermarking technique based on image interlacing using DWT.
Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M
2014-01-01
Digital watermarking is one of the important techniques for securing digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks, such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
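The Arnold transform used here for watermark encryption is the cat map (x, y) → (x + y, x + 2y) mod N on a square image; it is a bijection, so scrambling is exactly reversible by applying the inverse map the same number of times. A minimal sketch of just this scrambling step (the DWT embedding itself is not shown):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat-map scrambling of a square image (watermark encryption)."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling with the inverse map (watermark decryption)."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    out = img
    for _ in range(iterations):
        unscrambled = np.empty_like(out)
        unscrambled[x, y] = out[(x + y) % N, (x + 2 * y) % N]
        out = unscrambled
    return out

w = np.arange(64).reshape(8, 8)          # toy 8x8 "watermark"
scrambled = arnold(w, iterations=3)      # encrypted watermark
restored = arnold_inverse(scrambled, iterations=3)
```

The iteration count acts as a simple key: without it, the scrambled watermark looks like noise.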
Improved pulse shape discrimination in EJ-301 liquid scintillators
NASA Astrophysics Data System (ADS)
Lang, R. F.; Masson, D.; Pienaar, J.; Röttger, S.
2017-06-01
Digital pulse shape discrimination has become readily available to distinguish nuclear recoil and electronic recoil events in scintillation detectors. We evaluate digital implementations of pulse shape discrimination algorithms discussed in the literature, namely the Charge Comparison Method, Pulse-Gradient Analysis, Fourier Series, and Standard Event Fitting. In addition, we present a novel algorithm based on a Laplace transform. Instead of comparing the performance of these algorithms with a single figure of merit, we evaluate them as a function of recoil energy. Specifically, using commercial EJ-301 liquid scintillators, we examined both the resulting acceptance of nuclear recoils at a given rejection level of electronic recoils, and the purity of the selected nuclear recoil event samples. We find that both a Standard Event fit and a Laplace transform can be used to significantly improve the discrimination capabilities over the whole considered energy range of 0-800 keVee. Furthermore, we show that the Charge Comparison Method performs poorly in accurately identifying nuclear recoils.
Development of Michelson interferometer based spatial phase-shift digital shearography
NASA Astrophysics Data System (ADS)
Xie, Xin
Digital shearography is a non-contact, full-field optical measurement method capable of directly measuring the gradient of deformation. For high measurement sensitivity, a phase evaluation method must be introduced into digital shearography through a phase-shift technique. Categorized by phase-shift method, digital phase-shift shearography can be divided into Temporal Phase-Shift Digital Shearography (TPS-DS) and Spatial Phase-Shift Digital Shearography (SPS-DS). TPS-DS is the most widely used phase-shift shearography system, owing to its simple algorithm, easy operation, and good phase-map quality. However, TPS-DS is limited to static/step-by-step loading situations because of its multi-step shifting process. To measure strain under dynamic/continuous loading, an SPS-DS system must be developed. This dissertation develops a series of Michelson-interferometer-based SPS-DS measurement methods that achieve strain measurement using only a single pair of speckle pattern images. These systems use specially designed optical setups to introduce an extra carrier frequency into the laser wavefront. The phase information corresponding to the strain field can be separated in the Fourier domain using a Fourier transform and further evaluated with a windowed inverse Fourier transform. With different optical setups and carrier frequency arrangements, the Michelson-interferometer-based SPS-DS method can accomplish a variety of measurement tasks using only a single pair of speckle pattern images.
Categorized by the target measurand, these measurement tasks fall into five categories: 1) measurement of the out-of-plane strain field with a small shearing amount; 2) measurement of the relative out-of-plane deformation field with a large shearing amount; 3) simultaneous measurement of the relative out-of-plane deformation field and deformation gradient field using multiple carrier frequencies; 4) simultaneous measurement of two-directional strain fields using dual measurement channels; and 5) measurement of pure in-plane strain and pure out-of-plane strain with multiple carrier frequencies. The basic theory, optical path analysis, preliminary studies, results analysis, and research plan are presented in detail in this dissertation.
Discovering Trigonometric Relationships Implied by the Law of Sines and the Law of Cosines
ERIC Educational Resources Information Center
Skurnick, Ronald; Javadi, Mohammad
2006-01-01
The Law of Sines and The Law of Cosines are of paramount importance in the field of trigonometry because these two theorems establish relationships satisfied by the three sides and the three angles of any triangle. In this article, the authors use these two laws to discover a host of other trigonometric relationships that exist within any…
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1977-01-01
A tachometer in which sine and cosine signals responsive to the angular position of a shaft as it rotates are each multiplied by like (sine or cosine) functions of a carrier signal; the products are summed, and the resulting frequency signal is converted to fixed-height, fixed-width pulses of like frequency. These pulses are then integrated, and the resulting dc output is an indication of shaft speed.
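The mixing step described in the abstract follows from the identity sin(θ)sin(ω_c t) + cos(θ)cos(ω_c t) = cos(ω_c t − θ); with θ(t) = Ωt the summed signal is a tone whose frequency deviates from the carrier by the shaft speed. A short simulation (all frequencies are hypothetical values chosen for illustration) verifies this by counting zero crossings:

```python
import numpy as np

# Mixing the shaft-angle sine/cosine with like functions of a carrier:
#   sin(theta)*sin(wc*t) + cos(theta)*cos(wc*t) = cos((wc - Omega)*t)
# for theta(t) = Omega*t, so the output frequency encodes shaft speed.
fc, f_shaft, fs = 1000.0, 60.0, 100_000.0   # carrier, shaft, sampling rates (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
theta = 2 * np.pi * f_shaft * t
carrier = 2 * np.pi * fc * t
s = np.sin(theta) * np.sin(carrier) + np.cos(theta) * np.cos(carrier)

# estimate the output frequency from zero crossings over the 1 s record
crossings = np.count_nonzero(np.diff(np.sign(s)) != 0)
f_out = crossings / 2.0                      # Hz
```

The measured `f_out` sits at the carrier minus the shaft frequency; converting each cycle to a fixed-area pulse and integrating, as the patent describes, turns that frequency into a dc voltage proportional to speed.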
An Ellipse Morphs to a Cosine Graph!
ERIC Educational Resources Information Center
King, L .R.
2013-01-01
We produce a continuum of curves all of the same length, beginning with an ellipse and ending with a cosine graph. The curves in the continuum are made by cutting and unrolling circular cones whose section is the ellipse; the initial cone is degenerate (it is the plane of the ellipse); the final cone is a circular cylinder. The curves of the…
Enabling Technologies for Medium Additive Manufacturing (MAAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Bradley S.; Love, Lonnie J.; Chesser, Phillip C.
ORNL has worked with Cosine Additive, Inc. on the design of MAAM extrusion components. The objective is to improve the print speed and part quality. A pellet extruder has been procured and integrated into the MAAM printer. Print speed has been greatly enhanced. In addition, ORNL and Cosine Additive have worked on alternative designs for a pellet drying and feed system.
Nelson, Danielle; Ziv, Amitai; Bandali, Karim S
2012-10-01
The recent technological advance of digital high resolution imaging has allowed the field of pathology and medical laboratory science to undergo a dramatic transformation with the incorporation of virtual microscopy as a simulation-based educational and diagnostic tool. This transformation has correlated with an overall increase in the use of simulation in medicine in an effort to address dwindling clinical resource availability and patient safety issues currently facing the modern healthcare system. Virtual microscopy represents one such simulation-based technology that has the potential to enhance student learning and readiness to practice while revolutionising the ability to clinically diagnose pathology collaboratively across the world. While understanding that a substantial amount of literature already exists on virtual microscopy, much more research is still required to elucidate the full capabilities of this technology. This review explores the use of virtual microscopy in medical education and disease diagnosis with a unique focus on key requirements needed to take this technology to the next level in its use in medical education and clinical practice.
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficient of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of the surface spherical harmonic expansion to those of the double Fourier series so as to handle arbitrarily high degree and order. Next, we created a new method to transform a given double Fourier series inversely to the corresponding surface spherical harmonic expansion. The key of the new method is a pair of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order, fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are prepared analytically. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable to extremely high degrees, orders, and wavenumbers, up to 2^30 ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors, and digital watermarking has provided a valuable solution. Based on the application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, reversible watermarking (also called lossless, invertible, or erasable watermarking) enables recovery of the original, unwatermarked content after the watermarked content has been verified as authentic. Such reversibility is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
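The core reversibility trick is bit expansion: an expandable coefficient c is shifted left and a payload bit b is placed in the vacated LSB, c → 2c + b, which is exactly invertible. The real scheme additionally selects which coefficients are expandable and embeds a compressed location map; this sketch shows only the expansion arithmetic on integer coefficients:

```python
import numpy as np

def embed_bits(coeffs, bits):
    """Bit expansion: shift each coefficient left and append one payload
    bit in the vacated LSB (c -> 2*c + b).  Exactly reversible."""
    assert len(bits) <= len(coeffs)
    out = coeffs.copy()
    out[:len(bits)] = 2 * coeffs[:len(bits)] + bits
    return out

def extract_bits(coeffs, n_bits):
    """Recover both the payload bits and the original coefficients."""
    bits = coeffs[:n_bits] & 1          # payload from the LSBs
    original = coeffs.copy()
    original[:n_bits] = coeffs[:n_bits] >> 1   # arithmetic shift = floor halving
    return bits, original

c = np.array([3, -4, 0, 7, -1])         # toy integer wavelet coefficients
bits = np.array([1, 0, 1])              # payload bits to hide
wmk = embed_bits(c, bits)               # watermarked coefficients
payload, recovered = extract_bits(wmk, 3)
```

Because the arithmetic right shift floors toward negative infinity, the round trip is exact for negative coefficients as well.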
Flows of Newtonian and Power-Law Fluids in Symmetrically Corrugated Capillary Fissures and Tubes
NASA Astrophysics Data System (ADS)
Walicka, A.
2018-02-01
In this paper, an analytical method for deriving the relationships between the pressure drop and the volumetric flow rate in laminar flow regimes of Newtonian and power-law fluids through symmetrically corrugated capillary fissures and tubes is presented. This method, which is general with regard to fluid and capillary shape, can also serve as a foundation for other fluids, fissures, and tubes. It can likewise be a good basis for numerical integration when analytical expressions are hard to obtain due to mathematical complexities. Five converging-diverging or diverging-converging geometries, viz. wedge and cone, parabolic, hyperbolic, hyperbolic cosine, and cosine curve, are used as examples to illustrate the application of this method. For the wedge and cone geometry, the present results for the power-law fluid were compared with results obtained by another method; this comparison indicates good agreement between the two results.
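As the abstract notes, the method can underpin numerical integration when closed forms are unavailable. Under a lubrication-type approximation for a slowly varying axisymmetric tube, the straight-tube power-law relation gives a local pressure gradient that can be integrated along a corrugated radius profile. A sketch with entirely hypothetical parameter values (this is a generic illustration, not the paper's specific derivation):

```python
import numpy as np

# For a straight tube of radius R, a power-law fluid (index n, consistency m)
# obeys  Q = (pi*n*R**3 / (3n+1)) * (R*(dp/dz) / (2m))**(1/n),
# so locally  dp/dz = (2m/R(z)) * ((3n+1)*Q / (pi*n*R(z)**3))**n.
# We integrate this along a cosine-profile (corrugated) tube.
n, m, Q, L = 0.8, 0.5, 1e-6, 0.01            # hypothetical fluid/flow parameters (SI)
z = np.linspace(0.0, L, 2001)
R = 1e-3 * (1.0 + 0.25 * np.cos(2 * np.pi * z / L))   # corrugated radius profile

dpdz = (2 * m / R) * ((3 * n + 1) * Q / (np.pi * n * R ** 3)) ** n
delta_p = np.sum(0.5 * (dpdz[1:] + dpdz[:-1]) * np.diff(z))   # trapezoidal rule

# sanity check: n = 1 must reduce to the Newtonian (Poiseuille) gradient
dpdz_n1 = (2 * m / R) * ((3 * 1 + 1) * Q / (np.pi * 1 * R ** 3)) ** 1
dpdz_newt = 8 * m * Q / (np.pi * R ** 4)
```

Setting n = 1 recovers the Newtonian Poiseuille law term by term, which is the kind of consistency check the paper performs analytically for the wedge and cone geometry.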
Using Digital Health Technology to Better Generate Evidence and Deliver Evidence-Based Care.
Sharma, Abhinav; Harrington, Robert A; McClellan, Mark B; Turakhia, Mintu P; Eapen, Zubin J; Steinhubl, Steven; Mault, James R; Majmudar, Maulik D; Roessig, Lothar; Chandross, Karen J; Green, Eric M; Patel, Bakul; Hamer, Andrew; Olgin, Jeffrey; Rumsfeld, John S; Roe, Matthew T; Peterson, Eric D
2018-06-12
As we enter the information age of health care, digital health technologies offer significant opportunities to optimize both clinical care delivery and clinical research. Despite their potential, the use of such information technologies in clinical care and research faces major data quality, privacy, and regulatory concerns. In hopes of addressing both the promise and challenges facing digital health technologies in the transformation of health care, we convened a think tank meeting with academic, industry, and regulatory representatives in December 2016 in Washington, DC. In this paper, we summarize the proceedings of the think tank meeting and aim to delineate a framework for appropriately using digital health technologies in healthcare delivery and research. Copyright © 2018 American College of Cardiology Foundation. All rights reserved.
Global Digital Revolution and Africa: Transforming Nigerian Universities to World Class Institutions
ERIC Educational Resources Information Center
Isah, Emmanuel Aileonokhuoya; Ayeni, A. O.
2010-01-01
This study examined the global digital revolution and the transformation of Nigerian universities. The study overviewed university developments world wide in line with what obtains in Nigeria. The study highlighted the several challenges that face Nigerian universities inclusive of poor funding, poor personnel and the poor exposure to global…
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Editor)
1988-01-01
The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.
High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller
NASA Astrophysics Data System (ADS)
Li, Yaoling; Wu, Zhong
2018-03-01
The problem of resolver-to-digital conversion (RDC) is transformed into a problem of angle tracking control, and a phase locked loop (PLL) method based on a PID controller is proposed in this paper. The controller comprises a typical PI controller plus an incomplete differential, which avoids amplifying higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional converters, the proposed PLL method makes the converter a type-III system, so the conversion accuracy is improved. Experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.
2001-12-01
A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time; transforms that induce significant blocking effects at operationally salient compression ratios; and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image; the latter is determined by rate-distortion data obtained from a database of realistic test images. The discussion also covers issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.
A new approach to pre-processing digital image for wavelet-based watermark
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido
2008-11-01
The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. It is therefore strategically important to identify and develop stable, computationally inexpensive methods and numerical algorithms that address these problems. We describe a digital watermarking algorithm for color image protection and authenticity that is robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives; these two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image as needed for the wavelet transform. The watermark signal is calculated from the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering, and StirMark attacks with a low false-alarm rate.
Bautista, Pinky A; Yagi, Yukako
2012-05-01
Hematoxylin and eosin (H&E) stain is currently the most popular stain for routine histopathology. Special and/or immunohistochemical (IHC) staining is often requested to further corroborate the initial diagnosis made on H&E-stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool for producing the desired stained images from H&E-stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images that combines spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel by the weighted difference between the pixel's original and estimated spectrum, where the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated with its reaction to a specific stain by an N × N transformation matrix, derived by applying the least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E-stained multispectral image to its Masson's trichrome-stained equivalent show the viability of the method.
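The N × N transformation matrix can be estimated from paired spectral samples with an ordinary least-squares fit: stacking source and target spectra as columns, T minimizes ||T S_src − S_tgt||. A self-contained sketch on synthetic spectra (the band count and data are made up; the paper fits real transmittance samples of tissue components):

```python
import numpy as np

# Least-squares estimate of the N x N spectral transformation matrix T
# mapping enhanced source spectra to target-stain spectra:
#   minimize || T @ S_src - S_tgt ||_F  over paired training samples.
rng = np.random.default_rng(0)
N, n_samples = 16, 200                       # 16 hypothetical spectral bands
T_true = np.eye(N) + 0.1 * rng.standard_normal((N, N))
S_src = rng.random((N, n_samples))           # source spectra, one per column
S_tgt = T_true @ S_src                       # matching target-stain spectra

# T @ S_src = S_tgt  <=>  S_src.T @ T.T = S_tgt.T, solved column-block-wise
T_est = np.linalg.lstsq(S_src.T, S_tgt.T, rcond=None)[0].T

# applying the fitted matrix "digitally stains" a new pixel spectrum
stained = T_est @ S_src[:, 0]
```

With noise-free synthetic data the fit recovers the generating matrix exactly; on real transmittance samples the residual measures how well a single linear map explains the stain conversion.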
Domain similarity based orthology detection.
Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich
2015-05-13
Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, as the number of sequences for comparison increases, the number of comparisons to compute grows quadratically. A current challenge of bioinformatic research, especially given the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine score and a maximal weight matching score based on domain content similarity, and new software named porthoDom. The qualities of the cosine and maximal weight matching similarity measures are assessed against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom uses domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as concatenations of their unique identifiers, allows a drastic simplification of the search space. porthoDom has the advantage of speeding up orthology detection while maintaining a degree of accuracy similar to proteinortho. porthoDom is implemented in Python and C++ and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda .
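The cosine measure over domain content treats each protein as a bag of domain identifiers and compares the resulting count vectors. A minimal sketch (the accession strings below are made-up Pfam-style placeholders, and this is a generic cosine over counts, not porthoDom's exact weighting):

```python
from collections import Counter
import math

def domain_cosine(domains_a, domains_b):
    """Cosine similarity between two proteins represented as bags of
    domain identifiers (hypothetical Pfam-style accessions)."""
    a, b = Counter(domains_a), Counter(domains_b)
    dot = sum(a[d] * b[d] for d in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

p1 = ["PF00001", "PF00002", "PF00002"]   # protein with a repeated domain
p2 = ["PF00001", "PF00002"]
p3 = ["PF09999"]                         # no shared domain content
```

Grouping proteins by this cheap similarity before running sequence-level comparison is what shrinks the all-against-all search space.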
Evolution of Scientific and Technical Information Distribution
NASA Technical Reports Server (NTRS)
Esler, Sandra; Nelson, Michael L.
1998-01-01
The World Wide Web (WWW) and related information technologies are transforming the distribution of scientific and technical information (STI). We examine 11 recent, functioning digital libraries focusing on the distribution of STI publications, including journal articles, conference papers, and technical reports. We introduce four main categories of digital library projects, classified by architecture (distributed vs. centralized) and contributor (traditional publisher vs. authoring individual/organization). Many digital library prototypes merely automate existing publishing practices or focus solely on digitizing the output of the publishing cycle, without sampling and capturing elements of its input. Still others do not consider the large body of "gray literature" for distribution. We address these deficiencies in the current model of STI exchange by suggesting methods for expanding the scope and target of digital libraries: focusing on a broader range of technical publications and using "buckets," an object-oriented construct for grouping logically related information objects, to include holdings other than technical publications.
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The number and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to transfer, store and disseminate them. Lossy compression is increasingly popular in such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information contained in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze possibilities of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to compress single-channel RS images or multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between distortions introduced by DCT coefficient quantization and losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
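The MSE/PSNR relationship that such prediction targets can be illustrated with a toy computation (8-bit data assumed; the sample values are illustrative, not from the paper):

```python
from math import log10

def mse(a, b):
    """Mean square error between two equal-length sample sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mse_value, peak=255.0):
    """PSNR in dB from the mean square error, for 8-bit imagery."""
    return 10.0 * log10(peak * peak / mse_value)

orig = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical original pixels
comp = [54, 54, 60, 67, 69, 62, 63, 72]   # hypothetical decompressed pixels
m = mse(orig, comp)
print(round(m, 3), round(psnr(m), 2))     # 1.375 46.75
```

Predicting `m` directly from DCT-coefficient statistics, without running the full codec, is what makes the approach faster than compression itself.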
An optimal adder-based hardware architecture for the DCT/SA-DCT
NASA Astrophysics Data System (ADS)
Kinane, Andrew; Muresan, Valentin; O'Connor, Noel
2005-07-01
The explosive growth of the mobile multimedia industry has accentuated the need for efficient VLSI implementations of the associated computationally demanding signal processing algorithms. This need becomes greater as end-users demand increasingly enhanced features and more advanced underpinning video analysis. One such feature is object-based video processing as supported by the MPEG-4 core profile, which allows content-based interactivity. MPEG-4 has many computationally demanding underlying algorithms, an example of which is the Shape Adaptive Discrete Cosine Transform (SA-DCT). The dynamic nature of the SA-DCT processing steps poses significant VLSI implementation challenges, and many of the previously proposed approaches use area- and power-consumptive multipliers. Most also ignore the subtleties of the packing steps and the manipulation of the shape information. We propose a new multiplier-less serial datapath based solely on adders and multiplexers to improve area and power. The adder cost is minimised by employing resource re-use methods. The number of (physical) adders used has been derived using a common sub-expression elimination algorithm. Additional energy efficiency is factored into the design by employing guarded evaluation and local clock gating. Our design implements the SA-DCT packing with minimal switching, using efficient addressing logic with a transpose memory RAM. The entire design has been synthesized using TSMC 0.09 µm TCBN90LP technology, yielding a gate count of 12028 for the datapath and its control logic.
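The multiplier-less idea, replacing constant multiplications with shifts and adds, can be sketched as follows; this illustrates only the general technique, not the paper's SA-DCT datapath or its common sub-expression elimination:

```python
def mul_by_5(x):
    # 5*x = (x << 2) + x : one shifter and one adder, no multiplier
    return (x << 2) + x

def mul_by_7(x):
    # 7*x = (x << 3) - x : a subtraction also maps onto an adder cell
    return (x << 3) - x

# Fixed DCT kernel constants are known at design time, so each constant
# multiplication in the datapath can be decomposed this way.
print(mul_by_5(12), mul_by_7(12))  # 60 84
```

In hardware, shifts by a fixed amount are free (wiring only), so the silicon cost reduces to the adders, which is exactly what common sub-expression elimination then minimises.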
Healthcare @ The Speed of Thought: A digital world needs successful transformative leaders.
Tremblay, Ken
2017-09-01
In the wake of transformational change powered by the digital era, resultant leadership challenges and strategies essential for successful change, both tactical and cultural, are linked to defined capabilities within the Systems Transformation domain of the LEADS in a Caring Environment framework. Honed from experience, specific softer leadership behaviours supporting system transformation are both described and reinforced. Further, a matrix combining the LEADS framework capabilities with these more specific behaviours is offered as a planning tool that leaders may reflect upon and map out key activities associated with their sponsorship of significant change.
NASA Astrophysics Data System (ADS)
Bini, Jason; Spain, James; Nehal, Kishwer; Hazelwood, Vikki; Dimarzio, Charles; Rajadhyaksha, Milind
2011-07-01
Confocal mosaicing microscopy enables rapid imaging of large areas of fresh tissue, without the processing that is necessary for conventional histology. Mosaicing may offer a means to perform rapid histology at the bedside. A possible barrier toward clinical acceptance is that the mosaics are based on a single mode of grayscale contrast and appear black and white, whereas histology is based on two stains (hematoxylin for nuclei, eosin for cellular cytoplasm and dermis) and appears purple and pink. Toward addressing this barrier, we report advances in digital staining: fluorescence mosaics that show only nuclei, are digitally stained purple and overlaid on reflectance mosaics, which show only cellular cytoplasm and dermis, and are digitally stained pink. With digital staining, the appearance of confocal mosaics mimics the appearance of histology. Using multispectral analysis and color matching functions, red, green, and blue (RGB) components of hematoxylin and eosin stains in tissue were determined. The resulting RGB components were then applied in a linear algorithm to transform fluorescence and reflectance contrast in confocal mosaics to the absorbance contrast seen in pathology. Optimization of staining with acridine orange showed improved quality of digitally stained mosaics, with good correlation to the corresponding histology.
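A hedged sketch of the linear digital-staining idea follows; the RGB stain triples below are assumed values for illustration, not the multispectrally derived components reported in the study:

```python
# Assumed (not from the paper) RGB transmission triples for the two stains:
HEMATOXYLIN_RGB = (0.30, 0.20, 0.75)   # purple-ish, applied to nuclei
EOSIN_RGB       = (0.95, 0.45, 0.60)   # pink-ish, applied to cytoplasm/dermis

def digital_stain(fluor, refl):
    """Map fluorescence (nuclei) and reflectance (cytoplasm/dermis)
    contrast, both in [0, 1], to an H&E-like RGB pixel with a simple
    linear absorbance-style model: white minus the stain contributions."""
    return tuple(
        max(0.0, 1.0 - fluor * (1.0 - h) - refl * (1.0 - e))
        for h, e in zip(HEMATOXYLIN_RGB, EOSIN_RGB)
    )

print(digital_stain(0.0, 0.0))  # unstained background stays white: (1.0, 1.0, 1.0)
```

Applied per pixel across the fluorescence and reflectance mosaics, a linear mapping of this kind converts the two grayscale contrast channels into the purple-and-pink appearance of conventional histology.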
Curriculum-based neurosurgery digital library.
Langevin, Jean-Philippe; Dang, Thai; Kon, David; Sapo, Monica; Batzdorf, Ulrich; Martin, Neil
2010-11-01
Recent work-hour restrictions and the constantly evolving body of knowledge are challenging the current ways of teaching neurosurgery residents. To develop a curriculum-based digital library of multimedia content to face the challenges in neurosurgery education. We used the residency program curriculum developed by the Congress of Neurological Surgeons to structure the library and Microsoft Sharepoint as the user interface. This project led to the creation of a user-friendly and searchable digital library that could be accessed remotely and throughout the hospital, including the operating rooms. The electronic format allows standardization of the content and transformation of the operating room into a classroom. This in turn facilitates the implementation of a curriculum within the training program and improves teaching efficiency. Future work will focus on evaluating the efficacy of the library as a teaching tool for residents.
Liu, Changgeng; Thapa, Damber; Yao, Xincheng
2017-01-01
Guidestar hologram based digital adaptive optics (DAO) is a recently emerging active imaging modality. It records each complex distorted line field reflected or scattered from the sample in an off-axis digital hologram, measures the optical aberration from a separate off-axis digital guidestar hologram, and removes the optical aberration from the distorted line fields by numerical processing. In previously demonstrated DAO systems, the optical aberration was directly retrieved from the guidestar hologram by taking its Fourier transform and extracting the phase term. For this direct retrieval method (DRM), when the sample is not coincident with the guidestar focal plane, the accuracy of the retrieved optical aberration decays rapidly, leading to quality deterioration of the corrected images. To tackle this problem, we explore here an image metrics-based iterative method (MIM) to retrieve the optical aberration from the guidestar hologram. Using an aberrated objective lens and scattering samples, we demonstrate that MIM can improve the accuracy of the aberrations retrieved from both focused and defocused guidestar holograms, compared to DRM, thereby improving the robustness of the DAO. PMID:28380937
Cubic Equations and the Ideal Trisection of the Arbitrary Angle
ERIC Educational Resources Information Center
Farnsworth, Marion B.
2006-01-01
In the year 1837 mathematical proof was set forth authoritatively stating that it is impossible to trisect an arbitrary angle with a compass and an unmarked straightedge in the classical sense. The famous proof depends on an incompatible cubic equation having the cosine of an angle of 60 degrees and the cube of the cosine of one-third of an angle of 60 degrees as…
Wavelength-encoded tomography based on optical temporal Fourier transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi; Wong, Kenneth K. Y., E-mail: kywong@eee.hku.hk
We propose and demonstrate a technique called wavelength-encoded tomography (WET) for non-invasive optical cross-sectional imaging, particularly beneficial in biological systems. WET utilizes a time-lens to perform the optical Fourier transform, and the time-to-wavelength conversion generates a wavelength-encoded image of optical scattering from internal microstructures, analogous to interferometry-based imaging such as optical coherence tomography. The optical Fourier transform, in principle, offers twice as good an axial resolution as the electrical Fourier transform, and will greatly simplify the digital signal processing after data acquisition. As a proof-of-principle demonstration, a 150-μm (ideally 36 μm) resolution is achieved based on a 7.5-nm bandwidth swept pump, using a conventional optical spectrum analyzer. This approach can potentially achieve a 100-MHz or even higher frame rate with proven ultrafast spectrum analyzers. We believe that this technique is a step towards the next-generation ultrafast optical tomographic imaging application.
NASA Astrophysics Data System (ADS)
Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos
2016-08-01
In this contribution we propose two Hilbert-Huang Transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging, applicable in both on-axis and off-axis configurations. In the first scheme, a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase demodulated by the Hilbert Spiral Transform, aided by Principal Component Analysis for local fringe orientation estimation. Orientation calculation enables efficient analysis of closed fringes; it can be avoided using the arbitrary phase-shifted two-shot Gram-Schmidt Orthonormalization scheme aided by Hilbert-Huang Transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase shifting demodulation. The robustness of the proposed techniques is corroborated in experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase shifting scheme, which is used as a reference method.
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
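A minimal discrete feedback law with position, integral, and derivative terms, of the kind whose gains the parameter-plane technique would select, might look like the sketch below; the gains and sample period here are arbitrary illustrations, not values from the report:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete position + integral + derivative feedback law:
    u[k] = kp*e[k] + ki*sum(e)*dt + kd*(e[k] - e[k-1])/dt,
    where dt is the sample period of the sampled-data system."""
    state = {"i": 0.0, "prev": 0.0}
    def step(error):
        state["i"] += error * dt                 # rectangular integration
        d = (error - state["prev"]) / dt         # backward difference
        state["prev"] = error
        return kp * error + ki * state["i"] + kd * d
    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
print(round(pid(1.0), 3))  # 2*1 + 0.5*0.1 + 0.1*10 = 3.05
```

The design problem the report addresses is choosing `kp`, `ki`, `kd`, and `dt` jointly, since the z-plane pole locations of the closed loop depend on all four.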
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of steel structures in real time, digital photography is used in this paper. First, the grid method is used to correct the distortion of the digital camera. Then the digital cameras are used to capture the initial and experimental images of the steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel change values of the deformation points into actual displacement values. To visualize the deformation trend of the steel structure, deformation curves are drawn based on the deformation values of the deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28 mm and 1.1‰, respectively. Digital photography as used in this study can meet the accuracy requirements of steel structure deformation monitoring. It can also provide early safety warnings for the steel structure and, through the deformation curves obtained on site, data support for managers' safety decisions.
Discrete linear canonical transforms based on dilated Hermite functions.
Pei, Soo-Chang; Lai, Yun-Chiu
2011-08-01
The linear canonical transform (LCT) is very useful and powerful in signal processing and optics. In this paper, a discrete LCT (DLCT) is proposed to approximate the LCT by utilizing discrete dilated Hermite functions. The Wigner distribution function is also used to investigate DLCT performance in the time-frequency domain. Compared with existing digital computations of the LCT, our proposed DLCT possesses additivity and reversibility properties with no oversampling involved. In addition, the length of the input/output signals is not changed by the DLCT transformations, which is consistent with the time-frequency area-preserving nature of the LCT; meanwhile, the proposed DLCT provides a very good approximation of the continuous LCT.
Secure Image Transmission over DFT-precoded OFDM-VLC systems based on Chebyshev Chaos scrambling
NASA Astrophysics Data System (ADS)
Wang, Zhongpeng; Qiu, Weiwei
2017-08-01
This paper proposes a physical-layer secure image transmission scheme for discrete Fourier transform (DFT) precoded OFDM-based visible light communication (VLC) systems using Chebyshev chaos maps. In the proposed scheme, 256 subcarriers and QPSK modulation are employed. The transmitted digital signal of the image is encrypted with a Chebyshev chaos sequence. The encrypted signal is then transformed by a DFT precoding matrix to reduce the PAPR of the OFDM signal. After that, the encrypted and DFT-precoded OFDM signal is transmitted over a VLC channel. The simulation results show that the proposed scheme can not only protect the DFT-precoded OFDM-based VLC from eavesdroppers but also improve BER performance.
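Generation of a Chebyshev chaos sequence can be sketched as follows; the initial condition and map order here are illustrative, and thresholding the orbit into a bit-stream is one common scrambling convention, not necessarily the paper's exact construction:

```python
from math import cos, acos

def chebyshev_sequence(x0, k, n):
    """Iterate the Chebyshev map x_{m+1} = cos(k * arccos(x_m)).
    For integer k >= 2 and x0 in (-1, 1) the orbit is chaotic and
    highly sensitive to x0, which serves as the shared secret key."""
    seq, x = [], x0
    for _ in range(n):
        x = cos(k * acos(x))   # orbit stays within [-1, 1]
        seq.append(x)
    return seq

seq = chebyshev_sequence(x0=0.3, k=4, n=8)       # hypothetical key (x0, k)
bits = [1 if v > 0 else 0 for v in seq]          # key-stream for scrambling
print(all(-1.0 <= v <= 1.0 for v in seq))        # True
```

A receiver holding the same `(x0, k)` regenerates an identical sequence and descrambles the signal, while an eavesdropper without the key cannot.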
Porosev, V V; Shekhtman, L I; Zelikman, M I; Blinov, N N
2004-01-01
The paper describes theoretical and experimental results on how the correlation of signals in neighboring elements of a digital X-ray receiver-transformer affects the estimate of the output noise/signal ratio and, as a consequence, the estimate of the quantum registration efficiency.
On the design of recursive digital filters
NASA Technical Reports Server (NTRS)
Shenoi, K.; Narasimha, M. J.; Peterson, A. M.
1976-01-01
A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.
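The abstract does not specify the paper's particular change of variables; as an analogous illustration only, the standard bilinear substitution s → (2/T)(1 − z⁻¹)/(1 + z⁻¹) reduces an analog low-pass prototype to a recursive digital filter:

```python
from math import pi, tan

def bilinear_first_order_lowpass(fc, fs):
    """First-order low-pass via the bilinear change of variables,
    with the cutoff fc (Hz) prewarped for sample rate fs (Hz).
    Returns (b, a) for y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    wc = tan(pi * fc / fs)            # prewarped analog cutoff
    b0 = wc / (1.0 + wc)
    a1 = (wc - 1.0) / (1.0 + wc)
    return (b0, b0), (1.0, a1)

b, a = bilinear_first_order_lowpass(fc=100.0, fs=8000.0)
print(round(sum(b) / (a[0] + a[1]), 6))   # gain at DC (z = 1) -> 1.0
```

After the substitution, the design problem is exactly what the abstract describes: approximating a target response by a ratio of polynomials on a finite interval.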
ERIC Educational Resources Information Center
Niess, Margaret L.; Gillow-Wiles, Henry
2014-01-01
This qualitative cross-case study explores the influence of a designed learning trajectory on transforming teachers' technological pedagogical content knowledge (TPACK) for teaching with digital image and video technologies. The TPACK Learning Trajectory embeds tasks with specific instructional strategies within a social metacognitive…
Librarians Lead the Growth of Information Literacy and Global Digital Citizens
ERIC Educational Resources Information Center
Crockett, Lee Watanabe
2018-01-01
Librarians are leaders in growing global digital citizens. The libraries of the future are more than just housing centers for books and media. They are invigorating meeting places and communities where truly meaningful learning and discovery take place. As technology has transformed reading and learning, it has also transformed the vision of the…
NASA Technical Reports Server (NTRS)
Rickard, D. A.; Bodenheimer, R. E.
1976-01-01
Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.
FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression
2015-01-01
A novel optimal structure for implementing the 3D integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. Integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120
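One way to rank candidate integer sets by MSE against the exact DCT-II kernel can be sketched as below; the integer row used is a hypothetical approximation for illustration, not necessarily one of the paper's sets:

```python
from math import cos, pi, sqrt

def dct_row(k, n=8):
    """k-th row of the exact 8-point DCT-II basis (unnormalized cosines)."""
    return [cos((2 * i + 1) * k * pi / (2 * n)) for i in range(n)]

def approx_mse(int_row, k, n=8):
    """MSE between an integer approximation (scaled to unit energy)
    and the exact DCT-II row: one criterion for ranking integer sets."""
    exact = dct_row(k, n)
    se = sqrt(sum(v * v for v in exact))
    si = sqrt(sum(v * v for v in int_row))
    return sum((e / se - i / si) ** 2 for e, i in zip(exact, int_row)) / n

# Hypothetical integer approximation of basis row k=1:
print(approx_mse([10, 9, 6, 2, -2, -6, -9, -10], k=1) < 1e-3)  # True
```

Scoring every candidate set this way, alongside its adder/bit-width cost, is the kind of trade-off analysis the paper performs across MSE, coding efficiency, and hardware complexity.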
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
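The energy-compaction property of the DCT that the embedding relies on can be illustrated on a toy residual group; the residual values are hypothetical and the transform below is a plain 1-D reference DCT-II, not the scheme's exact embedding step:

```python
from math import cos, pi, sqrt

def dct2(x):
    """Orthonormal DCT-II of a 1-D sequence (O(n^2) reference form)."""
    n = len(x)
    out = []
    for k in range(n):
        c = sqrt(1.0 / n) if k == 0 else sqrt(2.0 / n)
        out.append(c * sum(x[i] * cos((2 * i + 1) * k * pi / (2 * n))
                           for i in range(n)))
    return out

# A smooth, hypothetical motion-vector residual group:
residuals = [4, 4, 3, 3, 2, 2, 1, 1]
coeffs = dct2(residuals)
energy = sum(c * c for c in coeffs)
# Most of the energy concentrates in the lowest coefficient, which is
# why modifications there survive requantization and lossy compression.
print(round(coeffs[0] ** 2 / energy, 2))  # 0.83
```

Since the orthonormal DCT preserves total energy (Parseval), the printed ratio directly measures how compactly the residual is represented.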
NASA Technical Reports Server (NTRS)
Fymat, A. L.; Smith, C. B.
1979-01-01
It is shown that the inverse analytical solutions, provided separately by Fymat and Box-McKellar, for reconstructing particle size distributions from remote spectral transmission measurements under the anomalous diffraction approximation can be derived using a cosine and a sine transform, respectively. Sufficient conditions of validity of the two formulas are established. Their comparison shows that the former solution is preferable to the latter in that it requires less a priori information (knowledge of the particle number density is not needed) and has wider applicability. For gamma-type distributions, and either a real or a complex refractive index, explicit expressions are provided for retrieving the distribution parameters; such expressions are, interestingly, proportional to the geometric area of the polydispersion.
The influence of finite cavities on the sound insulation of double-plate structures.
Brunskog, Jonas
2005-06-01
Lightweight walls are often designed as frameworks of studs with plates on each side--a double-plate structure. The studs constitute boundaries for the cavities, thereby both affecting the sound transmission directly by short-circuiting the plates, and indirectly by disturbing the sound field between the plates. The paper presents a deterministic prediction model for airborne sound insulation including both effects of the studs. A spatial transform technique is used, taking advantage of the periodicity. The acoustic field inside the cavities is expanded by means of cosine-series. The transmission coefficient (angle-dependent and diffuse) and transmission loss are studied. Numerical examples are presented and comparisons with measurement are performed. The result indicates that a reasonably good agreement between theory and measurement can be achieved.
Air Force Academy Aeronautics Digest, Spring/Summer 1980
1980-10-01
The transformation matrix developed under the direction cosine method can now be simplified to four equations (USAFA-TR-80-17).
ERIC Educational Resources Information Center
Greenberg, Richard
1998-01-01
Describes the Image Processing for Teaching (IPT) project which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT whose components of this dissemination project have been widespread teacher education, curriculum-based materials…
ERIC Educational Resources Information Center
Nebbergall, Allison
2012-01-01
As technology increasingly transforms our daily lives, educators too are seeking strategies and resources that leverage technology to improve student learning. Research demonstrates that high-quality professional development, digital standards-based content, and personalized learning plans can increase student achievement, engagement, and…
A Laboratory Application of Microcomputer Graphics.
ERIC Educational Resources Information Center
Gehring, Kalle B.; Moore, John W.
1983-01-01
A PASCAL graphics and instrument interface program for a Z80/S-100 based microcomputer was developed. The computer interfaces to a stopped-flow spectrophotometer replacing a storage oscilloscope and polaroid camera. Applications of this system are discussed, indicating that graphics and analog-to-digital boards have transformed the computer into…
NASA Astrophysics Data System (ADS)
Federico, Alejandro; Kaufmann, Guillermo H.
2004-08-01
We evaluate the application of the Wigner-Ville distribution (WVD) to measure phase gradient maps in digital speckle pattern interferometry (DSPI), when the generated correlation fringes present phase discontinuities. The performance of the WVD method is evaluated using computer-simulated fringes. The influence of the filtering process to smooth DSPI fringes and additional drawbacks that emerge when this method is applied are discussed. A comparison with the conventional method based on the continuous wavelet transform in the stationary phase approximation is also presented.
Smooth affine shear tight frames: digitization and applications
NASA Astrophysics Data System (ADS)
Zhuang, Xiaosheng
2015-08-01
In this paper, we mainly discuss one of the recent developed directional multiscale representation systems: smooth affine shear tight frames. A directional wavelet tight frame is generated by isotropic dilations and translations of directional wavelet generators, while an affine shear tight frame is generated by anisotropic dilations, shears, and translations of shearlet generators. These two tight frames are actually connected in the sense that the affine shear tight frame can be obtained from a directional wavelet tight frame through subsampling. Consequently, an affine shear tight frame indeed has an underlying filter bank from the MRA structure of its associated directional wavelet tight frame. We call such filter banks affine shear filter banks, which can be designed completely in the frequency domain. We discuss the digitization of affine shear filter banks and their implementations: the forward and backward digital affine shear transforms. Redundancy rate and computational complexity of digital affine shear transforms are also investigated in this paper. Numerical experiments and comparisons in image/video processing show the advantages of digital affine shear transforms over many other state-of-art directional multiscale representation systems.
Phase in Optical Image Processing
NASA Astrophysics Data System (ADS)
Naughton, Thomas J.
2010-04-01
The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant militates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
Proposed U.S. Geological Survey standard for digital orthophotos
Hooper, David; Caruso, Vincent
1991-01-01
The U.S. Geological Survey has added the new category of digital orthophotos to the National Digital Cartographic Data Base. This differentially rectified digital image product enables users to take advantage of the properties of current photoimagery as a source of geographic information. The product and accompanying standard were implemented in spring 1991. The digital orthophotos will be quadrangle based and cast on the Universal Transverse Mercator projection and will extend beyond the 3.75-minute or 7.5-minute quadrangle area at least 300 meters to form a rectangle. The overedge may be used for mosaicking with adjacent digital orthophotos. To provide maximum information content and utility to the user, metadata (header) records exist at the beginning of the digital orthophoto file. Header information includes the photographic source type, date, instrumentation used to create the digital orthophoto, and information relating to the DEM that was used in the rectification process. Additional header information is included on transformation constants from the 1927 and 1983 North American Datums to the orthophoto internal file coordinates to enable the user to register overlays on either datum. The quadrangle corners in both datums are also imprinted on the image. Flexibility has been built into the digital orthophoto format for future enhancements, such as the provision to include the corresponding digital elevation model elevations used to rectify the orthophoto. The digital orthophoto conforms to National Map Accuracy Standards and provides valuable mapping data that can be used as a tool for timely revision of standard map products, for land use and land cover studies, and as a digital layer in a geographic information system.
Effect of Diffuse Backscatter in Cassini Datasets on the Inferred Properties of Titan's surface
NASA Astrophysics Data System (ADS)
Sultan-Salem, A. K.; Tyler, G. L.
2006-12-01
Microwave (2.18 cm-λ) backscatter data for the surface of Titan obtained with the Cassini Radar instrument exhibit a significant diffuse scattering component. An empirical scattering law of the form A cos^n(θ), with free parameters A and n, is often employed to model diffuse scattering, which may involve one or more unidentified mechanisms and processes, such as volume scattering and scattering from surface structure much smaller than the electromagnetic wavelength used to probe the surface. The cosine law in general is not explicit in its dependence on either the surface structure or the electromagnetic parameters. Further, the cosine law is often only a poor representation of the observed diffuse scattering, as can be inferred from standard goodness-of-fit measures such as statistical significance. We fit four Cassini datasets (TA Inbound and Outbound, T3 Outbound, and T8 Inbound) with a linear combination of a cosine law and a generalized fractal-based quasi-specular scattering law (A. K. Sultan-Salem and G. L. Tyler, J. Geophys. Res., 111, E06S08, doi:10.1029/2005JE002540, 2006), in order to demonstrate how the presence of diffuse scattering considerably increases the uncertainty in surface parameters inferred from the quasi-specular component, typically the dielectric constant of the surface material and the surface root-mean-square slope. This uncertainty impacts inferences concerning the physical properties of surfaces that display mixed scattering properties.
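Fitting the empirical A cos^n θ law can be sketched as a log-space linear regression; the synthetic, noise-free data below are purely illustrative and not Cassini measurements:

```python
from math import cos, log, exp, radians

def fit_cosine_law(angles_deg, sigma):
    """Least-squares fit of sigma = A * cos(theta)**n in log space:
    log(sigma) = log(A) + n * log(cos(theta))."""
    xs = [log(cos(radians(t))) for t in angles_deg]
    ys = [log(s) for s in sigma]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # slope -> exponent n
    A = exp((sy - n * sx) / m)                      # intercept -> amplitude A
    return A, n

# Synthetic backscatter generated with A=0.5, n=1.5:
angles = [10, 20, 30, 40, 50, 60]
sigma = [0.5 * cos(radians(t)) ** 1.5 for t in angles]
A, n = fit_cosine_law(angles, sigma)
print(round(A, 3), round(n, 3))  # 0.5 1.5
```

With real mixed-scattering data, the diffuse term fitted this way and the quasi-specular term trade off against each other, which is the source of the increased parameter uncertainty the abstract describes.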