Invariant Domain Watermarking Using Heaviside Function of Order Alpha and Fractional Gaussian Field
Abbasi, Almas; Woo, Chaw Seng; Ibrahim, Rabha Waell; Islam, Saeed
2015-01-01
Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST invariant domain watermarking technique based on fractional calculus. We have constructed a domain using the Heaviside function of order alpha (HFOA). The HFOA models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We have also constructed a fractional variance formula using a fractional Gaussian field. A cross-correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore, the proposed method enables blind watermark detection, in which the original image is not required during detection, thereby making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness. PMID:25884854
Video multiple watermarking technique based on image interlacing using DWT.
Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M
2014-01-01
Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communication bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
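The Arnold transform used above for watermark encryption/decryption is a simple pixel permutation. Below is a minimal Python/NumPy sketch of the cat-map scrambling and its inverse for a square N x N array; the function names and iteration count are illustrative and not taken from the paper.

import numpy as np

def arnold_scramble(img, iterations=1):
    # Arnold cat map on an N x N image: (x, y) -> ((x + y) mod N, (x + 2y) mod N)
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1):
    # Inverse map: original position is ((2x - y) mod N, (y - x) mod N)
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out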
Optical 3D watermark based digital image watermarking for telemedicine
NASA Astrophysics Data System (ADS)
Li, Xiao Wei; Kim, Seok Tae
2013-12-01
Region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. This paper presents a 3D-watermark-based medical image watermarking scheme that applies the watermark to the non-ROI of the medical image while preserving the ROI. In this scheme, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of the embedding process, and the 3D watermark can be reconstructed from the extracted EIA through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
Multipurpose image watermarking algorithm based on multistage vector quantization.
Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He
2005-06-01
The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility.
Chaotic CDMA watermarking algorithm for digital image in FRFT domain
NASA Astrophysics Data System (ADS)
Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng
2007-11-01
A digital image watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented in this paper by utilizing a chaotic CDMA technique. As a popular and typical transmission technique, CDMA has many advantages such as privacy, anti-jamming, and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-codes used in traditional CDMA systems. The watermark data are divided into segments, each corresponding to a different chaotic QOC; the segments are modulated into CDMA watermark data and embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermark segment by calculating the correlation coefficient between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three aspects: imperceptibility, robustness against attacks, and security.
Realisation and robustness evaluation of a blind spatial domain watermarking technique
NASA Astrophysics Data System (ADS)
Parah, Shabir A.; Sheikh, Javaid A.; Assad, Umer I.; Bhat, Ghulam M.
2017-04-01
A blind digital image watermarking scheme based on the spatial domain is presented and investigated in this paper. The watermark is embedded in intermediate significant bit planes, in addition to the least significant bit plane, at address locations determined by a pseudorandom address vector (PAV). Embedding with the PAV makes it difficult for an adversary to locate the watermark and hence adds to the security of the system. The scheme has been evaluated to ascertain the spatial locations that are robust to various image-processing and geometric attacks: JPEG compression, additive white Gaussian noise, salt-and-pepper noise, filtering, and rotation. The experimental results reveal that, for all the above-mentioned attacks other than rotation, the higher the bit plane in which the watermark is embedded, the more robust the system. Further, the perceptual quality of the watermarked images obtained with the proposed system has been compared with some state-of-the-art watermarking techniques. The proposed technique outperforms the techniques under comparison, even when compared with the worst-case peak signal-to-noise ratio obtained in our scheme.
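A minimal sketch of this kind of bit-plane embedding, assuming a grayscale NumPy image; the seeded generator stands in for the paper's pseudorandom address vector (PAV), and the plane index and function names are illustrative.

import numpy as np

def embed_bitplane(img, bits, plane=3, seed=42):
    # Write watermark bits into the chosen bit plane (0 = LSB) at
    # pseudorandom pixel addresses derived from a shared seed.
    flat = img.flatten().astype(np.uint8)
    rng = np.random.default_rng(seed)
    addr = rng.choice(flat.size, size=len(bits), replace=False)
    mask = 0xFF ^ (1 << plane)
    for b, a in zip(bits, addr):
        flat[a] = (flat[a] & mask) | (int(b) << plane)
    return flat.reshape(img.shape)

def extract_bitplane(img, n_bits, plane=3, seed=42):
    # Blind extraction: regenerate the same addresses and read the plane.
    flat = img.flatten().astype(np.uint8)
    rng = np.random.default_rng(seed)
    addr = rng.choice(flat.size, size=n_bits, replace=False)
    return [int((flat[a] >> plane) & 1) for a in addr]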
Tuckley, Kushal
2017-01-01
In telemedicine systems, critical medical data is shared on a public communication channel. This increases the risk of unauthorised access to patient's information. This underlines the importance of secrecy and authentication for the medical data. This paper presents two innovative variations of classical histogram shift methods to increase the hiding capacity. The first technique divides the image into nonoverlapping blocks and embeds the watermark individually using the histogram method. The second method separates the region of interest and embeds the watermark only in the region of noninterest. This approach preserves the medical information intact. This method finds its use in critical medical cases. The high PSNR (above 45 dB) obtained for both techniques indicates imperceptibility of the approaches. Experimental results illustrate superiority of the proposed approaches when compared with other methods based on histogram shifting techniques. These techniques improve embedding capacity by 5–15% depending on the image type, without affecting the quality of the watermarked image. Both techniques also enable lossless reconstruction of the watermark and the host medical image. A higher embedding capacity makes the proposed approaches attractive for medical image watermarking applications without compromising the quality of the image. PMID:29104744
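For reference, a simplified single-image sketch of classical histogram-shift embedding, the operation both variations above build on; it assumes an 8-bit grayscale NumPy image with an empty histogram bin above the peak, and it omits the block division and ROI/RONI separation that the paper adds.

import numpy as np

def histogram_shift_embed(img, bits):
    # Find the peak bin P and an empty bin Z > P, shift the bins between
    # them up by one, then encode bits in the pixels equal to P.
    hist = np.bincount(img.flatten(), minlength=256)
    peak = int(np.argmax(hist))
    zeros = np.where(hist == 0)[0]
    zero = int(zeros[zeros > peak][0])      # assumes an empty bin above the peak
    out = img.astype(np.int32).copy()
    out[(out > peak) & (out < zero)] += 1   # make room next to the peak
    coords = np.argwhere(img == peak)
    for bit, (r, c) in zip(bits, coords):   # capacity = number of peak-valued pixels
        if bit:
            out[r, c] = peak + 1
    return out.astype(np.uint8), peak, zero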
A blind dual color images watermarking based on IWT and state coding
NASA Astrophysics Data System (ADS)
Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu
2012-04-01
In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced in this paper. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G, and B components of the color watermark image are embedded into the Y, Cr, and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.
Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt
NASA Astrophysics Data System (ADS)
Wong, M. L. Dennis; Goh, Antionette W.-T.; Chua, Hong Siang
Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed to provide a mechanism for securely authenticating digital medical images. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead of coefficient calculation. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show authentication capability. Possible improvements on the proposed scheme are also presented before the conclusions.
Watermarking of ultrasound medical images in teleradiology using compressed watermark
Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq
2016-01-01
Abstract. Internet-accessible medical images in teleradiology face security threats due to nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques including Lempel-Ziv-Welch (LZW) were tested for watermark compression. The performance of these techniques was compared in terms of bit reduction and compression ratio. LZW performed best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI was compared with other watermarking schemes and found to perform better. PMID:26839914
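A rough sketch of the watermark construction and RONI embedding steps described above, assuming the ROI and RONI are NumPy uint8 arrays; zlib is used here as a stand-in for the LZW codec compared in the paper, and SHA-256 stands in for the unspecified hash function.

import hashlib
import zlib
import numpy as np

def build_watermark(roi):
    # Watermark = ROI pixels plus their hash, losslessly compressed.
    payload = roi.tobytes() + hashlib.sha256(roi.tobytes()).digest()
    return zlib.compress(payload)

def embed_in_roni_lsb(roni, watermark_bytes):
    # Write the compressed watermark bit-by-bit into the LSBs of RONI pixels.
    bits = np.unpackbits(np.frombuffer(watermark_bytes, dtype=np.uint8))
    flat = roni.flatten()
    assert bits.size <= flat.size, "RONI too small for the payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(roni.shape)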
Synchronization-insensitive video watermarking using structured noise pattern
NASA Astrophysics Data System (ADS)
Setyawan, Iwan; Kakes, Geerd; Lagendijk, Reginald L.
2002-04-01
For most watermarking methods, preserving the synchronization between the watermark embedded in digital data (image, audio or video) and the watermark detector is critical to the success of the watermark detection process. Many digital watermarking attacks exploit this fact by disturbing the synchronization of the watermark and the watermark detector, and thus disabling proper watermark detection without having to actually remove the watermark from the data. Some techniques have been proposed in the literature to deal with this problem. Most of these techniques employ methods to reverse the distortion caused by the attack and then try to detect the watermark from the repaired data. In this paper, we propose a watermarking technique that is not sensitive to synchronization. This technique uses a structured noise pattern and embeds the watermark payload into the geometrical structure of the embedded pattern.
Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme
Priya, R. Lakshmi; Sadasivam, V.
2015-01-01
Providing authentication and integrity in medical images is a challenge, and this work proposes a new blind fragile region-based lossless reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least significant bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI as reported in the literature. Experiments were carried out to evaluate the performance of the scheme, and the assessment reveals that the ROI is extracted intact and that the PSNR values obtained indicate that the presented scheme offers greater protection for health imagery. PMID:26649328
GenInfoGuard--a robust and distortion-free watermarking technique for genetic data.
Iftikhar, Saman; Khan, Sharifullah; Anwar, Zahid; Kamran, Muhammad
2015-01-01
Genetic data, in digital format, is used in different biological phenomena such as DNA translation, mRNA transcription and protein synthesis. The accuracy of these biological phenomena depends on genetic codes and all subsequent processes. To computerize the biological procedures, different domain experts are provided with authorized access to the genetic codes; as a consequence, the ownership protection of such data is inevitable. For this purpose, watermarks serve as the proof of ownership of data. While protecting data, embedded hidden messages (watermarks) influence the genetic data; therefore, the accurate execution of the relevant processes and the overall result becomes questionable. Most of the DNA based watermarking techniques modify the genetic data and are therefore vulnerable to information loss. Distortion-free techniques make sure that no modifications occur during watermarking; however, they are fragile to malicious attacks and therefore cannot be used for ownership protection (particularly, in the presence of a threat model). Therefore, there is a need for a technique that is robust and also prevents unwanted modifications. In this spirit, a watermarking technique with the aforementioned characteristics is proposed in this paper. The proposed technique makes sure that: (i) the ownership rights are protected by means of a robust watermark; and (ii) the integrity of genetic data is preserved. The proposed technique-GenInfoGuard-ensures its robustness through the "watermark encoding" in permuted values, and exhibits high decoding accuracy against various malicious attacks.
A New Quantum Watermarking Based on Quantum Wavelet Transforms
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Rasoul Pourarian, Mohammad; Farouk, Ahmed
2017-06-01
Quantum watermarking is a technique to embed specific information, usually the owner's identification, into quantum cover data, typically for copyright protection purposes. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed which includes scrambling, embedding and extracting procedures. The invisibility and robustness performance of the proposed watermarking method is confirmed by simulation. The invisibility of the scheme is examined by the peak-signal-to-noise ratio (PSNR) and the histogram calculation. Furthermore, the robustness of the scheme is analyzed by the Bit Error Rate (BER) and the Two-Dimensional Correlation (Corr 2-D) calculation. The simulation results indicate that the proposed watermarking scheme offers not only acceptable visual quality but also good resistance against different types of attack. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, Iran
Deterring watermark collusion attacks using signal processing techniques
NASA Astrophysics Data System (ADS)
Lemma, Aweke N.; van der Veen, Michiel
2007-02-01
A collusion attack is a malicious watermark removal attack in which the hacker has access to multiple copies of the same content with different watermarks and tries to remove the watermark using averaging. In the literature, several solutions to collusion attacks have been reported. The mainstream solutions aim at designing watermark codes that are inherently resistant to collusion attacks. Other approaches propose signal processing based solutions that aim at modifying the watermarked signals in such a way that averaging multiple copies of the content leads to a significant degradation of the content quality. In this paper, we present a signal processing based technique that may be deployed for deterring collusion attacks. We formulate the problem in the context of electronic music distribution, where the content is generally available in the compressed domain. Thus, we first extend the collusion resistance principles to bit stream signals and secondly present an experiment-based analysis to estimate a bound on the maximum number of modified versions of a content that satisfy a good perceptibility requirement on the one hand and the destructive averaging property on the other hand.
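For context, the averaging collusion attack that this work defends against is just a sample-wise mean over differently watermarked copies; independent zero-mean watermarks are attenuated roughly by a factor of 1/n. A minimal NumPy sketch (the function name is illustrative):

import numpy as np

def average_collusion(copies):
    # Colluders average several differently watermarked copies of the same
    # content so that the individual watermarks tend to cancel out.
    stack = np.stack([c.astype(np.float64) for c in copies])
    return stack.mean(axis=0)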
Generalized watermarking attack based on watermark estimation and perceptual remodulation
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry
2000-05-01
Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test them. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of Human Visual System (HVS). The new attack satisfies two important requirements. First, image quality after the attack as perceived by the HVS is not worse than the quality of the stego image. Secondly, the attack uses all available prior information about the watermark and cover image statistics to perform the best watermark removal or damage. The proposed attack is based on a stochastic formulation of the watermark removal problem, considering the embedded watermark as additive noise with some probability distribution. The attack scheme consists of two main stages: (1) watermark estimation and partial removal by a filtering based on a Maximum a Posteriori (MAP) approach; (2) watermark alteration and hiding through addition of noise to the filtered image, taking into account the statistics of the embedded watermark and exploiting HVS characteristics. Experiments on a number of real world and computer generated images show the high efficiency of the proposed attack against known academic and commercial methods: the watermark is completely destroyed in all tested images without altering the image quality. The approach can be used against watermark embedding schemes that operate either in coordinate domain, or transform domains like Fourier, DCT or wavelet.
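A highly simplified sketch of the two-stage attack described above; a median filter stands in for the MAP-based estimator and plain Gaussian noise for the HVS-shaped remodulation, so this illustrates the structure of the attack rather than the authors' exact formulation.

import numpy as np
from scipy.ndimage import median_filter

def estimation_remodulation_attack(stego, strength=1.0, noise_sigma=1.0, seed=0):
    # Stage 1: estimate the additive watermark as the difference between the
    # stego image and a denoised version of it, then partially remove it.
    stego = stego.astype(np.float64)
    denoised = median_filter(stego, size=3)
    wm_estimate = stego - denoised
    attacked = stego - strength * wm_estimate
    # Stage 2: hide what remains by adding noise (here unshaped Gaussian noise).
    rng = np.random.default_rng(seed)
    attacked += rng.normal(0.0, noise_sigma, stego.shape)
    return np.clip(attacked, 0, 255)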
Watermarking techniques for electronic delivery of remote sensing images
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; Magli, Enrico; Olmo, Gabriella
2002-09-01
Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification.
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.
A RONI Based Visible Watermarking Approach for Medical Image Authentication.
Thanki, Rohit; Borra, Surekha; Dwivedi, Vedvyas; Borisagar, Komal
2017-08-09
Nowadays medical data in the form of image files are often exchanged between different hospitals for use in telemedicine and diagnosis. Visible watermarking is extensively used for intellectual property identification of such medical images, but it leads to serious issues if proper regions for watermark insertion are not identified. In this paper, Region of Non-Interest (RONI) based visible watermarking for medical image authentication is proposed. In this technique, the RONI of the cover medical image is first identified using a Human Visual System (HVS) model. The watermark logo is then visibly inserted into the RONI of the cover medical image to obtain the watermarked medical image. Finally, the watermarked medical image is compared with the original medical image to measure the imperceptibility and authenticity of the proposed scheme. The experimental results showed that the proposed scheme reduces the computational complexity and improves the PSNR when compared to many existing schemes.
A Robust Image Watermarking in the Joint Time-Frequency Domain
NASA Astrophysics Data System (ADS)
Öztürk, Mahmut; Akan, Aydın; Çekiç, Yalçın
2010-12-01
With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques are proposed as a solution to copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method that is based on spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET) calculated by the Gabor expansion to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial and spectral domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is presented. A correlation-based detector is also proposed to detect and extract any possible watermarks on an image. The proposed watermarking method was tested on some commonly used test images under different signal processing attacks like additive noise, Wiener and Median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of the attacks.
A Robust Zero-Watermarking Algorithm for Audio
NASA Astrophysics Data System (ADS)
Chen, Ning; Zhu, Jie
2007-12-01
In traditional watermarking algorithms, the insertion of watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. Zero-watermarking technique can solve these problems successfully. Instead of embedding watermark, the zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still image and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signal is presented. The multiresolution characteristic of discrete wavelet transform (DWT), the energy compression characteristic of discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulant are combined to extract essential features from the host audio signal and they are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
Watermarking textures in video games
NASA Astrophysics Data System (ADS)
Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin
2014-02-01
Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of special challenges and requirements in terms of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements in video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in watermark container technique for real-time embedding. Furthermore, the embedding approach achieves high watermark payload to handle collusion secure fingerprinting codes with extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in aspects of transparency, robustness, security and performance. Especially, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games is assessed subjectively in game playing.
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers, distributors and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its close match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
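A minimal sketch of a correlation-based detector of the kind described above: the normalized correlation between the (possibly watermarked) DWT coefficients and the watermark sequence is compared against a threshold chosen for a target false-alarm rate in the Neyman-Pearson spirit. The coefficient selection and re-synchronization steps of WM2.0 are omitted, and the function name is illustrative.

import numpy as np

def correlation_detect(coeffs, watermark, threshold):
    # Normalized correlation between coefficient vector and watermark sequence;
    # presence is declared when the statistic exceeds the detection threshold.
    c = coeffs.flatten().astype(np.float64)
    w = watermark.flatten().astype(np.float64)
    cz = c - c.mean()
    wz = w - w.mean()
    rho = float(np.dot(cz, wz) / (np.linalg.norm(cz) * np.linalg.norm(wz)))
    return rho, rho > threshold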
Discrete Walsh Hadamard transform based visible watermarking technique for digital color images
NASA Astrophysics Data System (ADS)
Santhi, V.; Thangavelu, Arunkumar
2011-10-01
As the Internet grows enormously, illegal manipulation of digital multimedia data has become very easy with advanced technology tools. In order to protect such multimedia data from unauthorized access, digital watermarking systems are used. In this paper a new Discrete Walsh-Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiling manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
Watermarked cardiac CT image segmentation using deformable models and the Hermite transform
NASA Astrophysics Data System (ADS)
Gomez-Coronel, Sandra L.; Moya-Albor, Ernesto; Escalante-Ramírez, Boris; Brieva, Jorge
2015-01-01
Medical image watermarking is an open area for research and is a solution for the protection of copyright and intellectual property. One of the main challenges of this problem is that the marked images should not differ perceptually from the original images, allowing a correct diagnosis and authentication. Furthermore, we also aim at obtaining watermarked images with very little numerical distortion so that computer vision tasks such as segmentation of important anatomical structures are not impaired or affected. We propose a preliminary watermarking application in cardiac CT images based on a perceptive approach that includes a brightness model to generate a perceptive mask and identify the image regions where watermark detection becomes a difficult task for the human eye. We propose a normalization scheme of the image in order to improve robustness against geometric attacks. We follow a spread spectrum technique to insert an alphanumeric code, such as patient's information, within the watermark. The watermark scheme is based on the Hermite transform as a bio-inspired image representation model. In order to evaluate the numerical integrity of the image data after watermarking, we perform a segmentation task based on deformable models. The segmentation technique is based on a vector-valued level sets method such that, given a curve in a specific image, and subject to some constraints, the curve can evolve in order to detect objects. In order to drive the curve evolution we simultaneously introduce some image features like the gray level and the steered Hermite coefficients as texture descriptors. Segmentation performance was assessed by means of the Dice index and the Hausdorff distance. We tested different mark sizes and different insertion schemes on images that were later segmented either automatically or manually by physicians.
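The two segmentation-quality measures mentioned above can be computed directly from binary masks; a short sketch using NumPy and SciPy follows (the mask inputs and function names are illustrative):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_index(a, b):
    # Dice overlap between two binary segmentation masks.
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(a, b):
    # Symmetric Hausdorff distance between the point sets of two masks.
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])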
Image authentication by means of fragile CGH watermarking
NASA Astrophysics Data System (ADS)
Schirripa Spagnolo, Giuseppe; Simonetti, Carla; Cozzella, Lorenzo
2005-09-01
In this paper we propose a fragile marking system based on Computer Generated Hologram coding techniques, which is able to detect malicious tampering while tolerating some incidental distortions. A fragile watermark is a mark that is readily altered or destroyed when the host image is modified through a linear or nonlinear transformation. A fragile watermark monitors the integrity of the content of the image but not its numerical representation. Therefore the watermark is designed so that the integrity is proven if the content of the image has not been tampered with. Since digital images can be altered or manipulated with ease, the ability to detect changes to digital images is very important for many applications such as news reporting, medical archiving, or legal usage. The proposed technique can be applied to color images as well as to gray-scale ones. Using Computer Generated Hologram watermarking, the embedded mark can be easily recovered by means of a Fourier transform. Because of this, the host image could be tampered with and re-watermarked with the same holographic pattern. To avoid this possibility we have introduced an encryption method using asymmetric cryptography. The proposed scheme is based on the knowledge of original mark from the Authentication
Image-adaptive and robust digital wavelet-domain watermarking for images
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Liping
2018-03-01
We propose a new frequency-domain wavelet based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding/extracting the watermark. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful and complementary watermark images were embedded into the original image (host image) by odd-even quantization, modifying coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. The tests show good robustness against best-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
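A minimal sketch of odd-even quantization on a single coefficient, the embedding/extracting primitive named above; the quantization step is an arbitrary illustrative value, and the JND-based coefficient selection is omitted.

import numpy as np

def odd_even_embed(coeff, bit, step=8.0):
    # Quantize the coefficient so that the parity of the quantization index
    # carries one watermark bit (odd-even quantization).
    q = int(np.round(coeff / step))
    if q % 2 != bit:
        q += 1          # flip parity to match the bit
    return q * step

def odd_even_extract(coeff, step=8.0):
    # The embedded bit is simply the parity of the quantization index.
    return int(np.round(coeff / step)) % 2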
NASA Astrophysics Data System (ADS)
Bekkouche, S.; Chouarfia, A.
2011-06-01
Image watermarking can be defined as a technique that allows the insertion of imperceptible and indelible digital data into an image. In addition to its initial application, copyright protection, watermarking can be used in other fields, particularly in the medical field, in order to help secure images shared on the network for telemedicine applications. In this report we study some watermarking methods and the results of their combination. The first method is based on CDMA (Code Division Multiple Access) in the DWT and spatial domains, and its aim is to verify image authenticity; the second is reversible watermarking (least significant bits, LSB, with cryptography tools) and reversible contrast mapping (RCM), whose objective is to check the integrity of the image and to keep the confidentiality of the patient data. A new watermarking scheme combines the LSB-based reversible watermarking method with cryptography tools and the CDMA method in the spatial and DWT domains to verify the three security properties: integrity, authenticity and confidentiality of medical data and patient information. Finally, we compare these methods with respect to medical image quality parameters. Initially, an in-depth study of the characteristics of medical images would contribute to improving these methods, mitigating their limits and optimizing the results. Tests were done on MRI medical images, and quality measurements were made on the watermarked images to verify that this technique does not lead to a wrong diagnosis. The robustness of the watermarked images against attacks has been verified using the PSNR, SNR, MSE and MAE parameters; the experimental results demonstrate that the proposed algorithm is more robust in the DWT domain than in the spatial domain.
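The image-quality parameters used in the comparison above (MSE, MAE and PSNR; SNR is analogous) are standard measures and can be sketched as follows, assuming two same-sized grayscale images:

import numpy as np

def mse(a, b):
    # Mean squared error between original and watermarked image.
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def mae(a, b):
    # Mean absolute error.
    return np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))

def psnr(a, b, peak=255.0):
    # PSNR = 10 * log10(peak^2 / MSE), in dB.
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)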
Copyright protection of remote sensing imagery by means of digital watermarking
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella; Zanini, R.
2001-12-01
The demand for remote sensing data has increased dramatically, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. As in many other fields, along with the increase of market potential and product diffusion, the need arises for some sort of protection of the image products from unauthorized use. Such a need is a very crucial one because the Internet and other public/private networks have become preferred and effective means of data exchange. An important issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. Before applying watermarking techniques developed for multimedia applications to remote sensing applications, it is important that the requirements imposed by remote sensing imagery are carefully analyzed to investigate whether they are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: (1) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; (2) discussion of a case study where the performance of two popular, state-of-the-art watermarking techniques is evaluated in the light of the requirements at the previous point.
Robust watermark technique using masking and Hermite transform.
Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo
2016-01-01
This paper evaluates a watermark algorithm designed for digital images using a perceptive mask and a normalization process, thus preventing detection by the human eye, as well as ensuring its robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on derivatives of Gaussian functions. The applied watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits of the watermark can be modified while still permitting adequate extraction.
A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.
Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif
2012-08-01
This paper proposes a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important information hiding technique for audio, video, color images, and gray images, and it has been commonly used on digital objects with the developing technology of the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed; Kunhu, Alavi
2013-10-01
This paper proposes a blind multi-watermarking scheme based on designing two back-to-back encoders. The first encoder is implemented to embed a robust watermark into remote sensing imagery by applying a Discrete Cosine Transform (DCT) approach. Such a watermark is used in many applications to protect the copyright of the image. The second encoder embeds a fragile watermark using the SHA-1 hash function. The purpose of embedding a fragile watermark is to prove the authenticity of the image (i.e., to make tampering detectable). The proposed technique was developed in response to new challenges with piracy of remote sensing imagery ownership. This led researchers to look for different means to secure the ownership of satellite imagery and prevent the illegal use of these resources. Therefore, the Emirates Institution for Advanced Science and Technology (EIAST) proposed utilizing an existing data security concept by embedding a digital signature, a "watermark", into DubaiSat-1 satellite imagery. In this study, DubaiSat-1 images with 2.5 meter resolution are used as the cover and a colored EIAST logo is used as the watermark. In order to evaluate the robustness of the proposed technique, several attacks are applied, such as JPEG compression, rotation and synchronization attacks. Furthermore, tampering attacks are applied to prove image authenticity.
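A rough sketch of the fragile-watermark side only, assuming the SHA-1 digest is carried in the LSB plane (the abstract does not specify the insertion location, so the LSB choice and the function names here are assumptions); the DCT-based robust encoder is not shown.

import hashlib
import numpy as np

def fragile_signature(content):
    # SHA-1 digest of the image content with LSBs zeroed, so the digest
    # does not change when it is itself embedded into those LSBs.
    return hashlib.sha1(content.tobytes()).digest()

def embed_fragile(img):
    content = img & 0xFE                      # zero the LSB plane
    bits = np.unpackbits(np.frombuffer(fragile_signature(content), dtype=np.uint8))
    flat = content.flatten()
    flat[:bits.size] = flat[:bits.size] | bits
    return flat.reshape(img.shape)

def verify_fragile(img):
    content = img & 0xFE
    bits = np.unpackbits(np.frombuffer(fragile_signature(content), dtype=np.uint8))
    extracted = img.flatten()[:bits.size] & 1
    return np.array_equal(bits, extracted)    # False if the image was tampered with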
Quantum color image watermarking based on Arnold transformation and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng
In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling with Arnold transformations and steganography of the least significant bit (LSB). Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for the carrier and the watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through an image preprocessing technique that exchanges the image pixel positions and alters the color information based on Arnold transforms, simultaneously. Then, the scrambled watermark with 2^(n-1)×2^(n-1) image size and 24-qubit grayscale is further expanded to an image with size 2^n×2^n and 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image with 2^n×2^n size and 3-qubit information is generated at the same time; only the key image can be used to retrieve the original watermark. The extraction of the watermark is the reverse process of embedding, achieved by applying a sequence of operations in the reverse order. Experimental results involving different carrier and watermark images (i.e. conventional or non-quantum) are simulated on a classical computer using MATLAB 2014b, which illustrates that the present method performs well in terms of three items: visual quality, robustness and steganography capacity.
NASA Astrophysics Data System (ADS)
Le, Nam-Tuan
2017-05-01
Copyright protection and information security are two of the most important issues for digital data following the development of the internet and computer networks. As an important solution for protection, watermarking technology has become a challenging topic in industry and academic research. Watermarking technology can be classified into two categories: visible watermarking and invisible watermarking. The invisible technique has an advantage for user interaction because the watermark is not visible. Applying watermarking to communication is a challenge and a new direction for communication technology. In this paper we propose new research on communication technology using optical camera communication (OCC) based invisible watermarking. Besides the analysis of the performance of the proposed system, we also suggest the frame structure of the PHY and MAC layers for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standardization.
Watermarking and copyright labeling of printed images
NASA Astrophysics Data System (ADS)
Hel-Or, Hagit Z.
2001-07-01
Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression and geometric transformations. However, one must also consider the readily available attack of printing and scanning, under which the available watermarking techniques are not reliable. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes, bank checks, etc., and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of the date and type of transaction or the date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and is robust under reconstruction errors. The decoding algorithm is automatic given the watermarked image.
Music score watermarking by clef modifications
NASA Astrophysics Data System (ADS)
Schmucker, Martin; Yan, Hongning
2003-06-01
In this paper we present a new method for hiding data in music scores. In contrast to previously published algorithms, we investigate the possibilities of embedding information in clefs. Using the clef as information carrier has two advantages. First, a clef is present in each staff line, which guarantees a fixed capacity. Second, the clef defines the reference system for musical symbols, so music-content symbols, e.g. the notes and the rests, are not degraded by the manipulations. Music scores must be robust against greyscale-to-binary conversion. As a consequence, the information is embedded by modifying the black-and-white distribution of pixels in certain areas. We evaluate simple image processing mechanisms based on erosion and dilation for embedding the information. For retrieving the watermark, the b/w distribution is extracted from the given clef. To solve the synchronization problem, the watermarked clef is normalized in a pre-processing step. The normalization is based on moments. The areas used for watermarking are calculated by image segmentation techniques which consider the features of a clef. We analyze the capacity and robustness of the proposed method using different parameters. The proposed method can be combined with other music score watermarking methods to increase the capacity of existing watermarking techniques.
An intermediate significant bit (ISB) watermarking technique using neural networks.
Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna
2016-01-01
Prior research studies have shown that the peak signal to noise ratio (PSNR) is the most frequently used watermarked image quality metric for determining the levels of strength and weakness of watermarking algorithms. Conversely, normalised cross correlation (NCC) is the most common metric used after attacks are applied to a watermarked image to verify the strength of the algorithm used. Many researchers have used these approaches to evaluate their algorithms. These strategies have been used for a long time; however, this unfortunately limits the value of PSNR and NCC in reflecting the strength and weakness of watermarking algorithms. This paper considers this issue and determines threshold values of these two parameters for reflecting the amount of strength and weakness of watermarking algorithms. We used our novel watermarking technique to embed four watermarks in the intermediate significant bits (ISB) of six image files one-by-one, replacing the image pixels with new pixels while, at the same time, keeping the new pixels very close to the original pixels. This approach gains improved robustness based on the PSNR and NCC values that were gathered. A neural network model was built that uses the image quality metric (PSNR and NCC) values obtained from the ISB watermarking of six grey-scale images as the desired output and that is trained on each watermarked image's PSNR and NCC. The neural network predicts the watermarked image's PSNR together with NCC after attacks when a portion of the output of the same or different types of image quality metrics (PSNR and NCC) is available. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
Content-based audio authentication using a hierarchical patchwork watermark embedding
NASA Astrophysics Data System (ADS)
Gulbis, Michael; Müller, Erika
2010-05-01
Content-based audio authentication watermarking techniques extract perceptually relevant audio features, which are robustly embedded into the audio file to be protected. Manipulations of the audio file are detected on the basis of changes between the originally embedded feature information and the features extracted anew during verification. The main challenges of content-based watermarking are, on the one hand, the identification of a suitable audio feature to distinguish between content-preserving and malicious manipulations, and on the other hand, the development of a watermark which is robust against content-preserving modifications and able to carry the whole authentication information. The payload requirements are significantly higher compared to transaction watermarking or copyright protection. Finally, the watermark embedding should not influence the feature extraction, to avoid false alarms. Current systems still lack a sufficient alignment of watermarking algorithm and feature extraction. In previous work we developed a content-based audio authentication watermarking approach. The feature is based on changes in the DCT domain over time. A patchwork-algorithm-based watermark was used to embed multiple one-bit watermarks. The embedding process uses the feature domain without inflicting distortions on the feature. The watermark payload is limited by the feature extraction, more precisely the critical bands. The payload is inversely proportional to the segment duration of the audio file segmentation. Transparency behavior was analyzed in dependence of segment size and thus the watermark payload. At a segment duration of about 20 ms the transparency shows an optimum (measured in units of Objective Difference Grade). Transparency and/or robustness decrease rapidly for working points beyond this area. Therefore, these working points are unsuitable to gain the further payload needed for embedding the whole authentication information. In this paper we present a hierarchical extension of the watermark method to overcome the limitations given by the feature extraction. The approach is a recursive application of the patchwork algorithm onto its own patches, with a modified patch selection to ensure a better signal-to-noise ratio for the watermark embedding. The robustness evaluation was done by compression (mp3, ogg, aac), normalization, and several attacks of the StirMark Benchmark for Audio suite. Compared on the basis of the same payload and transparency, the hierarchical approach shows improved robustness.
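For reference, a minimal one-bit patchwork sketch, the primitive that the hierarchical scheme above recurses on: two seeded pseudorandom subsets are shifted in opposite directions, and detection tests the difference of their means. The critical-band patch selection and the recursive application are omitted, and the names and default values are illustrative.

import numpy as np

def patchwork_embed(samples, delta=1.0, seed=0):
    # Pick two disjoint pseudorandom subsets A and B of the samples and add
    # +delta to A and -delta to B; one such patch pair carries one bit.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(samples.size)
    half = samples.size // 2
    out = samples.astype(np.float64).copy()
    out[idx[:half]] += delta
    out[idx[half:2 * half]] -= delta
    return out

def patchwork_detect(samples, seed=0):
    # The statistic mean(A) - mean(B) is close to 2*delta when the mark is
    # present and close to zero otherwise.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(samples.size)
    half = samples.size // 2
    s = samples.astype(np.float64)
    return s[idx[:half]].mean() - s[idx[half:2 * half]].mean()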
A Secure and Robust Object-Based Video Authentication System
NASA Astrophysics Data System (ADS)
He, Dajun; Sun, Qibin; Tian, Qi
2004-12-01
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images, and the extracted images from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
NASA Astrophysics Data System (ADS)
Dittmann, Jana; Steinebach, Martin; Wohlmacher, Petra; Ackermann, Ralf
2002-12-01
Digital watermarking is well known as an enabling technology to prove ownership of copyrighted material, detect originators of illegally made copies, monitor the usage of copyrighted multimedia data, and analyze the spread of the data over networks and servers. Research has shown that data hiding techniques can be applied successfully to other application areas such as manipulation recognition. In this paper, we show our innovative approach for integrating watermarking and cryptography based methods within a framework of new application scenarios spanning a wide range from dedicated and user-specific services and "Try&Buy" mechanisms to general means for long-term customer relationships. The tremendous recent efforts to develop and deploy ubiquitous mobile communication possibilities are changing the demands but also the possibilities for establishing new business and commerce relationships. In particular, we motivate annotation watermarks and aspects of M-Commerce to show important scenarios for access control. Based on a description of the challenges of the application domain and our latest work, we discuss which methods can be used for establishing services in a fast, convenient, and secure way for conditional access services based on digital watermarking combined with cryptographic techniques. We introduce an example scenario for digital audio and an overview of the steps needed to establish these concepts practically.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
NASA Astrophysics Data System (ADS)
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the Internet, anyone can publish their creations as digital data simply and inexpensively, and the data are easily accessed by everyone. However, a problem appears when someone else claims that the creation is their property or modifies part of it. This makes copyright protection necessary; one example is the watermarking of digital images. The application of watermarking techniques to digital data, especially images, enables total invisibility of the data inserted in the carrier image. The carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in high frequencies is robust to Gaussian noise.
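The SVD embedding step common to both variants can be sketched as follows. This minimal Python/NumPy example is an illustrative sketch, not the paper's exact SVD-DWT or SVD-DCT pipeline (which applies the decomposition to transform subbands); the scaling factor alpha and the direct use of the image are assumptions:

    import numpy as np

    def svd_embed(host, watermark, alpha=0.05):
        """Classic SVD watermarking sketch: perturb the host's singular values."""
        U, S, Vt = np.linalg.svd(host.astype(float), full_matrices=False)
        Uw, Sw, Vtw = np.linalg.svd(np.diag(S) + alpha * watermark, full_matrices=False)
        watermarked = U @ np.diag(Sw) @ Vt
        side_info = (Uw, Vtw, S)            # kept for extraction (semi-blind scheme)
        return watermarked, side_info

    def svd_extract(suspect, side_info, alpha=0.05):
        """Recover the watermark from the suspect image's singular values."""
        Uw, Vtw, S = side_info
        _, S_sus, _ = np.linalg.svd(suspect.astype(float), full_matrices=False)
        return (Uw @ np.diag(S_sus) @ Vtw - np.diag(S)) / alpha

    host = np.random.randint(0, 256, (64, 64)).astype(float)
    wm = np.random.randint(0, 2, (64, 64)).astype(float)
    marked, info = svd_embed(host, wm)
    recovered = svd_extract(marked, info)

In the paper's schemes, the host matrix would be a DWT subband or block-DCT coefficient array rather than the raw image, and alpha corresponds to the scaling factor reported to give good quality below 0.1.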
An optical watermarking solution for color personal identification pictures
NASA Astrophysics Data System (ADS)
Tan, Yi-zhou; Liu, Hai-bo; Huang, Shui-hua; Sheng, Ben-jian; Pan, Zhong-ming
2009-11-01
This paper presents a new approach for embedding authentication information into images on printed materials based on an optical projection technique. Our experimental setup consists of two parts: one is a common camera, and the other is an LCD projector, which projects a pattern onto the subject's body (especially the face). The pattern, generated by a computer, acts as an illumination light source with a sinusoidal distribution and is also the watermark signal. For a color image, the watermark is embedded into the blue channel. When we take pictures (256×256, 512×512, and 567×390 pixels, respectively), an invisible mark is embedded directly into the magnitude coefficients of the Discrete Fourier Transform (DFT) at the moment of exposure. Both optical and digital correlation are suitable for detection of this type of watermark. The decoded watermark is a set of concentric circles or sectors in the DFT domain (middle-frequency region), which is robust to photographing, printing, and scanning. Unlawful people may modify or replace the original photograph to make fake passports, driver's licenses, and so on. Experiments show that it is difficult to forge certificates in which a watermark has been embedded by our projector-camera combination, which is based on an analogue watermarking method rather than a classical digital one.
A new approach of watermarking technique by means multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on is a legal problem that has not yet found a solution. It is therefore crucial to create and develop methods and numerical algorithms to solve copyright problems. In this paper we focus on a new approach to watermarking applied to digital color images. Our aim is to describe the realized watermarking algorithm based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report extensive experimentation and some important numerical results in order to show the robustness of the proposed algorithm to geometrical attacks.
A wavelet domain adaptive image watermarking method based on chaotic encryption
NASA Astrophysics Data System (ADS)
Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun
2009-10-01
Digital watermarking, a specific branch of steganography that can be used in various applications, provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image watermarking method using a chaotic stream cipher and properties of human visual perception. The secret information, which can be seen as a watermark, is hidden in a host image that can be publicly accessed, so the transportation of the secret information does not attract the attention of illegal receivers. The experimental results show that the method is imperceptible and robust against common image processing operations.
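The chaotic stream encryption step can be illustrated with a minimal sketch. The paper does not specify the chaotic map or key values, so the logistic map with x0 and r acting as the key is an assumption; the watermark bits are XORed with a chaos-generated keystream before wavelet-domain embedding:

    import numpy as np

    def logistic_keystream(n_bits, x0=0.3751, r=3.99):
        """Generate a chaotic bit stream from the logistic map (x0 and r act as the key)."""
        x, bits = x0, np.empty(n_bits, dtype=np.uint8)
        for i in range(n_bits):
            x = r * x * (1.0 - x)
            bits[i] = 1 if x > 0.5 else 0
        return bits

    def chaotic_encrypt(watermark_bits, x0=0.3751, r=3.99):
        """XOR the watermark bits with the chaotic keystream; the same call decrypts."""
        ks = logistic_keystream(len(watermark_bits), x0, r)
        return np.bitwise_xor(np.asarray(watermark_bits, dtype=np.uint8), ks)

    wm = np.random.randint(0, 2, 128).astype(np.uint8)
    enc = chaotic_encrypt(wm)
    assert np.array_equal(chaotic_encrypt(enc), wm)   # XOR is its own inverse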
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) or telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images show that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.
Image watermarking capacity analysis based on Hopfield neural network
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Hongbin
2004-11-01
In watermarking schemes, watermarking can be viewed as a communication problem. Almost all previous work on image watermarking capacity is based on information theory, using the Shannon formula to calculate the capacity of watermarking. In this paper, we present a blind watermarking algorithm using a Hopfield neural network and analyze watermarking capacity based on the neural network. In our watermarking algorithm, the watermarking capacity is determined by the attraction basin of the associative memory.
Optimum decoding and detection of a multiplicative amplitude-encoded watermark
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; De Rosa, Alessia; Piva, Alessandro
2002-04-01
The aim of this paper is to present a novel approach to the decoding and detection of multibit, multiplicative watermarks embedded in the frequency domain. The watermark payload is conveyed by amplitude modulating a pseudo-random sequence, thus resembling conventional DS spread spectrum techniques. As opposed to conventional communication systems, though, the watermark is embedded within the host DFT coefficients by using a multiplicative rule. The watermark decoding technique presented in the paper is an optimum one, in that it minimizes the bit error probability. The problem of watermark presence assessment, which is often underestimated by state-of-the-art research on multibit watermarking, is addressed too, and the optimum detection rule is derived according to the Neyman-Pearson criterion. Experimental results are shown both to demonstrate the validity of the theoretical analysis and to highlight the good performance of the proposed system.
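The multiplicative embedding rule can be illustrated with a short sketch. The Python/NumPy fragment below is a simplified sketch, not the paper's optimum decoder: the mid-frequency coefficient selection, the chip count, the modulation depth, and a plain correlation detector are assumptions, and DFT conjugate symmetry is handled only approximately by taking the real part after the inverse transform.

    import numpy as np

    def _candidate_indices(shape):
        """Flat indices of mid-frequency DFT coefficients (DC and very low frequencies excluded)."""
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        radius = np.sqrt(fy * fy + fx * fx)
        return np.flatnonzero((radius > 0.1) & (radius < 0.4))

    def multiplicative_embed(image, bits, key, gamma=0.25, chips_per_bit=1024):
        """Embed bits as y_k = x_k * (1 + gamma * b * w_k) on selected DFT magnitudes."""
        F = np.fft.fft2(image.astype(float))
        mag, phase = np.abs(F), np.angle(F)
        rng = np.random.default_rng(key)
        idx = rng.choice(_candidate_indices(image.shape),
                         size=len(bits) * chips_per_bit, replace=False)
        w = rng.choice([-1.0, 1.0], size=idx.size)                 # pseudo-random carrier
        b = np.repeat(np.where(np.asarray(bits) > 0, 1.0, -1.0), chips_per_bit)
        mag.ravel()[idx] *= 1.0 + gamma * b * w
        return np.fft.ifft2(mag * np.exp(1j * phase)).real         # real-part projection

    def correlation_decode(image, n_bits, key, chips_per_bit=1024):
        """Decode each bit from the correlation between received magnitudes and the carrier."""
        mag = np.abs(np.fft.fft2(image.astype(float))).ravel()
        rng = np.random.default_rng(key)
        idx = rng.choice(_candidate_indices(image.shape),
                         size=n_bits * chips_per_bit, replace=False)
        w = rng.choice([-1.0, 1.0], size=idx.size)
        corr = (mag[idx] * w).reshape(n_bits, chips_per_bit).sum(axis=1)
        return (corr > 0).astype(int)

    img = np.random.rand(256, 256) * 255.0
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = multiplicative_embed(img, bits, key=5)
    decoded = correlation_decode(marked, len(bits), key=5)   # typically equals bits

Because only the real part is kept, the effective modulation depth is roughly halved in this sketch; the paper's ML-optimal decoder and Neyman-Pearson detector are not reproduced here.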
Robust 3D DFT video watermarking
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; O'Ruanaidh, Joseph J.; Pun, Thierry
1999-04-01
This paper proposes a new approach for digital watermarking and secure copyright protection of videos, the principal aim being to discourage illicit copying and distribution of copyrighted material. The method presented here is based on the discrete Fourier transform (DFT) of three-dimensional chunks of a video scene, in contrast with previous works on video watermarking where each video frame was marked separately, or where only intra-frame or motion compensation parameters were marked in MPEG compressed videos. Two kinds of information are hidden in the video: a watermark and a template. Both are encoded using an owner key to ensure the system security and are embedded in the 3D DFT magnitude of video chunks. The watermark is copyright information encoded in the form of a spread spectrum signal. The template is a key-based grid and is used to detect and invert the effect of frame-rate changes, aspect-ratio modification and rescaling of frames. The template search and matching is performed in the log-log-log map of the 3D DFT magnitude. The performance of the presented technique is evaluated experimentally and compared with a frame-by-frame 2D DFT watermarking approach.
DWT-based stereoscopic image watermarking
NASA Astrophysics Data System (ADS)
Chammem, A.; Mitrea, M.; Prêteux, F.
2011-03-01
Watermarking has already imposed itself as an effective and reliable solution for conventional multimedia content protection (image/video/audio/3D). By persistently (robustly) and imperceptibly (transparently) inserting some extra data into the original content, the illegitimate use of data can be detected without imposing any annoying constraint on a legal user. The present paper deals with stereoscopic image protection by means of watermarking techniques. That is, we first investigate the peculiarities of visual stereoscopic content from the transparency and robustness points of view. Then, we advance a new watermarking scheme designed to reach the trade-off between transparency and robustness while ensuring a prescribed quantity of inserted information. Finally, this method is evaluated on two stereoscopic image corpora (natural images and medical data).
NASA Astrophysics Data System (ADS)
Rodriguez, Tony F.; Cushman, David A.
2003-06-01
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi Loss Function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
Optical/digital identification/verification system based on digital watermarking technology
NASA Astrophysics Data System (ADS)
Herrigel, Alexander; Voloshynovskiy, Sviatoslav V.; Hrytskiv, Zenon D.
2000-06-01
This paper presents a new approach for the secure integrity verification of driver licenses, passports, or other analogue identification documents. The system embeds (detects) the reference number of the identification document with DCT watermark technology in (from) the photo of the identification document holder. During verification, the reference number is extracted and compared with the reference number printed in the identification document. The approach combines optical and digital image processing techniques. The detection system must be able to scan an analogue driver license or passport, convert the image of this document into a digital representation, and then apply the watermark verification algorithm to check the payload of the embedded watermark. If the payload of the watermark is identical to the printed visual reference number of the issuer, the verification is successful and the passport or driver license has not been modified. This approach constitutes a new class of application for watermark technology, which was originally targeted at the copyright protection of digital multimedia data. The presented approach substantially increases the security of the analogue identification documents used in many European countries.
Spatial-frequency composite watermarking for digital image copyright protection
NASA Astrophysics Data System (ADS)
Su, Po-Chyi; Kuo, C.-C. Jay
2000-05-01
Digital watermarks can be classified into two categories according to the embedding and retrieval domain, i.e., spatial- and frequency-domain watermarks. Because the two watermark types have different characteristics and limitations, their combination can have various interesting properties when applied to different applications. In this research, we examine two spatial-frequency composite watermarking schemes. In both cases, a frequency-domain watermarking technique is applied as the baseline structure of the system. The embedded frequency-domain watermark is robust against filtering and compression. A spatial-domain watermarking scheme is then built to compensate for some deficiencies of the frequency-domain scheme. The first composite scheme embeds a robust watermark in images to convey copyright or author information. The frequency-domain watermark contains the owner's identification number, while the spatial-domain watermark is embedded for image registration to resist cropping attacks. The second composite scheme embeds a fragile watermark for image authentication. The spatial-domain watermark helps in locating the tampered part of the image, while the frequency-domain watermark indicates the source of the image and prevents the double watermarking attack. Experimental results show that the two watermarks do not interfere with each other and that different functionalities can be achieved. Watermarks in both domains are detected without resorting to the original image. Furthermore, the resulting watermarked image preserves high fidelity without serious visual degradation.
Content fragile watermarking for H.264/AVC video authentication
NASA Astrophysics Data System (ADS)
Ait Sadi, K.; Guessoum, A.; Bouridane, A.; Khelifi, F.
2017-04-01
The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content-fragile video watermarking method using an independent authentication of each group of pictures (GOP) within the video. The discrete cosine transform is exploited in this work to generate the authentication data, which are treated as a fragile watermark and embedded in the motion vectors. The technique uses robust visual features extracted from the video pertaining to the set of selected macroblocks (MBs) which hold the best partition mode in a tree-structured motion compensation process. An additional degree of security is offered by the proposed method by using the keyed function HMAC-SHA-256 and randomly choosing candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves an accurate authentication technique with high fragility and fidelity whilst maintaining the original bitrate and the perceptual quality. Furthermore, its ability to detect tampered frames in case of spatial, temporal and colour manipulations is confirmed.
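The keyed authentication step can be illustrated briefly. The sketch below is illustrative only: the actual DCT-based feature definition and the motion-vector embedding are not reproduced, and the feature vector here is a placeholder. It derives a per-GOP fragile watermark payload with HMAC-SHA-256:

    import hmac, hashlib
    import numpy as np

    def gop_authentication_code(feature_vector, secret_key: bytes) -> bytes:
        """Keyed digest of a GOP's visual features; this digest would serve as the
        fragile watermark payload embedded into selected motion vectors (sketch)."""
        feature_bytes = np.asarray(feature_vector, dtype=np.float32).tobytes()
        return hmac.new(secret_key, feature_bytes, hashlib.sha256).digest()

    key = b"shared-secret"
    features = np.random.rand(64)                      # placeholder for DCT-based GOP features
    tag = gop_authentication_code(features, key)       # 32-byte (256-bit) payload
    print(len(tag) * 8, "watermark bits per GOP")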
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a previously presented biometric authentication algorithm for ID cardholders, namely the security of the embedded reference data and the aging process of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signatures and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) with the biometric hash, a validity timestamp, and the document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key, and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card is scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging process of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and illustrate the watermark embedding, retrieval, and dispute protocols, analyzing their requirements, advantages, and disadvantages in relation to security requirements.
Semantically transparent fingerprinting for right protection of digital cinema
NASA Astrophysics Data System (ADS)
Wu, Xiaolin
2003-06-01
Digital cinema, a new frontier and crown jewel of digital multimedia, has the potential of revolutionizing the science, engineering and business of movie production and distribution. The advantages of digital cinema technology over traditional analog technology are numerous and profound. But without effective and enforceable copyright protection measures, digital cinema can be more susceptible to widespread piracy, which can dampen or even prevent the commercial deployment of digital cinema. In this paper we propose a novel approach of fingerprinting each individual distribution copy of a digital movie for the purpose of tracing pirated copies back to their source. The proposed fingerprinting technique presents a fundamental departure from the traditional digital watermarking/fingerprinting techniques. Its novelty and uniqueness lie in a so-called semantic or subjective transparency property. The fingerprints are created by editing those visual and audio attributes that can be modified with semantic and subjective transparency to the audience. Semantically-transparent fingerprinting or watermarking is the most robust kind among all existing watermarking techniques, because it is content-based not sample-based, and semantically-recoverable not statistically-recoverable.
HDL-level automated watermarking of IP cores
NASA Astrophysics Data System (ADS)
Castillo, E.; Meyer-Baese, U.; Parrilla, L.; García, A.; Lloris, A.
2008-04-01
This paper presents significant improvements to our previous watermarking technique for Intellectual Property Protection (IPP) of IP cores. The technique relies on hosting the bits of a digital signature at the HDL design level using resources included within the original system. Thus, any attack trying to change or remove the digital signature will damage the design. The technique also includes a procedure for secure signature extraction requiring minimal modifications to the system. The new advances extend the applicability of this watermarking technique to any design, not only those including look-up tables, and provide an automatic tool for signature hosting. Synthesis results show that the application of the proposed watermarking strategy results in negligible degradation of system performance and very low area penalties, and that the use of the automated tool, in addition to easing signature hosting, leads to reduced area penalties.
Image watermarking against lens flare effects
NASA Astrophysics Data System (ADS)
Chotikawanid, Piyanart; Amornraksa, Thumrongrat
2017-02-01
Lens flare effects in various photo and camera software can nowadays partially or fully damage the watermark information within a watermarked image. In this paper we propose a spatial-domain image watermarking method that is robust against lens flare effects. The watermark embedding is based on the modification of the saturation color component in the HSV color space of a host image. For watermark extraction, a homomorphic filter is used to predict the original embedding component from the watermarked component, and the watermark is blindly recovered by taking the difference between the two components. The watermarked image's quality is evaluated by wPSNR, while the extracted watermark's accuracy is evaluated by NC. The experimental results against various types of lens flare effects from both computer software and mobile applications showed that our proposed method outperformed the previous methods.
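The embed-and-predict idea can be sketched compactly. The Python fragment below is a minimal sketch under assumed parameters: the window size, embedding strength, and the use of a simple log/low-pass/exp homomorphic filter are assumptions rather than the authors' exact design, and the S channel is taken as already extracted.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def embed_in_saturation(S, watermark_pm1, strength=0.02):
        """Add a +/-1 watermark pattern to the saturation channel (values in [0, 1])."""
        return np.clip(S + strength * watermark_pm1, 0.0, 1.0)

    def homomorphic_estimate(S_marked, size=7, eps=1e-6):
        """Predict the unwatermarked saturation: log -> low-pass -> exp (sketch)."""
        log_s = np.log(S_marked + eps)
        return np.exp(uniform_filter(log_s, size=size)) - eps

    def blind_extract(S_marked, size=7):
        """Recover the watermark sign by differencing marked and predicted channels."""
        return np.sign(S_marked - homomorphic_estimate(S_marked, size))

    S = np.random.rand(128, 128)                 # saturation channel of the host (assumed given)
    wm = np.random.choice([-1.0, 1.0], S.shape)
    extracted = blind_extract(embed_in_saturation(S, wm))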
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, reversible watermarking (also called lossless, invertible, or erasable watermarking) enables the recovery of the original, unwatermarked content after the watermarked content has been verified to be authentic. Such reversibility to get back the unwatermarked content is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and these coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
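The core expansion step on an integer wavelet coefficient can be written in a few lines. The sketch below is illustrative only; the expandability test, the location map, the JBIG2/arithmetic coding, and the hash embedding described in the abstract are omitted. It appends one payload bit to a coefficient and recovers both the bit and the original value exactly:

    def expand_embed(coeff: int, bit: int) -> int:
        """Expansion embedding: shift the coefficient left and append the payload bit."""
        return 2 * coeff + bit

    def expand_extract(marked: int) -> tuple[int, int]:
        """Recover the payload bit and the original coefficient exactly (reversible).
        Python's arithmetic shift and bitwise semantics also handle negative coefficients."""
        return marked & 1, marked >> 1

    bit, original = expand_extract(expand_embed(-7, 1))
    assert (bit, original) == (1, -7)

Only coefficients small enough to be "expandable" without overflow or visible distortion are used, which is exactly what the location map records.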
Combining Digital Watermarking and Fingerprinting Techniques to Identify Copyrights for Color Images
Hsieh, Shang-Lin; Chen, Chun-Che; Shen, Wen-Shan
2014-01-01
This paper presents a copyright identification scheme for color images that takes advantage of the complementary nature of watermarking and fingerprinting. It utilizes an authentication logo and the extracted features of the host image to generate a fingerprint, which is then stored in a database and also embedded in the host image to produce a watermarked image. When a dispute over the copyright of a suspect image occurs, the image is first processed by watermarking. If the watermark can be retrieved from the suspect image, the copyright can then be confirmed; otherwise, the watermark then serves as the fingerprint and is processed by fingerprinting. If a match in the fingerprint database is found, then the suspect image will be considered a duplicated one. Because the proposed scheme utilizes both watermarking and fingerprinting, it is more robust than those that only adopt watermarking, and it can also obtain the preliminary result more quickly than those that only utilize fingerprinting. The experimental results show that when the watermarked image suffers slight attacks, watermarking alone is enough to identify the copyright. The results also show that when the watermarked image suffers heavy attacks that render watermarking incompetent, fingerprinting can successfully identify the copyright, hence demonstrating the effectiveness of the proposed scheme. PMID:25114966
Global motion compensated visual attention-based video watermarking
NASA Astrophysics Data System (ADS)
Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
A new approach to pre-processing digital image for wavelet-based watermark
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido
2008-11-01
The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. Therefore it is strategic to identify and develop methods and numerical algorithms, stable and with low computational cost, that allow us to find a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and its good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the original image to a size suitable for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images shows resistance against geometric, filtering, and StirMark attacks with a low rate of false alarms.
Region of interest based robust watermarking scheme for adaptation in small displays
NASA Astrophysics Data System (ADS)
Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar
2010-02-01
Nowadays, multimedia data can be easily replicated, and copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data are decrypted they are no longer protected. Here we propose a new doubly protected digital image watermarking algorithm, which embeds the watermark image blocks into adjacent regions of the host image itself based on their block-similarity coefficients. The scheme is robust to various noise effects such as Poisson noise, Gaussian noise, and random noise, and thereby provides double security against noise and hackers. Since instrumentation applications require highly accurate data, the watermark image to be extracted from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared to existing techniques, and in addition we resize the result for various displays. Adaptive resizing for displays of various sizes is also investigated: we crop the required information in a frame and zoom it for a large display, or resize it for a small display using a threshold value; in either case the background is given little importance and only the foreground object gains importance, which is helpful in performing surgeries.
A novel blinding digital watermark algorithm based on lab color space
NASA Astrophysics Data System (ADS)
Dong, Bing-feng; Qiu, Yun-jie; Lu, Hong-tao
2010-02-01
A blind digital image watermarking algorithm must extract the watermark information without any extra information except the watermarked image itself. However, most current blind watermarking algorithms share the same disadvantage: besides the watermarked image, they also need the size and other information about the original image when extracting the watermark. This paper presents an innovative blind color image watermarking algorithm based on the Lab color space that does not have the disadvantages mentioned above. The algorithm first marks the watermark region size and position by embedding some regular blocks called anchor points in the image spatial domain, and then embeds the watermark into the image. In doing so, the watermark information can be easily extracted even after the image has been cropped or scaled. Experimental results show that the algorithm is particularly robust against color adjustment and geometric transformations. This algorithm has already been used in a copyright protection project and works very well.
Watermarking-based protection of remote sensing images: requirements and possible solutions
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella
2001-12-01
Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for the protection of the image products from non-authorized use. Such a need is crucial, especially because the Internet and other public/private networks have become preferred means of data exchange. A crucial issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; ii) analysis of the state of the art and performance evaluation of existing algorithms in terms of the requirements identified in the previous point.
Watermarking on 3D mesh based on spherical wavelet transform.
Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng
2004-03-01
In this paper we propose a robust watermarking algorithm for 3D meshes. The algorithm is based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using the spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, watermark embedding, spherical wavelet inverse transform, and finally resampling the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.
Embedding multiple watermarks in the DFT domain using low- and high-frequency bands
NASA Astrophysics Data System (ADS)
Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.
2005-03-01
Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
Watermarking protocols for authentication and ownership protection based on timestamps and holograms
NASA Astrophysics Data System (ADS)
Dittmann, Jana; Steinebach, Martin; Croce Ferri, Lucilla
2002-04-01
Digital watermarking has become an accepted technology for enabling multimedia protection schemes. One problem here is the security of these schemes. Without a suitable framework, watermarks can be replaced and manipulated. We discuss different protocols providing security against rightful-ownership attacks and other fraud attempts. We compare the characteristics of existing protocols for different media, such as direct-embedding or seed-based approaches, and the required attributes of the watermarking technology, such as robustness or payload. We introduce two new media-independent protocol schemes for rightful ownership authentication. With the first scheme we ensure the security of digital watermarks used for ownership protection with a combination of two watermarks: a first watermark from the copyright holder and a second watermark from a Trusted Third Party (TTP). It is based on hologram embedding, and the watermark consists of, e.g., a company logo. As an example we use digital images and specify the properties of the embedded additional security information. We identify the components necessary for the security protocol, such as timestamps, PKI, and cryptographic algorithms. The second scheme is used for authentication. It is designed for invertible watermarking applications which require high data integrity. We combine digital signature schemes and digital watermarking to provide publicly verifiable integrity. The original data can only be reproduced with a secret key. Both approaches provide solutions for copyright and authentication watermarking and are introduced for image data but can be easily adopted for video and audio data as well.
Design Time Optimization for Hardware Watermarking Protection of HDL Designs
Castillo, E.; Morales, D. P.; García, A.; Parrilla, L.; Todorovich, E.; Meyer-Baese, U.
2015-01-01
HDL-level design offers important advantages for the application of watermarking to IP cores, but its complexity also requires tools automating these watermarking algorithms. A new tool for signature distribution through combinational logic is proposed in this work. IPP@HDL, a previously proposed high-level watermarking technique, has been employed for evaluating the tool. IPP@HDL relies on spreading the bits of a digital signature at the HDL design level using combinational logic included within the original system. The development of this new tool for the signature distribution has not only extended and eased the applicability of this IPP technique, but it has also improved the signature hosting process itself. Three algorithms were studied in order to develop this automated tool. The selection of a cost function determines the best hosting solutions in terms of area and performance penalties on the IP core to protect. A 1D-DWT core and MD5 and SHA1 digital signatures were used in order to illustrate the benefits of the new tool and its optimization related to the extraction logic resources. Among the proposed algorithms, the alternative based on simulated annealing reduces the additional resources while maintaining an acceptable computation time and also saving designer effort and time. PMID:25861681
Spread spectrum image watermarking based on perceptual quality metric.
Zhang, Fan; Liu, Wenyu; Lin, Weisi; Ngan, King Ngi
2011-11-01
Efficient image watermarking calls for full exploitation of the perceptual distortion constraint. Second-order statistics of visual stimuli are regarded as critical features for perception. This paper proposes a second-order statistics (SOS)-based image quality metric, which considers the texture masking effect and the contrast sensitivity in Karhunen-Loève transform domain. Compared with the state-of-the-art metrics, the quality prediction by SOS better correlates with several subjectively rated image databases, in which the images are impaired by the typical coding and watermarking artifacts. With the explicit metric definition, spread spectrum watermarking is posed as an optimization problem: we search for a watermark to minimize the distortion of the watermarked image and to maximize the correlation between the watermark pattern and the spread spectrum carrier. The simple metric guarantees the optimal watermark a closed-form solution and a fast implementation. The experiments show that the proposed watermarking scheme can take full advantage of the distortion constraint and improve the robustness in return.
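The additive spread-spectrum embedding with a perceptual weighting can be sketched as follows. This is illustrative only: a local-standard-deviation mask stands in for the paper's SOS-based metric, the closed-form optimization is not reproduced, and the embedding strength is an assumed value.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(image, size=8):
        """Rough texture-masking map: local standard deviation (stand-in for the SOS metric)."""
        mean = uniform_filter(image, size)
        mean_sq = uniform_filter(image * image, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    def ss_embed(image, key, base_strength=0.1):
        """Add a pseudo-random carrier, amplified where texture masking allows it."""
        rng = np.random.default_rng(key)
        carrier = rng.choice([-1.0, 1.0], size=image.shape)
        mask = 1.0 + local_std(image)                      # embed more in textured areas
        return image + base_strength * mask * carrier

    def ss_detect(image, key):
        """Correlation detector: a large positive response indicates the watermark."""
        rng = np.random.default_rng(key)
        carrier = rng.choice([-1.0, 1.0], size=image.shape)
        return float(np.mean(image * carrier))

    host = np.random.rand(256, 256) * 255.0
    marked = ss_embed(host, key=7)
    print(ss_detect(marked, key=7), ss_detect(host, key=7))   # marked >> unmarked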
Watermarking scheme based on singular value decomposition and homomorphic transform
NASA Astrophysics Data System (ADS)
Verma, Deval; Aggarwal, A. K.; Agarwal, Himanshu
2017-10-01
A semi-blind watermarking scheme based on singular value decomposition (SVD) and homomorphic transform is proposed. This scheme ensures the digital security of an eight-bit gray-scale image by inserting an invisible eight-bit gray-scale watermark into it. The key approach of the scheme is to apply the homomorphic transform on the host image to obtain its reflectance component. The watermark is embedded into the singular values that are obtained by applying the singular value decomposition on the reflectance component. Peak signal-to-noise ratio (PSNR), normalized correlation coefficient (NCC) and mean structural similarity index measure (MSSIM) are used to evaluate the performance of the scheme. Invisibility of the watermark is ensured by visual inspection and the high PSNR values of watermarked images. Presence of the watermark is ensured by visual inspection and high NCC and MSSIM values of extracted watermarks. Robustness of the scheme is verified by high NCC and MSSIM values for attacked watermarked images.
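For reference, the two most common of these evaluation metrics can be computed as follows (standard formulas; the exact NCC variant used by the authors may differ, and MSSIM is not reproduced here):

    import numpy as np

    def psnr(original, distorted, peak=255.0):
        """Peak signal-to-noise ratio in dB."""
        mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def ncc(w_original, w_extracted):
        """Normalized correlation coefficient between original and extracted watermarks."""
        a = w_original.astype(float) - w_original.mean()
        b = w_extracted.astype(float) - w_extracted.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum()) * np.sqrt((b * b).sum())))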
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement attacks than the traditional QIM algorithm.
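The basic QIM building block referred to above can be sketched in a few lines. The following Python fragment is an illustrative sketch of plain QIM on a single real-valued coefficient; the quaternion transform, the quantization step delta, and the distortion-compensation refinement used by the authors are not reproduced.

    import numpy as np

    def qim_embed(coeff, bit, delta=8.0):
        """Quantize the coefficient to the lattice associated with the bit (basic QIM)."""
        dither = 0.0 if bit == 0 else delta / 2.0
        return delta * np.round((coeff - dither) / delta) + dither

    def qim_extract(coeff, delta=8.0):
        """Pick the bit whose lattice reconstruction is closest to the received value."""
        d0 = abs(coeff - qim_embed(coeff, 0, delta))
        d1 = abs(coeff - qim_embed(coeff, 1, delta))
        return 0 if d0 <= d1 else 1

    assert qim_extract(qim_embed(37.3, 1)) == 1
    assert qim_extract(qim_embed(37.3, 0)) == 0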
A study for watermark methods appropriate to medical images.
Cho, Y; Ahn, B; Kim, J S; Kim, I Y; Kim, S I
2001-06-01
The network system, including the picture archiving and communication system (PACS), is essential in hospitals and medical imaging fields these days. Many medical images are accessed and processed on the web as well as in PACS. Therefore, any possible incidents caused by the illegal modification of medical images must be prevented. Digital image watermarking techniques have been proposed as a method to protect against illegal copying or modification of copyrighted material. Invisible signatures made by a digital image watermarking technique can be a solution to these problems. However, medical images have characteristics different from normal digital images in that one must not corrupt the information contained in the original medical images. In this study, we suggest modified watermarking methods appropriate for medical image processing and communication systems that prevent clinically important data contained in the original images from being corrupted.
Design of an H.264/SVC resilient watermarking scheme
NASA Astrophysics Data System (ADS)
Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter
2010-01-01
The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e., embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust digital image watermarking method based on feature extraction and Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
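Distortion-compensated dither modulation itself follows a standard formulation and can be sketched briefly. The fragment below is a minimal sketch: the quantization step, the compensation factor alpha, and the bit-dependent dither are illustrative choices, and the DAISY-based feature selection is not shown. A coefficient is moved only a fraction alpha of the way towards the lattice point of the embedded bit:

    import numpy as np

    def dcdm_embed(x, bit, delta=8.0, alpha=0.7):
        """Distortion-compensated dither modulation: partial move towards the bit's lattice."""
        d = 0.0 if bit == 0 else delta / 2.0
        q = delta * np.round((x - d) / delta) + d        # plain QIM reconstruction point
        return x + alpha * (q - x)                       # compensated embedding

    def dcdm_extract(y, delta=8.0):
        """Minimum-distance decoding against the two dithered lattices."""
        dists = []
        for bit, d in ((0, 0.0), (1, delta / 2.0)):
            q = delta * np.round((y - d) / delta) + d
            dists.append(abs(y - q))
        return int(np.argmin(dists))

    assert dcdm_extract(dcdm_embed(41.2, 1)) == 1

With alpha = 0.7 the residual self-noise stays below a quarter of the quantization step, so noiseless decoding is always correct; smaller alpha trades decoding margin for lower embedding distortion.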
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have recently been widely applied to the field of image forensics. One such forensic application is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering lost reference bits still stands. This paper shows that, once the tampering location is known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder to retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less of the bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
Self-recovery fragile watermarking algorithm based on SPIHT
NASA Astrophysics Data System (ADS)
Xin, Li Ping
2015-12-01
A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the original image itself. The novelty of the algorithm is that it can perform tamper localization and self-restoration, with very good recovery quality. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of embedded watermark data. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit planes of the gray-level image, the watermarked image gains a better visual quality. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but also recover the tampered image and realize blind detection. Peak signal-to-noise ratios of the watermarked image were higher than those of similar algorithms, and the algorithm's resistance to attacks was enhanced.
PDE based scheme for multi-modal medical image watermarking.
Aherrahrou, N; Tairi, H
2015-11-25
This work deals with copyright protection of digital images, an issue that needs protection of intellectual property rights. It is an important issue given the large number of medical images interchanged on the Internet every day, so it is a challenging task to ensure the integrity as well as the authenticity of received images. Digital watermarking techniques have been proposed as a valid solution to this problem. It is worth mentioning that Region Of Interest (ROI)/Region Of Non-Interest (RONI) selection can be seen as a significant limitation from which most ROI/RONI-based watermarking schemes suffer, and that in turn affects and limits their applicability. Generally, the ROI/RONI is defined by a radiologist or a computer-aided selection tool, which is not efficient for an institute or health care system where a large number of images must be processed. Therefore, developing automatic ROI/RONI selection is a challenging task. The major aim of this work is to develop an automatic selection algorithm for the embedding region based on the so-called Partial Differential Equation (PDE) method, thus avoiding ROI/RONI selection problems including: (1) computational overhead, (2) time consumption, and (3) modality-dependent selection. The algorithm is evaluated in terms of imperceptibility, robustness, tamper localization and recovery using MRI, ultrasound, CT and X-ray gray-scale medical images. From the experiments that we have conducted on a database of 100 medical images of four modalities, it can be inferred that our method can achieve high imperceptibility while showing good robustness against attacks. Furthermore, the experimental results confirm the effectiveness of the proposed algorithm in detecting and recovering various types of tampering. The highest PSNR value reached over the 100 images is 94.746 dB, while the lowest PSNR value is 60.1272 dB, which demonstrates the high imperceptibility of the proposed method. Moreover, the Normalized Correlation (NC) between the original watermark and the corresponding extracted watermark is computed for the 100 images; we obtain an NC value greater than or equal to 0.998, which indicates that the extracted watermark is very similar to the original watermark for all modalities. The key features of our proposed method are that it (1) increases the robustness of the watermark against attacks; (2) provides more transparency for the embedded watermark; (3) provides more authenticity and integrity protection for the content of medical images; and (4) requires minimal ROI/RONI selection complexity.
NASA Astrophysics Data System (ADS)
Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun
2018-05-01
This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
Digital Watermarking of Autonomous Vehicles Imagery and Video Communication
2005-10-01
Quantum Watermarking Scheme Based on INEQR
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-04-01
Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the carrier image. Secondly, swap and XOR operations are used on the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of a simple encryption. Thirdly, both the watermark embedding and extraction operations are described, where the key image, the swap operation, and the LSB algorithm are used. When the embedding is performed, the binary key image is changed, which indicates that the watermark has been embedded. Conversely, when the watermark image is to be extracted, the key's state needs to be detected: when the key's state is |1>, the extraction operation is carried out. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is essential for establishing the ownership of an image. Two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, 2D-DWT transforms the selected layer, yielding four subbands, of which only one is selected. Block-based 2D-DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection: a delta parameter replacing the pixels in each range represents the embedded bit, with +delta representing bit "1" and -delta representing bit "0". Several parameters are optimized by a Genetic Algorithm (GA): the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that GA is able to determine the exact parameters that yield optimum imperceptibility and robustness for any watermarked image condition, whether attacked or not. The DWT step in DCT-based image watermarking optimized by GA improves the performance of image watermarking. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method reaches perfect watermark quality with BER = 0, and the watermarked image quality in terms of PSNR is about 5 dB higher than that of the previous method.
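As an illustration of the ±delta embedding rule described in the abstract above, the following is a minimal Python sketch, not the authors' implementation: the block size, the single mid-frequency coefficient index and the delta value are illustrative assumptions (in the paper such parameters are among those tuned by the genetic algorithm), and the sign-based blind extraction only works when delta dominates the host coefficient, which is what the paper's range-based coefficient selection is meant to ensure.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block_dct(channel, bits, block=8, coeff=(3, 2), delta=8.0):
    """Embed one bit per block into a single image channel (2-D float array)."""
    out = channel.astype(float).copy()
    h, w = out.shape
    k = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if k >= len(bits):
                return out
            B = dctn(out[r:r + block, c:c + block], norm='ortho')
            B[coeff] += delta if bits[k] else -delta   # +delta -> bit 1, -delta -> bit 0
            out[r:r + block, c:c + block] = idctn(B, norm='ortho')
            k += 1
    return out

def extract_block_dct(marked, n_bits, block=8, coeff=(3, 2)):
    """Blind extraction sketch: the sign of the chosen AC coefficient gives the bit."""
    bits = []
    h, w = marked.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if len(bits) >= n_bits:
                return bits
            B = dctn(marked[r:r + block, c:c + block], norm='ortho')
            bits.append(1 if B[coeff] > 0 else 0)
    return bits
```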
Speech watermarking: an approach for the forensic analysis of digital telephonic recordings.
Faundez-Zanuy, Marcos; Lucena-Molina, Jose J; Hagmüller, Martin
2010-07-01
In this article, the authors discuss the problem of forensic authentication of digital audio recordings. Although forensic audio has been addressed in several articles, the existing approaches focus on analog magnetic recordings, which are becoming less prevalent because of the large number of digital recorders available on the market (optical, solid state, hard disks, etc.). An approach based on digital signal processing, consisting of spread spectrum techniques for speech watermarking, is presented. This approach has the advantage that authentication is based on the signal itself rather than the recording format, and it is thus valid for the usual recording devices in police-controlled telephone intercepts. In addition, our proposal allows for the introduction of relevant information such as the recording date and time and all other relevant data (this is not always possible with classical systems). Our experimental results reveal that the speech watermarking procedure does not interfere in a significant way with subsequent forensic speaker identification.
Digital watermarking for color images in hue-saturation-value color space
NASA Astrophysics Data System (ADS)
Tachaphetpiboon, Suwat; Thongkor, Kharittha; Amornraksa, Thumrongrat; Delp, Edward J.
2014-05-01
This paper proposes a new watermarking scheme for color images in which all pixels of the image are used for embedding watermark bits in order to achieve the highest embedding capacity. For watermark embedding, the S component in the hue-saturation-value (HSV) color space is used to carry the watermark bits, while the V component is used, in accordance with a human visual system model, to determine the proper watermark strength. In the proposed scheme, the number of watermark bits equals the number of pixels in the host image. Watermark extraction is accomplished blindly based on the use of a 3×3 spatial domain Wiener filter. The efficiency of our proposed image watermarking scheme depends mainly on the accuracy of the estimate of the original S component. The experimental results show that the performance of the proposed scheme, both under no attacks and against various types of attacks, was superior to previously proposed watermarking schemes.
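A rough sketch of the embedding/extraction idea summarized above, assuming the S and V channels are already available as float arrays in [0, 1]; the specific HVS weighting and the exact Wiener-filter configuration used by the authors are not reproduced, and the scalar alpha is an illustrative assumption.

```python
import numpy as np
from scipy.signal import wiener

def embed_hsv(S, V, watermark_bits, alpha=0.05):
    """S, V in [0, 1]; watermark_bits is a {0, 1} array with the same shape as S."""
    w = 2.0 * watermark_bits - 1.0          # map {0, 1} -> {-1, +1}
    strength = alpha * V                    # stronger marking where V (brightness) is high
    return np.clip(S + strength * w, 0.0, 1.0)

def extract_hsv(S_marked):
    """Blind extraction: a 3x3 Wiener filter approximates the original S component;
    the sign of the residual is taken as the recovered bit."""
    S_est = wiener(S_marked, mysize=(3, 3))
    residual = S_marked - S_est
    return (residual > 0).astype(np.uint8)
```

In practice the residual-based bit decisions are noisy, which is why the accuracy of the S-component estimate dominates the scheme's performance, as the abstract notes.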
Replacement Attack: A New Zero Text Watermarking Attack
NASA Astrophysics Data System (ADS)
Bashardoost, Morteza; Mohd Rahim, Mohd Shafry; Saba, Tanzila; Rehman, Amjad
2017-03-01
The main objective of zero watermarking methods suggested for the authentication of textual properties is to increase the fragility of the produced watermarks against tampering attacks. On the other hand, zero watermarking attacks intend to alter the contents of a document without changing the watermark. In this paper, the Replacement attack is proposed, which focuses on maintaining the location of the words in the document. The proposed text watermarking attack is specifically effective against watermarking approaches that exploit word transitions in the document. The evaluation outcomes show that the tested word-based methods are unable to detect the presence of the Replacement attack in the document. Moreover, the comparison results show that the size of the Replacement attack is estimated less accurately than that of other common types of zero text watermarking attacks.
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values MPT produces, called representative values, are usually unaffected when the thresholded signal goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using a quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
StirMark Benchmark: audio watermarking attacks based on lossy compression
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms, such as MPEG-2 Audio Layer 3, Ogg or VQF, on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, like the spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, and (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psycho-acoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
An Efficient Buyer-Seller Watermarking Protocol Based on Chameleon Encryption
NASA Astrophysics Data System (ADS)
Poh, Geong Sen; Martin, Keith M.
Buyer-seller watermarking protocols are designed to deter clients from illegally distributing copies of digital content. This is achieved by allowing a distributor to insert a unique watermark into content in such a way that the distributor does not know the final watermarked copy that is given to the client. This protects both the client and distributor from attempts by one to falsely accuse the other of misuse. Buyer-seller watermarking protocols are normally based on asymmetric cryptographic primitives known as homomorphic encryption schemes. However, the computational and communication overhead of this conventional approach is high. In this paper we propose a different approach, based on the symmetric Chameleon encryption scheme. We show that this leads to significant gains in computational and operational efficiency.
Viswanathan, P; Krishna, P Venkata
2014-05-01
Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues like image retention, fraud, privacy, malpractice liability, etc. A joint FED watermarking system, i.e. a joint fingerprint/encryption/dual watermarking system, is proposed for addressing these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and provenance. The watermarking, encryption, and fingerprint enrollment are conducted jointly in the protection stage such that the extraction, decryption, and verification can be applied independently. The dual watermarking system, which introduces two different embedding schemes, one for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the embedding region using a threshold derived from the image to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.
Color image watermarking against fog effects
NASA Astrophysics Data System (ADS)
Chotikawanid, Piyanart; Amornraksa, Thumrongrat
2017-07-01
Fog effects in various computer and camera software can partially or fully damage the watermark information within a watermarked image. In this paper, we propose a color image watermarking method based on modification of the reflectance component to withstand fog effects. The reflectance component is extracted from the blue color channel in the RGB color space of a host image and then used to carry a watermark signal. Watermark extraction is achieved blindly by subtracting an estimate of the original reflectance component from the watermarked component. The performance of the proposed watermarking method in terms of wPSNR and NC is evaluated and compared with the previous method. Experimental results on robustness against various levels of fog effect, from both computer software and a mobile application, demonstrated the higher robustness of our proposed method compared to the previous one.
Bouslimi, D; Coatrieux, G; Cozic, M; Roux, Ch
2014-01-01
In this paper, we propose a novel crypto-watermarking system for the purpose of verifying the reliability of images and tracing them, i.e. identifying the person at the origin of an illegal distribution. This system couples a common watermarking method, based on Quantization Index Modulation (QIM), with a joint watermarking-decryption (JWD) approach. At the emitter side, it allows the insertion of a watermark as a proof of reliability of the image before sending it encrypted; at the receiver side, another watermark, a proof of traceability, is embedded during the decryption process. The proposed scheme makes such a combination of watermarking approaches interoperable while taking into account the risk of interference between the embedded watermarks, allowing access to both the reliability and traceability proofs. Experimental results confirm the efficiency of our system and demonstrate that it can be used to identify the physician at the origin of a disclosure even if the image has been modified.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding, and this rounding operation constrains the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged. The advantage of the NIPE technique is that it can bring a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method, where prediction in two passes introduces a distortion problem because half of the pixels are predicted from watermarked pixels. In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.
A novel image watermarking method based on singular value decomposition and digital holography
NASA Astrophysics Data System (ADS)
Cai, Zhishan
2016-10-01
Based on information optics theory, a novel watermarking method using Fourier-transformed digital holography and singular value decomposition (SVD) is proposed in this paper. First, the watermark image is converted to a digital hologram using the Fourier transform. The original image is then divided into many non-overlapping blocks, and all the blocks and the hologram are decomposed using SVD. The singular value components of the hologram are embedded into the singular value components of each block using an additive rule. Finally, inverse SVD is carried out on the blocks and the hologram to generate the watermarked image. During extraction, the watermark information embedded in each block is recovered first and then averaged to generate the final watermark. Finally, the algorithm is simulated, and to test the watermarked image's resistance to attacks, various attack tests are carried out. The results show that the proposed algorithm has very good robustness against noise interference, image cropping, compression, brightness stretching, etc. In particular, when the image is rotated by a large angle, the watermark information can still be extracted correctly.
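The additive SVD rule described above can be sketched as follows for a single block; the Fourier-transform hologram generation is outside this snippet, and `holo` (a real-valued array of block size) and the scaling factor alpha are assumptions for illustration. The extraction shown here is the simple non-blind differencing that the additive rule implies.

```python
import numpy as np

def embed_block_svd(block, holo, alpha=0.05):
    """Additive rule on singular values: S_block + alpha * S_hologram."""
    Ub, Sb, Vtb = np.linalg.svd(block, full_matrices=False)
    _, Sh, _ = np.linalg.svd(holo, full_matrices=False)
    return Ub @ np.diag(Sb + alpha * Sh) @ Vtb

def extract_block_svd(marked_block, original_block, alpha=0.05):
    """Non-blind sketch: recover the embedded singular values by differencing,
    then average these vectors over all blocks to form the final watermark estimate."""
    _, Sm, _ = np.linalg.svd(marked_block, full_matrices=False)
    _, So, _ = np.linalg.svd(original_block, full_matrices=False)
    return (Sm - So) / alpha
```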
A compressive sensing based secure watermark detection and privacy preserving storage framework.
Qia Wang; Wenjun Zeng; Jun Tian
2014-03-01
Privacy is a critical issue when the data owners outsource data storage or processing to a third party computing service, such as the cloud. In this paper, we identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy preserving multimedia data storage. We then propose a compressive sensing (CS)-based framework using secure multiparty computation (MPC) protocols to address such a requirement. In our framework, the multimedia data and secret watermark pattern are presented to the cloud for secure watermark detection in a CS domain to protect the privacy. During CS transformation, the privacy of the CS matrix and the watermark pattern is protected by the MPC protocols under the semi-honest security model. We derive the expected watermark detection performance in the CS domain, given the target image, watermark pattern, and the size of the CS matrix (but without the CS matrix itself). The correctness of the derived performance has been validated by our experiments. Our theoretical analysis and experimental results show that secure watermark detection in the CS domain is feasible. Our framework can also be extended to other collaborative secure signal processing and data-mining applications in the cloud.
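The core observation behind CS-domain detection, that a random projection approximately preserves inner products, can be checked with a few lines of NumPy; the secure multiparty computation protocols that keep the CS matrix and the watermark pattern private are of course not shown, and all sizes and the embedding strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4096, 2048                        # signal length and number of CS measurements
x = rng.standard_normal(n)               # host signal (flattened image), illustrative
w = rng.choice([-1.0, 1.0], size=n)      # secret watermark pattern
x_marked = x + 0.2 * w                   # additive spread-spectrum embedding

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix (E[Phi^T Phi] = I)
y = Phi @ x_marked                                # what the cloud stores / sees

stat_marked = (y @ (Phi @ w)) / n                 # CS-domain correlation detector
stat_clean = ((Phi @ x) @ (Phi @ w)) / n          # same statistic for an unmarked signal
print(stat_marked, stat_clean)  # roughly 0.2 vs roughly 0: presence is distinguishable
```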
NASA Astrophysics Data System (ADS)
Yan, Xin; Zhang, Ling; Wu, Yang; Luo, Youlong; Zhang, Xiaoxing
2017-02-01
As more and more wireless sensor nodes and networks are employed to acquire and transmit the state information of power equipment in the smart grid, viable security solutions are urgently needed to ensure secure smart grid communications. Conventional information security solutions, such as encryption/decryption, digital signatures and so forth, are no longer applicable to wireless sensor networks in the smart grid, where bulk messages need to be exchanged continuously, because these cryptographic solutions would consume a large portion of the extremely limited resources on the sensor nodes. In this article, a security solution based on digital watermarking is adopted to achieve secure communications for wireless sensor networks in the smart grid through data and entity authentication at a low operational cost. Our solution consists of a secure framework of digital watermarking and two digital watermarking algorithms based on alternating electric current and time window, respectively. Both watermarking algorithms are composed of watermark generation, embedding and detection. Simulation experiments are provided to verify the correctness and practicability of our watermarking algorithms. Additionally, a new cloud-based architecture for the information integration of the smart grid is proposed on the basis of our security solutions.
Dey, Nilanjan; Bose, Soumyo; Das, Achintya; Chaudhuri, Sheli Sinha; Saba, Luca; Shafique, Shoaib; Nicolaides, Andrew; Suri, Jasjit S
2016-04-01
Embedding of diagnostic and health care information requires secure encryption and watermarking. This research paper presents a comprehensive study of the behavior of some well established watermarking algorithms in the frequency domain for the preservation of stroke-based diagnostic parameters. Two different sets of watermarking algorithms, namely two correlation-based (binary logo hiding) and two singular value decomposition (SVD)-based (gray logo hiding) watermarking algorithms, are used for embedding the ownership logo. The diagnostic parameters in atherosclerotic plaque ultrasound video are: (a) bulb identification and recognition, which consists of identifying the bulb edge points in the far and near carotid walls; (b) carotid bulb diameter; and (c) carotid lumen thickness all along the carotid artery. The tested data set consists of carotid atherosclerotic movies taken under IRB protocol from a University of Indiana Hospital, USA-AtheroPoint™ (Roseville, CA, USA) joint pilot study. ROC (receiver operating characteristic) analysis was performed on the bulb detection process and showed an accuracy and a sensitivity of 100% each. The diagnostic preservation (DPsystem) for the SVD-based approach was above 99% with PSNR (peak signal-to-noise ratio) above 41, ensuring that the diagnostic parameters are not degraded as an effect of watermarking. Thus, the fully automated proposed system proved to be an efficient method for watermarking atherosclerotic ultrasound video for stroke application.
Self-synchronization for spread spectrum audio watermarks after time scale modification
NASA Astrophysics Data System (ADS)
Nadeau, Andrew; Sharma, Gaurav
2014-02-01
De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical for classical communications, watermarking techniques such as spread spectrum can perform poorly; conversely, specialized synchronization solutions can be challenging to analyze and optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments and find the sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse the desynchronization before detection. Without these extra synchronization steps, the analysis of the proposed scheme builds on classical SS concepts and characterizes the relationship between the size of the search space (the number of detection alignment tests) and intrinsic robustness (the continuous region of the search space covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is also demonstrated for a practical audio watermarking scheme developed in the proposed framework.
Joint forensics and watermarking approach for video authentication
NASA Astrophysics Data System (ADS)
Thiemert, Stefan; Liu, Huajian; Steinebach, Martin; Croce-Ferri, Lucilla
2007-02-01
In this paper we discuss and compare the possibilities and shortcomings of content-fragile watermarking and digital forensics, and analyze whether the combination of both techniques allows the identification of more manipulations than the sum of those identified by each technique on its own, due to synergetic effects. The first part of the paper discusses the theoretical possibilities offered by a combined approach, in which forensics and watermarking are considered as complementary tools for data authentication or are deeply combined together in order to reduce their error rate and enhance detection efficiency. After this conceptual discussion, the paper proposes some concrete examples in which the joint approach is applied to video authentication. Some specific forensic techniques are analyzed and expanded to handle video data efficiently. The examples show possible extensions of passive-blind image forgery detection to video data, where the motion and time related characteristics of video are efficiently exploited.
A Novel Application for Text Watermarking in Digital Reading
NASA Astrophysics Data System (ADS)
Zhang, Jin; Li, Qing-Cheng; Wang, Cong; Fang, Ji
Although watermarking research has made great strides on the theoretical side, its lack of business applications cannot be overlooked, largely because little attention has been paid to how the information carried by a watermark is actually used. This paper proposes a new watermarking application method. After the digital document is reorganized together with advertisements, the watermark is designed to carry the structure of the new document, so that it releases the advertisement as interference information under attack. On the one hand, reducing the quality of digital works can inhibit unauthorized distribution; on the other hand, the advertisement can benefit copyright holders as compensation. Implementation details, attack evaluation and the correlation with the watermarking algorithm are also discussed through an experiment based on a txt file.
Evidence of tampering in watermark identification
NASA Astrophysics Data System (ADS)
McLauchlan, Lifford; Mehrübeoglu, Mehrübe
2009-08-01
In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
Evolution of music score watermarking algorithm
NASA Astrophysics Data System (ADS)
Busch, Christoph; Nesi, Paolo; Schmucker, Martin; Spinu, Marius B.
2002-04-01
The need for content protection of multimedia data is widely recognized, especially for data types that are frequently distributed, sold or shared using the Internet. In particular, the music industry dealing with audio files has realized the necessity of content protection, and the distribution of music sheets will face the same problems. Digital watermarking techniques provide a certain level of protection for these music sheets, but classical raster-oriented watermarking algorithms for images suffer several drawbacks when directly applied to image representations of music sheets. Therefore, new solutions have been developed which are designed with the content of the music sheets in mind. In comparison to other media types, watermarking of music scores is a rather young art. The paper reviews the evolution of the early approaches and describes the current state of the art in the field.
Multimodal biometric digital watermarking on immigrant visas for homeland security
NASA Astrophysics Data System (ADS)
Sasi, Sreela; Tamhane, Kirti C.; Rajappa, Mahesh B.
2004-08-01
Passengers with immigrant visas are a major concern for international airports due to the various fraud operations identified. To curb tampering with genuine visas, the visa should contain human identification information. A biometric characteristic is a common and reliable way to authenticate the identity of an individual [1]. A Multimodal Biometric Human Identification System (MBHIS) that integrates the iris code, DNA fingerprint, and passport number into the visa photograph using a digital watermarking scheme is presented. Digital watermarking is well suited for any system requiring high security [2]. Ophthalmologists [3], [4], [5] have suggested that the iris scan is an accurate and nonintrusive optical fingerprint, and a DNA sequence can be used as a genetic barcode [6], [7]. When a visa is issued at a US consulate, the DNA sequence isolated from saliva, the iris code and the passport number are digitally watermarked into the visa photograph, and this information is also recorded in the 'immigrant database'. A 'forward watermarking phase' combines a 2-D DWT transformed digital photograph with the personal identification information. A 'detection phase' extracts the watermarked information from the visa photograph at the port of entry, from which the iris code can be used for identification and the DNA biometric for authentication if an anomaly arises.
A Double-function Digital Watermarking Algorithm Based on Chaotic System and LWT
NASA Astrophysics Data System (ADS)
Yuxia, Zhao; Jingbo, Fan
A double-function digital watermarking technology is studied, and a double-function digital watermarking algorithm for color images based on a chaotic system and the lifting wavelet transform (LWT) is presented. The algorithm achieves the dual aims of copyright protection and integrity authentication of image content. Making use of features of the human visual system (HVS), the watermark image is embedded into the color image's low frequency component and middle frequency components by different means. The algorithm offers strong security by using two kinds of chaotic mappings together with Arnold scrambling of the watermark image, and good efficiency through the use of the LWT. Simulation experiments indicate that the algorithm is efficient and secure, and that the watermark is well concealed.
NASA Astrophysics Data System (ADS)
Memon, Nasir D.; Wong, Ping W.
1999-04-01
Digital watermarks have recently been proposed for the purposes of copy protection and copy deterrence for multimedia content. In copy deterrence, a content owner (seller) inserts a unique watermark into a copy of the content before it is sold to a buyer. If the buyer resells unauthorized copies of the watermarked content, then these copies can be traced to the unlawful reseller (original buyer) using a watermark detection algorithm. One problem with such an approach is that the original buyer whose watermark has been found on unauthorized copies can claim that the unauthorized copy was created or caused (for example, by a security breach) by the original seller. In this paper we propose an interactive buyer-seller protocol for invisible watermarking in which the seller does not get to know the exact watermarked copy that the buyer receives. Hence the seller cannot create copies of the original content containing the buyer's watermark. In cases where the seller finds an unauthorized copy, the seller can identify the buyer from a watermark in the unauthorized copy, and furthermore the seller can prove this fact to a third party using a dispute resolution protocol. This prevents the buyer from claiming that an unauthorized copy may have originated from the seller.
Imperceptible reversible watermarking of radiographic images based on quantum noise masking.
Pan, Wei; Bouslimi, Dalel; Karasad, Mohamed; Cozic, Michel; Coatrieux, Gouenou
2018-07-01
Advances in information and communication technologies boost the sharing of and remote access to medical images. Along with this evolution, needs in terms of data security also increase. Watermarking can contribute to better protecting images by dissimulating into their pixels some security attributes (e.g., a digital signature or user identifier). However, to take full advantage of this technology in healthcare, one key problem to address is ensuring that the image distortion induced by the watermarking process does not endanger the diagnostic value of the image. Reversible watermarking is one solution to this issue: it allows watermark removal with exact recovery of the image. Unfortunately, reversibility does not mean that imperceptibility constraints are relaxed; indeed, once the watermark is removed, the image is unprotected, so it is important to ensure the invisibility of the reversible watermark in order to provide permanent image protection. We propose a new fragile reversible watermarking scheme for digital radiographic images, the main originality of which lies in masking a reversible watermark within the image quantum noise (the dominant noise in radiographic images). More precisely, in order to ensure watermark imperceptibility, our scheme distinguishes the black image background, where message embedding is conducted in the pixel gray values with the well-known histogram shifting (HS) modulation, from the anatomical object, where HS is applied to wavelet detail coefficients, masking the watermark with the image quantum noise. In order to keep the watermark embedder and reader synchronized in terms of image partitioning and insertion domain, our scheme makes use of classification processes that are invariant to message embedding. We provide the theoretical performance limits of our scheme within the image quantum noise in terms of image distortion and message size (i.e. capacity). Experiments conducted on more than 800 12-bit radiographic images of different anatomical structures show that our scheme induces a very low image distortion (PSNR of about 76.5 dB) for a relatively large capacity (about 0.02 bits of message per pixel). The proposed watermarking scheme, while being reversible, preserves the diagnostic value of radiographic images by masking the watermark within the quantum noise. As established theoretically and experimentally, our scheme offers a good capacity/image-quality compromise that can support different watermarking-based security services such as integrity and authenticity control. The watermark can be kept in the image during its interpretation, thus offering continuous protection. Such a masking strategy can be seen as the first psychovisual model for radiographic images. The reversibility allows the watermark to be updated when necessary. Copyright © 2018 Elsevier B.V. All rights reserved.
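For reference, a generic histogram-shifting (HS) sketch of the modulation named above, applied to a flat array of gray values; the paper's background/object partitioning and its wavelet-domain variant are not reproduced, and the snippet assumes the chosen "zero" bin is genuinely empty (real schemes record overflow locations to remain strictly reversible).

```python
import numpy as np

def hs_embed(pixels, bits):
    """pixels: 1-D uint8 array. Assumes peak < 255 and that the chosen 'zero' bin is empty."""
    hist = np.bincount(pixels, minlength=256)
    peak = int(np.argmax(hist))                        # carriers of the payload sit in this bin
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # emptiest bin to the right of the peak
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1              # shift to open a gap at peak + 1
    carriers = np.flatnonzero(out == peak)[:len(bits)]
    out[carriers] += np.asarray(bits[:len(carriers)], dtype=out.dtype)  # peak -> peak + bit
    return out, peak, zero

def hs_extract(marked, peak, zero, n_bits):
    carriers = np.flatnonzero((marked == peak) | (marked == peak + 1))[:n_bits]
    bits = (marked[carriers] == peak + 1).astype(int)
    restored = marked.copy()
    restored[carriers] = peak                          # carriers back to the peak value
    restored[(restored > peak) & (restored <= zero)] -= 1   # undo the shift
    return bits, restored
```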
A covert authentication and security solution for GMOs.
Mueller, Siguna; Jafari, Farhad; Roth, Don
2016-09-21
The proliferation and expansion of security risks necessitates new measures to ensure the authenticity and validation of GMOs. Watermarking and other cryptographic methods are available which conceal and recover the original signature, but in the process reveal the authentication information. In many scenarios watermarking and standard cryptographic methods are necessary but not sufficient, and new, more advanced cryptographic protocols are needed. Herein, we present a new crypto protocol that is applicable in broader settings and embeds the authentication string indistinguishably from a random element in the signature space, so that the string can be verified or denied without disclosing the actual signature. Results show that, for a nucleotide string of length 1000, the algorithm gives a correlation of 0.98 or higher between the codon distribution and that of E. coli, making the signature virtually invisible. This algorithm may be used to securely authenticate and validate GMOs without disclosing the actual signature. While this protocol uses watermarking, its novelty lies in the use of more complex cryptographic techniques based on zero knowledge proofs to encode information.
An improved quantum watermarking scheme using small-scale quantum circuits and color scrambling
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya; Xiao, Hong; Cao, Maojun
2017-05-01
In order to solve the problem of embedding a watermark into a quantum color image, an improved scheme using small-scale quantum circuits and color scrambling is proposed in this paper. Both the color carrier image and the color watermark image are represented using the novel enhanced quantum representation. The image sizes for the carrier and the watermark are assumed to be 2^(n+1) × 2^(n+2) and 2^n × 2^n, respectively. First, the color of the pixels in the watermark image is scrambled using controlled rotation gates; then, the scrambled watermark of size 2^n × 2^n with 24-qubit gray scale is expanded to an image of size 2^(n+1) × 2^(n+2) with 3-qubit gray scale. Finally, the expanded watermark image is embedded into the carrier image by controlled-NOT gates. The extraction of the watermark is the reverse process of embedding, achieved by applying the operations in reverse order. Simulation-based experimental results show that the proposed scheme is superior to other similar algorithms in terms of visual quality, the scrambling effect of the watermark image, and noise resistibility.
Singh, Anushikha; Dutta, Malay Kishore
2017-12-01
The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent errors in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, whether healthy or disease affected. In the proposed method, insertion of the watermark in the fundus image does not affect the automatic image processing based diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system is tested on a comprehensive database of fundus images and the results are convincing: they indicate that the proposed watermarking method is imperceptible and does not affect computer vision based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable for authentication of fundus images for computer aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
LDFT-based watermarking resilient to local desynchronization attacks.
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong
2013-12-01
Up to now, a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariant rather than locally invariant. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space; in the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. Lastly, the watermark sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
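A plain (non-logarithmic) QIM sketch of the per-leaf embedding rule mentioned above; the logarithmic variant and the LDFT/BSP feature construction from the paper are not reproduced, and the step size delta is an illustrative assumption.

```python
import numpy as np

def qim_embed(x, bit, delta=4.0):
    """Quantize x onto one of two interleaved lattices, offset by delta/2, to encode the bit."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_detect(y, delta=4.0):
    """Decode by choosing the lattice whose nearest point is closer to y."""
    d0 = abs(y - qim_embed(y, 0, delta))
    d1 = abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)
```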
Analyzing the effect of the distortion compensation in reversible watermarking
NASA Astrophysics Data System (ADS)
Kim, Suah; Kim, Hyoung Joong
2014-01-01
Reversible watermarking is used to hide information in images for medical and military uses. Reversible watermarking with distortion compensation, proposed by Vasily et al. [5], embeds each pixel twice so that the distortion caused by the first embedding is reduced or removed by the distortion introduced by the second embedding. Because their paper does not apply the technique in its most basic form, it is unclear whether improving it can achieve better results than existing state-of-the-art techniques. In this paper we first provide a novel basic distortion compensation technique that uses the same prediction method as Tian's difference expansion (DE) method [2], in order to measure the effect of the distortion compensation more accurately. In the second part, we analyze what kinds of improvements can be made in distortion compensation.
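For context, a sketch of Tian's difference expansion (DE) on a single pixel pair, the baseline prediction/embedding rule referenced above; overflow/underflow handling and location maps are omitted.

```python
def de_embed(x, y, bit):
    """Tian's DE on a pixel pair: keep the pair average, expand the difference to 2*d + bit."""
    avg = (x + y) // 2
    diff = x - y
    diff_exp = 2 * diff + bit
    return avg + (diff_exp + 1) // 2, avg - diff_exp // 2

def de_extract(x_new, y_new):
    """Recover the bit and the original pair (overflow handling / location map omitted)."""
    diff_exp = x_new - y_new
    bit = diff_exp & 1
    diff = diff_exp >> 1                  # floor division by 2, also valid for negatives
    avg = (x_new + y_new) // 2            # unchanged by the embedding
    return avg + (diff + 1) // 2, avg - diff // 2, bit
```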
NASA Astrophysics Data System (ADS)
Chen, Wen-Yuan; Liu, Chen-Chung
2006-01-01
The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.
Shedding light on some possible remedies against watermark desynchronization: a case study
NASA Astrophysics Data System (ADS)
Barni, Mauro
2005-03-01
Watermark de-synchronization is perhaps the most dangerous attack against the great majority of watermarking systems proposed so far. Exhaustive search and template matching are two of the most popular solutions against it; however, several doubts exist about their effectiveness. As a matter of fact, a controversial point in digital watermarking is whether these techniques are of any help in coping with watermark de-synchronization introduced by geometric attacks. On one side, watermark synchronization through exhaustive search dramatically increases the false detection probability. On the other side, for the template matching approach the probability of a synchronization error must be taken into account, significantly deteriorating the performance of the system. It is the aim of this paper to shed some light on the above points. To do so we focus on a very simple case study, whereby we show that as long as the size of the search space (the cardinality of the geometric attack) increases polynomially with the length of the to-be-marked host feature sequence, both methods provide an effective solution to the de-synchronization problem. Interestingly, and rather surprisingly, we also show that Exhaustive Search Detection (ESD) always outperforms Template Matching Detection (TMD), though the general behavior of the two schemes is rather similar.
Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging
NASA Astrophysics Data System (ADS)
Mehra, Isha; Nishchal, Naveen K.
2015-05-01
In most existing image encryption algorithms the generated keys take the form of a noise-like distribution with a uniform histogram. However, the noise-like distribution is an apparent sign indicating the presence of the keys: if the keys are to be transferred through communication channels, this may lead to a security problem, because the noise-like features may easily catch people's attention and attract more attacks. To address this problem, the keys need to be transferred into other meaningful images to mislead attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the important role of keys for decrypting the meaningful data. In this article, we have transferred the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This has been done by developing a modified fusion technique in the wavelet transform domain. The second issue is how to resolve copyright protection in case the meaningful images are intercepted by an attacker. For this purpose, watermark detection plays a crucial role, and it is necessary to recover the original image using the retrieved watermarks/support constraints. To address this issue, four asymmetric keys have been generated, corresponding to each watermarked image, to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.
Digital watermarking opportunities enabled by mobile media proliferation
NASA Astrophysics Data System (ADS)
Modro, Sierra; Sharma, Ravi K.
2009-02-01
Consumer usages of mobile devices and electronic media are changing. Mobile devices now include increased computational capabilities, mobile broadband access, better integrated sensors, and higher resolution screens. These enhanced features are driving increased consumption of media such as images, maps, e-books, audio, video, and games. As users become more accustomed to using mobile devices for media, opportunities arise for new digital watermarking usage models. For example, transient media, like images being displayed on screens, could be watermarked to provide a link between mobile devices. Applications based on these emerging usage models utilizing watermarking can provide richer user experiences and drive increased media consumption. We describe the enabling factors and highlight a few of the usage models and new opportunities. We also outline how the new opportunities are driving further innovation in watermarking technologies. We discuss challenges in market adoption of applications based on these usage models.
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
A text zero-watermarking method based on keyword dense interval
NASA Astrophysics Data System (ADS)
Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin
2017-07-01
Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, former methods rarely focused on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image and video. In this paper, we take text as the research object and propose a text zero-watermarking method that uses keyword dense intervals (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of KDI and giving a method for KDI extraction. Second, we design a detection model that includes secondary generation of the zero-watermark and a similarity computing method for the keyword distribution. Experiments are carried out, and the results show that the proposed method performs better than other available methods, especially against sentence transformation and synonym substitution attacks.
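A rough sketch of the key-content idea described above: locate keyword dense intervals by sliding a window over the token sequence and derive the zero-watermark from them. The exact KDI definition and the keyword-distribution similarity measure of the paper may differ; the keyword list, window size, density threshold and the use of a SHA-256 digest are illustrative assumptions.

```python
import hashlib

def keyword_dense_intervals(tokens, keywords, window=20, min_density=0.2):
    """Return (start, end) token index pairs where keyword density exceeds the threshold."""
    keywords = {k.lower() for k in keywords}
    hits = [1 if t.lower() in keywords else 0 for t in tokens]
    intervals, start = [], None
    for i in range(len(tokens) - window + 1):
        dense = sum(hits[i:i + window]) / window >= min_density
        if dense and start is None:
            start = i
        elif not dense and start is not None:
            intervals.append((start, i + window - 1))
            start = None
    if start is not None:
        intervals.append((start, len(tokens) - 1))
    return intervals

def zero_watermark(tokens, keywords):
    """A zero-watermark is derived from the key content; nothing is embedded in the text."""
    parts = [" ".join(tokens[a:b + 1]) for a, b in keyword_dense_intervals(tokens, keywords)]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```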
Forensic Analysis of Digital Image Tampering
2004-12-01
Only fragments of this document survive extraction, interleaved with its list of figures. The recoverable content: an analysis of when each forgery-detection method fails (discussed in Chapter 4); a test image containing an invisible watermark embedded with LSB steganography (the F5 steganography software, Figure 2.2); an example of copy-move image forgery (Figure 2.3); and an algorithm for a JPEG block technique applied to a "forged" image (Figures 3.11 and 3.12).
A Blind Reversible Robust Watermarking Scheme for Relational Databases
Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
2013-01-01
Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as “histogram shifting of adjacent pixel difference” (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks. PMID:24223033
A blind reversible robust watermarking scheme for relational databases.
Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
2013-01-01
Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme can detect successfully 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.
Quantum watermarking scheme through Arnold scrambling and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping
2017-09-01
Based on the NEQR of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n× 2n and n× n, respectively. Firstly, a classical n× n sized watermark image with 8-bit gray scale is expanded to a 2n× 2n sized image with 2-bit gray scale. Secondly, through the module of PA-MOD N, the expanded watermark image is scrambled to a meaningless image by the Arnold transform. Then, the expanded scrambled image is embedded into the carrier image by the steganography method of LSB. Finally, the time complexity analysis is given. The simulation experiment results show that our quantum circuit has lower time complexity, and the proposed watermarking scheme is superior to others.
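The Arnold (cat map) scrambling referred to above, written as a classical array operation for illustration; the NEQR encoding, the PA-MOD N module and the quantum LSB circuits are not shown.

```python
import numpy as np

def arnold(img, iterations=1):
    """Scramble a square N x N array with the Arnold cat map (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the Arnold map is defined on square images"
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Undo the scrambling by reading each pixel back from its mapped position."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = restored
    return out
```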
Application of visual cryptography for learning in optics and photonics
NASA Astrophysics Data System (ADS)
Mandal, Avikarsha; Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan
2016-09-01
In the age of data digitalization, important applications of optics- and photonics-based sensors and technology lie in the fields of biometrics and image processing, where protecting user data in a safe and secure way is an essential task. However, traditional cryptographic protocols rely heavily on computer-aided computation. Secure protocols that rely only on human interaction are usually simpler to understand, and in many scenarios such protocols are also important for ease of implementation and deployment. Visual cryptography (VC) is an encryption technique for images (or text) in which decryption is done by the human visual system. In this technique, an image is encrypted into a number of pieces (known as shares); when the printed shares are physically superimposed, the image can be decrypted with human vision. Modern digital watermarking technologies can be combined with VC for image copyright protection, where the shares can be watermarks (small identification marks) embedded in the image. Similarly, VC can be used to improve the security of biometric authentication. This paper presents the design and implementation of a practical laboratory experiment based on the concept of VC for a course in media engineering. Specifically, our contribution deals with the integration of VC into different schemes for applications like digital watermarking and biometric authentication in the field of optics and photonics. We describe the theoretical concepts and propose our infrastructure for the experiment. Finally, we evaluate the learning outcome of the experiment as performed by the students.
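A minimal (2,2) visual cryptography sketch along the lines described above: each secret pixel is expanded into a 2×2 pattern, one share is random, and the second share is identical for white pixels and complementary for black pixels, so stacking the printed shares reveals the secret. The pattern set and pixel expansion used here are illustrative.

```python
import numpy as np

rng = np.random.default_rng()
PATTERNS = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])   # two complementary 2x2 patterns (flattened)

def make_shares(secret):
    """secret: 2-D array with 1 = black, 0 = white. Returns two shares of double size."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]            # random pattern for share 1
            q = p if secret[i, j] == 0 else 1 - p    # same for white, complement for black
            s1[2*i:2*i+2, 2*j:2*j+2] = p.reshape(2, 2)
            s2[2*i:2*i+2, 2*j:2*j+2] = q.reshape(2, 2)
    return s1, s2

def stack(s1, s2):
    """Physical superposition of printed transparencies corresponds to a pixelwise OR."""
    return np.maximum(s1, s2)   # black secret pixels become fully black, white ones half black
```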
A joint asymmetric watermarking and image encryption scheme
NASA Astrophysics Data System (ADS)
Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.
2008-02-01
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both the ciphering of watermarked data and the marking of encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example in which the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
Wavelet versus DCT-based spread spectrum watermarking of image databases
NASA Astrophysics Data System (ADS)
Mitrea, Mihai P.; Zaharia, Titus B.; Preteux, Francoise J.; Vlad, Adriana
2004-05-01
This paper addresses the issue of oblivious robust watermarking within the framework of colour still image database protection. We present an original method which complies with all the requirements nowadays imposed on watermarking applications: robustness (e.g. low-pass filtering, print & scan, StirMark), transparency (both quality and fidelity), low probability of false alarm, obliviousness and multiple-bit recovery. The mark is generated from a 64-bit message (be it a logo, a serial number, etc.) by means of a spread spectrum technique and is embedded in the DWT (Discrete Wavelet Transform) domain, into certain low frequency coefficients selected according to the hierarchy of their absolute values. The best results were provided by the (9,7) bi-orthogonal transform. The experiments were carried out on 1200 image sequences, each consisting of 32 images. Note that these sequences represented several types of images (natural, synthetic, medical, etc.), and each time we obtained the same good results. These results are compared with those we previously obtained for the DCT domain, and the differences are pointed out and discussed.
A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud
Seenivasagam, V.; Velumani, R.
2013-01-01
Healthcare institutions adapt cloud based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)—Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serves as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized on combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non geometric attacks. PMID:23970943
Seenivasagam, V; Velumani, R
2013-01-01
Healthcare institutions adapt cloud based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to patient data encoded into a Quick Response (QR) code serves as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized on combining a Secret Share with the Master Share constructed from invariant features of the medical image. The Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with Checkmark software and is found to be robust to both geometric and non geometric attacks.
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, a three-level discrete wavelet transform is applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, the discrete wavelet transform is applied to the watermark image after scrambling encryption. Our new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has a better performance in terms of invisibility and robustness.
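The core DWT-SVD embedding step that the abstract relies on can be sketched as below. This is a generic singular-value embedding with a fixed scaling factor `alpha`; the paper's three-level DWT of the Y channel and the differential-evolution search for the per-sub-band scaling factors are omitted, and the side-information layout is an assumption for illustration only.

```python
import numpy as np

def embed_svd(subband, watermark, alpha=0.05):
    """Embed a square watermark block into the singular values of one DWT sub-band.

    `watermark` is assumed to be an array of shape (len(S), len(S)); the paper tunes
    alpha with differential evolution, here it is simply fixed.
    """
    U, S, Vt = np.linalg.svd(subband, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(np.diag(S) + alpha * watermark, full_matrices=False)
    marked = U @ np.diag(Sw) @ Vt
    # Uw, Vwt and the original singular values are kept as side information.
    return marked, (Uw, Vwt, S)

def extract_svd(marked_subband, side_info, alpha=0.05):
    """Recover a watermark estimate from the marked sub-band."""
    Uw, Vwt, S = side_info
    _, Sm, _ = np.linalg.svd(marked_subband, full_matrices=False)
    D = Uw @ np.diag(Sm) @ Vwt          # approximates diag(S) + alpha * watermark
    return (D - np.diag(S)) / alpha
```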
Digital video steganalysis exploiting collusion sensitivity
NASA Astrophysics Data System (ADS)
Budhia, Udit; Kundur, Deepa
2004-09-01
In this paper we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability and low complexity, the presence of covert data in multimedia. Existing algorithms for steganalysis target the detection of covert information in still images; when applied directly to video sequences these approaches are suboptimal. In this paper, we present a method that overcomes this limitation by using redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. Our gains are achieved by exploiting the collusion attack that has recently been studied in the field of digital video watermarking, and more sophisticated pattern recognition tools. Applications of our scheme include cybersecurity and cyberforensics.
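The collusion idea behind this approach can be sketched as follows: temporally average neighbouring frames to approximate the clean cover content, then inspect the residuals for frame-wise additive (e.g. Gaussian) marks. This is only a minimal sketch under that assumption; the window size and the simple energy statistic are illustrative, not the authors' pattern-recognition pipeline.

```python
import numpy as np

def collusion_residuals(frames, window=5):
    """Estimate hidden additive signals by temporal-average collusion.

    frames: array of shape (T, H, W). Averaging over a short temporal window
    approximates the cover content; the residual carries any additive watermark
    plus noise.
    """
    T = frames.shape[0]
    residuals = np.empty(frames.shape, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        estimate = frames[lo:hi].mean(axis=0)
        residuals[t] = frames[t] - estimate
    return residuals

def detection_statistic(residuals):
    """A simple per-frame statistic: residual energy relative to the sequence median."""
    energy = residuals.reshape(residuals.shape[0], -1).var(axis=1)
    return energy / np.median(energy)      # values well above 1 suggest covert data
```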
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian
2017-05-01
A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed based on compressive sensing, and the corresponding results are combined to form the grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and eventually the watermarked image is obtained by mapping the resultant blocks into their original positions. The scheme can reconstruct the watermark with high perceptual quality and offers enhanced security owing to the high sensitivity of its secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report of embedding a color watermark into a grayscale host image, which an attacker would not expect. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of noise and occlusion robustness.
Optical image hiding based on computational ghost imaging
NASA Astrophysics Data System (ADS)
Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu
2016-05-01
Image hiding schemes play an important role in the present big-data era, as they provide copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose the secret key. A least-significant-bit (LSB) algorithm is adopted to embed the watermark, and both the second-order correlation algorithm and the compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can retrieve the watermark with the secret key. The watermark image cannot be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against salt-and-pepper noise and image cropping.
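For the second-order correlation step mentioned above, a minimal computational-ghost-imaging reconstruction can be sketched as follows, assuming the keyed speckle patterns and the bucket measurements of the hidden watermark are available. The pattern count, sizes, and the toy watermark are illustrative; the LSB embedding and the TVAL3 CS path are omitted.

```python
import numpy as np

def ghost_reconstruct(patterns, buckets):
    """Second-order correlation reconstruction used in computational ghost imaging.

    patterns: (N, H, W) random speckle patterns (the secret key).
    buckets:  (N,) single-pixel 'bucket' measurements of the hidden watermark.
    """
    b = buckets - buckets.mean()
    # <(B_i - <B>) * I_i(x, y)> averaged over the N measurements
    return np.tensordot(b, patterns, axes=(0, 0)) / len(buckets)

# Simulated measurement of a toy binary watermark W with keyed speckle patterns:
rng = np.random.default_rng(7)
W = (rng.random((32, 32)) > 0.8).astype(float)
patterns = rng.random((4000, 32, 32))
buckets = np.einsum('nij,ij->n', patterns, W)     # bucket detector values
estimate = ghost_reconstruct(patterns, buckets)   # correlates toward W
```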
NASA Astrophysics Data System (ADS)
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark-based non-binary low-density parity-check (NB-LDPC) coding scheme, which can estimate the time-varying noise variance by using prior information from the watermark symbols, to improve the performance of NB-LDPC codes. Compared with its prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme therefore shows great potential in terms of error correction performance and decoding efficiency.
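The noise-variance estimation from known watermark symbols can be sketched roughly as below: per block, the empirical variance of the error between received and known transmitted watermark symbols serves as the channel estimate fed to the decoder. The block size and the real-valued symbol model are assumptions for illustration; the paper's NB-LDPC construction itself is not reproduced here.

```python
import numpy as np

def estimate_noise_variance(received, sent, block_size=256):
    """Estimate time-varying noise variance from known watermark symbols.

    received, sent: 1-D arrays of received and transmitted watermark symbols
    interleaved with the payload; one variance estimate per block would feed
    the channel LLR computation of an (NB-)LDPC decoder.
    """
    n_blocks = len(received) // block_size
    err = received[:n_blocks * block_size] - sent[:n_blocks * block_size]
    err = err.reshape(n_blocks, block_size)
    return (err ** 2).mean(axis=1)          # per-block sigma^2 estimate
```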
Linguistically informed digital fingerprints for text
NASA Astrophysics Data System (ADS)
Uzuner, Özlem
2006-02-01
Digital fingerprinting, watermarking, and tracking technologies have gained importance in the recent years in response to growing problems such as digital copyright infringement. While fingerprints and watermarks can be generated in many different ways, use of natural language processing for these purposes has so far been limited. Measuring similarity of literary works for automatic copyright infringement detection requires identifying and comparing creative expression of content in documents. In this paper, we present a linguistic approach to automatically fingerprinting novels based on their expression of content. We use natural language processing techniques to generate "expression fingerprints". These fingerprints consist of both syntactic and semantic elements of language, i.e., syntactic and semantic elements of expression. Our experiments indicate that syntactic and semantic elements of expression enable accurate identification of novels and their paraphrases, providing a significant improvement over techniques used in text classification literature for automatic copy recognition. We show that these elements of expression can be used to fingerprint, label, or watermark works; they represent features that are essential to the character of works and that remain fairly consistent in the works even when works are paraphrased. These features can be directly extracted from the contents of the works on demand and can be used to recognize works that would not be correctly identified either in the absence of pre-existing labels or by verbatim-copy detectors.
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of them of a quality that makes them suitable for commercial applications. The focus of most of these algorithms is copyright protection; therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified, stressing other parameters such as complexity or payload. In this paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group contains either more even or more uneven scale factors: an uneven group has a 1 embedded, an even group a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing the perceived quality. As an application example, we introduce a prototypic karaoke system displaying song lyrics embedded as a watermark.
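The group-parity rule described above can be sketched as follows, assuming the mp2 scale factors have already been parsed into an integer array. The keyed permutation stands in for the pseudo-random grouping/masking, and the group size is illustrative; mp2 file handling is omitted entirely.

```python
import numpy as np

def embed_bits(scale_factors, bits, key, group_size=8):
    """Embed one bit per keyed group by forcing a majority of odd (1) or even (0) scale factors."""
    rng = np.random.default_rng(key)
    order = rng.permutation(len(scale_factors))          # pseudo-random grouping/mask
    sf = scale_factors.copy()
    for g, bit in enumerate(bits):
        idx = order[g * group_size:(g + 1) * group_size]
        want_odd = bool(bit)
        for i in idx:
            odd = int((sf[idx] % 2 == 1).sum())
            even = len(idx) - odd
            if (want_odd and odd > even) or (not want_odd and even > odd):
                break                                    # desired majority reached
            if (sf[i] % 2 == 1) != want_odd:
                sf[i] += 1                               # flip parity by adding 1
    return sf

def extract_bits(scale_factors, n_bits, key, group_size=8):
    """Recover bits with the same rule: an odd-majority group decodes to 1."""
    rng = np.random.default_rng(key)
    order = rng.permutation(len(scale_factors))
    bits = []
    for g in range(n_bits):
        idx = order[g * group_size:(g + 1) * group_size]
        odd = int((scale_factors[idx] % 2 == 1).sum())
        bits.append(int(odd > len(idx) - odd))
    return bits
```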
Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes
NASA Astrophysics Data System (ADS)
Funk, Wolfgang
2004-06-01
The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' region of non-interest using SVD in the DWT domain. Integrity is provided in two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
Information Hiding: an Annotated Bibliography
1999-04-13
... parameters needed for reconstruction are enciphered using DES. The encrypted image is hidden in a cover image. [153] 074115, 'Watermarking algorithm ...' The authors present a block-based watermarking algorithm for digital images; the DCT of the block is increased by a certain value. Quality control ... includes evaluation of the watermark robustness and the subjective visual image quality. Two algorithms use the frequency domain while the two others use ...
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.
2018-03-01
Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
A New Position Location System Using DTV Transmitter Identification Watermark Signals
NASA Astrophysics Data System (ADS)
Wang, Xianbin; Wu, Yiyan; Chouinard, Jean-Yves
2006-12-01
A new position location technique using the transmitter identification (TxID) RF watermark in the digital TV (DTV) signals is proposed in this paper. Conventional global positioning system (GPS) usually does not work well inside buildings due to the high frequency and weak field strength of the signal. In contrast to the GPS, the DTV signals are received from transmitters at relatively short distance, while the broadcast transmitters operate at levels up to the megawatts effective radiated power (ERP). Also the RF frequency of the DTV signal is much lower than the GPS, which makes it easier for the signal to penetrate buildings and other objects. The proposed position location system based on DTV TxID signal is presented in this paper. Practical receiver implementation issues including nonideal correlation and synchronization are analyzed and discussed. Performance of the proposed technique is evaluated through Monte Carlo simulations and compared with other existing position location systems. Possible ways to improve the accuracy of the new position location system is discussed.
Data Embedding for Covert Communications, Digital Watermarking, and Information Augmentation
2000-03-01
... proposed an image authentication algorithm based on the fragility of messages embedded in digital images using LSB encoding. In [Walt95], he proposes ... (remaining excerpt consists of table-of-contents and figure-list fragments covering sample data embedding techniques, LSB encoding in intensity images, the EzStego algorithm, and data embedding in the frequency domain).
Khan, Aihab; Husain, Syed Afaq
2013-01-01
We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the original database content. These distortions inevitably degrade data quality and usability, as the integrity of the relational database is violated. Moreover, these fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of the tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by introducing permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes the malicious data modifications to quantify the nature of the tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
NASA Astrophysics Data System (ADS)
Qu, Zhi-Guo; He, Huang-Xing; Li, Tao
2018-01-01
Abstract not available. Project supported by the National Natural Science Foundation of China (Grant Nos. 61373131, 61303039, 61232016, and 61501247), the Sichuan Youth Science and Technique Foundation, China (Grant No. 2017JQ0048), the NUIST Research Foundation for Talented Scholars, China (Grant No. 2015r014), and the PAPD and CICAEET Funds of China.
Text image authenticating algorithm based on MD5-hash function and Henon map
NASA Astrophysics Data System (ADS)
Wei, Jinqiao; Wang, Ying; Ma, Xiaoxue
2017-07-01
In order to cater to the evidentiary requirements of text images, this paper proposes a fragile watermarking algorithm based on a hash function and the Henon map. The algorithm divides a text image into blocks, identifies the flippable and non-flippable pixels of every block according to the PSD, generates a watermark from the non-flippable pixels with the MD5 hash, encrypts the watermark with the Henon map, and selects the embedding blocks. The simulation results show that the algorithm, with its good tampering localization ability, can be used to authenticate text images and provide forensic evidence of their authenticity and integrity.
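A minimal sketch of the hash-plus-chaos step described above is given below: the block watermark is the MD5 digest of the block's pixels, XORed with a keystream drawn from the Henon map. The PSD-based flippable-pixel analysis and block selection are omitted; the map parameters (a=1.4, b=0.3), initial conditions, and the zero-threshold bit extraction are illustrative assumptions.

```python
import hashlib
import numpy as np

def block_watermark(block_pixels):
    """Watermark bits = MD5 digest of the block's (non-flippable) pixels, 128 bits."""
    digest = hashlib.md5(block_pixels.tobytes()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def henon_keystream(n_bits, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Binary keystream from the Henon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n."""
    x, y = x0, y0
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x, y = 1.0 - a * x * x + y, b * x   # right-hand side uses the previous (x, y)
        bits[i] = 1 if x > 0 else 0         # simple thresholding of the chaotic orbit
    return bits

def encrypted_block_watermark(block_pixels, x0, y0):
    wm = block_watermark(block_pixels)
    return wm ^ henon_keystream(len(wm), x0, y0)   # XOR with the chaotic keystream
```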
Just Noticeable Distortion Model and Its Application in Color Image Watermarking
NASA Astrophysics Data System (ADS)
Liu, Kuo-Cheng
In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and the texture of grayscale images, the crossed masking effect given by the interaction between luminance and chrominance components and the effect given by the variance within the local region of the target coefficient are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency of the watermarking scheme are obtained by using the proposed approach to embed the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining the watermark transparency.
NASA Astrophysics Data System (ADS)
Tomàs-Buliart, Joan; Fernández, Marcel; Soriano, Miguel
Critical infrastructures are usually controlled by software entities. To monitor the correct functioning of these entities, a solution based on the use of mobile agents is proposed. Some proposals to detect modifications of mobile agents, such as digital signatures of code, exist, but they are oriented toward protecting software against modification or verifying that an agent has been executed correctly. The aim of our proposal is to guarantee that the software is being executed correctly by a non-trusted host. The way we propose to achieve this objective is by improving the Self-Validating Branch-Based Software Watermarking by Myles et al. The proposed modification is the incorporation of an external element, called a sentinel, which controls branch targets. Applied to mobile agents, this technique can guarantee the correct operation of an agent or, at least, detect suspicious behaviour of a malicious host during the execution of the agent rather than only after its execution has finished.
Turuk, Mousami; Dhande, Ashwin
2018-04-01
The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images, and they also increase the medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new approach that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting the embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by the peak signal-to-noise ratio (PSNR), the structural similarity measure (SSIM), and the universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US) and its robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.
Oblivious image watermarking combined with JPEG compression
NASA Astrophysics Data System (ADS)
Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice
2003-06-01
For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression on a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have heavily relied on simulation. It is desirable not only to measure the effect of compression on embedded watermark, but also to control the embedding process to survive lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
Visible digital watermarking system using perceptual models
NASA Astrophysics Data System (ADS)
Cheng, Qiang; Huang, Thomas S.
2001-03-01
This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of the owner or creator, or deterrence of piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size, and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even though the algorithm is known publicly, as long as the key to the random sequence generator is kept secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Wagner, K.; Schwartz, S.; Gentle, J. N., Jr.
2016-12-01
As critical water resources face the effects of historic drought, increased demand, and potential contamination, the need has never been greater to develop resources that effectively communicate conservation and protection across a broad audience and geographical area. The Watermark application and macro-analysis methodology merges topical analysis of a context-rich corpus of policy texts with multi-attributed solution sets from integrated models of water resources and other subsystems, such as mineral, food, energy, or environmental systems, to construct a scalable, robust, and reproducible approach for identifying links between policy and science knowledge bases. The Watermark application is an open-source, interactive workspace to support science-based visualization and decision making. Designed with generalization in mind, Watermark is a flexible platform that allows for data analysis and inclusion of large datasets with an interactive front-end capable of connecting with other applications as well as advanced computing resources. In addition, the Watermark analysis methodology offers functionality that streamlines communication with non-technical users for policy, education, or engagement with groups around scientific topics of societal relevance. The technology stack for Watermark was selected with the goal of creating a robust and dynamic modular codebase that can be adjusted to fit many use cases and scale to support usage loads that range from simple data display to complex scientific simulation-based modelling and analytics. The methodology uses topical analysis and simulation-optimization to systematically analyze the policy and management realities of resource systems and explicitly connect the social and problem contexts with science-based and engineering knowledge from models. A case example demonstrates use in a complex groundwater resources management study highlighting multi-criteria spatial decision making and uncertainty comparisons.
Kabir, Muhammad N.; Alginahi, Yasser M.
2014-01-01
This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues were largely addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover media to achieve its goal. While many such complex schemes with resource redundancies are sufficient in offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in anyway. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247
NASA Astrophysics Data System (ADS)
Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi
Watermarking is one of the most effective techniques for copyright protection and information hiding. It can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of the watermarking algorithm to enhance the security. Therefore, how to select an image scrambling scheme and what kind of the image scrambling scheme may be used for watermarking are the key problems. Evaluation method of the image scrambling schemes can be seen as a useful test tool for showing the property or flaw of the image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and centroid difference of bit-plane is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for the general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes selection is nearly the same as that for the 8 bit-planes selection. That is why, instead of taking 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes for the experiment to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.
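The two ingredients named in the abstract above (spatial distribution entropy and the centroid difference of a bit-plane, evaluated on the first four significant bit-planes) can be sketched as follows. The block size, the simple entropy definition, and the way the two measures are combined into a single score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bit_planes(img, n_planes=4):
    """Most-significant n_planes bit-planes of an 8-bit gray image, MSB first."""
    return [(img >> (7 - k)) & 1 for k in range(n_planes)]

def spatial_distribution_entropy(plane, block=16):
    """Shannon entropy of how the plane's 1-pixels are distributed over blocks."""
    h, w = plane.shape
    counts = plane[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).sum(axis=(1, 3)).astype(float)
    p = counts / max(counts.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def centroid(plane):
    """Centroid (row, col) of the 1-pixels in a bit-plane."""
    ys, xs = np.nonzero(plane)
    if len(ys) == 0:
        return (0.0, 0.0)
    return (ys.mean(), xs.mean())

def scrambling_degree(original, scrambled, n_planes=4):
    """Toy combination: entropy of the scrambled planes plus centroid displacement."""
    score = 0.0
    for po, ps in zip(bit_planes(original, n_planes), bit_planes(scrambled, n_planes)):
        cy_o, cx_o = centroid(po)
        cy_s, cx_s = centroid(ps)
        score += spatial_distribution_entropy(ps) + np.hypot(cy_s - cy_o, cx_s - cx_o)
    return score / n_planes
```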
Smart security and securing data through watermarking
NASA Astrophysics Data System (ADS)
Singh, Ritesh; Kumar, Lalit; Banik, Debraj; Sundar, S.
2017-11-01
The growth of image processing in embedded systems has provided the boon of enhancing security in various sectors. This has led to the development of various protective strategies, which will be needed by private or public sectors for cyber security purposes. We have therefore developed a method which uses digital watermarking and a locking mechanism for the protection of any closed premises. This paper describes a contemporary system based on user name, user id, password, and an encryption technique, which can be placed in banks and protected offices to strengthen security. Burglary can be abated substantially by using such a proactive safety structure. In the proposed framework, we use watermarking in the spatial domain to encode and decode the image and a PIR (passive infrared) sensor to detect the presence of a person in the closed area.
Tan, Chun Kiat; Ng, Jason Changwei; Xu, Xiaotian; Poh, Chueh Loo; Guan, Yong Liang; Sheah, Kenneth
2011-06-01
Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.
A model for the distribution of watermarked digital content on mobile networks
NASA Astrophysics Data System (ADS)
Frattolillo, Franco; D'Onofrio, Salvatore
2006-10-01
Although digital watermarking can be considered one of the key technologies to implement the copyright protection of digital contents distributed on the Internet, most of the content distribution models based on watermarking protocols proposed in literature have been purposely designed for fixed networks and cannot be easily adapted to mobile networks. On the contrary, the use of mobile devices currently enables new types of services and business models, and this makes the development of new content distribution models for mobile environments strategic in the current scenario of the Internet. This paper presents and discusses a distribution model of watermarked digital contents for such environments able to achieve a trade-off between the needs of efficiency and security.
Collusion issue in video watermarking
NASA Astrophysics Data System (ADS)
Doerr, Gwenael; Dugelay, Jean-Luc
2005-03-01
Digital watermarking has first been introduced as a possible way to ensure intellectual property (IP) protection. However, fifteen years after its infancy, it is still viewed as a young technology and digital watermarking is far from being introduced in Digital Right Management (DRM) frameworks. A possible explanation is that the research community has so far mainly focused on the robustness of the embedded watermark and has almost ignored security aspects. For IP protection applications such as fingerprinting and copyright protection, the watermark should provide means to ensure some kind of trust in a non secure environment. To this end, security against attacks from malicious users has to be considered. This paper will focus on collusion attacks to evaluate security in the context of video watermarking. In particular, security pitfalls will be exhibited when frame-by-frame embedding strategies are enforced for video watermarking. Two alternative strategies will be surveyed: either eavesdropping the watermarking channel to identify some redundant hidden structure, or jamming the watermarking channel to wash out the embedded watermark signal. Finally, the need for a new brand of watermarking schemes will be highlighted if the watermark is to be released in a hostile environment, which is typically the case for IP protection applications.
Security Research on VoIP with Watermarking
NASA Astrophysics Data System (ADS)
Hu, Dong; Lee, Ping
2008-11-01
With the wide application of VoIP, many problems have occurred, one of which is security. The problems with securing VoIP systems, insufficient standardization, and the lack of security mechanisms have created the need for new approaches and solutions. In this paper, we propose a new security architecture for VoIP based on digital watermarking, a new, flexible and powerful technology that is gaining more and more attention. Besides known applications, e.g. solving copyright protection problems, we propose to use digital watermarking to secure not only the transmitted audio but also the signaling protocol that VoIP is based on.
Registration methods for nonblind watermark detection in digital cinema applications
NASA Astrophysics Data System (ADS)
Nguyen, Philippe; Balter, Raphaele; Montfort, Nicolas; Baudry, Severine
2003-06-01
Digital watermarking may be used to enforce copyright protection of digital cinema by embedding in each projected movie a unique identifier (fingerprint). By identifying the source of illegal copies, watermarking will thus encourage movie theatre managers to enforce copyright protection, in particular by preventing people from coming in with a camcorder. We propose here a non-blind watermark method to improve watermark detection on very impaired sequences. We first present a study of the picture impairments caused by projection on a screen and subsequent acquisition with a camcorder. We show that the images undergo geometric deformations, which are fully described by a projective geometry model. The sequence also undergoes spatial and temporal luminance variation. Based on this study and on the impairment models which follow, we propose a method to match the retrieved sequence to the original one. First, temporal registration is performed by comparing the average luminance variation of both sequences. To compensate for geometric transformations, we use paired points from both sequences, obtained by applying a feature point detector. The matching of the feature points then enables retrieval of the geometric transform parameters. Tests show that watermark retrieval on rectified sequences is greatly improved.
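The temporal registration step mentioned above (aligning the average-luminance curves of the original and captured sequences) can be sketched as below. The normalized-correlation search over a bounded lag range is an illustrative choice; the geometric rectification via the projective model and feature-point matching is not shown.

```python
import numpy as np

def luminance_curve(frames):
    """Average luminance per frame; frames has shape (T, H, W)."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def temporal_offset(original_frames, captured_frames, max_lag=250):
    """Frame offset that best aligns the captured sequence with the original."""
    a = luminance_curve(original_frames)
    b = luminance_curve(captured_frames)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        lo = max(0, lag)                      # original index i pairs with captured index i - lag
        hi = min(len(a), len(b) + lag)
        if hi - lo < 25:                      # require a minimal overlap
            continue
        score = np.dot(a[lo:hi], b[lo - lag:hi - lag]) / (hi - lo)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```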
A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image
Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen
2014-01-01
Invoice printing uses only two-color printing, so an invoice font image can be seen as a binary image. To embed watermarks into the invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with a huge watermarking capacity. The pixel flipping method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606
Security of fragile authentication watermarks with localization
NASA Astrophysics Data System (ADS)
Fridrich, Jessica
2002-04-01
In this paper, we study the security of fragile image authentication watermarks that can localize tampered areas. We start by comparing the goals, capabilities, and advantages of image authentication based on watermarking and cryptography. Then we point out some common security problems of current fragile authentication watermarks with localization and classify attacks on authentication watermarks into five categories. By investigating the attacks and vulnerabilities of current schemes, we propose a variation of the Wong scheme [18] that is fast, simple, cryptographically secure, and resistant to all known attacks, including the Holliman-Memon attack [9]. In the new scheme, a special symmetry structure in the logo is used to authenticate the block content, while the logo itself carries information about the block origin (block index, the image index or time stamp, author ID, etc.). Because the authentication of the content and its origin are separated, it is possible to easily identify swapped blocks between images and accurately detect cropped areas, while being able to accurately localize tampered pixels.
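The block-wise structure of such schemes can be sketched as follows: hash the LSB-cleared block content together with the block index and image ID, XOR the result with a logo block, and write the payload into the block's LSBs. This is a generic Wong-style sketch using SHA-256 with a simple keyed hash as a stand-in; the paper's specific logo symmetry structure and key management are not reproduced.

```python
import hashlib
import numpy as np

BLOCK = 8  # 8x8 blocks carry 64 LSBs each

def _block_mac(block, block_index, image_id, key):
    """64 authentication bits for one LSB-cleared uint8 block (content + position + origin)."""
    msg = key + image_id + block_index.to_bytes(4, 'big') + (block & 0xFE).tobytes()
    digest = hashlib.sha256(msg).digest()[:8]
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def embed_block(block, logo_bits, block_index, image_id, key):
    """Write (MAC XOR logo) into the LSBs of an 8x8 uint8 block."""
    payload = _block_mac(block, block_index, image_id, key) ^ logo_bits
    return (block & 0xFE) | payload.reshape(BLOCK, BLOCK)

def verify_block(block, block_index, image_id, key):
    """Return the recovered logo bits; a garbled logo localizes tampering to this block."""
    payload = (block & 1).flatten().astype(np.uint8)
    return payload ^ _block_mac(block, block_index, image_id, key)
```

Because the MAC is computed over the LSB-cleared content, verification recomputes the same value from the marked block as long as nothing but the LSB payload was touched.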
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality, and CAD design. However, most studies on 3D watermarking and 3D compression are carried out independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: the wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme; this insertion into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach yields efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.
Watermark Estimation through Detector Observations
1998-03-01
... electronic watermark detection is only feasible if the watermark detector is aware of the secret K. In many watermarking business scenarios the watermark detector will be available to the public as a black box D. The following question is therefore justified: Can the secret K be deduced from the ...
Design and research on the platform of network manufacture product electronic trading
NASA Astrophysics Data System (ADS)
Zhou, Zude; Liu, Quan; Jiang, Xuemei
2003-09-01
With the rapid globalization of markets and business, E-trading affects every manufacturing enterprise. However, the security of network manufacture products transmitted over the Internet is very important. In this paper we discuss a fair-exchange protocol and a platform for E-trading of network manufacture products based on the fair-exchange protocol and digital watermarking techniques. The platform provides reliability and copyright protection.
Performance evaluation of TDT soil water content and watermark soil water potential sensors
USDA-ARS?s Scientific Manuscript database
This study evaluated the performance of digitized Time Domain Transmissometry (TDT) soil water content sensors (Acclima, Inc., Meridian, ID) and resistance-based soil water potential sensors (Watermark 200, Irrometer Company, Inc., Riverside, CA) in two soils. The evaluation was performed by compar...
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. A DRM system uses watermarking to provide copyright protection and ownership authentication of multimedia content. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
Video watermarking for mobile phone applications
NASA Astrophysics Data System (ADS)
Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.
2005-08-01
Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored and/or processed on mobile phones. Hence, protecting the related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are put into practice by adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.
Fingerprinting of music scores
NASA Astrophysics Data System (ADS)
Irons, Jonathan; Schmucker, Martin
2004-06-01
Publishers of sheet music are generally reluctant to distribute their content via the Internet. Although the advantages of online sheet music distribution are numerous, the potential risk of Intellectual Property Rights (IPR) infringement, e.g. illegal online distribution, stifles any propensity to innovate. While active protection techniques only deter external risk factors, additional technology is necessary to adequately treat further risk factors. For several media types, including music scores, watermarking technology has been developed, which embeds information in data by suitable data modifications. Furthermore, fingerprinting or perceptual hashing methods have been developed and are being applied especially for audio. These methods allow the identification of content without prior modifications. In this article we motivate the development of watermarking and fingerprinting technologies for sheet music. Starting from the potential limitations of watermarking methods, we explain why fingerprinting methods are important for sheet music and address potential applications. Finally, we introduce a concept for fingerprinting of sheet music.
An Efficient Audio Watermarking Algorithm in Frequency Domain for Copyright Protection
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Khan, Mohammad Ibrahim; Kim, Cheol-Hong; Kim, Jong-Myon
Digital watermarking plays an important role in the copyright protection of multimedia data. This paper proposes a new watermarking system in the frequency domain for copyright protection of digital audio. In our proposed watermarking system, the original audio is segmented into non-overlapping frames. Watermarks are then embedded into selected prominent peaks in the magnitude spectrum of each frame. Watermarks are extracted by performing the inverse operation of the watermark embedding process. Simulation results indicate that the proposed watermarking system is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, MP3 compression, and low-pass filtering. Our proposed watermarking system outperforms Cox's method in terms of imperceptibility, while keeping comparable robustness. Our proposed system achieves SNR (signal-to-noise ratio) values ranging from 20 dB to 28 dB, in contrast to Cox's method, which achieves SNR values ranging from only 14 dB to 23 dB.
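The frame/FFT/peak idea can be sketched roughly as below: split the audio into non-overlapping frames, take the FFT, and scale the most prominent magnitude peak of each frame by (1 ± alpha) according to the bit. The frame length, alpha, and the simple non-blind comparison used for extraction here are illustrative assumptions; the paper's own extraction is described only as the inverse of embedding.

```python
import numpy as np

def embed_audio(signal, bits, frame_len=4096, alpha=0.05):
    """Embed one bit per frame by scaling its most prominent spectral peak."""
    out = signal.astype(float).copy()
    for k, bit in enumerate(bits):
        frame = out[k * frame_len:(k + 1) * frame_len]
        if len(frame) < frame_len:
            break
        spec = np.fft.rfft(frame)
        peak = np.argmax(np.abs(spec))                    # prominent peak of this frame
        spec[peak] *= (1 + alpha) if bit else (1 - alpha)
        out[k * frame_len:(k + 1) * frame_len] = np.fft.irfft(spec, n=frame_len)
    return out

def extract_audio(marked, original, bits_count, frame_len=4096):
    """Non-blind check for illustration: compare peak magnitudes of marked vs. original frames."""
    bits = []
    for k in range(bits_count):
        s0 = np.fft.rfft(original[k * frame_len:(k + 1) * frame_len])
        s1 = np.fft.rfft(marked[k * frame_len:(k + 1) * frame_len])
        peak = np.argmax(np.abs(s0))
        bits.append(int(np.abs(s1[peak]) > np.abs(s0[peak])))
    return bits
```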
Digimarc MediaBridge: the birth of a consumer product from concept to commercial application
NASA Astrophysics Data System (ADS)
Perry, Burt; MacIntosh, Brian; Cushman, David
2002-04-01
This paper examines the issues encountered in the development and commercial deployment of a system based on digital watermarking technology. The paper provides an overview of the development of digital watermarking technology and the first applications to use the technology. It also looks at how we took the concept of digital watermarking as a communications channel within a digital environment and applied it to the physical print world to produce the Digimarc MediaBridge product. We describe the engineering tradeoffs that were made to balance competing requirements of watermark robustness, image quality, embedding process, detection speed and end user ease of use. Today, the Digimarc MediaBridge product links printed materials to auxiliary information about the content, via the Internet, to provide enhanced informational marketing, promotion, advertising and commerce opportunities.
Analysis of the impact of digital watermarking on computer-aided diagnosis in medical imaging.
Garcia-Hernandez, Jose Juan; Gomez-Flores, Wilfrido; Rubio-Loyola, Javier
2016-01-01
Medical images (MI) are relevant sources of information for detecting and diagnosing a large number of illnesses and abnormalities. Due to their importance, this study is focused on breast ultrasound (BUS), which is the main adjunct for mammography to detect common breast lesions among women worldwide. On the other hand, aiming to enhance data security, image fidelity, authenticity, and content verification in e-health environments, MI watermarking has been widely used, whose main goal is to embed patient meta-data into MI so that the resulting image keeps its original quality. In this sense, this paper deals with the comparison of two watermarking approaches, namely spread spectrum based on the discrete cosine transform (SS-DCT) and the high-capacity data-hiding (HCDH) algorithm, so that the watermarked BUS images are guaranteed to be adequate for a computer-aided diagnosis (CADx) system, whose two principal outcomes are lesion segmentation and classification. Experimental results show that HCDH algorithm is highly recommended for watermarking medical images, maintaining the image quality and without introducing distortion into the output of CADx. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ishikawa, Muriel Y.; Wood, Lowell L.; Lougheed, Ronald W.; Moody, Kenton J.; Wang, Tzu-Fang
2004-05-25
A covert, gamma-ray "signature" is used as a "watermark" for property identification. This new watermarking technology is based on a unique steganographic or "hidden writing" digital signature, implemented in tiny quantities of gamma-ray-emitting radioisotopic material combinations, generally covertly emplaced on or within an object. This digital signature may be readily recovered at distant future times, by placing a sensitive, high energy-resolution gamma-ray detecting instrument reasonably precisely over the location of the watermark, which location may be known only to the object's owner; however, the signature is concealed from all ordinary detection means because its exceedingly low level of activity is obscured by the natural radiation background (including the gamma radiation naturally emanating from the object itself, from cosmic radiation and material surroundings, from human bodies, etc.). The "watermark" is used in object-tagging for establishing object identity, history or ownership. It thus may serve as an aid to law enforcement officials in identifying stolen property and prosecuting theft thereof. Highly effective, potentially very low cost identification-on demand of items of most all types is thus made possible.
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
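The quantization-index-modulation primitive that the grouping schemes above build on can be sketched with a scalar dithered quantizer: each selected coefficient is quantized onto one of two interleaved lattices indexed by the bit, and detection picks the nearer lattice. The step size is illustrative; the sparse selection of wavelet coefficients, the tree/block grouping, and the BCH coding evaluated in the paper are not shown.

```python
import numpy as np

def qim_embed(coeffs, bits, step=8.0):
    """Quantize each coefficient onto the dithered lattice indexed by its bit."""
    bits = np.asarray(bits, dtype=float)
    dither = (bits - 0.5) * (step / 2.0)          # -step/4 for bit 0, +step/4 for bit 1
    return np.round((coeffs - dither) / step) * step + dither

def qim_detect(received, step=8.0):
    """Minimum-distance decoding: which of the two dithered quantizers is closer."""
    d0 = np.abs(received - qim_embed(received, np.zeros_like(received), step))
    d1 = np.abs(received - qim_embed(received, np.ones_like(received), step))
    return (d1 < d0).astype(int)
```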
Stereo sequence transmission via conventional transmission channel
NASA Astrophysics Data System (ADS)
Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho
2003-05-01
This paper proposes a new stereo sequence transmission technique using digital watermarking for compatibility with conventional 2D digital TV. We, generally, compress and transmit image sequence using temporal-spatial redundancy between stereo images. It is difficult for users with conventional digital TV to watch the transmitted 3D image sequence because many 3D image compression methods are different. To solve such a problem, in this paper, we perceive the concealment of new information of digital watermarking and conceal information of the other stereo image into three channels of the reference image. The main target of the technique presented is to let the people who have conventional DTV watch stereo movies at the same time. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide right images into left images effectively, bit-change in 3 color channels and disparity estimation according to the value of estimated disparity are performed. The proposed method assigns the displacement information of right image to each channel of YCbCr on DCT domain. Each LSB bit on YCbCr channels is changed according to the bits of disparity information. The performance of the presented methods is confirmed by several computer experiments.
The video watermarking container: efficient real-time transaction watermarking
NASA Astrophysics Data System (ADS)
Wolf, Patrick; Hauer, Enrico; Steinebach, Martin
2008-02-01
When transaction watermarking is used to secure sales in online shops by embedding transaction-specific watermarks, the major challenge is embedding efficiency: maximum speed at minimal workload. This is true for all types of media. Video transaction watermarking presents a double challenge. Video files are not only larger than, for example, music files of the same playback time; in addition, video watermarking algorithms have a higher complexity than algorithms for other types of media. Therefore online shops that want to protect their videos by transaction watermarking are faced with the problem that their servers need to work harder and longer for every sold medium in comparison to audio sales. In the past, many algorithms responded to this challenge by reducing their complexity. But this usually results in a loss of either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: a preparation stage and a finalization stage. In the preparation stage, the video is divided into embedding segments. For each segment, one copy marked with "0" and another one marked with "1" is created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations. It thus allows efficient creation of individually watermarked video files.
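To make the two-stage idea concrete, the following minimal Python sketch shows only the finalization stage: it assumes that pre-marked "0" and "1" copies of every segment already exist on disk under a hypothetical naming scheme, and it assembles the delivered file by concatenating the copy that matches each payload bit.

# Minimal sketch of the finalization stage of the watermarking container.
# Assumption: the preparation stage has already produced two marked copies of
# every segment, stored under the hypothetical file names used below.
def finalize(payload_bits, n_segments, out_path="watermarked.ts"):
    assert len(payload_bits) >= n_segments
    with open(out_path, "wb") as out:
        for i in range(n_segments):
            bit = payload_bits[i]
            with open(f"segment_{i:04d}_bit{bit}.ts", "rb") as segment:
                out.write(segment.read())  # copy the pre-marked segment verbatim

# example call for a 4-segment video carrying the bits 1,0,1,1:
# finalize([1, 0, 1, 1], n_segments=4)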
Khalil, Mohammed S.; Khan, Muhammad Khurram; Alginahi, Yasser M.
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and the detection of image forgery on Quran images. A two-layer embedding scheme in the wavelet and spatial domains is introduced to enhance the sensitivity of the fragile watermark and to defend against attacks. A discrete wavelet transform is applied to decompose the host image into wavelet sub-bands prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark to make it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and offers superior tamper detection even when the tampered area is very small. PMID:25028681
Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and the detection of image forgery on Quran images. A two-layer embedding scheme in the wavelet and spatial domains is introduced to enhance the sensitivity of the fragile watermark and to defend against attacks. A discrete wavelet transform is applied to decompose the host image into wavelet sub-bands prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, and the least significant bits are then utilized to hide another watermark. A chaotic map is utilized to scramble the watermark to make it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and offers superior tamper detection even when the tampered area is very small.
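As an illustration of the spatial-domain layer only, the sketch below replaces the least significant bit of each pixel with a watermark bit and reads it back; the wavelet-domain layer and the chaotic scrambling of the watermark are omitted, and the array shapes are assumptions.

import numpy as np

def embed_lsb(image, watermark_bits):
    # Replace the least significant bit of every pixel with a watermark bit
    # (spatial-domain layer only; the wavelet layer is not shown here).
    img = np.asarray(image, dtype=np.uint8)
    wm = np.asarray(watermark_bits, dtype=np.uint8) & 1
    return (img & 0xFE) | wm

def extract_lsb(image):
    return np.asarray(image, dtype=np.uint8) & 1

# toy usage on a 4x4 block
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
wm = np.random.randint(0, 2, size=(4, 4), dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(img, wm)), wm)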
A Secure Watermarking Scheme for Buyer-Seller Identification and Copyright Protection
NASA Astrophysics Data System (ADS)
Ahmed, Fawad; Sattar, Farook; Siyal, Mohammed Yakoob; Yu, Dan
2006-12-01
We propose a secure watermarking scheme that integrates watermarking with cryptography for addressing some important issues in copyright protection. We address three copyright protection issues: buyer-seller identification, copyright infringement, and ownership verification. By buyer-seller identification, we mean that a successful watermark extraction at the buyer's end will reveal the identities of the buyer and seller of the watermarked image. For copyright infringement, our proposed scheme enables the seller to identify the specific buyer from whom an illegal copy of the watermarked image has originated, and further prove this fact to a third party. For multiple ownership claims, our scheme enables a legal seller to claim his/her ownership in a court of law. We show that the combination of cryptography with watermarking not only increases the security of the overall scheme but also makes it possible to associate the identities of the buyer and seller with their respective watermarked images.
DNA-based watermarks using the DNA-Crypt algorithm.
Heider, Dominik; Barnekow, Angelika
2007-05-29
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
DNA-based watermarks using the DNA-Crypt algorithm
Heider, Dominik; Barnekow, Angelika
2007-01-01
Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
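For intuition only, the toy sketch below hides bits in the third (wobble) position of each codon, mirroring the "least significant base" idea; the two-bits-per-base mapping is an assumption for illustration and is not the actual DNA-Crypt encoding, and the sketch does not enforce the synonymous-codon constraint that keeps the translated protein unchanged.

# Toy illustration of hiding bits in the "least significant base" of codons
# (the third, wobble position). The 2-bit-per-base mapping is an assumption,
# not the actual DNA-Crypt encoding, and synonymous-codon checks are omitted.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def embed(dna, payload_bits):
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    out, k = [], 0
    for codon in codons:
        if k + 2 <= len(payload_bits):
            codon = codon[:2] + BITS_TO_BASE[payload_bits[k:k + 2]]
            k += 2
        out.append(codon)
    return "".join(out)

def extract(dna, n_bits):
    bits = ""
    for i in range(2, len(dna), 3):          # read every wobble position
        bits += BASE_TO_BITS[dna[i]]
        if len(bits) >= n_bits:
            break
    return bits[:n_bits]

marked = embed("ATGGCTTTAGAA", "1001")
assert extract(marked, 4) == "1001"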
Image segmentation-based robust feature extraction for color image watermarking
NASA Astrophysics Data System (ADS)
Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen
2018-04-01
This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is achieved by Simple Linear Iterative Clustering (SLIC), on the basis of which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is proposed for feature extraction. Our method can adaptively extract feature regions from the blocks segmented by SLIC, and thus extracts the most robust feature region in every segmented image. Each feature region is decomposed into low-frequency and high-frequency components by the Discrete Cosine Transform (DCT). Watermark images are then embedded into the coefficients of the low-frequency component. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks. Furthermore, the proposed method achieves a trade-off between high robustness and good image quality.
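The sketch below illustrates distortion-compensated dither modulation for a single coefficient, using an assumed quantization step and compensation factor; the paper's SLIC segmentation, feature selection and DCT stages are not reproduced.

import numpy as np

def dcdm_embed(x, bit, delta=10.0, alpha=0.7):
    # Distortion-compensated dither modulation for a single coefficient:
    # quantize with the bit-dependent dither, then move only a fraction
    # alpha of the way toward the quantized value (the compensation term).
    d = 0.0 if bit == 0 else delta / 2.0
    q = np.round((x - d) / delta) * delta + d
    return x + alpha * (q - x)

def dm_detect(y, delta=10.0):
    # Decode by choosing the nearer of the two dithered lattices.
    e0 = abs(y - np.round(y / delta) * delta)
    e1 = abs(y - (np.round((y - delta / 2) / delta) * delta + delta / 2))
    return int(e1 < e0)

x = 23.4                          # stand-in for a low-frequency DCT coefficient
for bit in (0, 1):
    assert dm_detect(dcdm_embed(x, bit)) == bit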
Clinical evaluation of watermarked medical images.
Zain, Jasni M; Fauzi, Abdul M; Aziz, Azian A
2006-01-01
Digital watermarking of medical images provides security to the images. The purpose of this study was to see whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded a 256-bit watermark in the region of non-interest (RONI) of various medical images, and 480K bits in both the region of interest (ROI) and the RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digitally watermarking medical images is safe in terms of preserving image quality for clinical purposes.
Robustness evaluation of transactional audio watermarking systems
NASA Astrophysics Data System (ADS)
Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg
2003-06-01
Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept since it provides a useful mechanism to track illicit copies by persistently attaching property-rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented the complexity, the audio quality and the detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread spectrum watermarking.
Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting
NASA Astrophysics Data System (ADS)
Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein
2016-06-01
In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. In the proposed method, a negligible visual distortion is observed in the host model. Initially, a preprocessing is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of information retrieval after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before embedding into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also robust against geometric transformations and vertex and face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method has good performance against mesh smoothing attacks.
Dual domain watermarking for authentication and compression of cultural heritage images.
Zhao, Yang; Campisi, Patrizio; Kundur, Deepa
2004-03-01
This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.
Robust Audio Watermarking by Using Low-Frequency Histogram
NASA Astrophysics Data System (ADS)
Xiang, Shijun
In continuation of earlier work, in which the problem of time-scale modification (TSM) was addressed [1] by modifying the shape of the audio time-domain histogram, here we consider the additional ingredient of resisting additive noise-like operations, such as Gaussian noise, lossy compression and low-pass filtering. In other words, we study the problem of making the watermark robust against both TSM and additive noise. To this end, in this paper we extract the histogram from a Gaussian-filtered low-frequency component for audio watermarking. The watermark is inserted by shaping the histogram: two consecutive bins are used as a group, and a bit is hidden by reassigning their populations. The watermarked signals are perceptually similar to the original ones. Compared with the previous time-domain watermarking scheme [1], the proposed watermarking method is more robust against additive noise, MP3 compression, low-pass filtering, etc.
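A minimal sketch of the bin-pair idea follows: a bit is encoded by moving samples across the shared edge of two consecutive histogram bins until their populations satisfy a chosen ratio rule. The bin edges, the ratio threshold, and the use of raw samples instead of the paper's Gaussian-filtered low-frequency component are all assumptions.

import numpy as np

def embed_bit_in_bin_pair(samples, edges, i, bit, ratio=1.5):
    # Encode one bit in the relative population of histogram bins i and i+1:
    # bit 1 -> count(i) >= ratio * count(i+1), bit 0 -> the reverse rule.
    # Samples are nudged across the shared bin edge until the rule holds.
    s = np.array(samples, dtype=float)
    lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
    while True:
        a = np.sum((s >= lo) & (s < mid))
        b = np.sum((s >= mid) & (s < hi))
        if (bit == 1 and a >= ratio * b) or (bit == 0 and b >= ratio * a):
            return s
        if bit == 1:      # move one sample from bin i+1 to just below the edge
            s[np.where((s >= mid) & (s < hi))[0][0]] = mid - 1e-6
        else:             # move one sample from bin i to just above the edge
            s[np.where((s >= lo) & (s < mid))[0][0]] = mid + 1e-6

def detect_bit(samples, edges, i):
    s = np.asarray(samples, dtype=float)
    a = np.sum((s >= edges[i]) & (s < edges[i + 1]))
    b = np.sum((s >= edges[i + 1]) & (s < edges[i + 2]))
    return int(a >= b)

x = np.random.randn(5000)                 # stand-in for the low-frequency component
edges = np.linspace(-3.0, 3.0, 61)        # 60 equal-width bins
y = embed_bit_in_bin_pair(x, edges, 30, bit=1)
assert detect_bit(y, edges, 30) == 1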
Dual Level Digital Watermarking for Images
NASA Astrophysics Data System (ADS)
Singh, V. K.; Singh, A. K.
2010-11-01
More than 700 years ago, watermarks were used in Italy to indicate the paper brand and the mill that produced it. By the 18th century, watermarks began to be used as anti-counterfeiting measures on money and other documents. The term watermark was introduced near the end of the 18th century; it was probably coined because the marks resemble the effects of water on paper. The first example of a technology similar to digital watermarking is a patent filed in 1954 by Emil Hembrooke for identifying music works. In 1988, Komatsu and Tominaga appear to have been the first to use the term "digital watermarking". Consider the following hypothetical situations. You go to a shop, buy some goods and at the counter you are given a currency note you have never come across before. How do you verify that it is not counterfeit? Or say you go to a stationery shop and ask for a ream of bond paper. How do you verify that you have actually been given what you asked for? How does a philatelist verify the authenticity of a stamp? In all these cases, the watermark is used to authenticate. Watermarks have been in existence almost from the time paper has been in use. The impression created by the mesh moulds on the slurry of fibre and water remains on the paper. It serves to identify the manufacturer and thus authenticate the product without degrading the aesthetics and utility of the stock. It also makes forgery significantly tougher. Even today, important government and legal documents are watermarked. But what is watermarking when it comes to digital data? Information is no longer present on a physical material but is represented as a series of zeros and ones, and duplication is achieved easily by just reproducing that combination of zeros and ones. How then can one protect ownership rights and authenticate data? The purpose of a digital watermark is the same as that of a conventional watermark.
Reversible watermarking for authentication of DICOM images.
Zain, J M; Baldwin, L P; Clarke, M
2004-01-01
We propose a watermarking scheme that can recover the original image from the watermarked one. The purpose is to verify the integrity and authenticity of DICOM images. We used 800x600 8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the whole image is embedded in the least significant bits of the RONI (region of non-interest). If the image has not been altered, the watermark can be extracted and the original image recovered. The SHA-256 hash of the recovered image is then compared with the extracted watermark for authentication.
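A minimal sketch of this idea is given below: the image is hashed with SHA-256 after the RONI least-significant bits are cleared, the 256 digest bits are written into those LSBs, and the receiver repeats the computation to verify integrity. The rectangular RONI, the image size and the bit layout are assumptions, and no DICOM handling is shown.

import hashlib
import numpy as np

def embed_hash_in_roni(image, roni_slice):
    # Hash the whole image with SHA-256 and hide the 256 digest bits in the
    # LSBs of the region of non-interest (RONI). The RONI LSBs are cleared
    # before hashing so the check can be repeated at the receiver.
    img = np.array(image, dtype=np.uint8)
    img[roni_slice] &= 0xFE                       # clear RONI LSBs first
    digest = hashlib.sha256(img.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    roni = img[roni_slice].ravel()
    roni[:256] |= bits                            # write digest into RONI LSBs
    img[roni_slice] = roni.reshape(img[roni_slice].shape)
    return img

def verify(image, roni_slice):
    img = np.array(image, dtype=np.uint8)
    stored = np.packbits(img[roni_slice].ravel()[:256] & 1).tobytes()
    img[roni_slice] &= 0xFE
    return hashlib.sha256(img.tobytes()).digest() == stored

# toy usage: 64x64 image, top-left 16x16 block treated as the RONI
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
roni = (slice(0, 16), slice(0, 16))
wimg = embed_hash_in_roni(img, roni)
assert verify(wimg, roni)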
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the location map bit length, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map bit length, and enhances the capacity control capability.
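To illustrate the reversible core of such schemes, the sketch below performs classic prediction-error expansion with the trivial one-tap predictor x_hat[n] = x[n-1]; the paper's differential-evolution-optimized predictor, histogram shifting and overflow handling are omitted, so treat it as a toy under those assumptions.

import numpy as np

def pee_embed(x, bits):
    # Reversible embedding by prediction-error expansion: the error against the
    # previous original sample is doubled and a payload bit is added. The first
    # sample is left untouched as the anchor.
    x = np.array(x, dtype=np.int64)
    y = x.copy()
    for n in range(1, min(len(x), len(bits) + 1)):
        e = x[n] - x[n - 1]
        y[n] = x[n - 1] + 2 * e + bits[n - 1]
    return y

def pee_extract(y, n_bits):
    y = np.asarray(y, dtype=np.int64)
    x = y.copy()
    bits = []
    for n in range(1, n_bits + 1):
        e_exp = y[n] - x[n - 1]           # expanded error (x[n-1] already restored)
        bits.append(int(e_exp & 1))
        x[n] = x[n - 1] + (e_exp >> 1)    # undo the expansion
    return x, bits

audio = np.array([100, 102, 101, 105, 104, 107], dtype=np.int64)
payload = [1, 0, 1, 1, 0]
stego = pee_embed(audio, payload)
restored, recovered = pee_extract(stego, len(payload))
assert np.array_equal(restored, audio) and recovered == payload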
Wavelet based mobile video watermarking: spread spectrum vs. informed embedding
NASA Astrophysics Data System (ADS)
Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.
2005-11-01
The cell phone expansion provides an additional direction for digital video content distribution: music clips, news and sport events are more and more transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge should be taken up: very low bitrate content (e.g. as low as 64 kbit/s) now has to be protected. Within this framework, the paper approaches for the first time the mathematical models for two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Rho, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and has nothing to do with the latter. As these results can a priori determine the performances of several watermarking methods, both of spread spectrum and informed embedding types, they should be considered in the design stage.
Digimarc Discover on Google Glass
NASA Astrophysics Data System (ADS)
Rogers, Eliot; Rodriguez, Tony; Lord, John; Alattar, Adnan
2015-03-01
This paper reports on the implementation of the Digimarc® Discover platform on Google Glass, enabling the reading of a watermark embedded in a printed material or audio. The embedded watermark typically contains a unique code that identifies the containing media or object and a synchronization signal that allows the watermark to be read robustly. The Digimarc Discover smartphone application can read the watermark from a small portion of printed image presented at any orientation or reasonable distance. Likewise, Discover can read the recently introduced Digimarc Barcode to identify and manage consumer packaged goods in the retail channel. The Digimarc Barcode has several advantages over the traditional barcode and is expected to save the retail industry millions of dollars when deployed at scale. Discover can also read an audio watermark from ambient audio captured using a microphone. The Digimarc Discover platform has been widely deployed on the iPad, iPhone and many Android-based devices, but it has not yet been implemented on a head-worn wearable device, such as Google Glass. Implementing Discover on Google Glass is a challenging task due to the current hardware and software limitations of the device. This paper identifies the challenges encountered in porting Discover to the Google Glass and reports on the solutions created to deliver a prototype implementation.
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.
Dual watermarking scheme for secure buyer-seller watermarking protocol
NASA Astrophysics Data System (ADS)
Mehra, Neelesh; Shandilya, Madhu
2012-04-01
A buyer-seller watermarking protocol combines watermarking with cryptography to provide copyright and copy protection for the seller while also preserving the buyer's right to privacy. It enables a seller to successfully identify a malicious buyer from a pirated copy, while preventing the seller from framing an innocent buyer, and provides anonymity to the buyer. Up to now many buyer-seller watermarking protocols have been proposed which utilize increasingly complex cryptographic schemes to solve common problems such as the customer's rights, the unbinding problem, the buyer's anonymity problem and the buyer's participation in dispute resolution. But most of them are impractical since the buyer may not have knowledge of cryptography. Another issue is that the number of steps needed to complete these protocols is large: a buyer needs to interact with different parties many times, which is very inconvenient. To overcome these drawbacks, in this paper we propose a dual watermarking scheme in the encrypted domain. Since neither watermark is generated by the buyer, a layman buyer can use the protocol.
Localized lossless authentication watermark (LAW)
NASA Astrophysics Data System (ADS)
Celik, Mehmet U.; Sharma, Gaurav; Tekalp, A. Murat; Saber, Eli S.
2003-06-01
A novel framework is proposed for lossless authentication watermarking of images which allows authentication and recovery of original images without any distortions. This overcomes a significant limitation of traditional authentication watermarks that irreversibly alter image data in the process of watermarking and authenticate the watermarked image rather than the original. In particular, authenticity is verified before full reconstruction of the original image, whose integrity is inferred from the reversibility of the watermarking procedure. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not required. A particular instantiation of the framework is implemented using a hierarchical authentication scheme and the lossless generalized-LSB data embedding mechanism. The resulting algorithm, called localized lossless authentication watermark (LAW), can localize tampered regions of the image; has a low embedding distortion, which can be removed entirely if necessary; and supports public/private key authentication and recovery options. The effectiveness of the framework and the instantiation is demonstrated through examples.
NASA Astrophysics Data System (ADS)
Kusyk, Janusz; Eskicioglu, Ahmet M.
2005-10-01
Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared, and modified. A given watermark is embedded in three frequency bands: Low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. The collusion and rewatermarking attacks do not provide the hacker with useful tools.
On digital cinema and watermarking
NASA Astrophysics Data System (ADS)
van Leest, Arno; Haitsma, Jaap; Kalker, Ton
2003-06-01
The illegal copying of movies in the cinema is now common practice. Although the quality is fairly low, the economic impact of these illegal copies can be enormous. Philips' digital cinema watermarking scheme is designed for the upcoming digital cinema format and will assist content owners and distributors with tracing the origin of illegal copies. In this paper we consider this watermarking scheme in more detail. A characteristic of this watermarking scheme is that it only exploits the temporal axis to insert a watermark. It is therefore inherently robust to geometrical distortions, a necessity for surviving illegal copying by camcorder recording. Moreover, the scheme resists frame rate conversions resulting from a frame rate mismatch between the camcorder and the projector. The watermarking scheme has been tested in a 'real' digital cinema environment with good results.
Medical Image Tamper Detection Based on Passive Image Authentication.
Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa
2017-12-01
Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make a patient's history accessible to medical staff from anywhere. Therefore, integrity protection of medical images is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions on medical images. Structural texture information is obtained from the medical image by using rotation-invariant local binary patterns (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). Tampered regions are detected by matching the keypoints. The method improves on keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions on medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media based on spectral-domain watermarking and JPEG compression is described in the present paper. In a first step, private areas are detected; for this purpose a detection method is presented whose implementation uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
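As a generic illustration of the spread-spectrum embedding mentioned above, the sketch below adds a key-seeded pseudo-noise chip sequence per payload bit to a block of host coefficients and detects each bit by correlation; the chip length, strength, key handling and the flat coefficient vector are assumptions, and the paper's integration with JPEG quantization is not reproduced.

import numpy as np

def ss_embed(coeffs, bits, key=42, chips_per_bit=256, strength=4.0):
    # Additive spread spectrum: each payload bit modulates a pseudo-noise chip
    # sequence that is added to one block of host coefficients.
    rng = np.random.default_rng(key)
    y = np.array(coeffs, dtype=float)
    for i, bit in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chips_per_bit)
        sign = 1.0 if bit else -1.0
        y[i * chips_per_bit:(i + 1) * chips_per_bit] += strength * sign * pn
    return y

def ss_detect(coeffs, n_bits, key=42, chips_per_bit=256):
    # Regenerate the same chip sequences from the key and decode by correlation.
    rng = np.random.default_rng(key)
    bits = []
    for i in range(n_bits):
        pn = rng.choice([-1.0, 1.0], size=chips_per_bit)
        corr = np.dot(coeffs[i * chips_per_bit:(i + 1) * chips_per_bit], pn)
        bits.append(int(corr > 0))
    return bits

host = np.random.default_rng(0).normal(0, 10, 4096)   # stand-in for DCT coefficients
payload = [1, 0, 0, 1, 1, 0, 1, 0]                    # e.g. encoded face position/size
marked = ss_embed(host, payload)
assert ss_detect(marked, len(payload)) == payload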
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% additional redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
Selectively Encrypted Pull-Up Based Watermarking of Biometric data
NASA Astrophysics Data System (ADS)
Shinde, S. A.; Patel, Kushal S.
2012-10-01
Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking, the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide more security against attack. As wireless devices are equipped with a battery as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using the selective encryption mechanism, we expect greater time efficiency during both encryption and decryption. A significant reduction in error rate is expected from the bit pull-up method.
Watermarking 3D Objects for Verification
1999-01-01
signal (audio/image/video) processing and steganography fields, and even newer to the computer graphics community. Inherently, digital watermarking of... quality images, and digital video. The field of digital watermarking is relatively new, and many of its terms have not been well defined. Among the different media types, watermarking of 2D still images is comparatively better studied. Inherently, digital watermarking of 3D objects remains a
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Cao, Gang
2009-11-23
In a feature-based geometrically robust watermarking system, it is a challenging task to detect geometric-invariant regions (GIRs) which can survive a broad range of image processing operations. Instead of commonly used Harris detector or Mexican hat wavelet method, a more robust corner detector named multi-scale curvature product (MSCP) is adopted to extract salient features in this paper. Based on such features, disk-like GIRs are found, which consists of three steps. First, robust edge contours are extracted. Then, MSCP is utilized to detect the centers for GIRs. Third, the characteristic scale selection is performed to calculate the radius of each GIR. A novel sector-shaped partitioning method for the GIRs is designed, which can divide a GIR into several sector discs with the help of the most important corner (MIC). The watermark message is then embedded bit by bit in each sector by using Quantization Index Modulation (QIM). The GIRs and the divided sector discs are invariant to geometric transforms, so the watermarking method inherently has high robustness against geometric attacks. Experimental results show that the scheme has a better robustness against various image processing operations including common processing attacks, affine transforms, cropping, and random bending attack (RBA) than the previous approaches.
Embedded importance watermarking for image verification in radiology
NASA Astrophysics Data System (ADS)
Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek
2004-03-01
Digital medical images used in radiology are quite different from everyday continuous-tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be of large size and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and the incidental degradation occurring during transmission have potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images, in particular detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain the essential diagnostic high-spatial-frequency information.
From watermarking to in-band enrichment: future trends
NASA Astrophysics Data System (ADS)
Mitrea, M.; Prêteux, F.
2009-02-01
With the emergence of the Knowledge Society, enriched video is nowadays a hot research topic, from both academic and industrial perspectives. The principle consists in associating with the video stream some metadata of various types (textual, audio, video, executable codes, ...). This new content is to be further exploited in a large variety of applications, such as interactive DTV, games, e-learning, and data mining. This paper brings into evidence the potential of watermarking techniques for such an application. By inserting the enrichment data into the very video to be enriched, three main advantages are ensured. First, no additional complexity is required from the terminal and the representation-format point of view. Secondly, no backward compatibility issue is encountered, thus allowing a single system to accommodate services from several generations. Finally, the network adaptation constraints are alleviated. The discussion covers both theoretical aspects (the accurate evaluation of the watermarking capacity in several real-life scenarios) and applications developed under the framework of the R&D contracts conducted at the ARTEMIS Department.
Watermarking scheme for authentication of compressed image
NASA Astrophysics Data System (ADS)
Hsieh, Tsung-Han; Li, Chang-Tsun; Wang, Shuo
2003-11-01
As images are commonly transmitted or stored in compressed form such as JPEG, to extend the applicability of our previous work, a new scheme for embedding a watermark in the compressed domain without resorting to cryptography is proposed. In this work, a target image is first DCT transformed and quantised. Then, all the coefficients are implicitly watermarked in order to minimize the risk of attacks on the unwatermarked coefficients. The watermarking is done by registering/blending the zero-valued coefficients with a binary sequence to create the watermark and by involving the unembedded coefficients during the process of embedding the selected coefficients. The second-order neighbors and the block itself are considered in the process of watermark embedding in order to thwart different attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme to thwart local tampering, geometric transformations such as cropping, and common signal operations such as lowpass filtering.
A watermarking algorithm for polysomnography data.
Jamasebi, R; Johnson, N L; Kaffashi, F; Redline, S; Loparo, K A
2008-01-01
A blind watermarking algorithm for polysomnography (PSG) data in European Data Format (EDF) has been developed for the identification and attribution of shared data. This is accomplished by hiding a unique identifier in the phase spectrum of each PSG epoch using an undisclosed key, so that a third party cannot retrieve the watermark without knowledge of the key. A pattern discovery algorithm is developed to find the watermark pattern even though the data may have been altered. The method is evaluated using 25 PSG studies from the Sleep Heart Health Study database. The integrity of the signal data was assessed using time-series measures of both the original and watermarked signals and by evaluating the effect of watermarking on scoring sleep stages from the PSG data. The results of the analysis indicate that the proposed watermarking method for PSG data is an effective and efficient way to identify shared data without compromising its intended use.
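The sketch below illustrates phase-spectrum hiding in a single epoch: the phase of a few FFT bins is set to +pi/2 or -pi/2 according to the identifier bits while magnitudes are kept, and the bits are read back from the phase sign. The choice of bins, the binary phase alphabet and the 30-second 125 Hz epoch length are assumptions; the paper's keyed placement and pattern-discovery detector are not reproduced.

import numpy as np

def embed_epoch(epoch, bits, start_bin=20):
    # Hide bits in the phase of selected FFT bins of one epoch: phase is set to
    # +pi/2 for a 1 and -pi/2 for a 0, while magnitudes are left untouched.
    spec = np.fft.rfft(np.asarray(epoch, dtype=float))
    for i, bit in enumerate(bits):
        k = start_bin + i
        mag = np.abs(spec[k])
        spec[k] = mag * np.exp(1j * (np.pi / 2 if bit else -np.pi / 2))
    return np.fft.irfft(spec, n=len(epoch))

def extract_epoch(epoch, n_bits, start_bin=20):
    spec = np.fft.rfft(np.asarray(epoch, dtype=float))
    return [int(np.angle(spec[start_bin + i]) > 0) for i in range(n_bits)]

rng = np.random.default_rng(1)
epoch = rng.normal(0, 1, 3750)              # e.g. 30 s of a 125 Hz signal
key_bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_epoch(epoch, key_bits)
assert extract_epoch(marked, len(key_bits)) == key_bits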
An Improved EKG-Based Key Agreement Scheme for Body Area Networks
NASA Astrophysics Data System (ADS)
Ali, Aftab; Khan, Farrukh Aslam
Body area networks (BANs) play an important role in mobile health monitoring, such as monitoring the health of patients in a hospital or the physical status of soldiers in a battlefield. By securing the BAN, we actually secure the lives of soldiers or patients. This work presents an electrocardiogram (EKG) based key agreement scheme using the discrete wavelet transform (DWT) for generating a common key in a body area network. The use of EKG brings plug-and-play capability to BANs; i.e., the sensors are just placed on the human body and secure communication starts among these sensors. The process is made secure by using the iris or fingerprints to lock and then unlock the blocks exchanged between the communicating sensors. The locking and unlocking are done through watermarking: when a watermark is added at the sender side, the block is locked, and when it is removed at the receiver side, the block is unlocked. By using the iris or fingerprints, the security of the technique improves while its plug-and-play capability is not affected. The analysis is done using real 2-lead EKG data sampled at a rate of 125 Hz taken from the MIT PhysioBank database.
A joint encryption/watermarking system for verifying the reliability of medical images.
Bouslimi, Dalel; Coatrieux, Gouenou; Cozic, Michel; Roux, Christian
2012-09-01
In this paper, we propose a joint encryption/watermarking system for the purpose of protecting medical images. This system is based on an approach which combines a substitutive watermarking algorithm, quantization index modulation, with an encryption algorithm: a stream cipher algorithm (e.g., RC4) or a block cipher algorithm (e.g., AES in cipher block chaining (CBC) mode of operation). Our objective is to give access to the outcomes of the image integrity and origin checks even though the image is stored encrypted. If watermarking and encryption are conducted jointly at the protection stage, watermark extraction and decryption can be applied independently. The security analysis of our scheme and experimental results achieved on 8-bit depth ultrasound images as well as on 16-bit encoded positron emission tomography images demonstrate the capability of our system to securely make security attributes available in both the spatial and encrypted domains while minimizing image distortion. Furthermore, by making use of the AES block cipher in CBC mode, the proposed system is compliant with or transparent to the DICOM standard.
Countermeasures for unintentional and intentional video watermarking attacks
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry
2000-05-01
In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Therefore, digital audio, image and video watermarking has become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks are lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame rate, changes of display format, and geometrical distortions. More specific attacks are sequence edition and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark; this attack works well against schemes which embed random independent marks into frames. In the collusion attack, the watermark is estimated from single frames (based on image denoising) and averaged over different scenes for better accuracy; the estimated watermark is then subtracted from each frame. Collusion requires that the same mark is embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread-spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed randomly chosen marks, from a finite set of marks, into subsequences of the video which are long enough to resist averaging attacks but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On the one hand, they rarely consider distinguishability for medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking scheme (DRZW) is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and the watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and the stored ownership shares. A total of 200 medical images of 5 types are collected as testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and the reliability of authentication and copyright identification. When the false positive rate is fixed at 1.00%, the average false negative rate of DRZW is only 1.75% under 20 common attacks with different parameters.
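For intuition, the sketch below uses a simplified XOR-based (2,2) sharing in place of the classical pixel-expansion construction: the binary feature map plays the role of the master share, and the ownership share is chosen so that stacking (here, XOR-ing) the two shares reveals the watermark. The feature and watermark lengths are arbitrary assumptions.

import numpy as np

def make_shares(feature_bits, watermark_bits):
    # Simplified XOR-based (2,2) sharing: the master share is the binary
    # feature map, and the ownership share is chosen so that combining the
    # two shares reveals the watermark.
    master = np.asarray(feature_bits, dtype=np.uint8) & 1
    watermark = np.asarray(watermark_bits, dtype=np.uint8) & 1
    ownership = master ^ watermark        # stored for later verification
    return master, ownership

def recover_watermark(query_feature_bits, ownership):
    master = np.asarray(query_feature_bits, dtype=np.uint8) & 1
    return master ^ ownership

features = np.random.randint(0, 2, 64, dtype=np.uint8)    # content-based feature bits
wm = np.random.randint(0, 2, 64, dtype=np.uint8)          # copyright watermark
master, ownership = make_shares(features, wm)
assert np.array_equal(recover_watermark(features, ownership), wm)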
A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weerakkody, Sean; Ozel, Omur; Sinopoli, Bruno
We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.
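The toy simulation below conveys the idea with a scalar plant: a Gaussian watermark is superimposed on the control input, an independent Bernoulli process drops actuation packets, and a correlation detector checks whether the applied watermark is reflected in the measurements; a fake-measurement attack destroys that correlation. All system parameters, noise levels and the detector normalization are assumptions, and the paper's optimization-based design is not reproduced.

import numpy as np

rng = np.random.default_rng(7)
a, n, p_drop = 0.8, 5000, 0.2

def simulate(attacked):
    # x[k+1] = a*x[k] + gamma[k]*e[k] + w[k];  y[k] = x[k] + v[k]  (honest case)
    x, ys, applied = 0.0, [], []
    for _ in range(n):
        e = rng.normal()                      # Gaussian watermark on the input
        gamma = float(rng.random() > p_drop)  # Bernoulli drop: 1 delivered, 0 dropped
        x = a * x + gamma * e + 0.3 * rng.normal()
        y = rng.normal() if attacked else x + 0.3 * rng.normal()
        ys.append(y)
        applied.append(gamma * e)             # watermark that actually reached the plant
    return np.array(ys), np.array(applied)

def correlation_stat(ys, applied):
    # roughly (1 - p_drop) * var(e) when measurements are genuine, near 0 under attack
    return float(np.mean(ys * applied))

for attacked in (False, True):
    ys, applied = simulate(attacked)
    print("attacked" if attacked else "honest", round(correlation_stat(ys, applied), 3))
# expected: a clearly positive statistic for the honest run, near zero under attack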
Self-recovery reversible image watermarking algorithm
Sun, He; Gao, Shangbing; Jin, Shenghua
2018-01-01
The integrity of image content is essential; however, most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover the tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image texture. Finally, the recovery watermark generated from homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated from non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. Correlation attacks are detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block; if the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
A Review of Digital Watermarking and Copyright Control Technology for Cultural Relics
NASA Astrophysics Data System (ADS)
Liu, H.; Hou, M.; Hu, Y.
2018-04-01
With the rapid growth of the application and sharing of 3-D model data in the protection of cultural relics, the problems of sharing security and copyright control for 3-D models of cultural relics are becoming increasingly prominent. Digital watermarking for copyright control has accordingly become a frontier technology and an effective means for the security protection of 3-D cultural relic models, and the related technology research and applications have developed further in recent years. This paper introduces the research background and requirements of digital watermarking and copyright control technology for 3-D models of cultural relics, describes its unique characteristics, discusses the development and application of the algorithms, and outlines future development trends together with some open problems and their possible solutions.
2004-12-01
digital watermarking http://www.petitcolas.net/fabien/steganography/ email: fapp2@cl.cam.ac.uk a=double(imread('custom-a.jpg')); %load in image ... MATLAB Algorithms for Rapid Detection and Embedding of Palindrome and Emordnilap Electronic Watermarks in Simulated Chemical and Biological Image ... approach (Ref 2-4) to watermarking involves ... be used to inform the viewer of data (such as photographs ... putting the cover image in the first 4
Perceptually controlled doping for audio source separation
NASA Astrophysics Data System (ADS)
Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT
2014-12-01
The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which relies, however, on the strong hypothesis that the source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, informed source separation (ISS) embeds a watermark in the mixture whose information can help the subsequent separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a 'doping' method that makes the time-frequency representation of each source more sparse while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focused on voice sources.
Steganalysis based on JPEG compatibility
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2001-11-01
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
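The minimal sketch below illustrates the compatibility test for one 8x8 block: a block taken from a decompressed JPEG survives a re-quantization round trip unchanged, while flipping a single LSB afterwards usually breaks that property. The flat quantization matrix and the single-round-trip test are simplifying assumptions; the full method searches candidate coefficients and handles rounding and clipping tolerances.

import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    c = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)]
                  for i in range(n)])
    c *= np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

def jpeg_round_trip(block, q):
    # Level-shift, 2-D DCT, quantize with matrix q, dequantize, inverse DCT.
    d = C @ (block.astype(float) - 128.0) @ C.T
    deq = np.round(d / q) * q
    return np.clip(np.round(C.T @ deq @ C + 128.0), 0, 255)

def is_jpeg_compatible(block, q):
    return np.array_equal(jpeg_round_trip(block, q), block.astype(float))

q = np.full((8, 8), 16.0)                            # toy quantization matrix
rng = np.random.default_rng(3)
original = rng.integers(0, 256, (8, 8)).astype(float)
decompressed = jpeg_round_trip(original, q)          # a genuinely JPEG-compatible block
tampered = decompressed.copy()
tampered[0, 0] = float(int(tampered[0, 0]) ^ 1)      # flip one LSB after decompression
print(is_jpeg_compatible(decompressed, q), is_jpeg_compatible(tampered, q))
# expected: True False (compatibility usually breaks after an LSB change)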
2009-09-01
and could be used to compensate for high frequency distortions to the LOS caused by platform jitter and the effects of the optical turbulence. In... engineer an unknown detector based on few experimental interactions. For watermarking algorithms in particular, we seek to identify specific distortions... of a watermarked image that clearly identify or rule out one particular class of embedding. These experimental distortions... surgical test for rapid
An Invisible Text Watermarking Algorithm using Image Watermark
NASA Astrophysics Data System (ADS)
Jalil, Zunera; Mirza, Anwar M.
Copyright protection of digital content is very necessary in today's digital world, with efficient communication media such as the Internet. Text is the dominant part of Internet content, yet very few techniques are available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
Modeling and Frequency Tracking of Marine Mammal Whistle Calls
2009-02-01
retrieve embedded information from watermarked synthetic whistle calls. Different fundamental frequency watermarking schemes are proposed based on... unmodified frequency contour is relatively constant, there is little frequency separation between information bits, and watermark retrieval requires...
An Efficient Semi-fragile Watermarking Scheme for Tamper Localization and Recovery
NASA Astrophysics Data System (ADS)
Hou, Xiang; Yang, Hui; Min, Lianquan
2018-03-01
To address the vulnerability of remote sensing images to tampering, a semi-fragile watermarking scheme is proposed. A binary random matrix is used as the authentication watermark, which is embedded by quantizing the maximum absolute value of the directional sub-band coefficients. The average gray level of every non-overlapping 4×4 block is adopted as the recovery watermark, which is embedded in the least significant bits. Watermark detection can be done directly without resorting to the original images. Experimental results showed that our method is robust against rational distortions to a certain extent; at the same time, it is fragile to malicious manipulation and achieves accurate localization and approximate recovery of the tampered regions. Therefore, this scheme can protect the security of remote sensing images effectively.
A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.
Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing
2017-08-23
Security analysis is a very important issue for digital watermarking. Several years ago, following Kerckhoffs' principle, four well-known security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of the watermarked-only attack. However, up to now there has been little application of these security-level definitions to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the analysis. In this paper, based on the security definition, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes: the classical SS, the improved SS (ISS), the circular extension of ISS, and the nonrobust and robust natural watermarking. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
Light Weight MP3 Watermarking Method for Mobile Terminals
NASA Astrophysics Data System (ADS)
Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro
This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high speed watermark embedding/detection for MP3 audio and also proposes the manipulation method which minimizes audio quality degradation adaptively. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame information without degrading audio quality and detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
Attacks, applications, and evaluation of known watermarking algorithms with Checkmark
NASA Astrophysics Data System (ADS)
Meerwald, Peter; Pereira, Shelby
2002-04-01
The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing the new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not previously considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging, and watermarking of logos. Video frame watermarking includes low-compression attacks and distortions which warp the edges of the video, as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered and, furthermore, it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small sizes and particularly compression, scaling, aspect ratio and other small distortions. The challenge of watermarking logos is essentially that of watermarking a small and typically simple image. With respect to new attacks, we consider subsampling followed by interpolation, dithering, and thresholding, the latter two of which yield a binary image.
Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD
NASA Astrophysics Data System (ADS)
Favorskaya, M. N.; Savchina, E. I.
2017-05-01
Medical Image Watermarking (MIW) is a special field of watermarking due to the requirements of the Digital Imaging and Communications in Medicine (DICOM) standard, in place since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed various types of information, including the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark in the form of a region of interest for the doctor, into the host medical image. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information must ultimately be combined into a generalized binary stream for embedding. This generalized binary stream may have a huge volume; therefore, not all watermarking methods can be applied successfully. Recently, the digital shearlet transform was introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical Internet attacks was estimated using several metrics, including the peak signal-to-noise ratio, the structural similarity index measure, and the bit error rate.
Wavelet-based watermarking and compression for ECG signals with verification evaluation.
Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan
2014-02-21
In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) metrics to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.
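For reference, the listed evaluation metrics can be computed with a few lines of NumPy; the formulas below are the standard textbook definitions, which may differ in detail from the exact variants used in the paper.

import numpy as np

def ber(sent_bits, recovered_bits):
    """Bit error rate between the embedded and the extracted watermark bits."""
    sent = np.asarray(sent_bits)
    rec = np.asarray(recovered_bits)
    return float(np.mean(sent != rec))

def snr_db(original, processed):
    """Signal-to-noise ratio (in dB) of the watermarked or compressed ECG signal."""
    original = np.asarray(original, dtype=float)
    noise = original - np.asarray(processed, dtype=float)
    return float(10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2)))

def compression_ratio(original_size_bits, compressed_size_bits):
    """CR: size of the original bitstream divided by the size of the compressed one."""
    return original_size_bits / compressed_size_bits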
Reversible Watermarking Surviving JPEG Compression.
Zain, J; Clarke, M
2005-01-01
This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
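A minimal sketch of the embedding step follows, assuming an 8-bit grayscale image: the low bits of the chosen RONI block are cleared, the whole image is hashed with SHA-256, and the 256 digest bits are written into the low bits of that block (here 4 bits per pixel of an 8x8 block, an illustrative choice; the block location and bit depth are our own assumptions).

import hashlib
import numpy as np

def embed_image_hash(img, roni=(slice(0, 8), slice(0, 8))):
    """Embed the SHA-256 digest of the image into the low bits of an 8x8 RONI block."""
    marked = img.copy()
    block = marked[roni]
    block &= 0xF0                           # clear the 4 LSBs so hashing is repeatable
    digest = hashlib.sha256(marked.tobytes()).digest()            # 32 bytes = 256 bits
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    nibbles = bits.reshape(64, 4) @ np.array([8, 4, 2, 1], dtype=np.uint8)
    marked[roni] = block | nibbles.reshape(8, 8)
    return marked

# verification would clear the same 4 LSBs, recompute the hash and compare it
# with the bits read back from the RONI block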
A novel speech watermarking algorithm by line spectrum pair modification
NASA Astrophysics Data System (ADS)
Zhang, Qian; Yang, Senbin; Chen, Guang; Zhou, Jun
2011-10-01
To explore digital watermarking specifically suited to the speech domain, this paper first experimentally investigates the properties of line spectrum pair (LSP) parameters. The results show that the differences between contiguous LSPs are robust against common signal processing operations and that small modifications of LSPs are imperceptible to the human auditory system (HAS). Based on these conclusions, three contiguous LSPs of a speech frame are selected to embed one watermark bit. The middle LSP is slightly altered to modify the differences between these LSPs when embedding the watermark; correspondingly, the watermark is extracted by comparing these differences. The proposed algorithm's transparency is adjustable to meet the needs of different applications. The algorithm has good robustness against additive noise, quantization, amplitude scaling and MP3 compression attacks, with a bit error rate (BER) of less than 5%. In addition, the algorithm offers a relatively low capacity of approximately 50 bps.
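One plausible reading of the embedding rule is sketched below: the middle LSP of a triplet is nudged so that the relation between the two neighbouring differences encodes the bit, and extraction simply compares the differences. The margin value and the exact bit mapping are illustrative assumptions, not the paper's tuned parameters.

import numpy as np

def embed_bit_in_lsps(lsp_triplet, bit, margin=0.005):
    """Encode one watermark bit by moving the middle LSP around the midpoint of its
    neighbours so that one of the two differences becomes larger than the other."""
    l1, l2, l3 = lsp_triplet
    mid = (l1 + l3) / 2.0
    l2_new = mid + margin / 2.0 if bit else mid - margin / 2.0
    return np.array([l1, l2_new, l3])

def extract_bit_from_lsps(lsp_triplet):
    """Bit 1 if the first difference dominates, bit 0 otherwise."""
    l1, l2, l3 = lsp_triplet
    return 1 if (l2 - l1) > (l3 - l2) else 0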
DWT-Based High Capacity Audio Watermarking
NASA Astrophysics Data System (ADS)
Fallahpour, Mehdi; Megías, David
This letter suggests a novel high-capacity robust audio watermarking algorithm using the high-frequency band of the wavelet decomposition, for which the human auditory system (HAS) is not very sensitive to alteration. The main idea is to divide the high-frequency band into frames and then, for embedding, to change the wavelet samples based on the average of the relevant frame. The experimental results show that the method has a very high capacity (about 5.5 kbps), without significant perceptual distortion (ODG in [-1, 0] and SNR about 33 dB), and provides robustness against common audio signal processing such as added noise, filtering, echo and MPEG compression (MP3).
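A hedged sketch of the frame-based embedding idea is given below, using PyWavelets for a one-level DWT and a simple quantization of each frame's mean as a stand-in for the paper's frame-average rule; the wavelet, frame length and quantization step are our own illustrative choices.

import numpy as np
import pywt

def embed_bits_high_band(audio, bits, frame_len=64, step=0.01, wavelet='db4'):
    """Embed one bit per frame of the detail (high-frequency) band by shifting the
    frame so that its mean lands on the quantizer lattice of the chosen bit."""
    approx, detail = pywt.dwt(np.asarray(audio, dtype=float), wavelet)
    detail = detail.copy()
    for i, bit in enumerate(bits):
        frame = detail[i * frame_len:(i + 1) * frame_len]
        if frame.size < frame_len:
            break                            # not enough coefficients left
        m = frame.mean()
        target = (np.floor(m / step) + (0.75 if bit else 0.25)) * step
        frame += target - m                  # in-place shift of the view
    return pywt.idwt(approx, detail, wavelet)

# extraction (not shown) quantizes each frame mean again and checks whether its
# fractional position is closer to 0.25 (bit 0) or 0.75 (bit 1)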
A Robust Blind Quantum Copyright Protection Method for Colored Images Based on Owner's Signature
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Gheibi, Reza; Houshmand, Monireh; Nagata, Koji
2017-08-01
Watermarking is the imperceptible embedding of watermark bits into multimedia data for use in different applications. Among all of its applications, copyright protection is the most prominent: it conceals information about the owner in the carrier so as to prohibit others from asserting copyright. This application requires a high level of robustness. In this paper, a new blind quantum copyright protection method based on the owner's signature in RGB images is proposed. The method utilizes one of the RGB channels as an indicator, and the two remaining channels are used for embedding information about the owner. In our contribution, the owner's signature is considered as text; therefore, in order to embed it in a colored image as a watermark, a new quantum representation of text based on the ASCII character set is offered. Experimental results, analyzed in the MATLAB environment, show that the presented scheme performs well against attacks and can be used to find out who the real owner is. Finally, the discussed quantum copyright protection method is compared with a related work, and our analysis confirms that the presented scheme is more secure and applicable than previous ones found in the literature.
Compact storage of medical images with patient information.
Acharya, R; Anand, D; Bhat, S; Niranjan, U C
2001-12-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code modulation and adaptive delta modulation techniques are employed for data compression and encryption, and results are tabulated for a specific example.
Robust High-Capacity Audio Watermarking Based on FFT Amplitude Modification
NASA Astrophysics Data System (ADS)
Fallahpour, Mehdi; Megías, David
This paper proposes a novel robust audio watermarking algorithm to embed data and extract it in a bit-exact manner by changing the magnitudes of the FFT spectrum. The key point is selecting a frequency band for embedding based on a comparison between the original and the MP3 compressed/decompressed signal, and on a suitable scaling factor. The experimental results show that the method has a very high capacity (about 5 kbps), without significant perceptual distortion (ODG about -0.25), and provides robustness against common audio signal processing such as added noise, filtering and MPEG compression (MP3). Furthermore, the proposed method has a larger capacity (ratio of embedded bits to host bits) than recent image data hiding methods.
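The magnitude-domain embedding step can be sketched as follows: compute the FFT of a frame, pick bins inside a chosen band, quantize their magnitudes to encode bits while preserving the phase, and invert the transform. The band limits, quantization step and one-bit-per-bin mapping are illustrative assumptions rather than the paper's parameters.

import numpy as np

def embed_bits_fft_band(frame, bits, band=(2000.0, 3000.0), fs=44100, delta=0.02):
    """Embed bits by quantizing FFT magnitudes of selected bins (QIM-style)."""
    frame = np.asarray(frame, dtype=float)
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    idx = np.where((freqs >= band[0]) & (freqs <= band[1]))[0][:len(bits)]
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    for k, bit in zip(idx, bits):
        q = np.floor(mag[k] / delta)
        mag[k] = (q + (0.75 if bit else 0.25)) * delta   # move magnitude onto the bit's lattice
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))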
PrOtocols for WatERmarking (POWER)
2011-03-01
Figure 22: Exemplary watermark structure, including the structure of the embedded ToC. The segmented watermarking approach for...somewhere within the cover. The addresses addressSLji belonging to the different access levels Li are stored in a table of contents (ToC) entry, along...then embedded into the ToC segment Stoc. Several assumptions have to be made before describing the actual functions of the presented system
Digital Watermarking: From Concepts to Real-Time Video Applications
1999-01-01
includes still image, video, audio, and geometry data among others - the fundamental concept of steganography can be transferred from the field of...size of the message, which should be as small as possible. Some commercially available algorithms for image watermarking forego the secure-watermarking...image compression. The image's luminance component is divided into 8 x 8 pixel blocks. The algorithm selects a sequence of blocks and applies the
Watermarking requirements for Boeing digital cinema
NASA Astrophysics Data System (ADS)
Lixvar, John P.
2003-06-01
The enormous economic incentives for safeguarding intellectual property in the digital domain have made forensic watermarking a research topic of considerable interest. However, a recent examination of some of the leading product development efforts reveals that at present there is no effective watermarking implementation that addresses both the fidelity and security requirements of high definition digital cinema. If Boeing Digital Cinema (BDC, a business unit of Boeing Integrated Defense Systems) is to succeed in using watermarking as a deterrent to the unauthorized capture and distribution of high value cinematic material, the technology must be robust, transparent, asymmetric in its insertion/detection costs, and compatible with all the other elements of Boeing's multi-layered security system, including its compression, encryption, and key management services.
Digital watermarking for secure and adaptive teleconferencing
NASA Astrophysics Data System (ADS)
Vorbrueggen, Jan C.; Thorwirth, Niels
2002-04-01
The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means allowing the network's customers to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among actions they can take is controlling execution of other proxylets or redirection of network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. A way to control a teleconference's data streams is to use digital watermarking of the video, audio and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on security classifications (possibly time-varying) at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.
DCT-based cyber defense techniques
NASA Astrophysics Data System (ADS)
Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer
2015-09-01
With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
2004-11-16
MATLAB Algorithms for Rapid Detection and Embedding of Palindrome and Emordnilap Electronic Watermarks in Simulated Chemical and Biological Image Data. Presented at the ...Conference on Chemical and Biological Defense Research, held in Hunt Valley, Maryland on 15-17 November 2004. The original document contains color images.
Locally optimum nonlinearities for DCT watermark detection.
Briassouli, Alexia; Strintzis, Michael G
2004-12-01
The issue of copyright protection of digital multimedia data has attracted a lot of attention during the last decade. An efficient copyright protection method that has been gaining popularity is watermarking, i.e., the embedding of a signature in a digital document that can be detected only by its rightful owner. Watermarks are usually blindly detected using correlating structures, which would be optimal in the case of Gaussian data. However, in the case of DCT-domain image watermarking, the data is more heavy-tailed and the correlator is clearly suboptimal. Nonlinear receivers have been shown to be particularly well suited to the detection of weak signals in heavy-tailed noise, as they are locally optimal. This motivates the use of the Gaussian-tailed zero-memory nonlinearity, as well as the locally optimal Cauchy nonlinearity, for the detection of watermarks in DCT-transformed images. We analyze the performance of these schemes theoretically and compare it to that of the traditionally used Gaussian correlator, but also to the recently proposed generalized Gaussian detector, which outperforms the correlator. The theoretical analysis and the actual performance of these systems are assessed through experiments, which verify the theoretical analysis and also justify the use of nonlinear structures for watermark detection. The performance of the correlator and the nonlinear detectors in the presence of quantization is also analyzed, using results from dither theory, and verified experimentally.
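For a Cauchy host model with scale parameter gamma, the locally optimal nonlinearity is g(x) = 2x / (gamma^2 + x^2), and the detector correlates g of the DCT coefficients with the watermark pattern. The sketch below shows only this statistic; threshold selection and the estimation of gamma are outside its scope.

import numpy as np

def cauchy_lo_statistic(dct_coeffs, watermark, gamma):
    """Locally optimal detection statistic sum_i g(x_i) * w_i for a weak additive
    watermark in Cauchy-distributed DCT coefficients."""
    x = np.asarray(dct_coeffs, dtype=float)
    w = np.asarray(watermark, dtype=float)
    g = 2.0 * x / (gamma ** 2 + x ** 2)      # LO nonlinearity for the Cauchy density
    return float(np.sum(g * w))

# the statistic is then compared against a threshold chosen for a target
# false-alarm probability (e.g., from its distribution when no watermark is present)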
Data Hiding and the Statistics of Images
NASA Astrophysics Data System (ADS)
Cox, Ingemar J.
The fields of digital watermarking, steganography and steganalysis, and content forensics are closely related. In all cases, there is a class of images that is considered “natural”, i.e. images that do not contain watermarks, images that do not contain covert messages, or images that have not been tampered with. And, conversely, there is a class of images that is considered to be “unnatural”, i.e. images that contain watermarks, images that contain covert messages, or images that have been tampered with.
A Lightweight Data Integrity Scheme for Sensor Networks
Kamel, Ibrahim; Juma, Hussam
2011-01-01
Limited energy is the most critical constraint that limits the capabilities of wireless sensor networks (WSNs). Most sensors operate on batteries with limited power. Battery recharging or replacement may be impossible. Security mechanisms that are based on public key cryptographic algorithms such as RSA and digital signatures are prohibitively expensive in terms of energy consumption and storage requirements, and thus unsuitable for WSN applications. This paper proposes a new fragile watermarking technique to detect unauthorized alterations in WSN data streams. We propose the FWC-D scheme, which uses group delimiters to keep the sender and receivers synchronized and to help them avoid ambiguity in the event of data insertion or deletion. The watermark, which is computed using a hash function, is stored in the previous group in a linked-list fashion to ensure data freshness and mitigate replay attacks. FWC-D generates a serial number (SN) that is attached to each group to help the receiver determine how many group insertions or deletions have occurred. A detailed security analysis comparing the proposed FWC-D scheme with SGW, one of the latest integrity schemes for WSNs, shows that FWC-D is more robust than SGW. Simulation results further show that the proposed scheme is much faster than SGW. PMID:22163840
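The chaining idea can be sketched in a few lines: each transmitted group carries a serial number and a keyed hash of the following group, so the hash of group i effectively lives in the previous group and any insertion or deletion breaks the chain. The packet layout, truncation length and use of SHA-256 are illustrative assumptions rather than the FWC-D specification.

import hashlib
import struct

def build_watermarked_groups(groups, key=b'shared-secret'):
    """Attach a serial number and the keyed hash of the next group to each group."""
    packets = []
    for sn, group in enumerate(groups):
        payload = struct.pack('>I', sn) + b','.join(str(v).encode() for v in group)
        nxt = groups[sn + 1] if sn + 1 < len(groups) else []
        nxt_bytes = b','.join(str(v).encode() for v in nxt)
        mark = hashlib.sha256(key + nxt_bytes).digest()[:8]   # truncated watermark
        packets.append(payload + b'|' + mark)                 # '|' acts as the delimiter
    return packets

# the receiver recomputes each hash from the received next group and compares it
# with the stored watermark; a mismatch reveals tampering, insertion or deletion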
Automation and workflow considerations for embedding Digimarc Barcodes at scale
NASA Astrophysics Data System (ADS)
Rodriguez, Tony; Haaga, Don; Calhoon, Sean
2015-03-01
The Digimarc® Barcode is a digital watermark applied to packages and variable-data labels that carries the GS1 standard GTIN-14 data traditionally carried by a 1-D barcode. The Digimarc Barcode can be read with smartphones and imaging-based barcode readers commonly used in grocery and retail environments. Using smartphones, consumers can engage with products, and retailers can materially increase the speed of check-out, increasing store margins and providing a better experience for shoppers. Internal testing has shown an average 53% increase in scanning throughput, enabling hundreds of millions of dollars in cost savings [1] for retailers when deployed at scale. To get to scale, the process of embedding a digital watermark must be automated and integrated within existing workflows. Creating the tools and processes to do so represents a new challenge for the watermarking community. This paper presents a description and an analysis of the workflow implemented by Digimarc to deploy the Digimarc Barcode at scale. An overview of the tools created and lessons learned during the introduction of the technology to the market is provided.
NASA Astrophysics Data System (ADS)
Yu, Eugene; Craver, Scott
2006-02-01
Wow, or time warping caused by speed fluctuations in analog audio equipment, provides a wealth of applications in watermarking. Very subtle temporal distortion has been used to defeat watermarks, and as components in watermarking systems. In the image domain, the analogous warping of an image's canvas has been used both to defeat watermarks and also proposed to prevent collusion attacks on fingerprinting systems. In this paper, we explore how subliminal levels of wow can be used for steganography and fingerprinting. We present both a low-bitrate robust solution and a higher-bitrate solution intended for steganographic communication. As already observed, such a fingerprinting algorithm naturally discourages collusion by averaging, owing to flanging effects when misaligned audio is averaged. Another advantage of warping is that even when imperceptible, it can be beyond the reach of compression algorithms. We use this opportunity to debunk the common misconception that steganography is impossible under "perfect compression."
NASA Astrophysics Data System (ADS)
Hu, Hwai-Tsu; Chou, Hsien-Hsin; Yu, Chu; Hsu, Ling-Yuan
2014-12-01
This paper presents a novel approach for blind audio watermarking. The proposed scheme utilizes the flexibility of discrete wavelet packet transformation (DWPT) to approximate the critical bands and adaptively determines suitable embedding strengths for carrying out quantization index modulation (QIM). The singular value decomposition (SVD) is employed to analyze the matrix formed by the DWPT coefficients and embed watermark bits by manipulating singular values subject to perceptual criteria. To achieve even better performance, two auxiliary enhancement measures are attached to the developed scheme. Performance evaluation and comparison are demonstrated with the presence of common digital signal processing attacks. Experimental results confirm that the combination of the DWPT, SVD, and adaptive QIM achieves imperceptible data hiding with satisfying robustness and payload capacity. Moreover, the inclusion of self-synchronization capability allows the developed watermarking system to withstand time-shifting and cropping attacks.
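The QIM step on the singular values can be sketched as follows: quantize the largest singular value of a block of wavelet-packet coefficients onto one of two interleaved lattices according to the bit, then rebuild the block. The quantization step delta stands in for the perceptually adapted embedding strength described in the abstract, and operating only on the largest singular value is an illustrative simplification.

import numpy as np

def qim_embed_singular_value(coeff_block, bit, delta):
    """Embed one bit by moving the largest singular value onto the bit's quantizer lattice."""
    U, S, Vt = np.linalg.svd(np.asarray(coeff_block, dtype=float), full_matrices=False)
    q = np.floor(S[0] / delta)
    S[0] = (q + (0.75 if bit else 0.25)) * delta
    return U @ np.diag(S) @ Vt

def qim_extract_singular_value(coeff_block, delta):
    """Recover the bit from the fractional position of the largest singular value."""
    S = np.linalg.svd(np.asarray(coeff_block, dtype=float), compute_uv=False)
    return 1 if (S[0] / delta) % 1.0 >= 0.5 else 0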
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust combined Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) watermarking scheme. The RGB image is used as the cover medium, and the watermark image is converted to grayscale. Both are then transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2 and HL2. The watermark image is embedded into the cover medium in the LL2 sub-band. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson, and salt-and-pepper noise. Under these attacks, the average Normalized Correlation (NC) values are 0.574863, 0.889784 and 0.889782, respectively. The watermark image can still be detected and extracted.
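The Normalized Correlation used to report robustness can be computed as below; this is the common definition (inner product normalized by the energies of both watermarks), which may differ slightly from the variant used in the paper.

import numpy as np

def normalized_correlation(original_wm, extracted_wm):
    """NC = sum(w * w') / sqrt(sum(w^2) * sum(w'^2)); 1.0 means a perfect match."""
    w = np.asarray(original_wm, dtype=float).ravel()
    we = np.asarray(extracted_wm, dtype=float).ravel()
    return float(np.sum(w * we) / np.sqrt(np.sum(w ** 2) * np.sum(we ** 2)))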
Physical Watermarking for Securing Cyber-Physical Systems via Packet Drop Injections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozel, Omur; Weekrakkody, Sean; Sinopoli, Bruno
Physical watermarking is a well-known solution for detecting integrity attacks on Cyber-Physical Systems (CPSs) such as the smart grid. Here, a random control input is injected into the system in order to authenticate physical dynamics and sensors which may have been corrupted by adversaries. Packet drops may naturally occur in a CPS due to network imperfections. To our knowledge, previous work has not considered the role of packet drops in detecting integrity attacks. In this paper, we investigate the merit of injecting Bernoulli packet drops into the control inputs sent to actuators as a new physical watermarking scheme. With the classical linear quadratic objective function and an independent and identically distributed packet drop injection sequence, we study the effect of packet drops on meeting security and control objectives. Our results indicate that the packet drops could act as a potential physical watermark for attack detection in CPSs.
Using digital watermarking to enhance security in wireless medical image transmission.
Giakoumaki, Aggeliki; Perakis, Konstantinos; Banitsas, Konstantinos; Giokas, Konstantinos; Tachakra, Sapal; Koutsouris, Dimitris
2010-04-01
During the last few years, wireless networks have been increasingly used both inside hospitals and in patients' homes to transmit medical information. In general, wireless networks suffer from decreased security. However, digital watermarking can be used to secure medical information. In this study, we focused on combining wireless transmission and digital watermarking technologies to better secure the transmission of medical images within and outside the hospital. We utilized an integrated system comprising the wireless network and the digital watermarking module to conduct a series of tests. The test results were evaluated by medical consultants. They concluded that the images suffered no visible quality degradation and maintained their diagnostic integrity. The proposed integrated system presented reasonable stability, and its performance was comparable to that of a fixed network. This system can enhance security during the transmission of medical images through a wireless channel.
Digital watermarking in telemedicine applications--towards enhanced data security and accessibility.
Giakoumaki, Aggeliki L; Perakis, Konstantinos; Tagaris, Anastassios; Koutsouris, Dimitris
2006-01-01
Implementing telemedical solutions has become a trend amongst the various research teams at an international level. Yet, contemporary information access and distribution technologies raise critical issues that urgently need to be addressed, especially those related to security. The paper suggests the use of watermarking in telemedical applications in order to enhance security of the transmitted sensitive medical data, familiarizes the users with a telemedical system and a watermarking module that have already been developed, and proposes an architecture that will enable the integration of the two systems, taking into account a variety of use cases and application scenarios.
Plenoptic image watermarking to preserve copyright
NASA Astrophysics Data System (ADS)
Ansari, A.; Dorado, A.; Saavedra, G.; Martinez Corral, M.
2017-05-01
A common camera loses a huge amount of the information obtainable from a scene, as it does not record the values of the individual rays passing through a point; it merely keeps the sum of the intensities of all the rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help to protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is reached between imperceptibility and robustness.
The Hunger Stones: a new source for more objective identification of historical droughts
NASA Astrophysics Data System (ADS)
Elleder, Libor
2016-04-01
Extreme droughts, recorded more frequently in recent years in different parts of the world, represent a most serious environmental problem. Our contribution identifies periods of hydrological drought. The extreme drought period in summer 2015 enabled the levelling of historical watermarks on the „Hunger Stone" (Hungerstein) in the Elbe in the Czech town of Děčín. The comparison of the obtained levels of earlier palaeographic records with systematic measurements in the Děčín profile confirmed the hypothesis that the old watermarks represent minimal water levels. Moreover, we present a review of the Hunger Stones in the Elbe River known so far, together with their low-level watermarks. To identify the duration of the drought periods we used the oldest water level records from the Czech Hydrometeorological Institute (CHMI) database archive: the Magdeburg (since 1727), Dresden (since 1801), Prague (since 1825) and Děčín (since 1851) time series. We obtained more objective and comprehensive information on all historical droughts between 1727 and 2015. The low watermarks on the Hunger Stones make it possible to augment the systematic records and extend our knowledge back to 1616. The Hunger Stones in the Elbe River with their old watermarks are a unique testimony for the study of hydrological extremes and, last but not least, of anthropogenic changes in the riverbed of the Elbe.
A novel lost packets recovery scheme based on visual secret sharing
NASA Astrophysics Data System (ADS)
Lu, Kun; Shan, Hong; Li, Zhi; Niu, Zhao
2017-08-01
In this paper, a novel lost-packet recovery scheme is proposed which encrypts the effective parts of an original packet into two shadow packets based on (2, 2)-threshold XOR-based Visual Secret Sharing (VSS). The two shadow packets, used as watermarks, are embedded into two normal data packets with digital watermark embedding technology and then sent from one sensor node to another. Each shadow packet reveals no information about the original packet, which greatly improves the security of original packet delivery. The two shadow packets extracted from the two received data packets can recover the original packet losslessly based on XOR-based VSS. The performance analysis shows that the proposed scheme provides essential services for as long as possible in the presence of selective forwarding attacks. The proposed scheme does not increase the amount of traffic, and hence keeps energy consumption low, which makes it suitable for Wireless Sensor Networks (WSNs).
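The (2, 2)-threshold XOR-based sharing at the heart of the scheme is simple to sketch: one share is random, the other is the payload XORed with it, and XORing the two shares restores the payload exactly. The helper names below are our own.

import os

def xor_vss_share(payload: bytes):
    """Split a packet's effective part into two shadow shares; neither share alone
    leaks anything about the payload."""
    share1 = os.urandom(len(payload))
    share2 = bytes(a ^ b for a, b in zip(payload, share1))
    return share1, share2

def xor_vss_recover(share1: bytes, share2: bytes) -> bytes:
    """Lossless recovery: XOR of the two shadow shares."""
    return bytes(a ^ b for a, b in zip(share1, share2))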
Content-independent embedding scheme for multi-modal medical image watermarking.
Nyeem, Hussain; Boles, Wageeh; Boyd, Colin
2015-02-04
As the increasing adoption of information technology continues to offer better distant medical services, the distribution of, and remote access to, digital medical images over public networks continues to grow significantly. Such use of medical images raises serious concerns for their continuous security protection, which digital watermarking has shown great potential to address. We present a content-independent embedding scheme for medical image watermarking. We observe that the perceptual content of medical images varies widely with their modalities. Recent medical image watermarking schemes are image-content dependent, and thus they may suffer from inconsistent embedding capacity and visual artefacts. To attain the image-content-independent embedding property, we generalise the RONI (region of non-interest to the medical professionals) selection process and use it for embedding by utilising the RONI's least significant bit-planes. The proposed scheme thus avoids the need for RONI segmentation, which incurs capacity and computational overheads. Our experimental results demonstrate that the proposed embedding scheme performs consistently over a dataset of 370 medical images covering 7 different modalities. Experimental results also show how state-of-the-art reversible schemes can perform inconsistently for different modalities of medical images. Our scheme achieves an MSSIM (Mean Structural SIMilarity) larger than 0.999 with a deterministically adaptable embedding capacity. Our proposed image-content-independent embedding scheme is consistent across modalities and maintains good image quality in the RONI while keeping all other pixels in the image untouched. Thus, with an appropriate watermarking framework (i.e., with consideration of the watermark generation, embedding and detection functions), our proposed scheme can be viable for multi-modality medical image applications and distant medical services such as teleradiology and eHealth.
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. Thus, the robust performance of 3D-MVV transmission schemes over wireless channels has recently become a considerable research issue, due to the restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolutional coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD-watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios, and watermark extraction is possible in the proposed framework.
Reversible watermarking for knowledge digest embedding and reliability control in medical images.
Coatrieux, Gouenou; Le Guillou, Clara; Cauvin, Jean-Michel; Roux, Christian
2009-03-01
To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The interest of these new image functionalities is illustrated in the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.
Information hiding techniques for infrared images: exploring the state-of-the art and challenges
NASA Astrophysics Data System (ADS)
Pomponiu, Victor; Cavagnino, Davide; Botta, Marco; Nejati, Hossein
2015-10-01
The proliferation of infrared technology and imaging systems enables a different perspective for tackling many computer vision problems in defense and security applications. Infrared images are widely used by law enforcement, Homeland Security and military organizations to achieve a significant advantage in situational awareness, and it is thus vital to protect these data against malicious attacks. Concurrently, sophisticated malware is being developed that is able to disrupt the security and integrity of these digital media. For instance, illegal distribution and manipulation are possible malicious attacks on digital objects. In this paper we explore the use of a new layer of defense for the integrity of infrared images through information hiding techniques such as watermarking. In this context, we analyze the efficiency of several optimal decoding schemes for a watermark inserted into the Singular Value Decomposition (SVD) domain of IR images using an additive spread spectrum (SS) embedding framework. In order to use the singular values (SVs) of the IR images with SS embedding, we adopt several restrictions that ensure that the SVs maintain their statistics. For both the optimal maximum likelihood decoder and sub-optimal decoders we assume that the PDF of the SVs can be modeled by the Weibull distribution. Furthermore, we investigate the challenges involved in protecting and assuring the integrity of IR images, such as data complexity and the error probability behavior, i.e., the probability of detection and the probability of false detection, for the applied optimal decoders. By taking into account the efficiency and the auxiliary information necessary for decoding the watermark, we discuss the suitable decoder for various operating situations. Experimental results are carried out on a large dataset of IR images to show the imperceptibility and efficiency of the proposed scheme against various attack scenarios.
NASA Astrophysics Data System (ADS)
Leni, Pierre-Emmanuel; Fougerolle, Yohan D.; Truchetet, Frédéric
2014-05-01
We propose a progressive transmission approach of an image authenticated using an overlapping subimage that can be removed to restore the original image. Our approach is different from most visible watermarking approaches that allow one to later remove the watermark, because the mark is not directly introduced in the two-dimensional image space. Instead, it is rather applied to an equivalent monovariate representation of the image. Precisely, the approach is based on our progressive transmission approach that relies on a modified Kolmogorov spline network, and therefore inherits its advantages: resilience to packet losses during transmission and support of heterogeneous display environments. The marked image can be accessed at any intermediate resolution, and a key is needed to remove the mark to fully recover the original image without loss. Moreover, the key can be different for every resolution, and the images can be globally restored in case of packet losses during the transmission. Our contributions lie in the proposition of decomposing a mark (an overlapping image) and an image into monovariate functions following the Kolmogorov superposition theorem; and in the combination of these monovariate functions to provide a removable visible "watermarking" of images with the ability to restore the original image using a key.
Capacity is the Wrong Paradigm
2002-01-01
short, steganography values detection over robustness, whereas watermarking values robustness over detection.) Hiding techniques for JPEG images ...world length of the code. D: If the algorithm is known, this method is trivially detectable if we are sending images (with no encryption). If we are...implications of the work of Chaitin and Kolmogorov on algorithmic complexity [5]. We have also concentrated on screen images in this paper and have not
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques that enable embedded data to be extracted by cell phones. For that, we have to embed data in frequency ranges where the auditory response is prominent; therefore, data embedding causes considerable audible noise. We previously proposed applying a two-channel stereo playback feature, in which the noise generated by a data-embedded left-channel signal is reduced by the right-channel signal. However, this proposal has the practical problem of restricting the location of the extracting terminal. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, cancelling the noise completely by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes a separate noise-reducing right-channel signal unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that causes dual auditory stream segregation phenomena, which enables data embedding over the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, the extraction precision becomes higher than with the previously proposed method, whereas the quality degradation of the embedded signals becomes smaller. In this paper we present an overview of our newly proposed method and experimental results compared with those of the previously proposed method.
Lossless Data Embedding—New Paradigm in Digital Watermarking
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-12-01
One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of the DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the results of first-digit probability and divergence among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the observed differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
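The first-digit analysis is straightforward to reproduce: collect the leading digits of the non-zero DWT coefficient magnitudes, compare their empirical probabilities with Benford's Law, and report a divergence. The sum-of-squared-differences divergence below is an illustrative indicator, not necessarily the exact divergence factor defined in the paper.

import numpy as np

def first_digit_distribution(coeffs):
    """Empirical probabilities of the leading digit (1-9) of non-zero coefficients."""
    c = np.abs(np.asarray(coeffs, dtype=float))
    c = c[c > 0]
    leading = (c / 10 ** np.floor(np.log10(c))).astype(int)   # value scaled into [1, 10)
    return np.array([(leading == d).mean() for d in range(1, 10)])

def benford_divergence(coeffs):
    """Deviation of the observed first-digit distribution from Benford's Law."""
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))          # P(d) = log10(1 + 1/d)
    return float(np.sum((first_digit_distribution(coeffs) - benford) ** 2))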
Quantum red-green-blue image steganography
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh
One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field provides an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms are investigated based on the Least Significant Bit (LSB). The first algorithm employs only one of the image's channels to cover secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two channels of the cover color image for hiding secret quantum data. The performance of the proposed schemes is analyzed using software simulations in the MATLAB environment. The analysis of the PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performance, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
Bouslimi, D; Coatrieux, G; Roux, Ch
2011-01-01
In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both the encrypted and the spatial domain. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity check and to its origins even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with, or transparent to, the DICOM standard.
A robust watermarking scheme using lifting wavelet transform and singular value decomposition
NASA Astrophysics Data System (ADS)
Bhardwaj, Anuj; Verma, Deval; Verma, Vivek Singh
2017-01-01
The present paper proposes a robust image watermarking scheme using the lifting wavelet transform (LWT) and singular value decomposition (SVD). A second-level LWT is applied to the host/cover image to decompose it into different sub-bands. SVD is used to obtain the singular values of the watermark image, and these singular values are then used to update the singular values of the LH2 sub-band. The algorithm is tested on a number of benchmark images, and it is found that the present algorithm is robust against different geometric and image processing operations. A comparison of the proposed scheme with other existing schemes shows that the present scheme is better not only in terms of robustness but also in terms of imperceptibility.
Influence of the watermark in immersion lithography process
NASA Astrophysics Data System (ADS)
Kawamura, Daisuke; Takeishi, Tomoyuki; Sho, Koutarou; Matsunaga, Kentarou; Shibata, Naofumi; Ozawa, Kaoru; Shimura, Satoru; Kyoda, Hideharu; Kawasaki, Tetsu; Ishida, Seiki; Toshima, Takayuki; Oonishi, Yasunobu; Ito, Shinichi
2005-05-01
In liquid immersion lithography, the use of cover material (C/M) films has been discussed as a way to reduce the elution of resist components into the fluid. With fluctuations of the exposure tool or the resist process, a waterdrop can remain on the wafer and a watermark (W/M) will be formed. We discuss the influence of the W/M on resist patterns, the formation process of the W/M, and the reduction of pattern defects due to the W/M. Resist patterns within and around intentionally made W/Ms were observed in three cases: without C/M, with TOK TSP-3A, and with an alkali-soluble C/M. In all C/M cases, the pattern defects were T-topped shapes. The reduction of pattern defects due to waterdrops was then examined. It was found that a remaining waterdrop caused defects. It is therefore necessary to remove waterdrops before drying and/or to remove the defects caused by them, but a new drying technique and/or unit will be needed to avoid W/M formation entirely. The waterdrop was observed through the drying step, and the experiment was reproduced by simulation, in order to understand the formation mechanism of the W/M. If the maximum drying time of a waterdrop on the immersion exposure tool is estimated at 90 seconds, a waterdrop whose volume and diameter are less than 0.02 uL and 350 um will dry and produce a pattern defect. This threshold becomes larger as the wafer speed increases. From the results and considerations in this work, it is considered that it will be difficult to develop a C/M as a single film that produces no pattern defects due to remaining waterdrops.
Lakshmi, C; Thenmozhi, K; Rayappan, John Bosco Balaguru; Amirtharajan, Rengarajan
2018-06-01
Digital Imaging and Communications in Medicine (DICOM) is one of the significant formats used worldwide for the representation of medical images. Undoubtedly, medical-image security plays a crucial role in telemedicine applications. Merging encryption and watermarking in medical-image protection paves the way for enhanced authentication and safer transmission over open channels. In this context, the present work on DICOM image encryption employs a fuzzy chaotic map for encryption and the Discrete Wavelet Transform (DWT) for watermarking. The proposed approach overcomes a limitation of the Arnold transform, one of the most utilised confusion mechanisms in image ciphering. Various metrics have substantiated the effectiveness of the proposed medical-image encryption algorithm. Copyright © 2018 Elsevier B.V. All rights reserved.
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation
Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376
Individually Watermarked Information Distributed Scalable by Modified Transforms
2009-10-01
inverse of the secret transform is needed. Each trusted recipient has a unique inverse transform that is similar to the inverse of the original...transform. The elements of this individual inverse transform are given by the individual descrambling key. After applying the individual inverse...transform, the retrieved image is embedded with a recipient-individual watermark.
Transmission and storage of medical images with patient information.
Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo
2003-07-01
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The text data is encrypted before interleaving with the images to ensure greater security. The graphical signals are interleaved with the image. Two types of error control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error control coding, and a qualitative as well as quantitative interpretation of the reliability enhancement resulting from the use of commonly used error control codes, such as repetition and (7,4) Hamming codes, is provided.
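The (7,4) Hamming code mentioned above can be illustrated with a minimal encoder and single-error corrector; the systematic generator and parity-check matrices below follow the textbook construction and are not claimed to be the exact code used by the authors.

```python
# Illustrative (7,4) Hamming encode/correct round trip (assumes NumPy).
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])          # generator matrix (systematic form)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])          # parity-check matrix

def encode(nibble):                       # 4 data bits -> 7-bit codeword
    return (np.array(nibble) @ G) % 2

def correct(word):                        # fix any single-bit error, return the data bits
    syndrome = (H @ word) % 2
    if syndrome.any():
        err = np.where((H.T == syndrome).all(axis=1))[0][0]  # syndrome matches a column of H
        word = word.copy()
        word[err] ^= 1
    return word[:4]

cw = encode([1, 0, 1, 1])
cw[2] ^= 1                                # flip one bit in the channel
assert (correct(cw) == [1, 0, 1, 1]).all()
```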
A Quasi-Copysafe Security of Documents on Normal Papersheets
2000-10-01
Commercial Components ", held in Budapest, Hungary, 23-25 October 2000, and published in RTO MP-072. 32-2 Steganography : Hiding/embedding the secret information...hostile attacks (jitter attack, etc.) aiming to fool the receiver/detector by either impairing or diminishing or removing the secret message. After a...replace a watermark, when that requires the secret key. Problems : Attackers rather try to modify the watermark content. Or try to discredit the
Associated diacritical watermarking approach to protect sensitive arabic digital texts
NASA Astrophysics Data System (ADS)
Kamaruddin, Nurul Shamimi; Kamsin, Amirrudin; Hakak, Saqib
2017-10-01
Among multimedia content, one of the most predominant media is text. There have been many efforts to protect and secure text information over the Internet. The limitations of existing works have been identified in terms of watermark capacity, time complexity and memory complexity. In this work, an invisible digital watermarking approach is proposed to protect and secure the most sensitive text, i.e. the Digital Holy Quran. The proposed approach works by XOR-ing only those Quranic letters that have certain diacritics associated with them. Due to the sensitive nature of the Holy Quran, diacritics play a vital role in the meaning of a particular verse. Hence, securing letters with certain diacritics will preserve the original meaning of Quranic verses in case of an alteration attempt. Initial results have shown that the proposed approach is promising, with lower memory complexity and time complexity compared to existing approaches.
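A hedged sketch of the diacritic-driven idea is given below: only letters that carry an Arabic diacritic contribute to an XOR-based verification tag, so removing or altering a diacritic changes the tag. The key value, the diacritic range and the tag format are assumptions for illustration, not the authors' exact scheme.

```python
# Toy diacritic-based integrity tag for Arabic text.
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}   # Arabic harakat (fathatan .. sukun)

def diacritic_tag(text: str, key: int = 0x5A) -> int:
    tag = 0
    for prev, ch in zip(text, text[1:]):
        if ch in DIACRITICS:                    # `prev` is a letter carrying a diacritic
            tag ^= (ord(prev) ^ key) & 0xFF     # fold it into the running XOR tag
    return tag

verse = "بِسْمِ"                                   # sample diacritised text
original = diacritic_tag(verse)
tampered = diacritic_tag(verse.replace("ِ", ""))  # removing a diacritic changes the tag
print(original != tampered)
```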
Robust and Imperceptible Watermarking of Video Streams for Low Power Devices
NASA Astrophysics Data System (ADS)
Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.
With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. Along with the greater business benefits, increased availability and other advantages of doing business online, there is a major challenge concerning the security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in the videos. Existing watermarking methods are less robust and imperceptible, and their computational complexity does not suit low power devices. In this paper, we propose a new method to address the problem of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility, and is also computationally more efficient than previous approaches in practice. Hence our method can easily be applied on low power devices.
Drift-free MPEG-4 AVC semi-fragile watermarking
NASA Astrophysics Data System (ADS)
Hasnaoui, M.; Mitrea, M.
2014-02-01
While intra-frame drifting is a concern for all types of MPEG-4 AVC compressed-domain video processing applications, it has a particularly negative impact in watermarking. In order to avoid the drift drawbacks, two classes of solutions are currently considered in the literature. They try either to compensate the drift distortions at the expense of complex decoding/estimation algorithms or to restrict the insertion to the blocks which are not involved in prediction, thus reducing the data payload. The present study follows a different approach. First, it algebraically models the drift distortion spread problem by considering the analytic expressions of the MPEG-4 AVC encoding operations. Secondly, it solves the underlying algebraic system under drift-free constraints. Finally, the advanced solution is adapted to take into account the watermarking peculiarities. The experiments consider an m-QIM semi-fragile watermarking method and a video surveillance corpus of 80 minutes. For prescribed data payload (100 bit/s), robustness (BER < 0.1 against transcoding at 50% in stream size), fragility (frame modification detection with accuracies of 1/81 of the frame size and 3 s) and complexity constraints, the modified insertion results in gains in transparency of 2 dB in PSNR, of 0.4 in AAD, of 0.002 in IF, of 0.03 in SC, of 0.017 in NCC and 22 in DVQ.
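For readers unfamiliar with QIM, the sketch below shows a minimal m-ary quantization index modulation embedder and detector of the general kind the abstract refers to; the alphabet size `m` and step `delta` are toy parameters, not values from the paper.

```python
# Minimal m-ary QIM embed/detect on a single host coefficient (assumes NumPy).
import numpy as np

def qim_embed(x: float, symbol: int, m: int = 4, delta: float = 8.0) -> float:
    d = symbol * delta / m                        # dither associated with the symbol
    return np.round((x - d) / delta) * delta + d  # snap onto that symbol's lattice

def qim_detect(y: float, m: int = 4, delta: float = 8.0) -> int:
    # pick the symbol whose lattice point is nearest to the received value
    return int(np.argmin([abs(y - qim_embed(y, s, m, delta)) for s in range(m)]))

coeff = 13.7
marked = qim_embed(coeff, symbol=2)
assert qim_detect(marked) == 2
```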
Evaluating the accuracy of soil water sensors for irrigation scheduling to conserve freshwater
NASA Astrophysics Data System (ADS)
Ganjegunte, Girisha K.; Sheng, Zhuping; Clark, John A.
2012-06-01
In the Trans-Pecos area, pecan [Carya illinoinensis (Wangenh) C. Koch] is a major irrigated cash crop. Pecan trees require large amounts of water for their growth, and flood (border) irrigation is the most common irrigation method. The pecan crop is often over-irrigated when scheduling irrigation by the traditional method of counting the number of calendar days since the previous irrigation. Studies in other pecan growing areas have shown that water use efficiency can be improved significantly and precious freshwater can be saved by scheduling irrigation based on soil moisture conditions. This study evaluated the accuracy of three recent low-cost soil water sensors (ECH2O-5TE, Watermark 200SS and Tensiometer model R) for monitoring volumetric soil water content (θv) to develop improved irrigation scheduling in a mature pecan orchard in El Paso, Texas. Results indicated that while all three sensors successfully followed the general trends of soil moisture conditions during the growing season, actual measurements differed significantly. Statistical analyses indicated that the Tensiometer provided relatively more accurate soil moisture data than the ECH2O-5TE and Watermark without site-specific calibration. While the ECH2O-5TE overestimated soil water content, the Watermark and Tensiometer underestimated it. The results of this study suggested poor accuracy for all three sensors when factory calibration and the reported soil water retention curve for the study-site soil texture were used, indicating that the sensors need site-specific calibration to improve the accuracy of their soil water content estimates.
Field performance of three real-time moisture sensors in sandy loam and clay loam soils
USDA-ARS?s Scientific Manuscript database
The study was conducted to evaluate HydraProbe (HyP), Campbell Time Domain Reflectometry (TDR) and Watermarks (WM) moisture sensors for their ability to estimate water content based on calibrated neutron probe measurements. The three sensors were in-situ tested under natural weather conditions over ...
Using Imperceptible Digital Watermarking Technologies To Transform Educational Media: A Prototype.
ERIC Educational Resources Information Center
McGraw, Tammy M.; Burdette, Krista; Seale, Virginia B.; Ross, John D.
The Institute for the Advancement of Emerging Technologies in Education (IAETE) at AEL recently explored the potential benefits and limitations of traditional print-based textbooks and many e-book alternatives. Having considered these media, IAETE created prototype interactive textbook pages that retain the salient aspects of print media while…
NASA Astrophysics Data System (ADS)
Battisti, F.; Carli, M.; Neri, A.
2011-03-01
The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice turns out to be inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform (the Integer Fibonacci-Haar transform), of a secure cryptographic scheme, and of a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image, ii) the encryption of annotated images without causing loss in the embedded information, and iii) the recovery of the original image after mark removal, thanks to the complete reversibility of the process. Experimental results show the effectiveness of the proposed scheme.
Invisible photonic printing: computer designing graphics, UV printing and shown by a magnetic field
Hu, Haibo; Tang, Jian; Zhong, Hao; Xi, Zheng; Chen, Changle; Chen, Qianwang
2013-01-01
Invisible photonic printing, an emerging printing technique, is particularly useful for steganography and watermarking for anti-counterfeiting purposes. However, many challenges must be overcome in order to realize this technique. Herein, we describe a novel photonic printing strategy aiming to overcome these challenges and realize fast and convenient fabrication of invisible photonic prints with good tunability and reproducibility. With this novel photonic printing technique, a variety of graphics with brilliant colors can be perfectly hidden in a soft and waterproof photonic paper. The showing and hiding of the latent photonic prints are instantaneous, with a magnet as the only required instrument. In addition, this strategy has excellent practicality and allows end-user control of the structural design utilizing simple software on a PC. PMID:23508071
2D approaches to 3D watermarking: state-of-the-art and perspectives
NASA Astrophysics Data System (ADS)
Mitrea, M.; Duţă, S.; Prêteux, F.
2006-02-01
With the advent of the Information Society, video, audio, speech, and 3D media represent the source of huge economic benefits. Consequently, there is a continuously increasing demand for protecting their related intellectual property rights. The solution can be provided by robust watermarking, a research field which exploded in the last 7 years. However, the largest part of the scientific effort was devoted to video and audio protection, the 3D objects being quite neglected. In the absence of any standardisation attempt, the paper starts by summarising the approaches developed in this respect and by further identifying the main challenges to be addressed in the next years. Then, it describes an original oblivious watermarking method devoted to the protection of the 3D objects represented by NURBS (Non-Uniform Rational B-Spline) surfaces. Applied to both free form objects and CAD models, the method exhibited very good transparency (no visible differences between the marked and the unmarked model) and robustness (with respect to both traditional attacks and to NURBS processing).
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked into regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
Watermarking -- Paving the Way for Better Released Documents
NASA Technical Reports Server (NTRS)
Little, Keith; Carole, Miller; Schwindt, Paul; Zimmerman, Cheryl
2011-01-01
One of the biggest issues with regard to Released Documentation is making sure what you are looking at is, in fact, what you think it is. Is it Released or not? Is it the latest? How can you be sure? In this brief session, we'll discuss the path Kennedy Space Center has taken in implementing watermarking with ProductView. We'll cover the premise, challenges, and implementation from our Windchill 9.1 environment. Come join us and see if we can't save you some time and headaches in the long run.
NASA Astrophysics Data System (ADS)
Nofriansyah, Dicky; Defit, Sarjon; Nurcahyo, Gunadi W.; Ganefri, G.; Ridwan, R.; Saleh Ahmar, Ansari; Rahim, Robbi
2018-01-01
Cybercrime is one of the most serious threats. One effort to reduce cybercrime is to find new techniques for securing data, such as a combination of cryptography, steganography and watermarking. Cryptography and steganography are growing data-security sciences, and combining them is one way to improve data integrity. New techniques are created by combining several algorithms; one example is the incorporation of the Hill cipher method and Morse code. Morse code is one of the communication codes used in the Scouting field and consists of dots and dashes. This combination of classic and modern concepts is intended to maintain data integrity. The result of the combination of these three methods is expected to yield new algorithms that improve the security of data, especially images.
Scalable Wavelet-Based Active Network Stepping Stone Detection
2012-03-22
4.2.2 Synchronization Frame; 4.2.3 Frame Size ... the vector. Pilot experiments result in the final algorithm shown in Figure 3.4 and the detector in Figure 3.5. Note that the synchronization frame and ... synchronization frames divided by the number of total frames. Comparing this statistic to the detection threshold γ determines whether a watermark is
Introducing keytagging, a novel technique for the protection of medical image-based tests.
Rubio, Óscar J; Alesanco, Álvaro; García, José
2015-08-01
This paper introduces keytagging, a novel technique to protect medical image-based tests by implementing image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. It relies on the association of tags (binary data strings) to stable, semistable or volatile features of the image, whose access keys (called keytags) depend on both the image and the tag content. Unlike watermarking, this technique can associate information to the most stable features of the image without distortion. Thus, this method preserves the clinical content of the image without the need for assessment, prevents eavesdropping and collusion attacks, and obtains a substantial capacity-robustness tradeoff with simple operations. The evaluation of this technique, involving images of different sizes from various acquisition modalities and image modifications that are typical in the medical context, demonstrates that all the aforementioned security measures can be implemented simultaneously and that the algorithm presents good scalability. In addition to this, keytags can be protected with standard Cryptographic Message Syntax and the keytagging process can be easily combined with JPEG2000 compression since both share the same wavelet transform. This reduces the delays for associating keytags and retrieving the corresponding tags to implement the aforementioned measures to only ≃30 and ≃90ms respectively. As a result, keytags can be seamlessly integrated within DICOM, reducing delays and bandwidth when the image test is updated and shared in secure architectures where different users cooperate, e.g. physicians who interpret the test, clinicians caring for the patient and researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
RFID identity theft and countermeasures
NASA Astrophysics Data System (ADS)
Herrigel, Alexander; Zhao, Jian
2006-02-01
This paper reviews the ICAO security architecture for biometric passports. An attack enabling RFID identity theft for later misuse is presented. Specific countermeasures against this attack are described. Furthermore, it is shown that robust high-capacity digital watermarking for embedding and retrieving binary digital signature data can be applied as an effective means against RFID identity theft. This approach requires only minimal modifications of the passport manufacturing process and is an enhancement of already proposed solutions. The approach may also be applied in combination with an RFID as a backup solution (e.g. for a damaged RFID chip) to verify the authenticity and integrity of the passport data with asymmetric cryptographic techniques.
Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications
NASA Astrophysics Data System (ADS)
Carpentieri, Bruno; Pizzolante, Raffaele
2017-12-01
Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrains. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology because of the capability of this kind of image to correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data obtained by hyperspectral sensors, the fact that these images are acquired at a high cost by air-borne sensors, and the fact that they are generally transmitted to a base station make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-the-art predictive-based lossless compression algorithm.
Designing robust watermark barcodes for multiplex long-read sequencing.
Ezpeleta, Joaquín; Krsticevic, Flavia J; Bulacio, Pilar; Tapia, Elizabeth
2017-03-15
To attain acceptable sample misassignment rates, current approaches to multiplex single-molecule real-time sequencing require upstream quality improvement, which is obtained from multiple passes over the sequenced insert and significantly reduces the effective read length. In order to fully exploit the raw read length on multiplex applications, robust barcodes capable of dealing with the full single-pass error rates are needed. We present a method for designing sequencing barcodes that can withstand a large number of insertion, deletion and substitution errors and are suitable for use in multiplex single-molecule real-time sequencing. The manuscript focuses on the design of barcodes for full-length single-pass reads, impaired by challenging error rates in the order of 11%. The proposed barcodes can multiplex hundreds or thousands of samples while achieving sample misassignment probabilities as low as 10^-7 under the above conditions, and are designed to be compatible with chemical constraints imposed by the sequencing process. Software tools for constructing watermark barcode sets and demultiplexing barcoded reads, together with example sets of barcodes and synthetic barcoded reads, are freely available at www.cifasis-conicet.gov.ar/ezpeleta/NS-watermark. ezpeleta@cifasis-conicet.gov.ar. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
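A simple way to demultiplex reads against such barcodes under insertion, deletion and substitution errors is nearest-neighbour assignment by edit distance, sketched below with toy barcodes and an assumed distance threshold; the published watermark codes use a more elaborate decoder.

```python
# Toy edit-distance demultiplexer tolerating indels and substitutions.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution or match
        prev = cur
    return prev[-1]

def assign(read_prefix: str, barcodes: dict, max_dist: int = 3):
    scores = {sample: edit_distance(read_prefix, bc) for sample, bc in barcodes.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] <= max_dist else None   # leave very noisy reads unassigned

barcodes = {"S1": "ACGTACGTACGT", "S2": "TTGGCCAATTGG"}
print(assign("ACGTACGAACGT", barcodes))   # -> "S1" despite one substitution
```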
NASA Astrophysics Data System (ADS)
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives of spatial information infrastructure construction, and spatial metadata management systems, data transmission security and data compression are the key technologies for realizing spatial data sharing. This paper discusses the key technologies for metadata-based data interoperability and investigates data compression algorithms such as the adaptive Huffman, LZ77 and LZ78 algorithms. It studies how to apply digital signature techniques to spatial data, which can not only identify the transmitter of the spatial data but also detect in a timely manner whether the spatial data have been tampered with during network transmission. Based on an analysis of symmetric encryption algorithms, including 3DES and AES, and the asymmetric encryption algorithm RSA, combined with a hash algorithm, it presents an improved hybrid encryption method for spatial data. Digital signature technology and digital watermarking technology are also discussed. Then, a new solution for spatial data network distribution is put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, which also proves the feasibility and validity of the proposed solution.
Research in DRM architecture based on watermarking and PKI
NASA Astrophysics Data System (ADS)
Liu, Ligang; Chen, Xiaosu; Xiao, Dao-ju; Yi, Miao
2005-02-01
We analyze the advantages and disadvantages of present digital copyright protection systems and design a security protocol model for digital copyright protection that balances the usage validity, integrity and transmission security of the digital media as well as trade equity. A detailed formal description of the protocol model is given, and the relationships among the entities involved in digital work copyright protection are analyzed. The analysis of the security and capability of the protocol model shows that the model offers good security and practicability.
NASA Astrophysics Data System (ADS)
Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.
2013-06-01
Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. It can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology and a crucial threshold setting. This is a robust technique that can be applied in many areas, from leak detection to movement tracking, and it can be further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional MSCD with a temperature sensor so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood that fire is present. Such a system would allow integration into autonomous mobile robots so that not only security patrols but also fire detection could be undertaken.
Video copy protection and detection framework (VPD) for e-learning systems
NASA Astrophysics Data System (ADS)
ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.
2013-03-01
This article reviews and compares copyright issues related to digital video files, for which copy detection can be categorized as content-based or digital-watermarking-based. We then describe how to protect a digital video by using a special video data hiding method and algorithm. We also discuss how to detect copying of the file. Based on an analysis of the direction of video copy detection technology, and combining it with our own research results, we bring forward a new video protection and copy detection approach for plagiarism and e-learning systems using video data hiding technology. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).
Fourier Phase Domain Steganography: Phase Bin Encoding Via Interpolation
NASA Astrophysics Data System (ADS)
Rivas, Edward
2007-04-01
In recent years there has been an increased interest in audio steganography and watermarking. This is due primarily to two reasons. First, an acute need to improve our national security capabilities in light of terrorist and criminal activity has driven new ideas and experimentation. Secondly, the explosive proliferation of digital media has forced the music industry to rethink how it will protect its intellectual property. Various techniques have been implemented, but the phase domain remains fertile ground for improvement due to its relative robustness to many types of distortion and its immunity to the Human Auditory System. A new method for embedding data in the phase domain of the Discrete Fourier Transform of an audio signal is proposed. Focus is given to robustness and low perceptibility, while maintaining a relatively high capacity rate of up to 172 bits/s.
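A minimal sketch of phase-domain embedding, assuming NumPy and a single audio frame: selected DFT bins have their phase snapped to one of two target values while the magnitude is preserved. The bin indices, frame length and phase targets are illustrative, not the parameters of the proposed method.

```python
# Toy DFT phase-bin embedding and extraction for one audio frame.
import numpy as np

def embed_phase(frame: np.ndarray, bits, bins=(20, 25, 30)) -> np.ndarray:
    spec = np.fft.rfft(frame)
    for b, bit in zip(bins, bits):
        target = np.pi / 2 if bit else -np.pi / 2          # two phase "bins"
        spec[b] = np.abs(spec[b]) * np.exp(1j * target)    # keep magnitude, set phase
    return np.fft.irfft(spec, n=len(frame))

def extract_phase(frame: np.ndarray, bins=(20, 25, 30)):
    spec = np.fft.rfft(frame)
    return [1 if np.angle(spec[b]) > 0 else 0 for b in bins]

audio = np.random.randn(1024)
marked = embed_phase(audio, [1, 0, 1])
print(extract_phase(marked))   # -> [1, 0, 1]
```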
Visual saliency in MPEG-4 AVC video stream
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.; Le Callet, P.
2015-03-01
Visual saliency maps have already proved their efficiency in a large variety of image/video communication application fields, covering from selective compression and channel coding to watermarking. Such saliency maps are generally based on different visual characteristics (like color, intensity, orientation, motion, …) computed from the pixel representation of the visual content. This paper summarises and extends our previous work devoted to the definition of a saliency map solely extracted from the MPEG-4 AVC stream syntax elements. The MPEG-4 AVC saliency map thus defined is a fusion of a static and a dynamic map. The static saliency map is in turn a combination of intensity, color and orientation feature maps. Regardless of the particular way in which all these elementary maps are computed, the fusion techniques allowing their combination play a critical role in the final result and are the object of the proposed study. A total of 48 fusion formulas (6 for combining static features and, for each of them, 8 to combine static with dynamic features) are investigated. The performances of the obtained maps are evaluated on a public database organized at IRCCyN, by computing two objective metrics: the Kullback-Leibler divergence and the area under curve.
Privacy enhanced group communication in clinical environment
NASA Astrophysics Data System (ADS)
Li, Mingyan; Narayanan, Sreeram; Poovendran, Radha
2005-04-01
Privacy protection of medical records has always been an important issue and is mandated by the recent Health Insurance Portability and Accountability Act (HIPAA) standards. In this paper, we propose security architectures for a tele-referring system that allows electronic group communication among professionals for better quality treatment, while protecting patient privacy against unauthorized access. Although DICOM defines the much-needed guidelines for confidentiality of medical data during transmission, there is no provision in existing medical security systems to guarantee patient privacy once the data has been received. In our design, we address this issue by enabling tracing back to the recipient whose received data is disclosed to outsiders, using a watermarking technique. We present the security architecture design of a tele-referring system using a distributed approach and a centralized web-based approach. The resulting tele-referring system (i) provides confidentiality during transmission and ensures integrity and authenticity of the received data, (ii) allows tracing of the recipient who has either distributed the data to outsiders or whose system has been compromised, (iii) provides proof of receipt or origin, and (iv) is easy to use and low-cost to employ in a clinical environment.
Watermarking spot colors in packaging
NASA Astrophysics Data System (ADS)
Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang
2015-03-01
In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track the checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore, spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.
ASIST 2001. Information in a Networked World: Harnessing the Flow. Part III: Poster Presentations.
ERIC Educational Resources Information Center
Proceedings of the ASIST Annual Meeting, 2001
2001-01-01
Topics of Poster Presentations include: electronic preprints; intranets; poster session abstracts; metadata; information retrieval; watermark images; video games; distributed information retrieval; subject domain knowledge; data mining; information theory; course development; historians' use of pictorial images; information retrieval software;…
The neural signature of emotional memories in serial crimes.
Chassy, Philippe
2017-10-01
Neural plasticity is the process whereby semantic information and emotional responses are stored in neural networks. It is hypothesized that the neural networks built over time to encode the sexual fantasies that motivate serial killers to act should display a unique, detectable activation pattern. The pathological neural watermark hypothesis posits that such networks comprise activation of brain sites that reflect four cognitive components: autobiographical memory, sexual arousal, aggression, and control over aggression. The neural sites performing these cognitive functions have been successfully identified by previous research. The key findings are reviewed to hypothesise the typical pattern of activity that serial killers should display. Through the integration of biological findings into one framework, the neural approach proposed in this paper is in stark contrast with the many theories accounting for serial killers that offer non-medical taxonomies. The pathological neural watermark hypothesis offers a new framework to understand and detect deviant individuals. The technical and legal issues are briefly discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Refahi, Yassin; Brunoud, Géraldine; Farcot, Etienne; Jean-Marie, Alain; Pulkkinen, Minna; Vernoux, Teva; Godin, Christophe
2016-01-01
Exploration of developmental mechanisms classically relies on analysis of pattern regularities. Whether disorders induced by biological noise may carry information on building principles of developmental systems is an important debated question. Here, we addressed this question theoretically using phyllotaxis, the geometric arrangement of plant aerial organs, as a model system. Phyllotaxis arises from reiterative organogenesis driven by lateral inhibitions at the shoot apex. Motivated by recurrent observations of disorders in phyllotaxis patterns, we revisited in depth the classical deterministic view of phyllotaxis. We developed a stochastic model of primordia initiation at the shoot apex, integrating locality and stochasticity in the patterning system. This stochastic model recapitulates phyllotactic patterns, both regular and irregular, and makes quantitative predictions on the nature of disorders arising from noise. We further show that disorders in phyllotaxis instruct us on the parameters governing phyllotaxis dynamics, showing that disorders can reveal biological watermarks of developmental systems. DOI: http://dx.doi.org/10.7554/eLife.14093.001 PMID:27380805
Quantum image processing: A review of advances in its security technologies
NASA Astrophysics Data System (ADS)
Yan, Fei; Iliyasu, Abdullah M.; Le, Phuc Q.
In this review, we present an overview of the advances made in quantum image processing (QIP), comprising the image representations, the operations realizable on them, and the likely protocols and algorithms for their applications. In particular, we focus on recent progress on QIP-based security technologies, including quantum watermarking, quantum image encryption, and quantum image steganography. This review is aimed at providing readers with a succinct, yet adequate compendium of the progress made in the QIP sub-area. Hopefully, this effort will stimulate further interest aimed at the pursuit of more advanced algorithms and experimental validations for available technologies and extensions to other domains.
Whittemore, Stephen Richard
2013-09-10
Imaging systems include a detector and a spatial light modulator (SLM) that is coupled so as to control image intensity at the detector based on predetermined detector limits. By iteratively adjusting SLM element values, image intensity at one or all detector elements or portions of an imaging detector can be controlled to be within limits. The SLM can be secured to the detector at a spacing such that the SLM is effectively at an image focal plane. In some applications, the SLM can be adjusted to impart visible or hidden watermarks to images or to reduce image intensity at one or a selected set of detector elements so as to reduce detector blooming.
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
The image fusion process consolidates data and information from various images of the same scene into a single image. Each of the source images may represent a partial view of the scene and contains both "pertinent" and "immaterial" information. In this study, a new image fusion method is proposed utilizing the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more exact depiction of the scene than any of the individual source images. In addition, the fused image comes out with the best possible quality, without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour image (input image) is split into the three channels R, G, and B for each source image. (2) The DCT algorithm is applied to each channel (R, G, and B). (3) The variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of the R channel of the source images is compared with its counterparts based on the variance value, and the block with the maximum variance value is selected as the block in the new image; this process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and all channels are then combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image. The proposed approach is evaluated using three measurement units: the average of Q(abf), standard deviation, and Peak Signal-to-Noise Ratio. The experimental results of the proposed technique have shown good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
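A condensed single-channel sketch of the variance-based block selection in steps (2)-(5) is given below; SciPy's `dctn`/`idctn` stand in for the paper's transform, and two grayscale source arrays are assumed.

```python
# Variance-based 8x8 DCT fusion for one channel (assumes NumPy and SciPy).
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(src_a: np.ndarray, src_b: np.ndarray, bs: int = 8) -> np.ndarray:
    out = np.zeros_like(src_a, dtype=float)
    for i in range(0, src_a.shape[0], bs):
        for j in range(0, src_a.shape[1], bs):
            blocks = [dctn(s[i:i+bs, j:j+bs].astype(float), norm="ortho") for s in (src_a, src_b)]
            best = max(blocks, key=lambda c: c.var())        # step 4: keep the highest-variance block
            out[i:i+bs, j:j+bs] = idctn(best, norm="ortho")  # step 5: back to pixel values
    return out

a = np.random.randint(0, 256, (64, 64))
b = np.random.randint(0, 256, (64, 64))
fused = fuse_channel(a, b)
```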
Passive detection of copy-move forgery in digital images: state-of-the-art.
Al-Qershi, Osamah M; Khoo, Bee Ee
2013-09-10
Currently, digital images and videos are highly important because they have become the main carriers of information. However, the relative ease of tampering with images and videos makes their authenticity difficult to trust. Digital image forensics addresses the problem of authenticating images or their origins. One main branch of image forensics is passive image forgery detection. Images can be forged using different techniques, and the most common forgery is the copy-move, in which a region of an image is duplicated and placed elsewhere in the same image. Active techniques, such as watermarking, have been proposed to solve the image authenticity problem, but those techniques have limitations because they require human intervention or specially equipped cameras. To overcome these limitations, several passive authentication methods have been proposed. In contrast to active methods, passive methods do not require any previous information about the image, and they take advantage of specific detectable changes that forgeries can bring into the image. In this paper, we describe the current state-of-the-art of passive copy-move forgery detection methods. The key current issues in developing a robust copy-move forgery detector are then identified, and the trends in tackling those issues are addressed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Evaluation of a user guidance reminder to improve the quality of electronic prescription messages.
Dhavle, A A; Corley, S T; Rupp, M T; Ruiz, J; Smith, J; Gill, R; Sow, M
2014-01-01
Prescribers' inappropriate use of the free-text Notes field in new electronic prescriptions can create confusion and workflow disruptions at receiving pharmacies that often necessitate contact with prescribers for clarification. The inclusion of inappropriate patient direction (Sig) information in the Notes field is particularly problematic. We evaluated the effect of a targeted watermark (an embedded overlay) reminder statement in the Notes field of an EHR-based e-prescribing application on the incidence of inappropriate patient directions (Sig) in the Notes field. E-prescriptions issued by the same cohort of 97 prescribers were collected over three time periods: baseline, three months after implementation of the reminder, and 15 months post implementation. Three certified and experienced pharmacy technicians independently reviewed all e-prescriptions for inappropriate Sig-related information in the Notes field. A physician reviewer served as the final adjudicator for e-prescriptions where the three reviewers could not reach a consensus. ANOVA and post hoc Tukey HSD tests were performed on group comparisons, with statistical significance evaluated at p<0.05. The incidence of inappropriate Sig-related information in the Notes field decreased from a baseline of 2.8% to 1.8% three months post-implementation and remained stable after 15 months. In addition, prescribers' use of the Notes field decreased by 22% after 3 months and had stabilized at 18.7% below baseline after 15 months. Insertion of a targeted watermark reminder statement in the Notes field of an e-prescribing application significantly reduced the incidence of inappropriate Sig-related information in Notes and decreased prescribers' use of this field.
On the Role of Openness in Education: A Historical Reconstruction
ERIC Educational Resources Information Center
Peter, Sandra; Deimann, Markus
2013-01-01
In the context of education, "open(ness)" has become the watermark for a fast growing number of learning materials and associated platforms and practices from a variety of institutions and individuals. Open Educational Resources (OER), Massive Open Online Courses (MOOC), and more recently, initiatives such as Coursera are just some of…
Lossless data embedding for all image formats
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-04-01
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over the image quality and artifacts visibility arise, such as for medical images, due to legal reasons, for military images or images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP), lossy or transform formats (JPEG), and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray-level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of the state-of-the-art work reported.
Retracted Publications Within Radiology Journals.
Rosenkrantz, Andrew B
2016-02-01
The purpose of this study was to characterize trends related to retracted publications within radiology journals. PubMed was queried to identify all articles with the publication type "retracted publication" or "notification of retraction." Articles published within radiology journals were identified using Journal Citation Reports' journal categories. Available versions of original articles and publication notices were accessed from journal websites. Citations to retracted publications were identified using Web of Science. Overall trends were assessed. Forty-eight retracted original research articles were identified within radiology journals since 1983, which included 1.1% of all PubMed "retracted publication" entries. Distinct PubMed entries were available for the retracted publication and retraction notification in 39 of 48 articles. The original PDF was available for 37 articles, although the articles were not watermarked as retracted in 23 cases. In six cases with a watermarked PDF, further searches identified nonwatermarked versions. Original HTML versions were available for 13 articles but 11 were not watermarked. The mean (± SD) delay between publication and retraction was 2.7 ± 2.8 years (range, 0-16 years). The mean number of citations to retracted articles was 10.9 ± 17.1 (range, 0-94 citations). Reasons for retraction included problematic or incorrect methods or results (although it typically was unclear whether these represented honest errors or misconduct) in 33.3% of cases, complete or partial duplicate publication in 33.3% of cases, plagiarism in 14.6% of cases, a permission issue in 8.3% of cases, the publisher's error in 6.3% of cases, and no identified reason in 6.3% of cases. One or no retractions occurred annually from 1986 to 2001, although two or more retractions occurred annually in nine of the 12 years from 2002 through 2013. Retraction represents an uncommon, yet potentially increasing, issue within radiology journals that publishers have inconsistently and insufficiently addressed. Greater awareness and training in proper biomedical research conduct, as well as establishment and enforcement of standardized publishers' policies, are warranted.
Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan
2004-06-05
Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial domain and in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. It can be seen from the results that the process does not affect picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
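The 1-part-in-256 claim can be checked directly with a toy LSB interleaver, sketched below under the assumption of an 8-bit grayscale image and a short byte payload; the packing scheme is illustrative, not the authors' exact interleaving.

```python
# Toy spatial-domain LSB interleaving: each pixel's intensity changes by at most 1/256.
import numpy as np

def interleave_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the least significant bit
    return flat.reshape(image.shape)

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
stego = interleave_lsb(img, b"patient-id:12345")
print(np.abs(stego.astype(int) - img.astype(int)).max())  # <= 1 intensity level
```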
Watermarks within the Middle Eastern Manuscript Collection of the Baillieu Library
ERIC Educational Resources Information Center
Lewincamp, Sophie
2012-01-01
The University of Melbourne's Middle Eastern Manuscript collection housed at the Baillieu Library was acquired by Professor John Bowman in the 1950s as part of a teaching collection to promote greater learning of Middle Eastern culture and civilisation (Pryde 2007, 3). The collection is a rare example within Australia and represents many different…
What Women Want--And Why You Want Women--In the Workplace. Research Report
ERIC Educational Resources Information Center
Clerkin, Cathleen
2017-01-01
"What Women Want" is a scientific study conducted in partnership with the Center for Creative Leadership (CCL®) and Watermark. The aim of this study is to help organizational decision makers better understand how--and why--to recruit, retain, and promote women in the workplace. This study included 745 leaders and aspiring leaders. On…
Business Documents Don't Have to Be Boring
ERIC Educational Resources Information Center
Schultz, Benjamin
2006-01-01
With business documents, visuals can serve to enhance the written word in conveying the message. Images can be especially effective when used subtly, on part of the page, on successive pages to provide continuity, or even set as watermarks over the entire page. A main reason given for traditional text-only business documents is that they are…
Active Detection for Exposing Intelligent Attacks in Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weerakkody, Sean; Ozel, Omur; Griffioen, Paul
In this paper, we consider approaches for detecting integrity attacks carried out by intelligent and resourceful adversaries in control systems. Passive detection techniques are often incorporated to identify malicious behavior. Here, the defender utilizes finely-tuned algorithms to process information and make a binary decision, whether the system is healthy or under attack. We demonstrate that passive detection can be ineffective against adversaries with model knowledge and access to a set of input/output channels. We then propose active detection as a tool to detect attacks. In active detection, the defender leverages degrees of freedom he has in the system to detect the adversary. Specifically, the defender will introduce a physical secret kept hidden from the adversary, which can be utilized to authenticate the dynamics. In this regard, we carefully review two approaches for active detection: physical watermarking at the control input, and a moving target approach for generating system dynamics. We examine practical considerations for implementing these technologies and discuss future research directions.
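As a hedged illustration of physical watermarking at the control input, the toy simulation below adds a secret Gaussian excitation to the control signal of a scalar plant and compares an innovation-energy (chi-square-style) statistic for honest versus replayed sensor outputs; the plant, gains and noise levels are assumptions for illustration only, not models from the paper.

```python
# Toy physical-watermarking detector: replayed outputs do not respond to the fresh watermark.
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 0.9, 1.0, 1.0                       # scalar plant: x' = a*x + b*u, y = c*x
x, x_hat, T = 0.0, 0.0, 200
replayed = rng.normal(0.0, 1.0, T)            # attacker's pre-recorded outputs
stat_honest, stat_attack = 0.0, 0.0

for t in range(T):
    u = -0.5 * x_hat + rng.normal(0.0, 0.5)   # feedback control + secret watermark excitation
    x = a * x + b * u + rng.normal(0.0, 0.1)  # true plant state
    y = c * x + rng.normal(0.0, 0.1)          # honest sensor output (tracks the watermark)
    y_fake = replayed[t]                      # replayed output (ignores the watermark)
    x_pred = a * x_hat + b * u                # one-step prediction using the known input
    stat_honest += (y - c * x_pred) ** 2      # innovation energy, honest case
    stat_attack += (y_fake - c * x_pred) ** 2 # innovation energy, attacked case
    x_hat = x_pred + 0.5 * (y - c * x_pred)   # simple observer update

print(stat_honest / T, stat_attack / T)       # the attacked statistic is markedly larger
```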
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used because it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, and it is very important for steganographic schemes to be undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity (SSIM) and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
On Polymorphic Circuits and Their Design Using Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead the change comes from modifications in the characteristics of the devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD, respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality, thus polytronics may find uses in intelligence/security applications.
Simultaneous storage of medical images in the spatial and frequency domain: A comparative study
Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan
2004-01-01
Background Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and in the frequency domain. The performance of interleaving in the spatial domain and in the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results It can be seen from the results that the process does not affect picture quality. This is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. Conclusion The results show that for spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899
Statistical Characterization of MP3 Encoders for Steganalysis: ’CHAMP3’
2004-04-27
compression exceeds those of typical steganographic tools (e.g., LSB image embedding), the availability of commented source codes for MP3 encoders...developed by testing the approach on known and unknown reference data. Subject terms: EOARD, Steganography, Digital Watermarking.
Privacy Protection by Masking Moving Objects for Security Cameras
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies those requirements. The underlying concept of our proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image, which includes the deteriorated objects that are unrecognizable or invisible. On the other hand, for crime investigation, this system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows us to select the objects to be decoded and displayed. We provide an implementation example, experimental results, and performance evaluations to support our proposed framework.
Image manipulation: Fraudulence in digital dental records: Study and review
Chowdhry, Aman; Sircar, Keya; Popli, Deepika Bablani; Tandon, Ankita
2014-01-01
Introduction: In present-day times, freely available software allows dentists to tweak their digital records as never before. But there is a fine line between acceptable enhancements and scientific delinquency. Aims and Objective: To manipulate digital images (used in forensic dentistry) of casts, lip prints, and bite marks in order to highlight tampering techniques and methods of detecting and preventing manipulation of digital images. Materials and Methods: Digital image records of forensic data (casts, lip prints, and bite marks photographed using a Samsung Techwin L77 digital camera) were manipulated using freely available software. Results: Fake digital images can be created either by merging two or more digital images, or by altering an existing image. Discussion and Conclusion: Retouched digital images can be used for fraudulent purposes in forensic investigations. However, tools are available to detect such digital frauds, which are extremely difficult to assess visually. Thus, all digital content should mandatorily have attached metadata, and preferably watermarking, in order to avert its malicious re-use. Also, computer literacy, especially regarding imaging software, should be promoted among forensic odontologists and dental professionals. PMID:24696587
Blind Compressed Image Watermarking for Noisy Communication Channels
2015-10-26
[Abstract fragments only: the simulations used the Lenna test image and the gradient projection for sparse reconstruction (GPSR) solver for the convex optimization problem. The rest of the extracted text is reference-list residue (Candes, Romberg and Tao on robust uncertainty principles and exact signal reconstruction from highly incomplete frequency information; ITU-T Recommendation T.81; Gkizeli, Pados and Medley on optimal signature design).]
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of the same size after shuffling it, and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
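The share-generation rule summarized above can be sketched in a few lines of Python. The block-selection key, the specific (U, V) elements compared, and the omission of the wavelet decomposition and shuffling steps are all simplifying assumptions, so this is only an illustration of the general SVD/visual-cryptography idea, not the authors' algorithm.

```python
import numpy as np

def feature_bits(low_band: np.ndarray, key: int, block: int = 8, n_blocks: int = 64) -> np.ndarray:
    """One bit per secretly selected block: compare an element of the left
    singular matrix U with the corresponding element of V (illustrative rule)."""
    rng = np.random.default_rng(key)            # the key makes block selection reproducible
    h, w = low_band.shape
    bits = np.empty(n_blocks, dtype=np.uint8)
    for i in range(n_blocks):
        y = rng.integers(0, h - block)
        x = rng.integers(0, w - block)
        u, s, vt = np.linalg.svd(low_band[y:y + block, x:x + block])
        bits[i] = 1 if u[1, 0] >= vt.T[1, 0] else 0
    return bits

def make_ownership_share(low_band: np.ndarray, watermark: np.ndarray, key: int) -> np.ndarray:
    """Visual-cryptography style: ownership share = master share XOR watermark."""
    return feature_bits(low_band, key, n_blocks=watermark.size) ^ watermark

def reveal(low_band: np.ndarray, ownership: np.ndarray, key: int) -> np.ndarray:
    """Re-derive the master share from the (unmodified) image and stack the shares."""
    return feature_bits(low_band, key, n_blocks=ownership.size) ^ ownership

# toy usage: the watermark is recovered without ever modifying the host image
rng = np.random.default_rng(0)
image_low_band = rng.random((128, 128))
wm = rng.integers(0, 2, 64, dtype=np.uint8)
share = make_ownership_share(image_low_band, wm, key=1234)
assert np.array_equal(reveal(image_low_band, share, key=1234), wm)
```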
Girelli, Carlos Magno Alves
2016-05-01
Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared between themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Some image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal and tonal reversal. A discussion of the detection of lateral reversals is presented in this article, as well as a suggestion to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in three levels of detail by the ACE-V methodology, some visual features of the fingerprint images can be helpful to identify sources of forgeries and modus operandi, such as: limits and image contours, gaps in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Integrated fingerprinting in secure digital cinema projection
NASA Astrophysics Data System (ADS)
Delannay, Damien; Delaigle, Jean-Francois; Macq, Benoit M. M.; Quisquater, Jean-Jacques; Mas Ribes, Joan M.; Boucqueau, Jean M.; Nivart, Jean-Francois
2001-12-01
This paper describes the functional model of a combined conditional access and fingerprinting copyright (or projection-right) protection system in a digital cinema framework. In the cinema industry, a large part of early movie piracy comes from copies made in the theater itself with a camera. The evolution towards digital cinema broadcast enables watermark-based fingerprinting protection systems. Besides an appropriate fingerprinting technology, a number of well defined security/cryptographic tools are integrated in order to guarantee the integrity of the whole system. The requirements are two-fold: on one side, we must ensure that the media content is only accessible at exhibition time (under specific authorization obtained after an ad-hoc film rental agreement) and contains the related exhibition fingerprint. On the other side, we must prove our ability to retrieve the fingerprint information from an illegal copy of the media.
MPEG-4 AVC saliency map computation
NASA Astrophysics Data System (ADS)
Ammar, M.; Mitrea, M.; Hasnaoui, M.
2014-02-01
A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements with minimal decoding operations. In this respect, an a priori in-depth study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities that attract visual attention. Secondly, the MPEG-4 AVC reference software is completed with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple-symbol Quantization Index Modulation) insertion method, PSNR average gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20 and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained by processing 2 hours of heterogeneous video content.
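For readers unfamiliar with QIM, the following is a minimal sketch of multiple-symbol Quantization Index Modulation on a vector of transform coefficients: each coefficient is quantized to the sub-lattice associated with the symbol to embed, and detection picks the nearest sub-lattice. It illustrates the embedding primitive only; the paper's saliency-guided block selection inside the MPEG-4 AVC stream is not modeled, and the step size delta is an arbitrary choice.

```python
import numpy as np

def qim_embed(coeffs: np.ndarray, symbols: np.ndarray, delta: float = 8.0, m: int = 4) -> np.ndarray:
    """Embed one m-ary symbol per coefficient by snapping the coefficient to the
    sub-lattice (quantizer) associated with that symbol."""
    offsets = symbols * delta / m
    return np.round((coeffs - offsets) / delta) * delta + offsets

def qim_detect(coeffs: np.ndarray, delta: float = 8.0, m: int = 4) -> np.ndarray:
    """Recover each symbol as the index of the closest sub-lattice."""
    offsets = np.arange(m) * delta / m
    requant = np.round((coeffs[:, None] - offsets) / delta) * delta + offsets
    return np.argmin(np.abs(coeffs[:, None] - requant), axis=1)

# toy usage on synthetic "transform coefficients"
rng = np.random.default_rng(0)
coeffs = rng.normal(0.0, 50.0, 200)
message = rng.integers(0, 4, 200)
marked = qim_embed(coeffs, message)
assert np.array_equal(qim_detect(marked), message)
```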
A comprehensive bibliography of linguistic steganography
NASA Astrophysics Data System (ADS)
Bergmair, Richard
2007-02-01
In this paper, we attempt to give a comprehensive bibliographic account of the work in linguistic steganography published to date. As the field is still in its infancy, there is no widely accepted publication venue. Relevant work on the subject is scattered throughout the literature on information security, information hiding, imaging and watermarking, cryptology, and natural language processing. Bibliographic references within the field are very sparse. This makes literature research on linguistic steganography a tedious task and a comprehensive bibliography a valuable aid to the researcher.
Security of medical multimedia.
Tzelepi, S; Pangalos, G; Nikolacopoulou, G
2002-09-01
The application of information technology to health care has generated growing concern about the privacy and security of medical information. Furthermore, data and communication security requirements in the field of multimedia are higher. In this paper we first describe the most important security requirements that must be fulfilled by multimedia medical data, and the security measures used to satisfy these requirements. These security measures are based mainly on modern cryptographic and watermarking mechanisms as well as on security infrastructures. The objective of our work is to complete this picture, exploiting the capabilities of multimedia medical data to define and implement an authorization model for regulating access to the data. In this paper we describe an extended role-based access control model by considering, within the specification of the role-permission relationship phase, the constraints that must be satisfied in order for the holders of the permission to use those permissions. The use of constraints allows role-based access control to be tailored to specify very fine-grained and flexible content-, context- and time-based access control policies. Other restrictions, such as role entry restrictions, can also be captured. Finally, a description of the system architecture for a secure DBMS is presented.
Signal processing for smart cards
NASA Astrophysics Data System (ADS)
Quisquater, Jean-Jacques; Samyde, David
2003-06-01
In 1998, Paul Kocher showed that when a smart card computes cryptographic algorithms, for signatures or encryption, its power consumption or its radiation leaks information. The keys or the secrets hidden in the card can then be recovered using a differential measurement based on the intercorrelation function. A lot of silicon manufacturers use desynchronization countermeasures to defeat power analysis. In this article we detail a new resynchronization technique. This method can be used to facilitate the use of a neural network to perform code recognition. It becomes possible to reverse engineer a software code automatically. Using data and clock separation methods, we show how to optimize the synchronization using signal processing. We then compare these methods with watermarking methods for 1D and 2D signals. The latest watermarking detection improvements can be applied to signal processing for smart cards with very few modifications. Bayesian processing is one of the best ways to perform Differential Power Analysis, and it is possible to extract a PIN code from a smart card with very few samples. This article therefore shows the need to continue to set up effective countermeasures for cryptographic processors. Although the idea of using advanced signal processing operators has been commonly known for a long time, no publication shows what results can be obtained. The main idea of differential measurement is to use the cross-correlation of two random variables and to repeat consumption measurements on the processor to be analyzed. We use two processors clocked at the same external frequency and computing the same data. The applications of our design are numerous. Two measurements provide the inputs of a central operator. With the most accurate operator we can improve the signal-to-noise ratio, re-synchronize the acquisition clock with the internal one, or remove jitter. The analysis based on consumption or electromagnetic measurements can be improved using our structure. At first sight the same results could be obtained with only one smart card, but this is not quite true because the statistical properties of the signal are not the same. As the two smart cards are subjected to the same external noise during the measurement, it is easier to reduce the influence of perturbations. This paper shows the importance of accurate countermeasures against differential analysis.
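The resynchronization-by-cross-correlation idea at the heart of this kind of differential measurement can be sketched as follows: find the lag that maximizes the cross-correlation between a reference trace and a delayed, noisy trace, then shift the trace by that lag. This is only a generic illustration with synthetic traces; the authors' two-processor setup and their neural-network code recognition are not reproduced.

```python
import numpy as np

def align_traces(ref: np.ndarray, trace: np.ndarray) -> np.ndarray:
    """Resynchronize `trace` against `ref` by the lag that maximizes their
    cross-correlation, then shift the trace accordingly."""
    r = ref - ref.mean()
    t = trace - trace.mean()
    xc = np.correlate(t, r, mode="full")            # lags -(N-1) .. +(N-1)
    lag = np.argmax(xc) - (len(r) - 1)
    return np.roll(trace, -lag)

# toy usage: a trace delayed by 25 samples plus noise is realigned
rng = np.random.default_rng(1)
ref = rng.normal(0, 1, 1000)
delayed = np.roll(ref, 25) + rng.normal(0, 0.1, 1000)
aligned = align_traces(ref, delayed)
print(np.corrcoef(ref, aligned)[0, 1])              # close to 1 after alignment
```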
AAS Publishing News: Preparing Your Manuscript Just Got Easier
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-03-01
Are you an astronomer considering submitting a paper to an AAS journal (i.e., AJ, ApJ, ApJ Letters, or ApJ Supplements)? If so, this post is for you! Read on to find out about the exciting new things you can do with the AAS's newest LaTeX class file, available for download now. Why the update? AAS publishing has maintained a consistent class file for LaTeX manuscript preparation for the past decade. But academic publishing is changing rapidly in today's era of electronic journals! Since its journals went fully electronic, the AAS has been continuously adding new publishing capabilities based on the recommendations of the Journals Task Force and the needs and requests of AAS authors. The AAS's manuscript preparation tools are now being updated accordingly. What's new in AASTeX 6.0? There are many exciting new features and capabilities in AASTeX 6.0. Here are just a few. Tracking options for author revisions include \added{text}, \deleted{text}, \replaced{old}{new}, and \explain{text}. Based on emulateapj: Do you use the popular class file emulateapj to prepare your manuscripts? AASTeX 6.0 is based on emulateapj, rather than on the older AASTeX 5.2 (though 5.2 is still supported). This means that it is easy to produce a double-columned, single-spaced, and astro-ph-ready manuscript. Since two thirds of the AAS journals' authors use emulateapj, this transition was designed to make manuscript preparation and sharing an easier and more seamless process. Tools for collaborations: Do you work in a large collaboration? AASTeX now includes new tools to make preparing a manuscript within a collaboration easier. Drafts can now be watermarked to differentiate between versions (e.g., using the command \watermark{DRAFT, v2}). New markup for large author lists streamlines the display so that readers can access article information immediately, yet they can still access the full author list and affiliations at the end of the paper. And author revision markup allows members of a collaboration to track their edits within a manuscript, for clearer organization of versions and edits. Additional figure support: Do you have a lot of similar figures that you'd like associated with the electronic journal article but that don't all need to be included in the article PDF? New support is now available for figure sets, which allow readers efficient access to the full set of images (downloadable, for example, as a high-resolution .tar.gz set or as PowerPoint slides) without slowing down their ability to read your article. In addition, AASTeX 6.0 now offers new markup for displaying figures in a grid, providing authors with more control over figure placement. New features for tables: Do you frequently work with large data tables? You might be especially happy with the changes in table handling in AASTeX 6.0. Now you can automatically number columns, hide columns with a single command, specify math mode automatically for a designated column, control decimal alignment, and even split wide tables into multiple parts. Software citation support: Do you want to cite software and third-party repositories within your articles? With AASTeX 6.0, there's now a \software command that can be used to highlight and link to software that you used in your work.
In addition, the ApJ BibTeX style file has been updated to support software citation. Where can you get more information? Learn more about AASTeX 6.0, watch a video presentation about AASTeX 6.0 by AAS Data Scientist Greg Schwarz, or download AASTeX 6.0. Wishing for still more improvements? The AAS publishing team would love your input! You can contact them at aastex-help@aas.org with additional suggestions or ideas for the next iteration of AASTeX.
Strict integrity control of biomedical images
NASA Astrophysics Data System (ADS)
Coatrieux, Gouenou; Maitre, Henri; Sankur, Bulent
2001-08-01
The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
Methods for identification of images acquired with digital cameras
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki
2001-02-01
From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
How a central bank perceives the (visual) communication of security features on its banknotes
NASA Astrophysics Data System (ADS)
Tornare, Roland
1998-04-01
The banknotes of earlier generations were protected by two or three security features with which the general public was familiar: watermark, security thread, intaglio printing. The remaining features pleased primarily printers and central banks, with little thought being given to public perception. The philosophy adopted two decades ago was based on a certain measure of discretion. It required patience and perseverance to discover the built-in security features of the banknotes. When colour photocopiers appeared on the scene in the mid-eighties we were compelled to take precautionary measures to protect our banknotes. One such measure consisted of an information campaign to prepare ourselves for this new potential threat. At this point, we actually became fully aware of the complex design of our banknotes and how difficult it is to communicate clearly the difference between a genuine and a counterfeit banknote. This difficult experience has nevertheless been a great benefit. It badgered us continually during the initial phase of designing the banknotes and preparing the information campaign.
Floating call boys and agile homosexuals: homophobia/Venice/history.
Champagne, John
2014-01-01
Because works of nonfiction are always composed of literary tropes and metaphors, they have to be read critically for the ways in which their truth claims are potentially structured by ideologies and stereotypes. This essay reads passages from Richard Sennett's sociological analysis Flesh and Stone, The Body and the City in Western Civilization and Joseph Brodsky's memoir Watermark in order to demonstrate how these alleged works of nonfiction shore up some dishearteningly familiar literary stereotypes of male homosexuality and participate in a tradition, dating from the 19th century, of linking the city of Venice with homosexuality and death.
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, monitoring cameras for security have been increasing extensively. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that considers privacy protection. We state requirements for monitoring systems in this framework. We propose a possible implementation that satisfies the requirements. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes the objects that are unrecognizable or invisible. We also introduce in this paper a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by a limited number of users when necessary for crime investigation, etc. The special viewer allows us to choose objects to be decoded and displayed. Moreover, in this proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
Scanning-time evaluation of Digimarc Barcode
NASA Astrophysics Data System (ADS)
Gerlach, Rebecca; Pinard, Dan; Weaver, Matt; Alattar, Adnan
2015-03-01
This paper presents a speed comparison between the use of Digimarc® Barcodes and the Universal Product Code (UPC) for customer checkout at the point of sale (POS). The recently introduced Digimarc Barcode promises to increase the speed of scanning packaged goods at POS. When this increase is exploited by workforce optimization systems, the retail industry could potentially save billions of dollars. The Digimarc Barcode is based on Digimarc's watermarking technology; it is imperceptible, very robust, and does not require any special ink, material, or printing processes. Using an image-based scanner, a checker can quickly scan consumer packaged goods (CPG) embedded with the Digimarc Barcode without the need to reorient the packages with respect to the scanner. Faster scanning of packages saves money and enhances customer satisfaction. It reduces the length of the queues at checkout, reduces the cost of cashier labor, and makes self-checkout more convenient. This paper quantifies the increase in POS scanning rates resulting from the use of the Digimarc Barcode versus the traditional UPC. It explains the testing methodology, describes the experimental setup, and analyzes the obtained results. It concludes that the Digimarc Barcode increases the number of items per minute (IPM) scanned by at least 50% over the traditional UPC.
Secure public cloud platform for medical images sharing.
Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas
2015-01-01
Cloud computing promises medical imaging services offering large storage and computing capabilities for limited costs. In this data outsourcing framework, one of the greatest issues to deal with is data security. To do so, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy so as to control various security mechanisms. This policy stands on a risk assessment we conducted so as to identify security objectives with a special interest for digital content protection. These objectives are addressed by means of different security mechanisms like access and usage control policy, partial-encryption and watermarking.
NASA Astrophysics Data System (ADS)
Damera-Venkata, Niranjan; Yen, Jonathan
2003-01-01
The visually significant two-dimensional barcode (VSB) developed by Shaked et al. is a method for designing an information-carrying two-dimensional barcode that has the appearance of a given graphical entity, such as a company logo. The encoding and decoding of information using the VSB uses a base image with very few gray levels (typically only two). This typically requires the image histogram to be bi-modal. For continuous-tone images such as digital photographs of individuals, the representation of tone or "shades of gray" is not only important to obtain a pleasing rendition of the face; in most cases, the VSB renders these images unrecognizable due to its inability to represent true gray-tone variations. This paper extends the concept of a VSB to an image bar code (IBC). We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images such as those acquired with a digital camera. The encoding-decoding process is modeled as robust data transmission through a noisy print-scan channel that is explicitly modeled. The IBC supports a high information capacity that differentiates it from common hardcopy watermarks. The reason for the improved image quality over the VSB is a joint encoding/halftoning strategy based on a modified version of block error diffusion. Encoder stability, image quality vs. information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.
Quantin, Catherine; Coatrieux, Gouenou; Allaert, François André; Fassa, Maniane; Bourquard, Karima; Boire, Jean-Yves; de Vlieger, Paul; Maigne, Lydia; Breton, Vincent
2009-01-01
The main problem for health professionals and patients in accessing information is that this information is very often distributed over many medical records and locations. This problem is particularly acute in cancerology because patients may be treated for many years and undergo a variety of examinations. Recent advances in technology make it feasible to gain access to medical records anywhere and anytime, allowing the physician or the patient to gather information from an “ephemeral electronic patient record”. However, this easy access to data is accompanied by the requirement for improved security (confidentiality, traceability, integrity, ...) and this issue needs to be addressed. In this paper we propose and discuss a decentralised approach based on recent advances in information sharing and protection: Grid technologies and watermarking methodologies. The potential impact of these technologies for oncology is illustrated by the examples of two experimental cases: a cancer surveillance network and a radiotherapy treatment plan. It is expected that the proposed approach will constitute the basis of a future secure “google-like” access to medical records. PMID:19718446
StegoWall: blind statistical detection of hidden data
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Herrigel, Alexander; Rytsar, Yuri B.; Pun, Thierry
2002-04-01
Novel functional possibilities provided by recent data hiding technologies carry with them the danger of uncontrolled (unauthorized) and unlimited information exchange that might be used by people with unfriendly interests. The multimedia industry as well as the research community recognize the urgent necessity for network security and copyright protection, or rather the lack of adequate laws for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for the development of an architecture for blind stochastic hidden data detection in order to prevent unauthorized data exchange. The proposed architecture is called StegoWall; its key aspects are the solid investigation, the deep understanding, and the prediction of possible tendencies in the development of advanced data hiding technologies. The basic idea of our approach is to exploit all information about hidden data statistics to perform its detection based on a stochastic framework. The StegoWall system will be used for four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.
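The StegoWall architecture itself is not specified here in enough detail to reproduce, but one classical instance of blind statistical detection of hidden data is the chi-square attack on LSB replacement, sketched below as an assumed stand-in: full embedding tends to equalize the histogram counts within each pair of values (2k, 2k+1), which a chi-square test can pick up. The synthetic cover image is deliberately given an uneven histogram for the demonstration.

```python
import numpy as np
from scipy.stats import chi2

def lsb_chi_square_pvalue(image: np.ndarray) -> float:
    """Chi-square test on pairs of values (2k, 2k+1): LSB replacement tends to
    equalize the two counts in each pair, which drives the p-value towards 1."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0
    mask = expected > 5                                   # keep well-populated pairs only
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    return float(chi2.sf(stat, df=int(mask.sum()) - 1))

# toy demonstration: a synthetic cover with an uneven histogram vs. a fully embedded stego image
rng = np.random.default_rng(3)
cover = (rng.normal(128, 30, (256, 256)).clip(0, 255) // 3 * 3).astype(np.uint8)
stego = (cover & 0xFE) | rng.integers(0, 2, cover.shape, dtype=np.uint8)
print(lsb_chi_square_pvalue(cover))   # essentially 0: no hidden data suspected
print(lsb_chi_square_pvalue(stego))   # close to 1: LSB payload suspected
```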
NASA Astrophysics Data System (ADS)
Terahama, Yukinori; Takahashi, Yoshiyasu; Suzuki, Shigeru; Kinukawa, Hiroshi
In recent years, maintaining corporate soundness and compliance with the law and corporate ethics has become more significant in the insurance industry, and not only in life insurance. At the same time, the separation of production and distribution is increasing, so compliance at agencies is becoming a more significant problem. We propose a contract procedure for agencies in which product explanation and the application process are separated, designed to protect personal information. The proposed procedure protects the contractor's personal information and supports compliance for contracts by means of background texture watermarks and a redactable signature. We have developed a prototype system of the solution to check its feasibility.
Elia, Nadia; Wager, Elizabeth; Tramèr, Martin R.
2014-01-01
Objective To study journals' responses to a request from the State Medical Association of Rheinland-Pfalz, Germany, to retract 88 articles due to ethical concerns, and to check whether the resulting retractions followed published guidelines. Design Descriptive cross-sectional study. Population 88 articles (18 journals) by the anaesthesiologist Dr. Boldt, that warranted retraction. Method According to the recommendations of the Committee on Publication Ethics, we regarded a retraction as adequate when a retraction notice was published, linked to the retracted article, identified the title and authors of the retracted article in its heading, explained the reason and who took responsibility for the retraction, and when the retracted article was freely accessible and marked using a transparent watermark that preserved original content. Two authors extracted data independently (January 2013) and contacted editors-in-chief and publishers for clarification in cases of inadequate retraction. Results Five articles (6%) fulfilled all criteria for adequate retraction. Nine (10%) were not retracted (no retraction notice published, full text article not marked). 79 (90%) retraction notices were published, 76 (86%) were freely accessible, but only 15 (17%) were complete. 73 (83%) full text articles were marked as retracted, of which 14 (16%) had an opaque watermark hiding parts of the original content, and 11 (13%) had all original content deleted. 59 (67%) retracted articles were freely accessible. One editor-in-chief stated personal problems as a reason for incomplete retractions, eight blamed their publishers. Two publishers cited legal threats from Dr. Boldt's co-authors which prevented them from retracting articles. Conclusion Guidelines for retracting articles are incompletely followed. The role of publishers in the retraction process needs to be clarified and standards are needed on marking retracted articles. It remains unclear who should check that retractions are done properly. Legal safeguards are required to allow retraction of articles against the wishes of authors. PMID:24465744
Earth observations taken from shuttle orbiter Columbia
1995-10-27
STS073-702-051 (27 October 1995) --- Photographed by the crew aboard the Space Shuttle Columbia, this detailed scene of East Caicos Island highlights the shallow tropical waters typical of the Bahamas region. The contrast between the light blue shallow water and dark blue deep water marks a sharp difference (hundreds of meters) in water depth. The shallow marine regions include sandbars and tidal channels (just right of center). The coastline of the island is low and swampy, and is also greatly influenced by the tides. Further offshore, the darker regions in the slightly deeper water mark sea grass and algae beds. This sensitive submarine environment can be mapped from space because the waters are so clear. Chains of clouds forming off islands and headlands mark the downwind direction.
Towards fraud-proof ID documents using multiple data hiding technologies and biometrics
NASA Astrophysics Data System (ADS)
Picard, Justin; Vielhauer, Claus; Thorwirth, Niels
2004-06-01
Identity documents, such as ID cards, passports, and driver's licenses, contain textual information, a portrait of the legitimate holder, and possibly other biometric characteristics such as a fingerprint or handwritten signature. As prices for digital imaging technologies fall, making them more widely available, we have seen an exponential increase in the ease with which documents can be forged and in the number of counterfeiters able to do so effectively. Today, with only limited knowledge of technology and a small amount of money, a counterfeiter can effortlessly replace a photo or modify identity information on a legitimate document to the extent that it is very difficult to differentiate from the original. This paper proposes a virtually fraud-proof ID document based on a combination of three different data hiding technologies: digital watermarking, 2-D bar codes, and Copy Detection Patterns, plus additional biometric protection. As will be shown, the combination of data hiding technologies protects the document against any forgery, in principle without any requirement for other security features. To prevent a genuine document from being used by an illegitimate user, biometric information is also covertly stored in the ID document, to be used for identification at the detector.
Providing integrity and authenticity in DICOM images: a novel approach.
Kobayashi, Luiz Octavio Massato; Furuie, Sergio Shiguemi; Barreto, Paulo Sergio Licciardi Messeder
2009-07-01
The increasing adoption of information systems in healthcare has led to a scenario where patient information security is more and more regarded as a critical issue. Allowing patient information to be in jeopardy may lead to irreparable damage, physically, morally, and socially, to the patient, potentially shaking the credibility of the healthcare institution. Medical images play a crucial role in such a context, given their importance in diagnosis, treatment, and research. Therefore, it is vital to take measures to prevent tampering and to determine their provenance. This demands the adoption of security mechanisms to assure information integrity and authenticity. A number of works have been done in this field, based on two major approaches: use of metadata and use of watermarking. However, there are still limitations to both approaches that must be properly addressed. This paper presents a new method using cryptographic means to improve the trustworthiness of medical images, providing a stronger link between the image and the information on its integrity and authenticity, without compromising image quality for the end user. The use of Digital Imaging and Communications in Medicine (DICOM) structures is also an advantage for ease of development and deployment.
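As a loose sketch of the metadata-style idea of binding integrity and authenticity information to an image without touching its pixels, the snippet below hashes the pixel data together with selected metadata and protects the digest with an HMAC. This is an assumption-laden simplification: the paper works with actual DICOM structures and presumably with asymmetric digital signatures rather than a shared-secret HMAC, and the field names used here are made up.

```python
import hashlib
import hmac
import json

def protect(pixel_data: bytes, metadata: dict, key: bytes) -> dict:
    """Bind a keyed digest to the image: the pixels are untouched, so there is
    no quality loss, and any later change to pixels or metadata is detectable."""
    digest = hashlib.sha256(pixel_data + json.dumps(metadata, sort_keys=True).encode()).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"integrity_digest": digest, "authenticity_tag": tag}

def verify(pixel_data: bytes, metadata: dict, key: bytes, record: dict) -> bool:
    digest = hashlib.sha256(pixel_data + json.dumps(metadata, sort_keys=True).encode()).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, record["authenticity_tag"]) and digest == record["integrity_digest"]

# toy usage (a real deployment would use asymmetric signatures, not a shared secret)
key = b"shared-secret-of-the-institution"
pixels = bytes(bytearray(range(256)) * 10)
meta = {"PatientID": "anon-001", "Modality": "CT"}     # hypothetical fields
record = protect(pixels, meta, key)
assert verify(pixels, meta, key, record)
assert not verify(pixels[:-1] + b"\x00", meta, key, record)   # tampering is detected
```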
Zhong, Kuo; Li, Jiaqi; Liu, Liwang; Van Cleuvenbergen, Stijn; Song, Kai; Clays, Koen
2018-05-04
The colors of photonic crystals are based on their periodic crystalline structure. They show clear advantages over conventional chromophores for many applications, mainly due to their resistance to photobleaching and their responsiveness to stimuli. More specifically, combining colloidal photonic crystals and invisible patterns is important in steganography and watermarking for anticounterfeiting applications. Here, a convenient way to imprint robust invisible patterns in colloidal crystals of hollow silica spheres is presented. While these patterns remain invisible under static environmental humidity, even up to near 100% relative humidity, they are unveiled immediately (≈100 ms) and fully reversibly by a dynamic humid flow, e.g., human breath. They reveal themselves due to the extreme wettability of the patterned (etched) regions, as confirmed by contact angle measurements. The liquid surface tension threshold to induce wetting (revealing the imprinted invisible images) is evaluated by thermodynamic predictions and subsequently verified by exposure to various vapors with different surface tensions. The color of the patterned regions is furthermore independently tuned by vapors with different refractive indices. Such a system can play a key role in applications such as anticounterfeiting, identification, and vapor sensing. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Velonakis, E; Mantas, J; Mavrikakis, I
2006-01-01
Occupational health and safety management is a field of increasing interest. Institutions, in cooperation with enterprises, make synchronized efforts to introduce quality management systems in this field. Computer networks can offer such services via TCP/IP, which is a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network will consist of certain nodes responsible for informing executive persons on occupational health and safety. A web database has been planned for inserting and searching documents, and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts to make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server. We submit files so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The Health and Safety Management System follows ISO 18001; its implementation through the web site is an aim. The whole application is developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the web with other hospitals.
Mavrikakis, I; Mantas, J; Diomidous, M
2007-01-01
This paper is based on research on the possible structure of an information system for the purposes of occupational health and safety management. We initiated a questionnaire in order to gauge the possible interest of potential users in the subject of occupational health and safety. The depiction of this potential interest is vital both for the software analysis cycle and for development according to previous models. The evaluation of the results tends to create pilot applications among different enterprises. Documentation and process improvements, assured quality of services, operational support, and occupational health and safety advice are the basics of the above applications. Communication and codified information among interested parties is the other target of the survey regarding health issues. Computer networks can offer such services. The network will consist of certain nodes responsible for informing executives on occupational health and safety. A web database has been installed for inserting and searching documents. The submission of files to a server and the answers to questionnaires through the web help the experts to perform their activities. Based on the requirements of enterprises, we have constructed a web file server. We submit files so that users can retrieve the files which they need. Access is limited to authorized users. Digital watermarks authenticate and protect digital objects.
Social computing for image matching
Rivas, Alberto; Sánchez-Torres, Ramiro; Rodríguez, Sara
2018-01-01
One of the main technological trends in the last five years is mass data analysis. This trend is due in part to the emergence of concepts such as social networks, which generate a large volume of data that can provide added value through their analysis. This article focuses on a business and employment-oriented social network. More specifically, it focuses on the analysis of information provided by different users in image form. The images are analyzed to detect whether other existing users have posted or talked about the same image, even if the image has undergone some type of modification such as watermarks or color filters. This makes it possible to establish new connections among unknown users by detecting what they are posting or whether they are talking about the same images. The proposed solution consists of an image matching algorithm, which is based on the rapid calculation and comparison of hashes, together with a computationally expensive stage responsible for undoing possible image transformations. As a result, the image matching process is supported by a distributed forecasting system that enables or disables nodes to serve all the possible requests. The proposed system has shown promising results for matching modified images, especially when compared with other existing systems. PMID:29813082
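The article does not spell out its hashing scheme, so the sketch below uses a generic average hash as an assumed stand-in: shrink the image, binarize against the mean, and compare hashes by Hamming distance. Small modifications such as light watermarks or mild color filters typically flip only a few bits, which is what makes hash comparison usable for this kind of matching; the file names are hypothetical.

```python
from PIL import Image
import numpy as np

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """64-bit perceptual hash: shrink, grayscale, threshold against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

def same_image(path_a: str, path_b: str, threshold: int = 10) -> bool:
    """Declare a match when the hashes differ in at most `threshold` of 64 bits."""
    return hamming(average_hash(path_a), average_hash(path_b)) <= threshold

# e.g. same_image("post_original.jpg", "post_with_watermark.jpg")  ->  likely True
```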
A novel key management solution for reinforcing compliance with HIPAA privacy/security regulations.
Lee, Chien-Ding; Ho, Kevin I-J; Lee, Wei-Bin
2011-07-01
Digitizing medical records facilitates the healthcare process. However, it can also cause serious security and privacy problems, which are the major concern in the Health Insurance Portability and Accountability Act (HIPAA). While various conventional encryption mechanisms can solve some aspects of these problems, they cannot address the illegal distribution of decrypted medical images, which violates the regulations defined in the HIPAA. To protect decrypted medical images from being illegally distributed by an authorized staff member, the model proposed in this paper provides a way to integrate several cryptographic mechanisms. In this model, the malicious staff member can be tracked by a watermarked clue. By combining several well-designed cryptographic mechanisms and developing a key management scheme to facilitate the interoperation among these mechanisms, the risk of illegal distribution can be reduced.
Limitations and requirements of content-based multimedia authentication systems
NASA Astrophysics Data System (ADS)
Wu, Chai W.
2001-08-01
Recently, a number of authentication schemes have been proposed for multimedia data such as images and sound data. They include both label based systems and semifragile watermarks. The main requirement for such authentication systems is that minor modifications such as lossy compression which do not alter the content of the data preserve the authenticity of the data, whereas modifications which do modify the content render the data not authentic. These schemes can be classified into two main classes depending on the model of image authentication they are based on. One of the purposes of this paper is to look at some of the advantages and disadvantages of these image authentication schemes and their relationship with fundamental limitations of the underlying model of image authentication. In particular, we study feature-based algorithms which generate an authentication tag based on some inherent features in the image such as the location of edges. The main disadvantage of most proposed feature-based algorithms is that similar images generate similar features, and therefore it is possible for a forger to generate dissimilar images that have the same features. On the other hand, the class of hash-based algorithms utilizes a cryptographic hash function or a digital signature scheme to reduce the data and generate an authentication tag. It inherits the security of digital signatures to thwart forgery attacks. The main disadvantage of hash-based algorithms is that the image needs to be modified in order to be made authenticatable. The amount of modification is on the order of the noise the image can tolerate before it is rendered inauthentic. The other purpose of this paper is to propose a multimedia authentication scheme which combines some of the best features of both classes of algorithms. The proposed scheme utilizes cryptographic hash functions and digital signature schemes and the data does not need to be modified in order to be made authenticatable. Several applications including the authentication of images on CD-ROM and handwritten documents will be discussed.
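The two classes discussed above can be contrasted with a toy sketch: a feature-based tag derived from a coarse edge map (similar images yield the same tag, hence the forgery weakness), versus a hash-based tag over the raw file (any bit change, including benign recompression, breaks it). Both functions are illustrative simplifications, not the scheme proposed in the paper, and the grid size and threshold are arbitrary choices.

```python
from PIL import Image, ImageFilter
import numpy as np
import hashlib

def edge_feature_tag(path: str, grid: int = 16, thresh: int = 32) -> bytes:
    """Feature-based tag: a coarse map of where strong edges occur. Similar
    images produce identical tags, which is exactly the weakness discussed
    above: dissimilar-looking forgeries can be crafted to share the features."""
    edges = Image.open(path).convert("L").filter(ImageFilter.FIND_EDGES)
    cells = np.asarray(edges.resize((grid, grid)), dtype=np.float32)
    feature_bits = (cells > thresh).astype(np.uint8)       # 1 = cell contains strong edges
    return hashlib.sha256(feature_bits.tobytes()).digest()

def hash_based_tag(path: str) -> bytes:
    """Hash-based tag over the raw file: any single-bit change breaks it,
    including benign recompression, which is the opposite trade-off."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# e.g. edge_feature_tag("scan.png") typically stays stable across mild recompression,
# while hash_based_tag("scan.png") does not.
```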
Forensic characterization of camcorded movies: digital cinema vs. celluloid film prints
NASA Astrophysics Data System (ADS)
Rolland-Nevière, Xavier; Chupeau, Bertrand; Doërr, Gwenaël; Blondé, Laurent
2012-03-01
Digital camcording in the premises of cinema theaters is the main source of pirate copies of newly released movies. To trace such recordings, watermarking systems are exploited in order for each projection to be unique and thus identifiable. The forensic analysis to recover these marks is different for digital and legacy cinemas. To avoid running both detectors, a reliable oracle discriminating between cams originating from analog or digital projections is required. This article details a classification framework relying on three complementary features: the spatial uniformity of the screen illumination, the vertical (in)stability of the projected image, and the luminance artifacts due to the interplay between the display and acquisition devices. The system has been tuned with cams captured in a controlled environment and benchmarked against a medium-sized dataset (61 samples) composed of real-life pirate cams. Reported experimental results demonstrate that such a framework yields over 80% classification accuracy.
Multimodal audio guide for museums and exhibitions
NASA Astrophysics Data System (ADS)
Gebbensleben, Sandra; Dittmann, Jana; Vielhauer, Claus
2006-02-01
In our paper we introduce a new Audio Guide concept for exploring buildings, realms and exhibitions. Currently proposed solutions work in most cases with pre-defined devices, which users have to buy or borrow. These systems often go along with complex technical installations and require a great degree of user training for device handling. Furthermore, the activation of audio commentary related to the exhibition objects is typically based on additional components like infrared, radio frequency or GPS technology. Besides requiring the installation of specific devices for user localization, these approaches often support only automatic activation with no or limited user interaction. Therefore, elaboration of alternative concepts appears worthwhile. Motivated by these aspects, we introduce a new concept based on the use of the visitor's own mobile smart phone. The advantages of our approach are twofold: firstly, the Audio Guide can be used in various places without any purchase or extensive installation of additional components in or around the exhibition object. Secondly, visitors can experience the exhibition on individual tours simply by uploading the Audio Guide at a single point of entry, the Audio Guide Service Counter, and keeping it on their personal device. Furthermore, the user is usually quite familiar with the interface of his or her phone and can thus interact with the application easily. Our technical concept makes use of two general ideas for location detection and activation. Firstly, we suggest an enhanced interactive number-based activation exploiting the visual capabilities of modern smart phones, and secondly we outline an active digital audio watermarking approach, where information about objects is transmitted via an analog audio channel.
The growth of optically variable features on banknotes
NASA Astrophysics Data System (ADS)
Lancaster, Ian M.; Mitchell, Astrid
2004-06-01
Public verification features are part of a matrix of security features on banknotes which allow the authenticity of legitimate banknotes to be established. They are characterised by being overt and easy to verify -- no examination tool or equipment is required even though the devices themselves are invariably highly sophisticated. Recent developments, though, combine overt and covert elements which may require inspection tools. Traditionally, banknote issuers were reluctant to involve the general public in the checking of banknotes, preferring to rely on those employed to handle them, experts and machinery to authenticate the various (normally undisclosed) features. This has now changed as the ability to counterfeit has moved from those highly skilled in printing to anyone with a scanner and computer -- the incidence of counterfeiting has grown exponentially in the last decade. Three techniques for what can be categorized as public verification features have been used for banknotes for many decades and continue to provide a barrier to counterfeiting: (1) the optical effects of watermarks; (2) the appearance and tactile characteristics of cylinder mould-made paper; (3) the tactile characteristics of intaglio print. Since the 1980s the emergence of threads and optically variable features has added to the available features which can be utilized on banknotes for public verification purposes. OVDs fall broadly into the two categories of diffraction and color shift. Products which utilize the former include holograms, kinegrams and other devices originated with similar techniques and bearing a variety of proprietary names, but collectively known as diffractive optically variable image devices (DOVIDs). All share the fundamental characteristic of changing in appearance according to the viewing angle, providing an effective barrier to the increasingly common use of digital reprographic technology as a counterfeiting tool as well as a simple means for verification by the man on the street. They differ, however, in the underlying technology, their specific characteristics and their functionality, which is what this paper will examine, along with their growth and usage on the world's currencies and the current technical developments and trends which are likely to affect their role in the future.
Van Andel, Tinde; Scholman, Ariane; Beumer, Mieke
2018-04-27
From 1640-1796, the Dutch East India Company (VOC) occupied the island of Ceylon (now Sri Lanka). Several VOC officers had a keen interest in the medicinal application of the local flora. The Leiden University Library holds a two-piece codex entitled: Icones Plantarum Malabaricarum, adscriptis nominibus et viribus, Vol. I. & II. (Illustrations of Plants from the Malabar, assigned names and strength). This manuscript contains 262 watercolour drawings of medicinal plants from Sri Lanka, with handwritten descriptions of local names, habitus, medicinal properties and therapeutic applications. This anonymous document had never been studied previously. To identify all depicted plant specimens, decipher the text, trace the author, and analyse the scientific relevance of this manuscript as well as its importance for Sri Lankan ethnobotany. We digitised the entire manuscript, transcribed and translated the handwritten Dutch texts and identified the depicted species using historic and modern literature, herbarium vouchers, online databases on Sri Lankan herbal medicine and 41 botanical drawings by the same artist in the Artis library, Amsterdam. We traced the origin of the manuscript by means of watermark analysis and historical literature. We compared the historic Sinhalese and Tamil names in the manuscript to recent plant names in ethnobotanical references from Sri Lanka and southern India. We published the entire manuscript online with translations and identifications. The watermarks indicate that the paper was made between 1694 and 1718. The handwriting is of a VOC scribe. In total, ca. 252 taxa are depicted, of which we could identify 221 to species level. The drawings represent mainly native species, including Sri Lankan endemics, but also introduced medicinal and ornamental plants. Lamiaceae, Zingiberaceae and Leguminosae were the best-represented families. Frequently mentioned applications were to purify the blood and to treat gastro-intestinal problems, fever and snakebites. Many plants are characterised by their humoral properties, of which 'warming' is the most prevalent. Plant species were mostly used for their roots (28%), bark (16%) or leaves (11%). More Tamil names (260) were documented than Sinhalese (208). More than half of the Tamil names and 36% of the Sinhalese names are still used today. The author was probably a VOC surgeon based in northern Sri Lanka, who travelled around the island to document medicinal plant use. Less than half of the species were previously documented from Ceylon by the famous VOC doctor and botanist Paul Hermann in the 1670s. Further archival research is needed to identify the maker of this manuscript. Although the maker of this early 18th century manuscript remains unknown, the detailed, 300-year-old information on medicinal plant use in the Icones Plantarum Malabaricarum represents an important ethnobotanical treasure for Sri Lanka, which offers ample opportunities to study changes and continuation of medicinal plant names and practices over time. Copyright © 2018 Elsevier B.V. All rights reserved.
Sharing Health Big Data for Research - A Design by Use Cases: The INSHARE Platform Approach.
Bouzillé, Guillaume; Westerlynck, Richard; Defossez, Gautier; Bouslimi, Dalel; Bayat, Sahar; Riou, Christine; Busnel, Yann; Le Guillou, Clara; Cauvin, Jean-Michel; Jacquelinet, Christian; Pladys, Patrick; Oger, Emmanuel; Stindel, Eric; Ingrand, Pierre; Coatrieux, Gouenou; Cuggia, Marc
2017-01-01
Sharing and exploiting Health Big Data (HBD) allows several challenges to be tackled. Data protection and governance, taking into account legal, ethical, and deontological aspects, enable trust and a transparent, win-win relationship between researchers, citizens, and data providers. Lack of interoperability must also be addressed, since data are compartmentalized and syntactically/semantically heterogeneous. The INSHARE project explores, through an experimental proof of concept, how recent technologies overcome such issues. With six data providers, the platform is designed in three steps: (1) analyze use cases, needs, and requirements; (2) define data sharing governance and secure access to the platform; and (3) define platform specifications. Three use cases - from 5 studies and 11 data sources - were analyzed for the platform design. Governance was derived from the SCANNER model and adapted to data sharing. The platform architecture integrates: data repository and hosting, semantic integration services, data processing, aggregate computing, data quality and integrity monitoring, Id linking, a multisource query builder, visualization and data export services, data governance, a study management service, and security including data watermarking.
NASA Astrophysics Data System (ADS)
Yamada, Takayuki; Gohshi, Seiichi; Echizen, Isao
A method is described to prevent images and videos displayed on screens from being re-shot by digital cameras and camcorders. Conventional methods using digital watermarking for re-shooting prevention embed content IDs into images and videos, and they help to identify the place and time where the actual content was shot. However, these methods do not actually prevent digital content from being re-shot by camcorders. We developed countermeasures to stop re-shooting by exploiting the differences between the sensory characteristics of humans and devices. The countermeasures require no additional functions on user-side devices. They use infrared light (IR) to corrupt the content recorded by CCD or CMOS sensors. In this way, re-shot content will be unusable. To validate the method, we developed a prototype system and implemented it on a 100-inch cinema screen. Experimental evaluations showed that the method effectively prevents re-shooting.
Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar
2016-10-01
The identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. This work presents a novel, accurate method for generating a unique identification code for fundus images for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is considered for generation of the unique identification code for the digital fundus images. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification is tested on the publicly available DRIVE and MESSIDOR databases of fundus images and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of patient identity from the identification code for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
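Since nothing is embedded, the identification code can be thought of as a digest of the vessel pattern and the patient ID. The sketch below simply hashes their concatenation; the actual strategic combination used by the authors is not described in the abstract, so this rule, the synthetic mask, and the patient ID format are all assumptions.

```python
import hashlib
import numpy as np

def identification_code(vessel_mask: np.ndarray, patient_id: str) -> str:
    """Identification code from the segmented vessel pattern near the optic disc
    plus the patient ID. Nothing is embedded, so the medical image is untouched."""
    payload = vessel_mask.astype(np.uint8).tobytes() + patient_id.encode()
    return hashlib.sha256(payload).hexdigest()

# toy usage with a synthetic "segmented vessel" mask (a real system would use the
# actual vessel segmentation produced from the fundus image)
rng = np.random.default_rng(5)
mask = rng.random((64, 64)) > 0.9
code = identification_code(mask, "PATIENT-2024-0042")
assert code == identification_code(mask, "PATIENT-2024-0042")      # reproducible
assert code != identification_code(mask, "PATIENT-2024-0043")      # ID-specific
```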
NASA Astrophysics Data System (ADS)
Ignat, V.
2016-08-01
Advanced industrial countries are affected by technology theft. German industry annually loses more than 50 billion euros. The main causes are industrial espionage and the fraudulent copying of patents and industrial products. Many Asian countries profit from this, saving up to 65% of production costs. Most affected are small and medium-sized enterprises, which do not have sufficient economic power to assert themselves against some powerful countries. International organizations, such as Interpol and the World Customs Organization (WCO), work together to combat international economic crime. Protection can be achieved by registering patents or by specific technical methods for recognizing product originality. More suitable protection measures have been developed, such as holograms, magnetic stripes, barcodes, CE marking, digital watermarks, DNA or nano-technologies, security labels, radio frequency identification, micro color codes, matrix codes, and cryptographic encodings. The automotive industry has developed the "Manufacturers against Product Piracy" scheme. A sticker on the package marks original products and uses a verifiable Data Matrix barcode. The code can be captured with a smartphone camera. The smartphone is connected via the Internet to a database where the identification numbers of the original parts are stored.
On some dynamical chameleon systems
NASA Astrophysics Data System (ADS)
Burkin, I. M.; Kuznetsova, O. I.
2018-03-01
It is now well known that dynamical systems can be categorized into systems with self-excited attractors and systems with hidden attractors. A self-excited attractor has a basin of attraction that is associated with an unstable equilibrium, while a hidden attractor has a basin of attraction that does not intersect with small neighborhoods of any equilibrium points. Hidden attractors play an important role in engineering applications because they allow unexpected and potentially disastrous responses to perturbations in a structure like a bridge or an airplane wing. In addition, the complex behaviors of chaotic systems have been applied in various areas, from image watermarking, audio encryption schemes, asymmetric color pathological image encryption, and chaotic masking communication to random number generation. Recently, researchers have discovered the so-called “chameleon systems”. These systems were so named because they demonstrate self-excited or hidden oscillations depending on the value of their parameters. The present paper offers a simple algorithm for synthesizing one-parameter chameleon systems. The authors trace the evolution of the Lyapunov exponents and the Kaplan-Yorke dimension of such systems as parameters change.
An efficient reversible privacy-preserving data mining technology over data streams.
Lin, Chen-Yi; Kao, Yuan-Hung; Lee, Wei-Bin; Chen, Rong-Chang
2016-01-01
With the popularity of smart handheld devices and the emergence of cloud computing, users and companies can save various data, which may contain private data, to the cloud. Topics relating to data security have therefore received much attention. This study focuses on data stream environments and uses the concept of a sliding window to design a reversible privacy-preserving technology to process continuous data in real time, known as a continuous reversible privacy-preserving (CRP) algorithm. Data with CRP algorithm protection can be accurately recovered through a data recovery process. In addition, by using an embedded watermark, the integrity of the data can be verified. The results from the experiments show that, compared to existing algorithms, CRP is better at preserving knowledge and is more effective in terms of reducing information loss and privacy disclosure risk. In addition, it takes far less time for CRP to process continuous data than existing algorithms. As a result, CRP is confirmed as suitable for data stream environments and fulfills the requirements of being lightweight and energy-efficient for smart handheld devices.
Color pictorial serpentine halftone for secure embedded data
NASA Astrophysics Data System (ADS)
Curry, Douglas N.
1998-04-01
This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage, and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human readable text patterns, viewable with a magnifying glass, and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0 - 9) or alphanumerics (a - z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images which can be readily observed in the original image, and which conveniently disappear upon copying. We have also embedded data blocks with self-clocking codes and error correction data which are machine-readable. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.
NASA Astrophysics Data System (ADS)
Yun, Sang Geun; Lee, Jin Young; Yang, Young Soo; Shin, Seung Wook; Lee, Sung Jae; Kwon, Hyo Young; Cho, Youn Jin; Choi, Seung Jib; Choi, Sang Jun; Kim, Jong Seob; Chang, Tuwon
2010-04-01
A topcoat material plays a significant role in achieving technology nodes below 45 nm via ArF immersion lithography. Switching the exposure medium between the lens and the photoresist (PR) film from gas (air, n=1) to liquid (H2O, n=1.44) may lead to leaching of the polymer, the photoacid generator (PAG), or the solvent. These substances can contaminate the lens or cause bubbles, which can lead to defects during the patterning. Previously reported topcoat materials mainly use hydrophobic fluoro-compounds and carboxylic acids to provide high dissolution rates (DR) to basic developers as well as high receding contact angles (RCA). Recently, the demand for a new top-coat material has risen since current materials cause water-mark defects and decreases in scan speeds, due to insufficient RCA's. However, RCA and DR are in a trade-off relationship as an increase in RCA generally results in a lower DR. To overcome this, a novel polymer with high-fluorine content was synthesized to produce a topcoat material with improved DR (120 nm/s in 2.38 wt% TMAH) and RCA (>70°). In addition, a strategy to control the pattern profile according to needs of customers was found.
Automatic source camera identification using the intrinsic lens radial distortion
NASA Astrophysics Data System (ADS)
Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.
2006-11-01
Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
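The classification stage described above lends itself to a short illustration. The following is a minimal Python sketch, assuming each image has already been reduced to a small vector of radial distortion parameters (for example, k1 and k2 of a polynomial lens model); the feature values, camera labels, and SVM settings are placeholders, not the authors' pipeline.

# Sketch of the classification stage only (not the aberration measurement):
# assumes each image has been reduced to a feature vector of radial distortion
# parameters, e.g. (k1, k2) from r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4).
# Feature values and camera labels below are illustrative placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_cameras, images_per_camera = 5, 40

# Simulate distortion parameters clustered per camera (placeholder data).
centers = rng.normal(0.0, 0.2, size=(n_cameras, 2))
X = np.vstack([c + rng.normal(0, 0.02, size=(images_per_camera, 2)) for c in centers])
y = np.repeat(np.arange(n_cameras), images_per_camera)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))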
Brittain, Harry G
2016-01-01
Through the combined use of infrared (IR) absorption spectroscopy and attenuated total reflectance (ATR) sampling, the composition of inks used to print the many different types of one-cent Benjamin Franklin stamps of the 19th century has been established. This information permits a historical evaluation of the formulations used at various times, and also facilitates the differentiation of the various stamps from each other. In two instances, the ink composition permits the unambiguous identification of stamps whose appearance is identical, and which (until now) have only been differentiated through estimates of the degree of hardness or softness of the stamp paper, or through the presence or absence of a watermark in the paper. In these instances, the use of ATR Fourier transform infrared (FT-IR) spectroscopy effectively renders irrelevant two 100-year-old practices of stamp identification. Furthermore, since the use of ATR sampling makes it possible to obtain the spectrum of a stamp still attached to its cover, it is no longer necessary to identify these blue Franklin stamps using their cancellation dates. © The Author(s) 2015.
Authenticity assessment of banknotes using portable near infrared spectrometer and chemometrics.
da Silva Oliveira, Vanessa; Honorato, Ricardo Saldanha; Honorato, Fernanda Araújo; Pereira, Claudete Fernandes
2018-05-01
Spectra recorded using a portable near infrared (NIR) spectrometer, together with Soft Independent Modeling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) associated with the Successive Projections Algorithm (SPA), were applied to identify counterfeit and authentic Brazilian Real (R$20, R$50 and R$100) banknotes, enabling a simple field analysis. NIR spectra (950-1650 nm) were recorded from seven different areas of the banknotes (two with fluorescent ink, one over the watermark, three with the intaglio printing process, and one over the serial numbers with typography printing). SIMCA and SPA-LDA models were built using 1st derivative preprocessed spectral data from one of the intaglio areas. For the SIMCA models, all authentic (300) banknotes were correctly classified and the counterfeits (227) were not classified. For the two-class SPA-LDA models (authentic and counterfeit currencies), all the test samples were correctly classified into their respective class. The number of variables selected by SPA varied from two to nineteen for the R$20, R$50 and R$100 currencies. These results show that the use of the portable near-infrared spectrometer with SIMCA or SPA-LDA models can be a completely effective, fast, and non-destructive way to identify the authenticity of banknotes as well as permitting field analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
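As a rough illustration of the modeling step, the following Python sketch applies first-derivative preprocessing to synthetic NIR spectra and fits an LDA classifier; the SPA variable-selection stage and the SIMCA models are not reproduced, and all spectra, class structure, and parameters are placeholders rather than the study's data.

# Minimal sketch of the classification stage: 1st-derivative preprocessing of
# NIR spectra followed by LDA. The SPA variable-selection step is not
# reproduced here; spectra and labels are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.linspace(950, 1650, 128)          # nm, matching the recorded range
n_per_class = 60

def simulate(shift):
    # Toy absorbance band whose position differs slightly between classes.
    base = np.exp(-((wavelengths - (1200 + shift)) / 80.0) ** 2)
    return base + rng.normal(0, 0.01, size=(n_per_class, wavelengths.size))

X = np.vstack([simulate(0.0), simulate(15.0)])     # authentic vs counterfeit
y = np.array([0] * n_per_class + [1] * n_per_class)

X_d1 = np.gradient(X, wavelengths, axis=1)         # 1st-derivative spectra
X_tr, X_te, y_tr, y_te = train_test_split(X_d1, y, test_size=0.3, random_state=1)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("classification accuracy:", lda.score(X_te, y_te))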
NASA Astrophysics Data System (ADS)
Monecke, K.; Beitel, J.; Moran, K.; Moore, A.
2006-12-01
Ban Talae Nok, a village on the Andaman shoreline of Thailand, was hit by the December 26, 2004, tsunami with wave heights up to ~13 meters. Eyewitnesses reported the passage of four to five waves with the second being the largest, followed by the third and fourth waves. The tsunami flooded an area with open grassy fields, small cashew nut plantations and a wetland within a local swale. The wave stopped against hills ~500 m from the shoreline, where watermarks still indicate a flow depth of approximately 1 m. Erosion at the beach is marked by a ~30 cm high scarp cutting a former gravelly beach trail ~60 m inshore of the present shoreline. Deposition of tsunami sand started behind the former beach trail at 80 m inshore. The tsunami deposit changes significantly in thickness and composition along a flow parallel transect that was measured and sampled within this study. The most seaward deposit is about 10 cm thick and consists of three distinct layers that show internal as well as overall normal grading from coarse sand into fine sand. The coarse base contains gravels from the old beach trail and shell fragments. Locally, cross stratification is visible at the top. Farther landward the deposit thins and only one normally graded layer is visible. Behind a small ridge where the wetland begins, the tsunami sediments again reveal three normally graded layers with shell- rich, medium-coarse sand grading into brown-gray mud, probably eroded from the wetland. This deposit thins farther inland from ~30 cm to 8 cm and consists of only one layer. The thickest deposit along the transect is 125 cm thick and can be found in a low at the landward end of the wetland. It consists of normally graded coarse to fine sand with rip-up clasts at the base and climbing ripples in the middle of the deposit section. In the adjacent grassy field the deposit is up to 40 cm thick and consists of medium to coarse sand with shell fragments grading into fine to medium sands, which continue to the foot of the hills.
Banknotes and unattended cash transactions
NASA Astrophysics Data System (ADS)
Bernardini, Ronald R.
2000-04-01
There is a 64 billion dollar annual unattended cash transaction business in the US with 10 to 20 million daily transactions. Even small problems with the machine readability of banknotes can quickly become a major problem to the machine manufacturer and consumer. Traditional note designs incorporate overt security features for visual validation by the public. Many of these features, such as fine line engraving, microprinting and watermarks, are unsuitable as machine readable features in low cost note acceptors. Current machine readable features, mostly covert, were designed and implemented with the central banks in mind. These features are only usable by the banks' large, high speed currency sorting and validation equipment. New note designs should consider and provide for low cost note acceptors, implementing features developed for inexpensive sensing technologies. Machine readable features are only as good as their consistency. The quality of security features, as well as that of the overall printing process, must be maintained to ensure reliable and secure operation of note readers. Variations in printing and in the components used to make the note are one of the major causes of poor performance in low cost note acceptors. The involvement of machine manufacturers in new currency designs will aid note producers in the design of a note that is machine friendly, helping to secure the acceptance of the note by the public as well as acting as a deterrent to fraud.
Dermatological image search engines on the Internet: do they work?
Cutrone, M; Grimalt, R
2007-02-01
Atlases on CD-ROM were the first substitute for paediatric dermatology atlases printed on paper. They permitted a faster search and a practical comparison of differential diagnoses. The third step in the evolution of clinical atlases was the onset of the online atlas. Many doctors now use Internet image search engines to obtain clinical images directly. The aim of this study was to test the reliability of the image search engines compared to the online atlases. We tested seven Internet image search engines with three paediatric dermatology diseases. In general, the service offered by the search engines is good, and continues to be free of charge. The match between what we searched for and what we found was generally excellent, and the results contained no advertisements. Most Internet search engines provided similar results but some were more user friendly than others. It is not necessary to repeat the same search with Picsearch, Lycos and MSN, as the response would be the same; there is a possibility that they might share software. Image search engines are a useful, free and precise method to obtain paediatric dermatology images for teaching purposes. There is still the matter of copyright to be resolved. What are the legal uses of these 'free' images? How do we define 'teaching purposes'? New watermark methods and encrypted electronic signatures might solve these problems and answer these questions.
Abundance and distribution of the highly iterated palindrome 1 (HIP1) among prokaryotes.
Delaye, Luis; Moya, Andrés
2011-09-01
We have studied the abundance and phylogenetic distribution of the Highly Iterated Palindrome 1 (HIP1) among sequenced prokaryotic genomes. We show that an overrepresentation of HIP1 is exclusive to some lineages of cyanobacteria, and that this abundance was gained only once during evolution and was subsequently lost in the lineage leading to marine pico-cyanobacteria. We show that among cyanobacterial protein sequences with annotated Pfam domains, only OpcA (glucose 6-phosphate dehydrogenase assembly protein) has a phylogenetic distribution fully matching HIP1 abundance, suggesting a functional relationship; we also show that DAM methylase (an enzyme that has the four central nucleotides of HIP1 as its site of action) is present in all cyanobacterial genomes (independently of their HIP1 content) with the exception of marine pico-cyanobacteria, which might have lost this enzyme during the process of genome reduction. Our analyses also show that in some prokaryotic lineages (particularly in those species with large genomes), HIP1 is unevenly distributed between coding and non-coding DNA, being more common in coding regions, with the exception of Cyanobacteria Yellowstone B' and Synechococcus elongatus, where the reverse pattern is true. Finally, we explore the hypothesis that HIP1 can be used as a molecular "water-mark" to identify genes horizontally transferred from cyanobacteria to other species.
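A census of this kind can be illustrated with a short Python sketch. It assumes HIP1 is the octamer GCGATCGC (as commonly reported) and that annotated gene coordinates are available; the genome string and coordinates below are toy placeholders, not data from the study.

# Sketch of a HIP1 census over a genome, comparing coding and non-coding
# densities. HIP1 is assumed to be the octamer GCGATCGC; the sequence and
# gene coordinates below are placeholders for a real annotated genome.
HIP1 = "GCGATCGC"

def count_overlapping(seq, motif=HIP1):
    count, start = 0, 0
    while True:
        idx = seq.find(motif, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1          # allow overlapping hits

def coding_vs_noncoding(genome, gene_coords):
    """gene_coords: list of (start, end) 0-based half-open intervals."""
    coding = "".join(genome[s:e] for s, e in gene_coords)
    mask = bytearray(len(genome))
    for s, e in gene_coords:
        mask[s:e] = b"\x01" * (e - s)
    noncoding = "".join(ch for ch, m in zip(genome, mask) if not m)
    def density(seq):            # hits per kb
        return 1000.0 * count_overlapping(seq) / max(len(seq), 1)
    return density(coding), density(noncoding)

genome = "ATG" + HIP1 + "CCGT" + HIP1 + "TTAA" + HIP1 * 2 + "GGC"
print(coding_vs_noncoding(genome, [(0, 15)]))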
NASA Astrophysics Data System (ADS)
Collins, W. D.; Wehner, M. F.; Prabhat, M.; Kurth, T.; Satish, N.; Mitliagkas, I.; Zhang, J.; Racah, E.; Patwary, M.; Sundaram, N.; Dubey, P.
2017-12-01
Anthropogenically-forced climate changes in the number and character of extreme storms have the potential to significantly impact human and natural systems. Current high-performance computing enables multidecadal simulations with global climate models at resolutions of 25km or finer. Such high-resolution simulations are demonstrably superior in simulating extreme storms such as tropical cyclones than the coarser simulations available in the Coupled Model Intercomparison Project (CMIP5) and provide the capability to more credibly project future changes in extreme storm statistics and properties. The identification and tracking of storms in the voluminous model output is very challenging as it is impractical to manually identify storms due to the enormous size of the datasets, and therefore automated procedures are used. Traditionally, these procedures are based on a multi-variate set of physical conditions based on known properties of the class of storms in question. In recent years, we have successfully demonstrated that Deep Learning produces state of the art results for pattern detection in climate data. We have developed supervised and semi-supervised convolutional architectures for detecting and localizing tropical cyclones, extra-tropical cyclones and atmospheric rivers in simulation data. One of the primary challenges in the applicability of Deep Learning to climate data is in the expensive training phase. Typical networks may take days to converge on 10GB-sized datasets, while the climate science community has ready access to O(10 TB)-O(PB) sized datasets. In this work, we present the most scalable implementation of Deep Learning to date. We successfully scale a unified, semi-supervised convolutional architecture on all of the Cori Phase II supercomputer at NERSC. We use IntelCaffe, MKL and MLSL libraries. We have optimized single node MKL libraries to obtain 1-4 TF on single KNL nodes. We have developed a novel hybrid parameter update strategy to improve scaling to 9600 KNL nodes (600,000 cores). We obtain 15PF performance over the course of the training run; setting a new watermark for the HPC and Deep Learning communities. This talk will share insights on how to obtain this extreme level of performance, current gaps/challenges and implications for the climate science community.
Napolitano, E.; Fusco, F; Baum, Rex L.; Godt, Jonathan W.; De Vita, P.
2016-01-01
Mountainous areas surrounding the Campanian Plain and the Somma-Vesuvius volcano (southern Italy) are among the most risky areas of Italy due to the repeated occurrence of rainfall-induced debris flows along ash-fall pyroclastic soil-mantled slopes. In this geomorphological framework, rainfall patterns, hydrological processes taking place within multi-layered ash-fall pyroclastic deposits and soil antecedent moisture status are the principal factors to be taken into account to assess triggering rainfall conditions and the related hazard. This paper presents the outcomes of an experimental study based on integrated analyses consisting of the reconstruction of physical models of landslides, in situ hydrological monitoring, and hydrological and slope stability modeling, carried out on four representative source areas of debris flows that occurred in May 1998 in the Sarno Mountain Range. The hydrological monitoring was carried out during 2011 using nests of tensiometers and Watermark pressure head sensors and also through a rainfall and air temperature recording station. Time series of measured pressure head were used to calibrate a hydrological numerical model of the pyroclastic soil mantle for 2011, which was re-run for a 12-year period beginning in 2000, given the availability of rainfall and air temperature monitoring data. Such an approach allowed us to reconstruct the regime of pressure head at a daily time scale for a long period, which is representative of about 11 hydrologic years with different meteorological conditions. Based on this simulated time series, average winter and summer hydrological conditions were chosen to carry out hydrological and stability modeling of sample slopes and to identify Intensity-Duration rainfall thresholds by a deterministic approach. Among principal results, the opposing winter and summer antecedent pressure head (soil moisture) conditions were found to exert a significant control on intensity and duration of rainfall triggering events. Going from winter to summer conditions requires a strong increase of intensity and/or duration to induce landslides. The results identify an approach to account for different hazard conditions related to seasonality of hydrological processes inside the ash-fall pyroclastic soil mantle. Moreover, they highlight another important factor of uncertainty that potentially affects rainfall thresholds triggering shallow landslides reconstructed by empirical approaches.
Retention of denture bases fabricated by three different processing techniques – An in vivo study
Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen
2016-01-01
Aim: Distortion due to polymerization shrinkage compromises retention. To evaluate the amount of retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by three polymerization techniques. A loop was attached to the finished denture bases to estimate the force required to dislodge them with the retention apparatus. Readings were subjected to nonparametric Friedman two-way analysis of variance followed by Bonferroni correction methods and the Wilcoxon matched-pairs signed-ranks test. Results: Denture bases fabricated by injection molding (3740 g) and anchorized techniques (2913 g) recorded greater retention values than the conventional technique (2468 g). A significant difference was seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique, and the least retention was seen in the conventional molding technique. PMID:27382542
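The statistical comparison described above can be sketched in Python as follows, assuming per-patient retention forces for the three techniques; the values are simulated placeholders centered on the reported means, not the study's measurements.

# Sketch of the statistical comparison: a Friedman test across the three
# processing techniques on within-patient retention forces, followed by
# pairwise Wilcoxon signed-rank tests with a Bonferroni-adjusted alpha.
# Values are illustrative placeholders, not the study's data.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(2)
n_patients = 10
conventional = rng.normal(2468, 150, n_patients)   # dislodgement force (g)
anchorized   = rng.normal(2913, 150, n_patients)
injection    = rng.normal(3740, 150, n_patients)

stat, p = friedmanchisquare(conventional, anchorized, injection)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

pairs = [("conv vs anch", conventional, anchorized),
         ("conv vs inj",  conventional, injection),
         ("anch vs inj",  anchorized,   injection)]
for name, a, b in pairs:
    w, pw = wilcoxon(a, b)
    print(f"{name}: W = {w:.1f}, p = {pw:.4f} (compare to alpha = 0.05 / 3)")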
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero
2000-01-01
This paper describes the results of the modal test planning and the pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered, one based on the Modal Cost technique, one based on Balanced Singular Value technique, a technique known as the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered; one based on the Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection technique using Genetic Algorithm (GA) search technique combined with a criteria based on Hankel Singular Values (HSV's). For selecting shaker locations, four techniques were also considered; one based on the Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search technique combined with a criteria based on HSV's. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor & shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered including, Guyan, IRS, Modal, and Hybrid. Based on a pre-test cross-orthogonality checks using various reduction techniques, a Hybrid TAM reduction technique was selected and was used for all three vehicle fuel level configurations.
Piracy in cyber space: consumer complicity, pirates and enterprise enforcement
NASA Astrophysics Data System (ADS)
Chaudhry, Peggy E.; Chaudhry, Sohail S.; Stumpf, Stephen A.; Sudler, Hasshi
2011-05-01
This article presents an overview of the growth of internet piracy in the global marketplace. The ethical perceptions (or lack thereof) of the younger generation are addressed, in terms of their willingness to consume counterfeit goods on the web. Firms face the task of educating the consumer that downloading music, software, movies and the like, without compensation, is unethical. This awareness is critical for decreasing the demand for counterfeit goods in the virtual marketplace, where a consumer can exhibit rogue behaviour with a limited fear of prosecution. We address the pyramid of internet piracy, which encompasses sophisticated suppliers/facilitators, such as the Warez group. Recent sting operations, such as Operation Buccaneer, are also depicted to highlight successful tactical manoeuvres of enforcement agencies. An overview of the Digital Millennium Copyright Act and the No Electronic Theft Act is included to debate the controversy surrounding this legislation. A discussion of enterprise enforcement mechanisms and novel anti-piracy technology for cyberspace is provided to reveal some of the tools used to fight the pirates, such as innovations in digital watermarking and NEC's recently announced video content identification technology. Enterprise information systems and their interdependence on the internet are also demanding new technologies that enable internet investigators to rapidly search, verify and potentially remove pirated content using web services. The quality of service of web services designed to efficiently detect pirated content is a growing consideration for new anti-piracy technology.
Customization of Curriculum Materials in Science: Motives, Challenges, and Opportunities
NASA Astrophysics Data System (ADS)
Romine, William L.; Banerjee, Tanvi
2012-02-01
Exemplary science instructors use inquiry to tailor content to students' learning needs; traditional textbooks treat science as a set of facts and a rigid curriculum. Publishers now allow instructors to compile pieces of published and/or self-authored text to make custom textbooks. This brings numerous advantages, including the ability to produce smaller, cheaper texts and added flexibility in the teaching models used. Moreover, the internet allows instructors to decentralize textbooks through easy access to educational objects such as audiovisual simulations, individual textbook chapters, and scholarly research articles. However, these new opportunities bring with them new problems. With educational materials easy to access, manipulate and duplicate, it is necessary to define intellectual property boundaries, and the need to secure documents against unlawful copying and use is paramount. Engineers are developing and enhancing information embedding technologies, including steganography, cryptography, watermarking, and fingerprinting, to label and protect intellectual property. While these are showing their utility in securing information, hackers continue to find loopholes in these protection schemes, forcing engineers to constantly reassess the algorithms to make them as secure as possible. As newer technologies arise, people still question whether custom publishing is desirable. Many instructors see the process as complex, costly, and substandard in comparison to using traditional texts. Publishing companies are working to improve attitudes through advertising. What is lacking is peer-reviewed evidence showing that custom publishing improves learning. Studies exploring the effect of custom course materials on student attitudes and learning outcomes are a necessary next step.
A Semi-Preemptive Garbage Collector for Solid State Drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Junghee; Kim, Youngjae; Shipman, Galen M
2011-01-01
NAND flash memory is a preferred storage medium for various platforms ranging from embedded systems to enterprise-scale systems. Flash devices do not have any mechanical moving parts and provide low-latency access. They also require less power compared to rotating media. Unlike hard disks, flash devices use out-of-place update operations and they require a garbage collection (GC) process to reclaim invalid pages and create free blocks. This GC process is a major cause of performance degradation when running concurrently with other I/O operations, as internal bandwidth is consumed to reclaim these invalid pages. The invocation of the GC process is generally governed by a low watermark on free blocks and other internal device metrics that different workloads meet at different intervals. This results in I/O performance that is highly dependent on workload characteristics. In this paper, we examine the GC process and propose a semi-preemptive GC scheme that can preempt on-going GC processing and service pending I/O requests in the queue. Moreover, we further enhance flash performance by pipelining internal GC operations and merging them with pending I/O requests whenever possible. Our experimental evaluation of this semi-preemptive GC scheme with realistic workloads demonstrates both improved performance and reduced performance variability. Write-dominant workloads show up to a 66.56% improvement in average response time with a 83.30% reduced variance in response time compared to the non-preemptive GC scheme.
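The semi-preemptive idea can be illustrated with a conceptual Python sketch: GC is triggered by a low watermark on free blocks, but between individual page-copy steps it yields to pending host I/O. The flash model, class names, and step granularity here are illustrative assumptions, not the paper's implementation.

# Conceptual sketch of semi-preemptive garbage collection: GC starts when the
# number of free blocks drops below a low watermark, but between individual
# page copy/erase steps it yields to any pending host I/O.
from collections import deque

class Flash:
    def __init__(self, free_blocks, low_watermark):
        self.free_blocks = free_blocks
        self.low_watermark = low_watermark
        self.io_queue = deque()

    def gc_needed(self):
        return self.free_blocks < self.low_watermark

    def gc_steps(self, victim_valid_pages):
        """One GC pass broken into preemptible steps."""
        for page in range(victim_valid_pages):
            yield f"copy valid page {page}"     # preemption point after each copy
        yield "erase victim block"
        self.free_blocks += 1

    def run(self, victim_valid_pages=4):
        gc = self.gc_steps(victim_valid_pages) if self.gc_needed() else iter(())
        for step in gc:
            # Semi-preemptive policy: service queued host I/O before resuming GC.
            while self.io_queue:
                print("service host I/O:", self.io_queue.popleft())
            print("GC:", step)

ssd = Flash(free_blocks=2, low_watermark=4)
ssd.io_queue.extend(["write LPN 17", "read LPN 3"])
ssd.run()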
Retraction of publications in nursing and midwifery research: A systematic review.
Al-Ghareeb, Amal; Hillel, Stav; McKenna, Lisa; Cleary, Michelle; Visentin, Denis; Jones, Martin; Bressington, Daniel; Gray, Richard
2018-05-01
Rates of manuscript retraction in academic journals are increasing. Papers are retracted because of scientific misconduct or serious error. To date there have been no studies that have examined rates of retraction in nursing and midwifery journals. A systematic review of Journal Citation Report listed nursing science journals. The Medline database was searched systematically from January 1980 through July 2017, and www.retractionwatch.com was manually searched for relevant studies that met the inclusion criteria. Two researchers undertook title and abstract and full text screening. Data were extracted on the country of the corresponding author, journal title, impact factor, study design, year of retraction, number of citations after retraction, and reason for retraction. Journals retraction index was also calculated. Twenty-nine retracted papers published in nursing science journals were identified, the first in 2007. This represents 0.029% of all papers published in these journals since 2007. We observed a significant increase in the retraction rate of 0.44 per 10,000 publications per year (95% CI; 0.03-0.84, p = .037). There was a negative association between a journal's retraction index and impact factor with a significant reduction in retraction index of -0.57 for a one-point increase in impact factor (95% CI; -1.05 to -0.09, p = .022). Duplicate publication was the most common reason for retraction (n = 18, 58%). The mean number of citations manuscripts received after retraction was seven, the highest was 52. Most (n = 27, 93.1%) of the retracted papers are still available online (with a watermark indicating they are retracted). Compared to more established academic disciplines, rates of retraction in nursing and midwifery are low. Findings suggest that unsound research is not being identified and that the checks and balances incumbent in the scientific method are not working. In a clinical discipline, this is concerning and may indicate that research that should have been removed from the evidence base continues to influence nursing and midwifery care. Copyright © 2018 Elsevier Ltd. All rights reserved.
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
Wolk, Courtney Benjamin; Marcus, Steven C.; Weersing, V. Robin; Hawley, Kristin M.; Evans, Arthur; Hurford, Matthew; Beidas, Rinad
2016-01-01
Objective Many youth receiving community mental health treatment do not receive evidence-based interventions. Research suggests that community mental health therapists use a broad range of therapeutic techniques at low intensities. The present study examined the relationship between therapist- and client-level predictors on community-based therapists’ report of cognitive, behavioral, psychodynamic, and family techniques within the context of implementation efforts. Methods One hundred thirty therapists from 23 organizations in an urban publicly funded behavioral health system implementing evidence-based practices participated. Therapist-level predictors included age, gender, clinical experience, licensure status, and participation in evidence-based practice initiatives. Child-level predictors included therapist-reported child primary disorder (i.e., externalizing, internalizing, or other) and child age. Therapists completed the Therapist Procedures Checklist- Family Revised, a self-report measure of therapeutic techniques used. Results Unlicensed therapists were more likely to report use of both psychodynamic and behavioral techniques. Therapists who did not participate in an evidence-based practice initiative were less likely to report use of cognitive techniques. Those with externalizing clients were more likely to report use of behavioral and family techniques. Therapists with the youngest clients (aged 3-7) were most likely to report use of behavioral techniques and less likely to report use of cognitive and psychodynamic techniques. Conclusions Results suggest that both therapist and client factors predict self-reported use of therapy techniques. Participating in an evidence-based practice initiative increased report of cognitive techniques. Therapists reported using more behavioral and family techniques for youth with externalizing disorders and fewer cognitive and psychodynamic techniques with young clients. PMID:26876658
NASA Astrophysics Data System (ADS)
Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen
2018-01-01
We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). After analyzing the pros and cons of the hill-climbing algorithm, we propose the POA, which has excellent local search ability, to further process signals whose PAPR is still above the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
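For orientation, the following Python sketch shows the conventional PTS mechanism the technique builds on: the frequency-domain symbol is partitioned into subblocks, each weighted by a phase factor, and the combination with the lowest PAPR is kept. The GA/POA search is replaced here by an exhaustive search over phase factors in {+1, -1}, and the symbol size and modulation are arbitrary placeholders.

# Minimal sketch of the PTS idea: partition the frequency-domain OFDM symbol
# into V subblocks, weight each with a phase factor, and keep the combination
# minimizing PAPR. The GA/POA search is replaced by an exhaustive search.
import numpy as np
from itertools import product

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(3)
N, V = 64, 4                                   # subcarriers, subblocks
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK symbol

# Adjacent partitioning into V subblocks, each transformed separately.
subblocks = np.zeros((V, N), dtype=complex)
for v in range(V):
    subblocks[v, v * (N // V):(v + 1) * (N // V)] = X[v * (N // V):(v + 1) * (N // V)]
sub_time = np.fft.ifft(subblocks, axis=1)

best_papr, best_phases = np.inf, None
for phases in product([1, -1], repeat=V):      # the first factor could be fixed to 1
    x = np.tensordot(np.array(phases), sub_time, axes=1)
    if papr_db(x) < best_papr:
        best_papr, best_phases = papr_db(x), phases

print("original PAPR (dB):", round(papr_db(np.fft.ifft(X)), 2))
print("PTS PAPR (dB):", round(best_papr, 2), "phase factors:", best_phases)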
Theoretical Bound of CRLB for Energy Efficient Technique of RSS-Based Factor Graph Geolocation
NASA Astrophysics Data System (ADS)
Kahar Aziz, Muhammad Reza; Heriansyah; Saputra, EfaMaydhona; Musa, Ardiansyah
2018-03-01
To support the growth of wireless geolocation as a key technology of the future, this paper derives a theoretical bound, i.e., the Cramer-Rao lower bound (CRLB), for the energy-efficient received signal strength (RSS)-based factor graph wireless geolocation technique. The theoretical bound derivation is crucially important to evaluate whether the energy-efficient RSS-based factor graph wireless geolocation technique is effective, as well as to open the opportunity for further innovation of the technique. The CRLB is derived in this paper by using the Fisher information matrix (FIM) of the main formula of the RSS-based factor graph geolocation technique, which relies on the Jacobian matrix. The simulation results show that the derived CRLB has the highest accuracy as a bound, exhibiting the lowest root mean squared error (RMSE) curve compared to the RMSE curve of the RSS-based factor graph geolocation technique. Hence, the derived CRLB becomes the lower bound for the energy-efficient technique of RSS-based factor graph wireless geolocation.
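The abstract does not reproduce the measurement model, so the following LaTeX sketch shows how such a bound is typically obtained for RSS under the standard log-distance path-loss model with log-normal shadowing; the notation (reference power P_0, path-loss exponent eta, shadowing variance sigma^2, anchor positions (x_i, y_i)) is an assumption, not necessarily the paper's formulation.

\[
P_i = P_0 - 10\,\eta \log_{10}\!\frac{d_i}{d_0} + n_i, \qquad
n_i \sim \mathcal{N}(0,\sigma^2), \qquad
d_i = \sqrt{(x-x_i)^2 + (y-y_i)^2},
\]
\[
[\mathbf{J}(\boldsymbol{\theta})]_{kl}
  = \frac{1}{\sigma^2}\sum_{i=1}^{N_a}
    \frac{\partial \mu_i}{\partial \theta_k}\,
    \frac{\partial \mu_i}{\partial \theta_l},
\qquad
\frac{\partial \mu_i}{\partial x} = -\frac{10\,\eta}{\ln 10}\,\frac{x-x_i}{d_i^{2}},
\qquad
\frac{\partial \mu_i}{\partial y} = -\frac{10\,\eta}{\ln 10}\,\frac{y-y_i}{d_i^{2}},
\]
\[
\sqrt{\mathbb{E}\!\left[\lVert\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\rVert^{2}\right]}
  \;\ge\; \sqrt{\operatorname{tr}\,\mathbf{J}^{-1}(\boldsymbol{\theta})},
\]

where \(\mu_i\) is the noise-free mean RSS from anchor \(i\) and \(\boldsymbol{\theta}=(x,y)\) is the target position; inverting the 2x2 FIM and taking the trace gives the position-error bound that is compared against the technique's RMSE curve.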
Tsunami Field Survey for the Solomon Islands Earthquake of April 1, 2007
NASA Astrophysics Data System (ADS)
Nishimura, Y.; Tanioka, Y.; Nakamura, Y.; Tsuji, Y.; Namegaya, Y.; Murata, M.; Woodward, S.
2007-12-01
Two weeks after the 2007 off-Solomon earthquake, an international tsunami survey team (ITST) of Japanese and US researchers performed a post-tsunami survey in Ghizo and adjacent islands. The main purpose of the team was to provide information on the earthquake and tsunami to the national disaster council of the Solomon Islands, which was responsible for disaster management at that time. The ITST interviewed the affected people and conducted reconnaissance mapping of the tsunami heights and flow directions. Tsunami flow heights at the beach and inland were evaluated from watermarks on buildings and the position of broken branches and stuck materials on trees. These tsunami heights along the southern to western coasts of Ghizo Island were ca. 5 m (a.s.l.). Tsunami run-up was traced by the distribution of floating debris that was carried up by the tsunami and deposited at its inundation limit. The maximum run-up was measured at Tapurai of Simbo Island to be ca. 9 m. Most of the inundation area was covered by a 0-10 cm thick tsunami deposit that consists of beach sand, coral pieces and eroded soil. Coseismic uplift and subsidence were clearly identified by changes of the sea level before and after the earthquake, which were inferred from eyewitness accounts and evidence such as dried-up coral reefs. These deformation patterns, as well as the tsunami height distribution, could constrain the earthquake fault geometry and motion. It is worth mentioning that the tsunami damage in villages on Ranongga Island was significantly reduced by the 2-3 m uplift before the tsunami attack.
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is adopted worldwide; most of it is easily hacked by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the private-security RSA algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolutional error correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission and it could affect/change the spatial-temporal keys. Since any change/modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes based on the logistic map having arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
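A toy Python sketch of the hybrid idea follows: a logistic-map sequence (standing in for the chaotic neural network series) acts as a keystream for the message, while textbook RSA protects the chaotic seed. The tiny primes, the plain logistic map, and the XOR mixing are illustrative simplifications; they are not secure and not the paper's construction.

# Toy illustration of the hybrid idea: a logistic-map keystream encrypts the
# message, and textbook RSA protects the chaotic seed. Small primes and a
# plain logistic map are used purely for illustration -- not secure.
def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)       # quantize each iterate to a byte
    return bytes(out)

def xor(data, keystream):
    return bytes(a ^ b for a, b in zip(data, keystream))

# Textbook RSA with tiny primes (illustration only).
p, q, e = 61, 53, 17
n_mod, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                          # modular inverse (Python >= 3.8)

message = b"hello, chaotic world"
seed_byte = 113                              # encodes the chaotic seed x0 = 113/256
cipher_seed = pow(seed_byte, e, n_mod)       # RSA-protect the chaotic seed

# Sender: encrypt the message with the chaotic keystream derived from the seed.
ks = logistic_keystream(seed_byte / 256.0, 3.99, len(message))
ciphertext = xor(message, ks)

# Receiver: recover the seed with the RSA private key, regenerate the keystream.
recovered_seed = pow(cipher_seed, d, n_mod)
ks2 = logistic_keystream(recovered_seed / 256.0, 3.99, len(ciphertext))
print(xor(ciphertext, ks2))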
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and implementation ease; the criterion used to calculate the execution efficiency is the total number of steps required to isolate hypotheses that were true and the criteria used to evaluate the implementability are ease of modification and verification and explanation capability. It is observed that the traditional program is more efficient than the rule-based program; however, the rule-based programming technique is more applicable for improving programmer productivity.
Hardcastle, Sarah J; Fortier, Michelle; Blake, Nicola; Hagger, Martin S
2017-03-01
Motivational interviewing (MI) is a complex intervention comprising multiple techniques aimed at changing health-related motivation and behaviour. However, MI techniques have not been systematically isolated and classified. This study aimed to identify the techniques unique to MI, classify them as content-related or relational, and evaluate the extent to which they overlap with techniques from the behaviour change technique taxonomy version 1 [BCTTv1; Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., … Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81-95]. Behaviour change experts (n = 3) content-analysed MI techniques based on Miller and Rollnick's [(2013). Motivational interviewing: Preparing people for change (3rd ed.). New York: Guildford Press] conceptualisation. Each technique was then coded for independence and uniqueness by independent experts (n = 10). The experts also compared each MI technique to those from the BCTTv1. Experts identified 38 distinct MI techniques with high agreement on clarity, uniqueness, preciseness, and distinctiveness ratings. Of the identified techniques, 16 were classified as relational techniques. The remaining 22 techniques were classified as content based. Sixteen of the MI techniques were identified as having substantial overlap with techniques from the BCTTv1. The isolation and classification of MI techniques will provide researchers with the necessary tools to clearly specify MI interventions and test the main and interactive effects of the techniques on health behaviour. The distinction between relational and content-based techniques within MI is also an important advance, recognising that changes in motivation and behaviour in MI is a function of both intervention content and the interpersonal style in which the content is delivered.
Olyaeemanesh, Alireza; Bavandpour, Elahe; Mobinizadeh, Mohammadreza; Ashrafinia, Mansoor; Bavandpour, Maryam; Nouhi, Mojtaba
2017-01-01
Background: Caesarean section (C-section) is the most common surgery among women worldwide, and the global rate of this surgical procedure has been continuously rising. Hence, it is significantly crucial to develop and apply highly effective and safe caesarean section techniques. In this review study, we aimed at assessing the safety and effectiveness of the Joel-Cohen-based technique and comparing the results with the transverse Pfannenstiel incision for C-section. Methods: In this study, various reliable databases such as PubMed Central, COCHRANE, DARE, and Ovid MEDLINE were targeted. Reviews, systematic reviews, and randomized clinical trial studies comparing the Joel-Cohen-based technique and the transverse Pfannenstiel incision were selected based on the inclusion criteria. Selected studies were checked by 2 independent reviewers based on the inclusion criteria, and the quality of these studies was assessed. Then, their data were extracted and analyzed. Results: Five randomized clinical trial studies met the inclusion criteria. According to the existing evidence, statistical results of the Joel-Cohen-based technique showed that this technique is more effective compared to the transverse Pfannenstiel incision. Meta-analysis results of the 3 outcomes were as follows: operation time (5 trials, 764 women; WMD -9.78 minutes; 95% CI: -14.49 to -5.07, p<0.001), blood loss (3 trials, 309 women; WMD -53.23 ml; 95% CI: -90.20 to -16.26 ml, p=0.004), and post-operative hospital stay (3 trials, 453 women; WMD -0.69 day; 95% CI: -1.4 to -0.03 day, p<0.001). Statistical results revealed a significant difference between the 2 techniques. Conclusion: According to the literature, despite having a number of side effects, the Joel-Cohen-based technique is generally more effective than the Pfannenstiel incision technique. In addition, it is recommended that the Joel-Cohen-based technique be used as a replacement for the Pfannenstiel incision technique according to the surgeons' preferences and the patients' conditions.
Zeiler, Frederick A; Donnelly, Joseph; Calviello, Leanne; Menon, David K; Smielewski, Peter; Czosnyka, Marek
2017-12-01
The purpose of this study was to perform a systematic, scoping review of commonly described intermittent/semi-intermittent autoregulation measurement techniques in adult traumatic brain injury (TBI). Nine separate systematic reviews were conducted for each intermittent technique: computed tomographic perfusion (CTP)/Xenon-CT (Xe-CT), positron emission tomography (PET), magnetic resonance imaging (MRI), the arteriovenous difference in oxygen (AVDO2) technique, the thigh cuff deflation technique (TCDT), the transient hyperemic response test (THRT), the orthostatic hypotension test (OHT), the mean flow index (Mx), and the transfer function autoregulation index (TF-ARI). MEDLINE®, BIOSIS, EMBASE, Global Health, Scopus, the Cochrane Library (inception to December 2016), and reference lists of relevant articles were searched. A two-tier filter of references was conducted. The total number of articles utilizing each of the nine searched techniques for intermittent/semi-intermittent autoregulation assessment in adult TBI were: CTP/Xe-CT (10), PET (6), MRI (0), AVDO2 (10), ARI-based TCDT (9), THRT (6), OHT (3), Mx (17), and TF-ARI (6). The premise behind all of the intermittent techniques is manipulation of systemic blood pressure/blood volume via either chemical (such as vasopressors) or mechanical (such as thigh cuffs or carotid compression) means. Exceptionally, Mx and TF-ARI are based on spontaneous fluctuations of cerebral perfusion pressure (CPP) or mean arterial pressure (MAP). The method for assessing the cerebral circulation during these manipulations varies, with both imaging-based techniques and TCD utilized. Despite the limited literature for intermittent/semi-intermittent techniques in adult TBI (minus Mx), it is important to acknowledge the availability of such tests. They have provided fundamental insight into human autoregulatory capacity, leading to the development of continuous and more commonly applied techniques in the intensive care unit (ICU). Numerous methods of intermittent/semi-intermittent pressure autoregulation assessment in adult TBI exist, including: CTP/Xe-CT, PET, the AVDO2 technique, TCDT-based ARI, THRT, OHT, Mx, and TF-ARI. MRI-based techniques in adult TBI are yet to be described, with the main focus of MRI techniques on metabolic-based cerebrovascular reactivity (CVR) and not pressure-based autoregulation.
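Of the listed indices, Mx is the most amenable to a short illustration. The Python sketch below follows the common description of Mx as the Pearson correlation between slow waves of CPP and TCD mean flow velocity, computed over consecutive short epochs of time-averaged samples and then averaged; the signals, epoch length, and units are placeholders.

# Sketch of the mean flow index (Mx): Pearson correlation between CPP and
# TCD mean flow velocity (FV) over consecutive epochs, then averaged.
import numpy as np

def mx_index(cpp, fv, samples_per_epoch=30):
    """Mean of per-epoch Pearson correlations between CPP and FV."""
    corrs = []
    for start in range(0, len(cpp) - samples_per_epoch + 1, samples_per_epoch):
        a = cpp[start:start + samples_per_epoch]
        b = fv[start:start + samples_per_epoch]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs)) if corrs else float("nan")

rng = np.random.default_rng(4)
t = np.arange(600)                                   # e.g. 600 ten-second averages
cpp = 70 + 5 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 1, t.size)
fv_intact = 50 + rng.normal(0, 2, t.size)            # FV independent of CPP swings
fv_impaired = 50 + 0.8 * (cpp - 70) + rng.normal(0, 1, t.size)   # pressure-passive FV

print("Mx, intact autoregulation  :", round(mx_index(cpp, fv_intact), 2))
print("Mx, impaired autoregulation:", round(mx_index(cpp, fv_impaired), 2))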
Change detection from remotely sensed images: From pixel-based to object-based approaches
NASA Astrophysics Data System (ADS)
Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David
2013-06-01
The appetite for up-to-date information about earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
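As a minimal example of the pixel-based, statistics-oriented family that the review starts from, the Python sketch below performs simple image differencing with a threshold derived from the difference statistics (mean plus k standard deviations); the images are synthetic placeholders, and an object-based approach would instead segment the scenes before comparing objects.

# Minimal pixel-based change detection: image differencing with a threshold
# derived from the difference statistics. Images are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
before = rng.normal(100, 5, size=(64, 64))
after = before + rng.normal(0, 5, size=(64, 64))
after[20:30, 20:30] += 40                      # a patch of genuine change

diff = np.abs(after - before)
k = 3.0
threshold = diff.mean() + k * diff.std()
change_mask = diff > threshold

print("changed pixels:", int(change_mask.sum()), "of", change_mask.size)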
Developing a hybrid dictionary-based bio-entity recognition technique.
Song, Min; Yu, Hwanjo; Han, Wook-Shin
2015-01-01
Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall.
Developing a hybrid dictionary-based bio-entity recognition technique
2015-01-01
Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall. PMID:26043907
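The dictionary-lookup core of such a technique can be sketched in Python: exact matching of candidate phrases against a bio-entity dictionary, with an edit-distance fallback for near misses. The dictionary, text, and thresholds are toy placeholders, and the paper's shortest-path edit distance, POS expansion, and contextual-cue steps are not reproduced.

# Sketch of the dictionary-lookup core: exact matching of candidate phrases,
# falling back to edit-distance matching for near misses.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

dictionary = {"interleukin-2", "p53", "nf-kappa b", "tumor necrosis factor"}

def tag_entities(tokens, max_len=3, max_dist=1):
    hits = []
    for i in range(len(tokens)):
        for length in range(max_len, 0, -1):
            phrase = " ".join(tokens[i:i + length]).lower()
            if phrase in dictionary:
                hits.append((phrase, phrase, 0))        # exact dictionary hit
                break
            best = min(dictionary, key=lambda e: edit_distance(phrase, e))
            dist = edit_distance(phrase, best)
            if dist <= max_dist:
                hits.append((phrase, best, dist))       # approximate hit
                break
    return hits

text = "Expression of interleukin-2 and nf-kapa b was measured".split()
print(tag_entities(text))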
Hardware implementation of hierarchical volume subdivision-based elastic registration.
Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj
2006-01-01
Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True, real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable for achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved speedups of 100 for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to inherent parallel nature of the hierarchical volume subdivision-based image registration techniques further speedup can be achieved by using several computing modules in parallel.
Low-dose CT image reconstruction using gain intervention-based dictionary learning
NASA Astrophysics Data System (ADS)
Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra
2018-05-01
The computed tomography (CT) approach is extensively utilized in clinical diagnoses. However, the X-ray dose absorbed by the human body may introduce somatic damage such as cancer. Owing to this radiation risk, research has focused on the radiation exposure delivered to patients through CT investigations. Therefore, low-dose CT has become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is adaptively defined, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing technique for removing artifacts from the reconstructed low-dose CT images. Experiments have been conducted on well-known benchmark CT images, comparing the proposed technique with other low-dose CT reconstruction techniques. These extensive experiments have shown that the proposed technique outperforms the available approaches.
The detection of bulk explosives using nuclear-based techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgado, R.E.; Gozani, T.; Seher, C.C.
1988-01-01
In 1986 we presented a rationale for the detection of bulk explosives based on nuclear techniques that addressed the requirements of civil aviation security in the airport environment. Since then, efforts have intensified to implement a system based on thermal neutron activation (TNA), with new work developing in fast neutron and energetic photon reactions. In this paper we will describe these techniques and present new results from laboratory and airport testing. Based on preliminary results, we contended in our earlier paper that nuclear-based techniques did provide sufficiently penetrating probes and distinguishable detectable reaction products to achieve the FAA operational goals; new data have supported this contention. The status of nuclear-based techniques for the detection of bulk explosives presently under investigation by the US Federal Aviation Administration (FAA) is reviewed. These include thermal neutron activation (TNA), fast neutron activation (FNA), the associated particle technique, nuclear resonance absorption, and photoneutron activation. The results of comprehensive airport testing of the TNA system performed during 1987-88 are summarized. From a technical point of view, nuclear-based techniques now represent the most comprehensive and feasible approach for meeting the operational criteria of detection, false alarms, and throughput. 9 refs., 5 figs., 2 tabs.
Design and Evaluation of Perceptual-based Object Group Selection Techniques
NASA Astrophysics Data System (ADS)
Dehmeshki, Hoda
Selecting groups of objects is a frequent task in graphical user interfaces. It is required prior to many standard operations such as deletion, movement, or modification. Conventional selection techniques are lasso, rectangle selection, and the selection and de-selection of items through the use of modifier keys. These techniques may become time-consuming and error-prone when target objects are densely distributed or when the distances between target objects are large. Perceptual-based selection techniques can considerably improve selection tasks when targets have a perceptual structure, for example when arranged along a line. Current methods to detect such groups use ad hoc grouping algorithms that are not based on results from perception science. Moreover, these techniques do not allow selecting groups with arbitrary arrangements or permit modifying a selection. This dissertation presents two domain-independent perceptual-based systems that address these issues. Based on established group detection models from perception research, the proposed systems detect perceptual groups formed by the Gestalt principles of good continuation and proximity. The new systems provide gesture-based or click-based interaction techniques for selecting groups with curvilinear or arbitrary structures as well as clusters. Moreover, the gesture-based system is adapted for the graph domain to facilitate path selection. This dissertation includes several user studies that show the proposed systems outperform conventional selection techniques when targets form salient perceptual groups and are still competitive when targets are semi-structured.
Aerodynamic measurement techniques. [laser based diagnostic techniques
NASA Technical Reports Server (NTRS)
Hunter, W. W., Jr.
1976-01-01
Laser characteristics of intensity, monochromaticity, spatial coherence, and temporal coherence were developed to advance laser-based diagnostic techniques for aerodynamics-related research. Two broad categories of visualization and optical measurements were considered, and three techniques received significant attention. These are holography, laser velocimetry, and Raman scattering. Examples of quantitative laser velocimeter and Raman scattering measurements of velocity, temperature, and density indicated the potential of these nonintrusive techniques.
Macready, Anna L; Fallaize, Rosalind; Butler, Laurie T; Ellis, Judi A; Kuznesof, Sharron; Frewer, Lynn J; Celis-Morales, Carlos; Livingstone, Katherine M; Araújo-Soares, Vera; Fischer, Arnout RH; Stewart-Knox, Barbara J; Mathers, John C
2018-01-01
Background To determine the efficacy of behavior change techniques applied in dietary and physical activity intervention studies, it is first necessary to record and describe techniques that have been used during such interventions. Published frameworks used in dietary and smoking cessation interventions undergo continuous development, and most are not adapted for Web-based delivery. The Food4Me study (N=1607) provided the opportunity to use existing frameworks to describe standardized Web-based techniques employed in a large-scale, internet-based intervention to change dietary behavior and physical activity. Objective The aims of this study were (1) to describe techniques embedded in the Food4Me study design and explain the selection rationale and (2) to demonstrate the use of behavior change technique taxonomies, develop standard operating procedures for training, and identify strengths and limitations of the Food4Me framework that will inform its use in future studies. Methods The 6-month randomized controlled trial took place simultaneously in seven European countries, with participants receiving one of four levels of personalized advice (generalized, intake-based, intake+phenotype–based, and intake+phenotype+gene–based). A three-phase approach was taken: (1) existing taxonomies were reviewed and techniques were identified a priori for possible inclusion in the Food4Me study, (2) a standard operating procedure was developed to maintain consistency in the use of methods and techniques across research centers, and (3) the Food4Me behavior change technique framework was reviewed and updated post intervention. An analysis of excluded techniques was also conducted. Results Of 46 techniques identified a priori as being applicable to Food4Me, 17 were embedded in the intervention design; 11 were from a dietary taxonomy, and 6 from a smoking cessation taxonomy. In addition, the four-category smoking cessation framework structure was adopted for clarity of communication. Smoking cessation texts were adapted for dietary use where necessary. A posteriori, a further 9 techniques were included. Examination of excluded items highlighted the distinction between techniques considered appropriate for face-to-face versus internet-based delivery. Conclusions The use of existing taxonomies facilitated the description and standardization of techniques used in Food4Me. We recommend that for complex studies of this nature, technique analysis should be conducted a priori to develop standardized procedures and training and reviewed a posteriori to audit the techniques actually adopted. The present framework description makes a valuable contribution to future systematic reviews and meta-analyses that explore technique efficacy and underlying psychological constructs. This was a novel application of the behavior change taxonomies and was the first internet-based personalized nutrition intervention to use such a framework remotely. Trial Registration ClinicalTrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6y8XYUft1) PMID:29631993
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on user's immersion in virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustics effects and spatial audio in the virtual environment.
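To make the underlying computation concrete, the sketch below is a toy one-dimensional finite-difference time-domain solver for the scalar wave equation, the kind of time-domain wave simulation the dissertation accelerates on GPUs. The grid size, CFL factor, impulsive source, and reflecting boundaries are arbitrary assumptions for illustration, not the dissertation's solver.

```python
import numpy as np

# Toy 1-D finite-difference time-domain (FDTD) solver for the scalar wave
# equation u_tt = c^2 u_xx. Grid, CFL number, source, and boundary handling
# are illustrative assumptions only.
c, L, nx = 343.0, 10.0, 400            # speed of sound (m/s), domain length (m), grid points
dx = L / (nx - 1)
dt = 0.9 * dx / c                      # satisfies the CFL stability condition
u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0                  # impulsive source in the middle of the domain

for step in range(500):
    u_next = np.zeros(nx)              # ends stay zero: hard (reflecting) boundaries
    lap = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
    u_next[1:-1] = 2.0 * u_curr[1:-1] - u_prev[1:-1] + (c * dt / dx) ** 2 * lap
    u_prev, u_curr = u_curr, u_next

print("peak field amplitude after 500 steps:", float(np.max(np.abs(u_curr))))
```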
Olyaeemanesh, Alireza; Bavandpour, Elahe; Mobinizadeh, Mohammadreza; Ashrafinia, Mansoor; Bavandpour, Maryam; Nouhi, Mojtaba
2017-01-01
Background: Caesarean section (C-section) is the most common surgery among women worldwide, and the global rate of this surgical procedure has been continuously rising. Hence, it is crucial to develop and apply highly effective and safe caesarean section techniques. In this review study, we aimed at assessing the safety and effectiveness of the Joel-Cohen-based technique and comparing the results with the transverse Pfannenstiel incision for C-section. Methods: In this study, various reliable databases such as PubMed Central, COCHRANE, DARE, and Ovid MEDLINE were targeted. Reviews, systematic reviews, and randomized clinical trial studies comparing the Joel-Cohen-based technique and the transverse Pfannenstiel incision were selected based on the inclusion criteria. Selected studies were checked by 2 independent reviewers based on the inclusion criteria, and the quality of these studies was assessed. Then, their data were extracted and analyzed. Results: Five randomized clinical trial studies met the inclusion criteria. According to the existing evidence, statistical results showed that the Joel-Cohen-based technique is more effective compared to the transverse Pfannenstiel incision. Meta-analysis results for the 3 outcomes were as follows: operation time (5 trials, 764 women; WMD -9.78 minutes; 95% CI: -14.49 to -5.07 minutes, p<0.001), blood loss (3 trials, 309 women; WMD -53.23 ml; 95% CI: -90.20 to -16.26 ml, p=0.004), and post-operative hospital stay (3 trials, 453 women; WMD -0.69 day; 95% CI: -1.4 to -0.03 day, p<0.001). Statistical results revealed a significant difference between the 2 techniques. Conclusion: According to the literature, despite having a number of side effects, the Joel-Cohen-based technique is generally more effective than the Pfannenstiel incision technique. In addition, it was recommended that the Joel-Cohen-based technique be used as a replacement for the Pfannenstiel incision technique according to the surgeons’ preferences and the patients’ conditions. PMID:29445683
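The pooled estimates quoted above are weighted mean differences from standard inverse-variance meta-analysis. The sketch below shows the fixed-effect form of that calculation with placeholder trial values, not the review's actual data.

```python
import math

# Fixed-effect inverse-variance pooling of weighted mean differences (WMD).
# The trial values below are placeholders, not the review's data.
trials = [  # (mean difference, standard error) per trial
    (-9.0, 2.5),
    (-11.5, 3.0),
    (-8.2, 2.0),
]
weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"WMD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```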
2017-11-01
ARL-TR-8225 ● NOV 2017 ● US Army Research Laboratory ● Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques
Qu, Yongzhi; He, David; Yoon, Jae; Van Hecke, Brandon; Bechhoefer, Eric; Zhu, Junda
2014-01-01
In recent years, acoustic emission (AE) sensors and AE-based techniques have been developed and tested for gearbox fault diagnosis. In general, AE-based techniques require much higher sampling rates than vibration analysis-based techniques for gearbox fault diagnosis. Therefore, it is questionable whether an AE-based technique would give a better or at least the same performance as the vibration analysis-based techniques using the same sampling rate. To answer the question, this paper presents a comparative study for gearbox tooth damage level diagnostics using AE and vibration measurements, the first known attempt to compare the gearbox fault diagnostic performance of AE- and vibration analysis-based approaches using the same sampling rate. Partial tooth cut faults are seeded in a gearbox test rig and experimentally tested in a laboratory. Results have shown that the AE-based approach has the potential to differentiate gear tooth damage levels in comparison with the vibration-based approach. While vibration signals are easily affected by mechanical resonance, the AE signals show more stable performance. PMID:24424467
Hopkins, D S; Phoenix, R D; Abrahamsen, T C
1997-09-01
A technique for the fabrication of light-activated maxillary record bases is described. The use of a segmental polymerization process provides improved palatal adaptation by minimizing the effects of polymerization shrinkage. Utilization of this technique results in record bases that are well adapted to the corresponding master casts.
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The approach extends current PAPR reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation as its main contribution. The four techniques introduced can be split into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are also provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as a replacement modulation system for OFDM.
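For reference, the quantity these techniques minimize is the ratio of peak to average instantaneous power of the transmitted signal. The sketch below computes the PAPR of a single oversampled multicarrier symbol with QPSK subcarriers; it is a generic OFDM-style illustration, and the FBMC filter bank and the ACE/SGP reduction steps themselves are not implemented.

```python
import numpy as np

# PAPR of one oversampled multicarrier symbol with QPSK subcarriers.
# Generic OFDM-style illustration; no FBMC filtering or ACE/SGP reduction.
rng = np.random.default_rng(0)
n_sc, oversample = 64, 4
bits_i = rng.integers(0, 2, n_sc) * 2 - 1
bits_q = rng.integers(0, 2, n_sc) * 2 - 1
symbols = (bits_i + 1j * bits_q) / np.sqrt(2)           # QPSK subcarrier symbols
spectrum = np.zeros(n_sc * oversample, dtype=complex)
spectrum[:n_sc] = symbols                               # zero padding -> time-domain oversampling
x = np.fft.ifft(spectrum)
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR of one multicarrier symbol: {papr_db:.2f} dB")
```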
NASA Astrophysics Data System (ADS)
Chockalingam, Letchumanan
2005-01-01
LANDSAT data of the Gunung Ledang region of Malaysia are considered to map certain hydrogeological features. To map these significant features, image-processing tools such as contrast enhancement and edge detection techniques are employed. The advantages of these techniques over other methods are evaluated in terms of their validity in properly isolating features of hydrogeological interest. Because these techniques exploit only the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers the structural rather than the spectral aspects of the image, is applied to provide a comparison between the results derived from spectral-based and structural-based filtering techniques.
Reachability analysis of real-time systems using time Petri nets.
Wang, J; Deng, Y; Xu, G
2000-01-01
Time Petri nets (TPNs) are a popular Petri net model for specification and verification of real-time systems. A fundamental and most widely applied method for analyzing Petri nets is reachability analysis. The existing technique for reachability analysis of TPNs, however, is not suitable for timing property verification because one cannot derive end-to-end delay in task execution, an important issue for time-critical systems, from the reachability tree constructed using the technique. In this paper, we present a new reachability based analysis technique for TPNs for timing property analysis and verification that effectively addresses the problem. Our technique is based on a concept called clock-stamped state class (CS-class). With the reachability tree generated based on CS-classes, we can directly compute the end-to-end time delay in task execution. Moreover, a CS-class can be uniquely mapped to a traditional state class based on which the conventional reachability tree is constructed. Therefore, our CS-class-based analysis technique is more general than the existing technique. We show how to apply this technique to timing property verification of the TPN model of a command and control (C2) system.
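The reachability tree the paper builds starts from the usual notion of firing enabled transitions and enumerating markings. The sketch below performs that enumeration for an ordinary, untimed Petri net; the timing intervals and clock-stamped state classes that distinguish the paper's CS-class technique are deliberately omitted, and the three-transition net is an invented example.

```python
from collections import deque

# Breadth-first enumeration of reachable markings in an ordinary (untimed)
# Petri net. The timing intervals and CS-classes of the paper are not modelled;
# this only shows the underlying marking graph. The net below is invented.
transitions = {                      # name -> (consume, produce) token maps
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
    "t3": ({"p3": 1}, {"p1": 1}),
}

def key(marking):
    return tuple(sorted((p, n) for p, n in marking.items() if n > 0))

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

start = {"p1": 1}
seen, queue = {key(start)}, deque([start])
while queue:
    m = queue.popleft()
    for consume, produce in transitions.values():
        if enabled(m, consume):
            nxt = fire(m, consume, produce)
            if key(nxt) not in seen:
                seen.add(key(nxt))
                queue.append(nxt)

print("reachable markings:", sorted(seen))
```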
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2016-05-01
Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds based on historical quality data. This paper considers techniques for bespoke and small-batch jobs that are not based on statistical models. These techniques also serve jobs where 100% validation is needed due to the mission- or safety-critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.
Errorless-based techniques can improve route finding in early Alzheimer's disease: a case study.
Provencher, Véronique; Bier, Nathalie; Audet, Thérèse; Gagnon, Lise
2008-01-01
Topographical disorientation is a common and early manifestation of dementia of Alzheimer type, which threatens independence in activities of daily living. Errorless-based techniques appear to be effective in helping patients with amnesia to learn routes, but little is known about their effectiveness in early dementia of Alzheimer type. A 77-year-old woman with dementia of Alzheimer type had difficulty in finding her way around her seniors' residence, which reduced her social activities. This study used an ABA design (A is the baseline and B is the intervention) with multiple baselines across routes for going to the rosary (target), laundry, and game rooms (controls). The errorless-based technique intervention was applied to 2 of the 3 routes. Analyses showed significant improvement only for the routes learned with errorless-based techniques. Following the study, the participant increased her topographical knowledge of her surroundings. Route learning interventions based on errorless-based techniques appear to be a promising approach for improving independence in early dementia of Alzheimer type.
Beidas, Rinad S; Marcus, Steven; Aarons, Gregory A; Hoagwood, Kimberly E; Schoenwald, Sonja; Evans, Arthur C; Hurford, Matthew O; Hadley, Trevor; Barg, Frances K; Walsh, Lucia M; Adams, Danielle R; Mandell, David S
2015-04-01
Few studies have examined the effects of individual and organizational characteristics on the use of evidence-based practices in mental health care. Improved understanding of these factors could guide future implementation efforts to ensure effective adoption, implementation, and sustainment of evidence-based practices. To estimate the relative contribution of individual and organizational factors on therapist self-reported use of cognitive-behavioral, family, and psychodynamic therapy techniques within the context of a large-scale effort to increase use of evidence-based practices in an urban public mental health system serving youth and families. In this observational, cross-sectional study of 23 organizations, data were collected from March 1 through July 25, 2013. We used purposive sampling to recruit the 29 largest child-serving agencies, which together serve approximately 80% of youth receiving publically funded mental health care. The final sample included 19 agencies with 23 sites, 130 therapists, 36 supervisors, and 22 executive administrators. Therapist self-reported use of cognitive-behavioral, family, and psychodynamic therapy techniques, as measured by the Therapist Procedures Checklist-Family Revised. Individual factors accounted for the following percentages of the overall variation: cognitive-behavioral therapy techniques, 16%; family therapy techniques, 7%; and psychodynamic therapy techniques, 20%. Organizational factors accounted for the following percentages of the overall variation: cognitive-behavioral therapy techniques, 23%; family therapy techniques, 19%; and psychodynamic therapy techniques, 7%. Older therapists and therapists with more open attitudes were more likely to endorse use of cognitive-behavioral therapy techniques, as were those in organizations that had spent fewer years participating in evidence-based practice initiatives, had more resistant cultures, and had more functional climates. Women were more likely to endorse use of family therapy techniques, as were those in organizations employing more fee-for-service staff and with more stressful climates. Therapists with more divergent attitudes and less knowledge about evidence-based practices were more likely to use psychodynamic therapy techniques. This study suggests that individual and organizational factors are important in explaining therapist behavior and use of evidence-based practices, but the relative importance varies by therapeutic technique.
Artificial Intelligence Techniques: Applications for Courseware Development.
ERIC Educational Resources Information Center
Dear, Brian L.
1986-01-01
Introduces some general concepts and techniques of artificial intelligence (natural language interfaces, expert systems, knowledge bases and knowledge representation, heuristics, user-interface metaphors, and object-based environments) and investigates ways these techniques might be applied to analysis, design, development, implementation, and…
ERIC Educational Resources Information Center
Elrick, Mike
2003-01-01
Traditional techniques and gear are better suited for comfortable extended wilderness trips with high school students than are emerging technologies and techniques based on low-impact camping and petroleum-based clothing, which send students the wrong messages about ecological relatedness and sustainability. Traditional travel techniques and…
Meditation and mindfulness in clinical practice.
Simkin, Deborah R; Black, Nancy B
2014-07-01
This article describes the various forms of meditation and provides an overview of research using these techniques for children, adolescents, and their families. The most researched techniques in children and adolescents are mindfulness-based stress reduction, mindfulness-based cognitive therapy, yoga meditation, transcendental meditation, mind-body techniques (meditation, relaxation), and body-mind techniques (yoga poses, tai chi movements). Current data are suggestive of a possible value of meditation and mindfulness techniques for treating symptomatic anxiety, depression, and pain in youth. Clinicians must be properly trained before using these techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ibraheem, Omveer, Hasan, N.
2010-10-01
A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. The technique combines a Genetic Algorithm (GA) and Simulated Annealing (SA) to tune the regulator. GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time; the feasible solution decreases the cost function rather than fully minimizing it.
Weighted hybrid technique for recommender system
NASA Astrophysics Data System (ADS)
Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.
2017-12-01
Recommender systems have become very popular and play an important role in information systems and webpages nowadays. A recommender system tries to predict which items a user may like based on his or her activity on the system. There are some familiar techniques to build a recommender system, such as content-based filtering and collaborative filtering. Content-based filtering does not involve human opinions to make the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items which have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of the other, both approaches can be combined in what is known as a hybrid technique. The hybrid technique used in this work is a weighted technique in which the prediction score is a linear combination of the scores produced by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare the performance when both approaches are combined and when each approach works alone. Three experiments were done in this work, combining both techniques with different parameters. The results show that the weighted hybrid technique does not substantially boost performance, but it helps to give prediction scores for unrated movies that cannot be recommended by collaborative filtering alone.
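A minimal sketch of the weighted combination described above follows: the final prediction is a linear blend of a content-based score and an item-based collaborative-filtering score, with a fallback for unrated items. The weights and component scores are placeholders, not the parameters used in the paper's experiments.

```python
from typing import Optional

# Weighted hybrid scoring: the final prediction is a linear blend of a
# content-based score and an item-based collaborative-filtering score.
# Weights and component scores are illustrative placeholders.
def hybrid_score(content_score: float, collab_score: Optional[float],
                 w_content: float = 0.4, w_collab: float = 0.6) -> float:
    """Fall back to the content-based score when the item has no ratings,
    which is how the hybrid covers collaborative filtering's cold-start gap."""
    if collab_score is None:
        return content_score
    return w_content * content_score + w_collab * collab_score

print(hybrid_score(0.7, 0.9))   # rated movie: 0.4*0.7 + 0.6*0.9 = 0.82
print(hybrid_score(0.7, None))  # unrated movie: content-based score only
```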
A review on creatinine measurement techniques.
Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad
2012-08-15
This paper reviews recent global trends in creatinine measurement. Creatinine biosensors involve complex relationships between the biology of the blood sample and the micro-mechatronics to which it is subjected. Comparison between new and old methods shows that new techniques (e.g. Molecularly Imprinted Polymer (MIP)-based approaches) are better than old methods (e.g. ELISA) in terms of stability and linear range. All methods and their details for serum, plasma, urine and blood samples are surveyed. They are categorized into five main classes: optical, electrochemical, impedimetric, Ion Selective Field-Effect Transistor (ISFET)-based and chromatographic techniques. Response time, detection limit, linear range and selectivity of the reported sensors are discussed. The potentiometric measurement technique has the lowest response time of 4-10 s, and the lowest detection limit of 0.28 nmol L(-1) belongs to the chromatographic technique. Comparison between the various measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.
Visibility enhancement of color images using Type-II fuzzy membership function
NASA Astrophysics Data System (ADS)
Singh, Harmandeep; Khehra, Baljit Singh
2018-04-01
Images taken in poor environmental conditions suffer from reduced visibility and obscured detail. Therefore, image enhancement techniques are necessary for recovering the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over-/under-enhancement issues, while fuzzy-based enhancement techniques suffer from over-/under-saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique is proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil during local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others regarding visible edge ratio, color gradients and the number of saturated pixels.
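For orientation, the sketch below applies a classic Type-I fuzzy intensification (the INT operator) to a synthetic grayscale image, only to illustrate what fuzzy-domain enhancement looks like in code. The paper's Type-II membership functions, with their upper and lower membership bounds, and its atmospheric-veil estimation are not reproduced.

```python
import numpy as np

# Classic Type-I fuzzy intensification of a grayscale image, shown only to
# illustrate fuzzy-domain enhancement in general. The paper's Type-II
# membership functions and atmospheric-veil handling are not reproduced.
def fuzzy_intensify(img: np.ndarray) -> np.ndarray:
    span = max(float(img.max() - img.min()), 1e-9)
    mu = (img.astype(float) - float(img.min())) / span            # fuzzification
    mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)  # INT operator
    return np.clip(mu * 255, 0, 255).astype(np.uint8)             # defuzzification

rng = np.random.default_rng(1)
low_contrast = np.clip(rng.normal(128, 20, (64, 64)), 0, 255)     # synthetic test image
enhanced = fuzzy_intensify(low_contrast)
print(f"std before: {low_contrast.std():.1f}, std after: {enhanced.std():.1f}")
```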
The application analysis of the multi-angle polarization technique for ocean color remote sensing
NASA Astrophysics Data System (ADS)
Zhang, Yongchao; Zhu, Jun; Yin, Huan; Zhang, Keli
2017-02-01
The multi-angle polarization technique, which uses the intensity of polarized radiation as the observed quantity, is a new remote sensing means for Earth observation. With this method, not only can multi-angle light intensity data be provided, but multi-angle information on polarized radiation can also be obtained. The technique may therefore solve problems that could not be solved with traditional remote sensing methods. Nowadays, the multi-angle polarization technique has become one of the hot topics in international quantitative remote sensing research. In this paper, we first introduce the principles of the multi-angle polarization technique; the state of basic research and engineering applications is then summarized and analysed for 1) the removal of sun glitter based on polarization, 2) ocean color remote sensing based on polarization, 3) oil spill detection using the polarization technique, and 4) ocean aerosol monitoring based on polarization. Finally, based on the previous work, we briefly present the problems and prospects of the multi-angle polarization technique as used in China's ocean color remote sensing.
The Effects of Practice-Based Training on Graduate Teaching Assistants’ Classroom Practices
Becker, Erin A.; Easlon, Erin J.; Potter, Sarah C.; Guzman-Alvarez, Alberto; Spear, Jensen M.; Facciotti, Marc T.; Igo, Michele M.; Singer, Mitchell; Pagliarulo, Christopher
2017-01-01
Evidence-based teaching is a highly complex skill, requiring repeated cycles of deliberate practice and feedback to master. Despite existing well-characterized frameworks for practice-based training in K–12 teacher education, the major principles of these frameworks have not yet been transferred to instructor development in higher educational contexts, including training of graduate teaching assistants (GTAs). We sought to determine whether a practice-based training program could help GTAs learn and use evidence-based teaching methods in their classrooms. We implemented a weekly training program for introductory biology GTAs that included structured drills of techniques selected to enhance student practice, logic development, and accountability and reduce apprehension. These elements were selected based on their previous characterization as dimensions of active learning. GTAs received regular performance feedback based on classroom observations. To quantify use of target techniques and levels of student participation, we collected and coded 160 h of video footage. We investigated the relationship between frequency of GTA implementation of target techniques and student exam scores; however, we observed no significant relationship. Although GTAs adopted and used many of the target techniques with high frequency, techniques that enforced student participation were not stably adopted, and their use was unresponsive to formal feedback. We also found that techniques discussed in training, but not practiced, were not used at quantifiable frequencies, further supporting the importance of practice-based training for influencing instructional practices. PMID:29146664
Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M
2017-04-01
In this paper, we review state-of-the-art techniques to correct eye motion artifacts in Optical Coherence Tomography (OCT) imaging. The methods for eye motion artifact reduction can be categorized into two major classes: (1) hardware-based techniques and (2) software-based techniques. In the first class, additional hardware is mounted onto the OCT scanner to gather information about the eye motion patterns during OCT data acquisition. This information is later processed and applied to the OCT data for creating an anatomically correct representation of the retina, either in an offline or online manner. In software based techniques, the motion patterns are approximated either by comparing the acquired data to a reference image, or by considering some prior assumptions about the nature of the eye motion. Careful investigations done on the most common methods in the field provides invaluable insight regarding future directions of the research in this area. The challenge in hardware-based techniques lies in the implementation aspects of particular devices. However, the results of these techniques are superior to those obtained from software-based techniques because they are capable of capturing secondary data related to eye motion during OCT acquisition. Software-based techniques on the other hand, achieve moderate success and their performance is highly dependent on the quality of the OCT data in terms of the amount of motion artifacts contained in them. However, they are still relevant to the field since they are the sole class of techniques with the ability to be applied to legacy data acquired using systems that do not have extra hardware to track eye motion. Copyright © 2017 Elsevier B.V. All rights reserved.
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to prevent using special marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second method used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated the accuracy and precision of the elementary geometrical shape model-based RSA method is equal to marker-based RSA. For model-based RSA using surface models, the accuracy is equal to the accuracy of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative for marker-based RSA.
Review of Fluorescence-Based Velocimetry Techniques to Study High-Speed Compressible Flows
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Johansen, Criag; Inman, Jennifer A.; Jones, Stephen B.; Danehy, Paul M.
2013-01-01
This paper reviews five laser-induced fluorescence-based velocimetry techniques that have been used to study high-speed compressible flows at NASA Langley Research Center. The techniques discussed in this paper include nitric oxide (NO) molecular tagging velocimetry (MTV), nitrogen dioxide photodissociation (NO2-to-NO) MTV, and NO and atomic oxygen (O-atom) Doppler-shift-based velocimetry. Measurements of both single-component and two-component velocity have been performed using these techniques. This paper details the specific application and experiment for which each technique has been used, the facility in which the experiment was performed, the experimental setup, sample results, and a discussion of the lessons learned from each experiment.
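Both families of techniques ultimately reduce to simple kinematic or spectroscopic relations: MTV divides the measured displacement of a tagged line by the known interframe delay, while Doppler-shift velocimetry converts a measured frequency shift into a line-of-sight velocity. The sketch below states both relations with invented numbers, not Langley data.

```python
# Generic velocimetry relations (illustrative numbers, not facility data).
def mtv_velocity(displacement_m: float, delay_s: float) -> float:
    """Molecular tagging velocimetry: tagged-line shift / interframe delay."""
    return displacement_m / delay_s

def doppler_velocity(freq_shift_hz: float, wavelength_m: float) -> float:
    """Line-of-sight velocity from a Doppler shift: v = delta_f * lambda."""
    return freq_shift_hz * wavelength_m

print(mtv_velocity(2.5e-3, 1e-6))       # 2500 m/s for a 2.5 mm shift over 1 microsecond
print(doppler_velocity(4.4e9, 226e-9))  # ~994 m/s for a 4.4 GHz shift near 226 nm (NO)
```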
Scalable graphene production: perspectives and challenges of plasma applications
NASA Astrophysics Data System (ADS)
Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth
2016-05-01
Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h-1 m-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.
Scalable graphene production: perspectives and challenges of plasma applications.
Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth
2016-05-19
Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various sizes reaching hundreds of square millimetres, and the thickness varying from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.
Linear time relational prototype based learning.
Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara
2012-10-01
Prototype-based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike their Euclidean counterparts, these techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are already infeasible for medium-sized data sets. The contribution of this article is twofold: on the one hand, we propose a novel supervised prototype-based classification technique for dissimilarity data based on popular learning vector quantization (LVQ); on the other hand, we transfer a linear-time approximation technique, the Nyström approximation, to this algorithm and to an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
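The Nyström approximation reconstructs an n-by-n (dis)similarity matrix from a small set of m landmark columns, replacing quadratic cost with roughly O(nm) work. The sketch below applies it to a Gaussian kernel matrix on random data; the coupling to LVQ and relational GTM described in the article is not shown.

```python
import numpy as np

# Nyström approximation of an n x n Gaussian kernel matrix from m landmark
# columns: K ≈ C W^+ C^T. Data, kernel width, and landmark count are
# illustrative; the LVQ / relational GTM integration is not shown.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
n, m = X.shape[0], 50
landmarks = rng.choice(n, size=m, replace=False)

def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

C = gaussian_kernel(X, X[landmarks])          # n x m block of kernel columns
W = C[landmarks]                              # m x m landmark-landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T        # rank-m approximation of K

K_exact = gaussian_kernel(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative Frobenius error of the Nyström approximation: {err:.3f}")
```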
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2018-06-01
Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation. In document representation, a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent but discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD. In addition, the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results. The experimental results indicated that the CGDR technique achieved 12% to 15% improvement in accuracy compared with fully automated document representation baseline techniques. Moreover, two-level classification obtained better results compared with one-level classification. The promising results of the proposed conceptual graph-based document representation technique suggest that pathologists can adopt the proposed system as their basis for second opinion, thereby supporting them in effectively determining CoD. Copyright © 2018 Elsevier Inc. All rights reserved.
Comparison of raised-microdisk whispering-gallery-mode characterization techniques.
Redding, Brandon; Marchena, Elton; Creazzo, Tim; Shi, Shouyuan; Prather, Dennis W
2010-04-01
We compare the two prevailing raised-microdisk whispering-gallery-mode (WGM) characterization techniques, one based on coupling emission to a tapered fiber and the other based on collecting emission in the far field. We applied both techniques to study WGMs in Si nanocrystal raised microdisks and observed dramatically different behavior. We explain this difference in terms of the radiative bending loss on which the far-field collection technique relies and discuss the regimes of operation in which each technique is appropriate.
Vision-based system identification technique for building structures using a motion capture system
NASA Astrophysics Data System (ADS)
Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon
2015-11-01
This paper presents a new vision-based system identification (SI) technique for building structures using a motion capture system (MCS). The MCS, with its outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the MCS displacements to acceleration and conducting SI by frequency domain decomposition (FDD). A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
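A compressed sketch of that processing chain follows: the measured displacements are differentiated twice to obtain accelerations, a cross-spectral density matrix is assembled, and its first singular value is inspected frequency by frequency (FDD), with peaks indicating natural frequencies. The three-channel signals, sampling rate, and assumed modal frequencies below are synthetic stand-ins, not MCS measurements.

```python
import numpy as np
from scipy.signal import csd, find_peaks

# Displacement -> acceleration -> frequency domain decomposition (FDD) sketch.
# The three "measured" channels are synthetic sinusoids at assumed natural
# frequencies of 2.0, 6.3 and 11.1 Hz plus a little noise, not MCS data.
fs = 200.0
t = np.arange(0, 20, 1 / fs)
modes = [2.0, 6.3, 11.1]
rng = np.random.default_rng(0)
disp = np.stack([
    sum(np.sin(2 * np.pi * f * t + 0.3 * ch) for f in modes)
    + 0.002 * rng.normal(size=t.size)
    for ch in range(3)
])

acc = np.gradient(np.gradient(disp, 1 / fs, axis=1), 1 / fs, axis=1)  # d^2x/dt^2

nper = 512
freqs, _ = csd(acc[0], acc[0], fs=fs, nperseg=nper)
G = np.zeros((freqs.size, 3, 3), dtype=complex)   # cross-spectral density matrix
for i in range(3):
    for j in range(3):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nper)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
peaks, _ = find_peaks(s1, distance=5, prominence=s1.max() * 1e-4)
top3 = peaks[np.argsort(s1[peaks])[-3:]]
print("identified natural frequencies (Hz, nearest FFT bins):",
      np.round(np.sort(freqs[top3]), 2))
```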
Constraint-based component-modeling for knowledge-based design
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1992-01-01
The paper describes the application of various advanced programming techniques derived from artificial intelligence research to the development of flexible design tools for conceptual design. Special attention is given to two techniques which appear to be readily applicable to such design tools: the constraint propagation technique and the object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
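To illustrate the constraint propagation idea in miniature, the sketch below lets a relation solve for whichever of its variables is still unknown once the others have values. The three-variable wing-loading relation and the small API are invented for illustration and are not Rubber Airplane's actual interface.

```python
# Tiny constraint-propagation sketch: each constraint knows how to solve for
# any one of its variables once the others are known. The aircraft-sizing
# relation W = S * WS (wing loading) is an illustrative placeholder only.
class Constraint:
    def __init__(self, variables, solvers):
        self.variables = variables      # names involved in the relation
        self.solvers = solvers          # var -> function(values) computing it

    def propagate(self, values):
        unknown = [v for v in self.variables if v not in values]
        if len(unknown) == 1:           # exactly one unknown: we can solve it
            var = unknown[0]
            values[var] = self.solvers[var](values)
            return True
        return False

# W = S * WS  (weight = wing area * wing loading)
c = Constraint(
    ["W", "S", "WS"],
    {"W": lambda v: v["S"] * v["WS"],
     "S": lambda v: v["W"] / v["WS"],
     "WS": lambda v: v["W"] / v["S"]},
)

design = {"W": 12000.0, "WS": 80.0}     # designer fixes two values...
while c.propagate(design):              # ...propagation fills in the third
    pass
print(design)                           # {'W': 12000.0, 'WS': 80.0, 'S': 150.0}
```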
Overland flow erosion inferred from Martian channel network geometry
NASA Astrophysics Data System (ADS)
Seybold, Hansjörg; Kirchner, James
2016-04-01
The controversy about the origin of Mars' channel networks is almost as old as their discovery 150 years ago. Over the last few decades, new Mars probes have revealed more detailed structures in Martian drainage networks, and new studies suggest that Mars once had large volumes of surface water. But how this water flowed, and how it could have carved the channels, remains unclear. Simple scaling arguments show that networks formed by similar mechanisms should have similar branching angles on Earth and Mars, suggesting that Earth analogues can be informative here. A recent analysis of high-resolution data for the continental United States shows that climate leaves a characteristic imprint in the branching geometry of stream networks. Networks growing in humid regions have an average branching angle of α = 2π/5 = 72° [1], which is characteristic of network growth by groundwater sapping [2]. Networks in arid regions, where overland flow erosion is more dominant, show much smaller branching angles. Here we show that the channel networks on Mars have branching angles that resemble those created by surficial flows on Earth. This result implies that the growth of Martian channel networks was dominated by near-surface flow, and suggests that deeper infiltration was inhibited, potentially by permafrost or by impermeable weathered soils. [1] Climate's Watermark in the Geometry of River Networks, Seybold et al.; under review [2] Ramification of stream networks, Devauchelle et al.; PNAS (2012)
Predicting Impacts of tropical cyclones and sea-Level rise on beach mouse habitat
Chen, Qin; Wang, Hongqing; Wang, Lixia; Tawes, Robert; Rollman, Drew
2014-01-01
The Alabama beach mouse (ABM) (Peromyscus polionotus ammobates) is an important component of the coastal dune ecosystem along the Gulf of Mexico. Due to habitat loss and degradation, ABM is federally listed as an endangered species. In this study, we examined the impacts of storm surge and wind waves, which are induced by hurricanes and sea-level rise (SLR), on the ABM habitat on Fort Morgan Peninsula, Alabama, using advanced storm surge and wind wave models and spatial analysis tools in geographic information systems (GIS). Statistical analyses of the long-term historical data enabled us to predict the extreme values of winds, wind waves, and water levels in the study area at different return periods. We developed a series of nested domains for both wave and surge modeling and validated the models using field observations of surge hydrographs and high-water marks of Hurricane Ivan (2004). We then developed wave atlases and flood maps corresponding to the extreme wind, surge and waves without SLR and with 0.5 m of SLR by coupling the wave and surge prediction models. The flood maps were then merged with a map of ABM habitat to determine the extent and location of habitat impacted by the 100-year storm with and without SLR. Simulation results indicate that more than 82% of ABM habitat would be inundated in such an extreme storm event, especially under SLR, making ABM populations more vulnerable to future storm damage. These results have aided biologists, community planners, and other stakeholders in the identification, restoration and protection of key beach mouse habitat in Alabama. Methods outlined in this paper could also be used to assist in the conservation and recovery of imperiled coastal species elsewhere.
Controls on stream network branching angles, tested using landscape evolution models
NASA Astrophysics Data System (ADS)
Theodoratos, Nikolaos; Seybold, Hansjörg; Kirchner, James W.
2016-04-01
Stream networks are striking landscape features. The topology of stream networks has been extensively studied, but their geometry has received limited attention. Analyses of nearly 1 million stream junctions across the contiguous United States [1] have revealed that stream branching angles vary systematically with climate and topographic gradients at continental scale. Stream networks in areas with wet climates and gentle slopes tend to have wider branching angles than in areas with dry climates or steep slopes, but the mechanistic linkages underlying these empirical correlations remain unclear. Under different climatic and topographic conditions different runoff generation mechanisms and, consequently, transport processes are dominant. Models [2] and experiments [3] have shown that the relative strength of channel incision versus diffusive hillslope transport controls the spacing between valleys, an important geometric property of stream networks. We used landscape evolution models (LEMs) to test whether similar factors control network branching angles as well. We simulated stream networks using a wide range of hillslope diffusion and channel incision parameters. The resulting branching angles vary systematically with the parameters, but by much less than the regional variability in real-world stream networks. Our results suggest that the competition between hillslope and channeling processes influences branching angles, but that other mechanisms may also be needed to account for the variability in branching angles observed in the field. References: [1] H. Seybold, D. H. Rothman, and J. W. Kirchner, 2015, Climate's watermark in the geometry of river networks, Submitted manuscript. [2] J. T. Perron, W. E. Dietrich, and J. W. Kirchner, 2008, Controls on the spacing of first-order valleys, Journal of Geophysical Research, 113, F04016. [3] K. E. Sweeney, J. J. Roering, and C. Ellis, 2015, Experimental evidence for hillslope control of landscape scale, Science, 349(6243), 51-53.
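For completeness, the branching angle itself can be measured as the angle between the two tributary directions at a junction, each tributary reduced to a straight segment ending at the junction. The coordinates below are invented for illustration.

```python
import numpy as np

# Branching angle at a stream junction, taken as the angle between the two
# tributary directions (each tributary reduced to a straight segment ending
# at the junction). Coordinates are invented for illustration.
def branching_angle(junction, upstream_a, upstream_b):
    a = np.asarray(upstream_a, float) - np.asarray(junction, float)
    b = np.asarray(upstream_b, float) - np.asarray(junction, float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(f"{branching_angle((0, 0), (-3, 4), (3, 4)):.1f} degrees")
# vectors (-3, 4) and (3, 4): cos = (-9 + 16) / 25 = 0.28, giving about 73.7
# degrees, close to the 72-degree "humid" signature reported by Seybold et al.
```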
Medical image security in a HIPAA mandated PACS environment.
Cao, F; Huang, H K; Zhou, X Q
2003-01-01
Medical image security is an important issue when digital images and their pertinent patient information are transmitted across public networks. Mandates for ensuring health data security have been issued by the federal government, such as the Health Insurance Portability and Accountability Act (HIPAA), under which healthcare institutions are obliged to take appropriate measures to ensure that patient information is only provided to people who have a professional need. Guidelines, such as digital imaging and communication in medicine (DICOM) standards that deal with security issues, continue to be published by organizing bodies in healthcare. However, there are many differences in implementation, especially for an integrated system like a picture archiving and communication system (PACS), and the infrastructure to deploy these security standards is often lacking. Over the past 6 years, members of the Image Processing and Informatics Laboratory, Children's Hospital, Los Angeles/University of Southern California, have actively researched image security issues related to PACS and teleradiology. The paper summarizes our previous work and presents an approach to further research on the digital envelope (DE) concept that provides image integrity and security assurance in addition to conventional network security protection. The DE, including the digital signature (DS) of the image as well as encrypted patient information from the DICOM image header, can be embedded in the background area of the image as an invisible permanent watermark. The paper outlines the systematic development, evaluation and deployment of the DE method in a PACS environment. We have also proposed a dedicated PACS security server that will act as an image authority to check and certify the image origin and integrity upon request by a user, and will also act as a secure DICOM gateway to outside connections and a PACS operation monitor for HIPAA supporting information. Copyright 2002 Elsevier Science Ltd.
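To make the digital envelope idea concrete, the sketch below hashes the pixel data, (toy-)encrypts selected header fields, and hides the resulting payload in least-significant bits standing in for the dark background region used by the DE method. The function names, the XOR "cipher", and the embedding location are illustrative assumptions, not the authors' PACS implementation, which would use a real digital signature and cipher.

```python
import hashlib
import numpy as np

def build_envelope(pixels: np.ndarray, header_fields: dict, key: bytes) -> bytes:
    """Digital envelope = image digest + (toy-)encrypted header bytes. In practice the
    digest would be signed with the sender's private key and the header encrypted with
    a real cipher; both are simplified here for illustration."""
    digest = hashlib.sha256(pixels.tobytes()).digest()
    header_bytes = repr(sorted(header_fields.items())).encode()
    stream = hashlib.sha256(key).digest()
    encrypted = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(header_bytes))
    return digest + len(encrypted).to_bytes(4, "big") + encrypted

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the envelope in the least-significant bits of the first image pixels,
    standing in for the background region. A practical scheme would compute the
    digest so that it excludes the bits overwritten here."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    stego = pixels.copy().ravel()
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
    return stego.reshape(pixels.shape)

image = np.random.default_rng(0).integers(0, 256, (512, 512), dtype=np.uint8)
envelope = build_envelope(image, {"PatientID": "ANON-001", "StudyDate": "20030101"}, b"site-key")
watermarked = embed_lsb(image, envelope)
```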
Kruse, B J
1994-01-01
The author of the famous midwifery textbook Der schwangeren Frauen und Hebammen Rosengarten has until now been thought to have been Eucharius Rösslin the Elder, in whose name the first printed edition of the work appeared in 1513. According to him, he compiled the text from various sources in the years 1508-1512 at the suggestion of the Duchess Catherine of Brunswick-Luneburg. In the SB und UB Hamburg there is a handwritten preliminary draft of the Rosengarten (Cod. med. 801, p. 9-130), dated by the scribe to the year 1494 (this is borne out by watermark analysis). It reproduces the text of the Rosengarten almost identically, but without the privilegium, the dedication, the rhyming 'admonition' of the pregnant women and the midwives, the glossary, and the illustrative woodcuts. The printed version of the Rosengarten was also expanded by Eucharius Rösslin the Elder with passages taken, among others, from Ps.-Ortolf's Frauenbüchlein. The author of this paper was also able to trace a previously unknown handwritten preliminary draft of the Frauenbüchlein in manuscript 2967 of the Austrian National Library in Vienna. The remark Hic liber pertinet ad Constantinum Roeslin written in the manuscript by a previous owner, and a treatise on syphilis in the hand of Eucharius Rösslin the Younger, indicate that Cod. med. 801 was once in the possession of the Rösslin family. Since Eucharius Rösslin the Elder was born around 1470, and since errors and omissions in Cod. med. 801 indicate that it is a copy of an older text, we are confronted with the question of whether the handwritten version of the Rosengarten originates from him or from some other author.
NASA Astrophysics Data System (ADS)
Bian, Qiang; Song, Zhangqi; Zhang, Xueliang; Yu, Yang; Chen, Yuzhong
2018-03-01
We propose a refractive index sensor based on the optical fiber end face using a pulse reference-based compensation technique. With the good compensation effect of this technique, fluctuations in the light source power and changes in the transmission loss of optical components and the coupler splitting ratio can be compensated, which largely reduces the background noise. The refractive index resolution reaches 3.8 × 10⁻⁶ RIU and 1.6 × 10⁻⁶ RIU in different refractive index regions.
Macready, Anna L; Fallaize, Rosalind; Butler, Laurie T; Ellis, Judi A; Kuznesof, Sharron; Frewer, Lynn J; Celis-Morales, Carlos; Livingstone, Katherine M; Araújo-Soares, Vera; Fischer, Arnout Rh; Stewart-Knox, Barbara J; Mathers, John C; Lovegrove, Julie A
2018-04-09
To determine the efficacy of behavior change techniques applied in dietary and physical activity intervention studies, it is first necessary to record and describe techniques that have been used during such interventions. Published frameworks used in dietary and smoking cessation interventions undergo continuous development, and most are not adapted for Web-based delivery. The Food4Me study (N=1607) provided the opportunity to use existing frameworks to describe standardized Web-based techniques employed in a large-scale, internet-based intervention to change dietary behavior and physical activity. The aims of this study were (1) to describe techniques embedded in the Food4Me study design and explain the selection rationale and (2) to demonstrate the use of behavior change technique taxonomies, develop standard operating procedures for training, and identify strengths and limitations of the Food4Me framework that will inform its use in future studies. The 6-month randomized controlled trial took place simultaneously in seven European countries, with participants receiving one of four levels of personalized advice (generalized, intake-based, intake+phenotype-based, and intake+phenotype+gene-based). A three-phase approach was taken: (1) existing taxonomies were reviewed and techniques were identified a priori for possible inclusion in the Food4Me study, (2) a standard operating procedure was developed to maintain consistency in the use of methods and techniques across research centers, and (3) the Food4Me behavior change technique framework was reviewed and updated post intervention. An analysis of excluded techniques was also conducted. Of 46 techniques identified a priori as being applicable to Food4Me, 17 were embedded in the intervention design; 11 were from a dietary taxonomy, and 6 from a smoking cessation taxonomy. In addition, the four-category smoking cessation framework structure was adopted for clarity of communication. Smoking cessation texts were adapted for dietary use where necessary. A posteriori, a further 9 techniques were included. Examination of excluded items highlighted the distinction between techniques considered appropriate for face-to-face versus internet-based delivery. The use of existing taxonomies facilitated the description and standardization of techniques used in Food4Me. We recommend that for complex studies of this nature, technique analysis should be conducted a priori to develop standardized procedures and training and reviewed a posteriori to audit the techniques actually adopted. The present framework description makes a valuable contribution to future systematic reviews and meta-analyses that explore technique efficacy and underlying psychological constructs. This was a novel application of the behavior change taxonomies and was the first internet-based personalized nutrition intervention to use such a framework remotely. ClinicalTrials.gov NCT01530139; https://clinicaltrials.gov/ct2/show/NCT01530139 (Archived by WebCite at http://www.webcitation.org/6y8XYUft1). ©Anna L Macready, Rosalind Fallaize, Laurie T Butler, Judi A Ellis, Sharron Kuznesof, Lynn J Frewer, Carlos Celis-Morales, Katherine M Livingstone, Vera Araújo-Soares, Arnout RH Fischer, Barbara J Stewart-Knox, John C Mathers, Julie A Lovegrove. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 09.04.2018.
NASA Astrophysics Data System (ADS)
Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin
2015-10-01
The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines appropriate kernel parameter values by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes disadvantages of the traditional cross-validation method, such as high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
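As an illustration of the general idea, the sketch below uses SciPy's differential evolution to tune an RBF kernel width by maximizing a simple class-separability score computed from the kernel matrix. The separability measure is an assumed stand-in, not the KFKT-based objective of the article.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import cdist

def rbf_kernel(X, Y, gamma):
    return np.exp(-gamma * cdist(X, Y, "sqeuclidean"))

def separability(log_gamma, X, y):
    """Negative discrimination score: mean within-class kernel similarity minus
    mean between-class similarity (a proxy objective, not the paper's)."""
    gamma = 10.0 ** log_gamma[0]
    K = rbf_kernel(X, X, gamma)
    eye = np.eye(len(y), dtype=bool)
    same = (y[:, None] == y[None, :]) & ~eye
    diff = y[:, None] != y[None, :]
    return -(K[same].mean() - K[diff].mean())   # minimise the negative score

# toy two-class data in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)

result = differential_evolution(separability, bounds=[(-4, 2)], args=(X, y), seed=0)
print("selected gamma:", 10.0 ** result.x[0])
```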
NASA Astrophysics Data System (ADS)
Shao, Xupeng
2017-04-01
Glutenite bodies are widely developed in the northern Minfeng zone of the Dongying Sag, but their litho-electric relationship is not clear. In addition, because the conventional sequence stratigraphic research method has the drawback of involving too many subjective human factors, it has limited further progress in regional sequence stratigraphic research. Compared with conventional methods, the wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data have the advantage of dividing sequence stratigraphy quantitatively. Building on the conventional sequence research method, this paper used the above techniques to divide the fourth-order sequences of the upper Es4 in the northern Minfeng zone of the Dongying Sag. The research shows that the wavelet transform technique based on logging data and the time-frequency analysis technique based on seismic data are essentially consistent, as both divide sequence stratigraphy quantitatively in the frequency domain. The wavelet transform technique has high resolution and is suitable for areas with wells, whereas the seismic time-frequency analysis technique has wide applicability but low resolution, so the two techniques should be combined. The upper Es4 in the northern Minfeng zone of the Dongying Sag is a complete third-order sequence, which can be further subdivided into five fourth-order sequences with fining-upward depositional characteristics. Key words: Dongying Sag, northern Minfeng zone, wavelet transform technique, time-frequency analysis technique, the upper Es4, sequence stratigraphy
Ultrabroadband Phased-Array Receivers Based on Optical Techniques
2016-02-26
AFRL-AFOSR-VA-TR-2016-0121. Ultrabroadband Phased-Array Receivers Based on Optical Techniques. Final report by Christopher Schuetz, University of Delaware, under grant FA9550-12-1-… The report describes a receiver that enables signals to be captured and converted across an array using photonic modulators, routing these signals to a central location using …
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
We propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the … For the second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity …
Mirapeix, J; Cobo, A; González, D A; López-Higuera, J M
2007-02-19
A new plasma spectroscopy analysis technique based on the generation of synthetic spectra by means of optimization processes is presented in this paper. The technique has been developed for its application in arc-welding quality assurance. The new approach has been checked through several experimental tests, yielding results in reasonably good agreement with the ones offered by the traditional spectroscopic analysis technique.
NASA Astrophysics Data System (ADS)
Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie
2017-12-01
In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats which compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON with properly selected features such as bandwidth, route length, etc. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based control plane intrusion detection techniques.
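A minimal sketch of the workflow described, assuming synthetic connection-request features (bandwidth, route length, holding time) and a generic classifier in place of the authors' models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic requests: [requested bandwidth (Gb/s), route length (hops), holding time (s)]
rng = np.random.default_rng(1)
normal = np.column_stack([rng.normal(10, 3, 500), rng.integers(1, 6, 500), rng.normal(300, 60, 500)])
attack = np.column_stack([rng.normal(40, 5, 100), rng.integers(5, 12, 100), rng.normal(30, 10, 100)])
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 100)          # 0 = legitimate, 1 = intrusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```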
Separation techniques: Chromatography
Coskun, Ozlem
2016-01-01
Chromatography is an important biophysical technique that enables the separation, identification, and purification of the components of a mixture for qualitative and quantitative analysis. Proteins can be purified based on characteristics such as size and shape, total charge, hydrophobic groups present on the surface, and binding capacity with the stationary phase. Four separation techniques based on molecular characteristics and interaction type use mechanisms of ion exchange, surface adsorption, partition, and size exclusion. Other chromatography techniques are based on the stationary bed, including column, thin layer, and paper chromatography. Column chromatography is one of the most common methods of protein purification. PMID:28058406
Monitoring Knowledge Base (MKB)
The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial process descriptions, and permitting techniques, including flexible permit development. Using MKB, one can gain a comprehensive understanding of emissions sources, control devices, and monitoring techniques, enabling one to determine appropriate permit terms and conditions.
Techniques for Enhancing Web-Based Education.
ERIC Educational Resources Information Center
Barbieri, Kathy; Mehringer, Susan
The Virtual Workshop is a World Wide Web-based set of modules on high performance computing developed at the Cornell Theory Center (CTC) (New York). This approach reaches a large audience, leverages staff effort, and poses challenges for developing interesting presentation techniques. This paper describes the following techniques with their…
Survey of in-situ and remote sensing methods for soil moisture determination
NASA Technical Reports Server (NTRS)
Schmugge, T. J.; Jackson, T. J.; Mckim, H. L.
1981-01-01
General methods for determining the moisture content in the surface layers of the soil based on in situ or point measurements, soil water models and remote sensing observations are surveyed. In situ methods described include gravimetric techniques, nuclear techniques based on neutron scattering or gamma-ray attenuation, electromagnetic techniques, tensiometric techniques and hygrometric techniques. Soil water models based on column mass balance treat soil moisture contents as a result of meteorological inputs (precipitation, runoff, subsurface flow) and demands (evaporation, transpiration, percolation). The remote sensing approaches are based on measurements of the diurnal range of surface temperature and the crop canopy temperature in the thermal infrared, measurements of the radar backscattering coefficient in the microwave region, and measurements of microwave emission or brightness temperature. Advantages and disadvantages of the various methods are pointed out, and it is concluded that a successful monitoring system must incorporate all of the approaches considered.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation for LDPC code decoder is a major challenging task to apply it to the practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique which features as follows: (i) An intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation. (ii) A clock gated shift register based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency up to 52% and 18% compared to that of the decoder based on the overlapped schedule and the rapid convergence schedule without the proposed techniques respectively.
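The standard min-sum compaction below illustrates how a full set of check-to-variable messages can be stored as just two magnitudes, an index, and the edge signs. It is only an analogy for the intermediate message-compression idea, not the decoder architecture itself.

```python
import numpy as np

def compress_check_messages(v2c: np.ndarray):
    """Compress all min-sum check-node messages into (min1, min2, argmin, signs)
    instead of one stored value per edge."""
    mag = np.abs(v2c)
    order = np.argsort(mag)
    signs = np.sign(v2c).astype(np.int8)
    signs[signs == 0] = 1
    return mag[order[0]], mag[order[1]], int(order[0]), signs

def decompress_check_messages(min1, min2, idx, signs):
    """Reconstruct every check-to-variable message from the compressed form."""
    total_sign = np.prod(signs)
    out = np.full(signs.shape, min1, dtype=float)
    out[idx] = min2                        # the minimum edge receives the second minimum
    return total_sign * signs * out        # excludes each edge's own sign via sign product

v2c = np.array([-1.3, 0.4, 2.1, -0.9])     # incoming variable-to-check messages
print(decompress_check_messages(*compress_check_messages(v2c)))
```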
NASA Astrophysics Data System (ADS)
Ageev, O. A.; Il'in, O. I.; Rubashkina, M. V.; Smirnov, V. A.; Fedotov, A. A.; Tsukanova, O. G.
2015-07-01
Techniques are developed to determine the resistance per unit length and the electrical resistivity of vertically aligned carbon nanotubes (VA CNTs) using atomic force microscopy (AFM) and scanning tunneling microscopy (STM). These techniques are used to study the resistance of VA CNTs. The resistance of an individual VA CNT calculated with the AFM-based technique is shown to be higher than the resistance of VA CNTs determined by the STM-based technique by a factor of 200, which is related to the influence of the resistance of the contact of an AFM probe to VA CNTs. The resistance per unit length and the electrical resistivity of an individual VA CNT 118 ± 39 nm in diameter and 2.23 ± 0.37 μm in height that are determined by the STM-based technique are 19.28 ± 3.08 kΩ/μm and 8.32 ± 3.18 × 10⁻⁴ Ω·m, respectively. The STM-based technique developed to determine the resistance per unit length and the electrical resistivity of VA CNTs can be used to diagnose the electrical parameters of VA CNTs and to create VA CNT-based nanoelectronic elements.
NASA Astrophysics Data System (ADS)
Suresha, B. L.; Sumantha, H. S.; Salman, K. Mohammed; Pramod, N. G.; Abhiram, J.
2018-04-01
The ionization potential is usually found to be lower in acids and higher in bases. The experiment shows that the ionization potential increases as an acid is diluted toward a base and decreases from base to acid. The potential can be tailored to the desired properties through the choice of acid or base. The experimental study establishes a direct relationship between pH and electric potential. This work provides theoretical insight into the need for a basic medium of pH 10 in the chemical thin-film growth technique called Chemical Bath Deposition.
Wavelet Transform Based Filter to Remove the Notches from Signal Under Harmonic Polluted Environment
NASA Astrophysics Data System (ADS)
Das, Sukanta; Ranjan, Vikash
2017-12-01
This work proposes to eliminate the notches in the synchronizing signal required for converter operation, which appear due to the switching of semiconductor devices connected to the system in a harmonic-polluted environment. The disturbances in the signal are suppressed by a novel wavelet-based filtering technique. In the proposed technique, the notches in the signal are detected and eliminated by a wavelet-based multi-rate filter using `Daubechies4' (db4) as the mother wavelet. The computational complexity of the adopted technique is much lower than that of conventional notch filtering techniques. The proposed technique is developed in MATLAB/Simulink and finally validated with a dSPACE-1103 interface. The recovered signal thus obtained is almost free of notches.
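A rough sketch of the general approach, using PyWavelets with db4 and a simple threshold on the detail coefficients; the multi-rate filter of the paper is implemented in MATLAB/Simulink and differs in detail.

```python
import numpy as np
import pywt

def suppress_notches(signal, wavelet="db4", level=4, k=3.0):
    """Suppress narrow notches by zeroing detail coefficients that stand out
    from a robust noise estimate (a generic sketch, not the paper's filter)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    cleaned = [coeffs[0]]                        # keep the approximation untouched
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745    # robust scale estimate
        cleaned.append(np.where(np.abs(d) > k * sigma, 0.0, d))
    return pywt.waverec(cleaned, wavelet)[: len(signal)]

# synchronizing signal: 50 Hz sine with injected switching notches
t = np.linspace(0, 0.1, 2000, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t)
sig[::200] -= 0.8
print(np.max(np.abs(suppress_notches(sig) - np.sin(2 * np.pi * 50 * t))))
```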
Levecke, Bruno; De Wilde, Nathalie; Vandenhoute, Els; Vercruysse, Jozef
2009-01-01
Background Soil-transmitted helminths, such as Trichuris trichiura, are of major concern in public health. Current efforts to control these helminth infections involve periodic mass treatment in endemic areas. Since these large-scale interventions are likely to intensify, monitoring the drug efficacy will become indispensable. However, studies comparing detection techniques based on sensitivity, fecal egg counts (FEC), feasibility for mass diagnosis, and drug efficacy estimates are scarce. Methodology/Principal Findings In the present study, the ether-based concentration, the Parasep Solvent Free (SF), the McMaster and the FLOTAC techniques were compared based on both validity and feasibility for the detection of Trichuris eggs in 100 fecal samples of nonhuman primates. In addition, the drug efficacy estimates of the quantitative techniques were examined using a statistical simulation. Trichuris eggs were found in 47% of the samples. FLOTAC was the most sensitive technique (100%), followed by the Parasep SF (83.0% [95% confidence interval (CI): 82.4–83.6%]) and the ether-based concentration technique (76.6% [95% CI: 75.8–77.3%]). McMaster was the least sensitive (61.7% [95% CI: 60.7–62.6%]) and failed to detect low FEC. The quantitative comparison revealed a positive correlation between the four techniques (Rs = 0.85–0.93; p<0.0001). However, the ether-based concentration technique and the Parasep SF detected significantly fewer eggs than both the McMaster and the FLOTAC (p<0.0083). Overall, the McMaster was the most feasible technique (3.9 min/sample for preparing, reading and cleaning of the apparatus), followed by the ether-based concentration technique (7.7 min/sample) and the FLOTAC (9.8 min/sample). Parasep SF was the least feasible (17.7 min/sample). The simulation revealed that sensitivity is less important for monitoring drug efficacy and that both FLOTAC and McMaster were reliable estimators. Conclusions/Significance The results of this study demonstrate that McMaster is a promising technique when making use of FEC to monitor drug efficacy against Trichuris. PMID:19172171
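For context, drug efficacy from fecal egg counts is commonly summarized as the percentage reduction in the group mean FEC. The sketch below computes this with a percentile-bootstrap confidence interval; it is a generic estimator with made-up numbers, not the exact simulation reported in the paper.

```python
import numpy as np

def fecr_bootstrap(pre, post, n_boot=5000, seed=0):
    """Fecal egg count reduction (%) with a percentile bootstrap CI."""
    rng = np.random.default_rng(seed)
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    point = 100.0 * (1.0 - post.mean() / pre.mean())
    reps = []
    for _ in range(n_boot):
        i = rng.integers(0, len(pre), len(pre))
        j = rng.integers(0, len(post), len(post))
        reps.append(100.0 * (1.0 - post[j].mean() / pre[i].mean()))
    lo, hi = np.percentile(reps, [2.5, 97.5])
    return point, (lo, hi)

pre = [350, 120, 800, 60, 240, 410, 90, 510]     # eggs per gram before treatment (illustrative)
post = [40, 0, 120, 0, 30, 60, 10, 80]           # eggs per gram after treatment (illustrative)
print(fecr_bootstrap(pre, post))
```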
Justus, Alan L
2015-05-01
This paper presents technically-based techniques to deal with nuisance personnel contamination monitor (PCM) alarms. The techniques derive from the fundamental physical characteristics of radon progeny. Some PCM alarms, although valid alarms and not actually "false," could be due to nuisance naturally-occurring radionuclides (i.e., radon progeny). Based on certain observed characteristics of the radon progeny, several prompt techniques are discussed that could either remediate or at least mitigate the problem of nuisance alarms. Examples are provided which demonstrate the effective use of the techniques.
NASA Astrophysics Data System (ADS)
Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.
2018-04-01
The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
Fu, Yu; Pedrini, Giancarlo
2014-01-01
In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured in any instant and the Nyquist sampling theorem has to be satisfied along the time axis on each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capturing rate, while the photodetector-based technology can only do the measurement on a single point. In this paper, several aspects of these two technologies are discussed. For the camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For the detector-based interferometry, the discussion mainly focuses on the single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort done by researchers for the improvement of the measurement capabilities using interferometry-based techniques to cover the requirements needed for the industrial applications. PMID:24963503
Passive Optical Locking Techniques for Diode Lasers
NASA Astrophysics Data System (ADS)
Zhang, Quan
1995-01-01
Most current diode-based nonlinear frequency converters utilize electronic frequency locking techniques. However, this type of locking technique typically involves very complex electronics, and suffers the 'power-drop' problem. This dissertation is devoted to the development of an all-optical passive locking technique that locks the diode laser frequency to the external cavity resonance stably without using any kind of electronic servo. The amplitude noise problem associated with the strong optical locking has been studied. Single-mode operation of a passively locked single-stripe diode with an amplitude stability better than 1% has been achieved. This passive optical locking technique applies to broad-area diodes as well as single-stripe diodes, and can be easily used to generate blue light. A schematic of a milliwatt level blue laser based on the single-stripe diode locking technique has been proposed. A 120 mW 467 nm blue laser has been built using the tapered amplifier locking technique. In addition to diode-based blue lasers, this passive locking technique has applications in nonlinear frequency conversions, resonant spectroscopy, particle counter devices, telecommunications, and medical devices.
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
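A simplified sketch of the underlying idea, for the special case of unity gain and a known one-pixel horizontal shift between two frames: bias estimates are transported column by column from an absolutely calibrated first column. The published algorithm handles arbitrary global motion and is more involved.

```python
import numpy as np

def transport_bias(frame1, frame2, bias_first_col):
    """Transport bias estimates from a calibrated first column to the interior,
    assuming unity gain and a one-pixel horizontal pan between the frames."""
    rows, cols = frame1.shape
    bias = np.zeros((rows, cols))
    bias[:, 0] = bias_first_col
    for j in range(cols - 1):
        # detector (i, j) in frame1 and detector (i, j+1) in frame2 see the same scene
        bias[:, j + 1] = bias[:, j] + (frame2[:, j + 1] - frame1[:, j])
    return bias

# noise-free simulation: a fixed scene imaged twice with a one-pixel pan
rng = np.random.default_rng(0)
scene = rng.normal(100, 20, (64, 64))
true_bias = rng.normal(0, 5, (64, 64))
frame1 = scene + true_bias
frame2 = np.empty_like(frame1)
frame2[:, 1:] = scene[:, :-1] + true_bias[:, 1:]
frame2[:, 0] = rng.normal(100, 20, 64) + true_bias[:, 0]   # new scene entering the field of view
estimate = transport_bias(frame1, frame2, true_bias[:, 0])
print(np.max(np.abs(estimate - true_bias)))                 # essentially zero in this noise-free case
```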
[Three-dimensional endoscopic endonasal study of skull base anatomy].
Abarca-Olivas, Javier; Monjas-Cánovas, Irene; López-Álvarez, Beatriz; Lloret-García, Jaime; Sanchez-del Campo, Jose; Gras-Albert, Juan Ramon; Moreno-López, Pedro
2014-01-01
Training in dissection of the paranasal sinuses and the skull base is essential for anatomical understanding and correct surgical techniques. Three-dimensional (3D) visualisation of endoscopic skull base anatomy increases spatial orientation and allows depth perception. To show endoscopic skull base anatomy based on the 3D technique. We performed endoscopic dissection in cadaveric specimens fixed with formalin and with the Thiel technique, both prepared using intravascular injection of coloured material. Endonasal approaches were performed with conventional 2D endoscopes. Then we applied the 3D anaglyph technique to illustrate the pictures in 3D. The most important anatomical structures and landmarks of the sellar region under endonasal endoscopic vision are illustrated in 3D images. The skull base consists of complex bony and neurovascular structures. Experience with cadaver dissection is essential to understand complex anatomy and develop surgical skills. A 3D view constitutes a useful tool for understanding skull base anatomy. Copyright © 2012 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
NASA Technical Reports Server (NTRS)
Harrison, P. Ann
1993-01-01
All the NASA VEGetation Workbench (VEG) goals except the Learning System provide the scientist with several different techniques. When VEG is run, rules assist the scientist in selecting the best of the available techniques to apply to the sample of cover type data being studied. The techniques are stored in the VEG knowledge base. A new interface that enables the scientist to add techniques to VEG without assistance from the developer was designed and implemented. This interface does not require the scientist to have a thorough knowledge of Knowledge Engineering Environment (KEE) by Intellicorp or a detailed knowledge of the structure of VEG. The interface prompts the scientist to enter the required information about the new technique, including the Common Lisp functions for executing the technique and the left hand side of the rule that causes the technique to be selected. A template for each function and rule and detailed instructions about the arguments of the functions, the values they should return, and the format of the rule are displayed. Checks are made to ensure that the required data were entered, the functions compiled correctly, and the rule parsed correctly before the new technique is stored. The additional techniques are stored separately from the VEG knowledge base. When the VEG knowledge base is loaded, the additional techniques are not normally loaded; the interface allows the scientist the option of adding all the previously defined new techniques before running VEG. When the techniques are added, the required units to store the additional techniques are created automatically in the correct places in the VEG knowledge base. The methods file containing the functions required by the additional techniques is loaded, and new rule units are created to store the new rules. The interface that allows the scientist to select which techniques to use is updated automatically to include the new techniques. Task H was completed: the interface that allows the scientist to add techniques to VEG was implemented and comprehensively tested. The Common Lisp code for the Add Techniques system is listed in Appendix A.
Writing with Basals: A Sentence Combining Approach to Comprehension.
ERIC Educational Resources Information Center
Reutzel, D. Ray; Merrill, Jimmie D.
Sentence combining techniques can be used with basal readers to help students develop writing skills. The first technique is addition, characterized by using the connecting word "and" to join two or more base sentences together. The second technique is called "embedding," and is characterized by putting parts of two or more base sentences together…
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, maintaining forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques perform poorly under frequent mobility and disconnections. So there is a need to design a multicast key management technique to overcome these problems. In this paper, we propose a mobility based key management technique for multicast security in MANET. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that for every weak node there is a strong parent node. A session key-based encryption technique is utilized to transmit multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category, so this technique greatly minimizes the rekeying overhead. Simulation results show that our proposed approach reduces the packet drop rate and improves data confidentiality.
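A toy sketch of how a stability index and a category-dependent rekeying interval might be computed; the weighting, threshold, and intervals are assumptions for illustration, not the paper's formulas.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    link_availability: float   # fraction of time links to neighbours were up (0..1)
    speed: float               # current speed in m/s
    max_speed: float = 20.0    # network-wide maximum used for normalisation

def stability_index(n: NodeState, w: float = 0.6) -> float:
    """Illustrative index: weighted mix of link availability and (1 - normalised mobility)."""
    mobility = min(n.speed / n.max_speed, 1.0)
    return w * n.link_availability + (1.0 - w) * (1.0 - mobility)

def rekey_interval(index: float, base_period: float = 60.0) -> float:
    """Longer rekeying interval for strong (stable) nodes, shorter for weak ones."""
    return base_period * (2.0 if index >= 0.5 else 0.5)

node = NodeState(link_availability=0.9, speed=2.0)
idx = stability_index(node)
print(idx, rekey_interval(idx))
```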
Noise suppression in surface microseismic data
Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael
2012-01-01
We introduce a passive noise suppression technique, based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.
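A bare-bones discrete slant stack and its adjoint, sketching how a gather can be mapped to the τ − p domain, muted there, and mapped back; the authors' implementation and filtering choices are not reproduced here.

```python
import numpy as np

def slant_stack(data, dt, offsets, slownesses):
    """Forward tau-p transform of a t-x gather: shift each trace by p*x and sum."""
    nt, _ = data.shape
    t = np.arange(nt) * dt
    out = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            out[:, ip] += np.interp(t + p * x, t, data[:, ix], left=0.0, right=0.0)
    return out

def inverse_slant_stack(model, dt, offsets, slownesses):
    """Adjoint slant stack back to t-x (a crude inverse; a proper one would
    include rho filtering or least-squares weighting)."""
    nt, _ = model.shape
    t = np.arange(nt) * dt
    rec = np.zeros((nt, len(offsets)))
    for ix, x in enumerate(offsets):
        for ip, p in enumerate(slownesses):
            rec[:, ix] += np.interp(t - p * x, t, model[:, ip], left=0.0, right=0.0)
    return rec / len(slownesses)

# usage idea: slant_stack the gather, zero the slowness band occupied by surface
# noise, then inverse_slant_stack the kept (microseismic) energy back to t-x.
```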
NASA Astrophysics Data System (ADS)
Tombak, Ali
The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.
Statistical approach for selection of biologically informative genes.
Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N
2018-05-20
Selection of informative genes from high dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed with a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biological sufficiency criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also found to be quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes from high dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
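A generic sketch of the ingredients, combining a greedy max-relevance min-redundancy step with bootstrap selection frequencies. The relevance and redundancy measures used here (mutual information and correlation) are assumptions and not the exact Boot-MRMR criterion of the package.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr(X, y, k):
    """Greedy max-relevance min-redundancy selection of k features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = [relevance[j] - (corr[j, selected].mean() if selected else 0.0)
                  for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

def bootstrap_mrmr(X, y, k=10, n_boot=20, seed=0):
    """Select genes repeatedly on bootstrap resamples and rank by selection frequency."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        counts[mrmr(X[idx], y[idx], k)] += 1
    return np.argsort(counts)[::-1], counts / n_boot

# toy expression matrix: 40 subjects x 200 genes, first five genes informative
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200)); y = np.repeat([0, 1], 20)
X[:, :5] += y[:, None]
ranking, freq = bootstrap_mrmr(X, y)
print(ranking[:10])
```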
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
To the software design community, the concern over the costs associated with a program's execution time and implementation is great. It is always desirable, and sometimes imperative, that the proper programming technique is chosen which minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.
Screening and Biosensor-Based Approaches for Lung Cancer Detection
Wang, Lulu
2017-01-01
Early diagnosis of lung cancer helps to reduce the cancer death rate significantly. Over the years, investigators worldwide have extensively investigated many screening modalities for lung cancer detection, including computerized tomography, chest X-ray, positron emission tomography, sputum cytology, magnetic resonance imaging and biopsy. However, these techniques are not suitable for patients with other pathologies. Developing a rapid and sensitive technique for early diagnosis of lung cancer is urgently needed. Biosensor-based techniques have been recently recommended as a rapid and cost-effective tool for early diagnosis of lung tumor markers. This paper reviews the recent development in screening and biosensor-based techniques for early lung cancer detection. PMID:29065541
Mass spectrometry for fragment screening.
Chan, Daniel Shiu-Hin; Whitehouse, Andrew J; Coyne, Anthony G; Abell, Chris
2017-11-08
Fragment-based approaches in chemical biology and drug discovery have been widely adopted worldwide in both academia and industry. Fragment hits tend to interact weakly with their targets, necessitating the use of sensitive biophysical techniques to detect their binding. Common fragment screening techniques include differential scanning fluorimetry (DSF) and ligand-observed NMR. Validation and characterization of hits is usually performed using a combination of protein-observed NMR, isothermal titration calorimetry (ITC) and X-ray crystallography. In this context, MS is a relatively underutilized technique in fragment screening for drug discovery. MS-based techniques have the advantage of high sensitivity, low sample consumption and being label-free. This review highlights recent examples of the emerging use of MS-based techniques in fragment screening. © 2017 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
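For illustration, the sketch below combines stratified-uniform and cosine-weighted hemisphere samples with the balance heuristic to estimate an ambient-occlusion integral against a toy occluder; the occluder stands in for the screen-space visibility test and none of this reproduces the authors' renderer.

```python
import numpy as np

def balance_heuristic(pdf_a, pdf_b, n_a, n_b):
    """Balance-heuristic MIS weight for samples drawn from strategy A."""
    return (n_a * pdf_a) / (n_a * pdf_a + n_b * pdf_b)

def ao_estimate(occluded, n_uniform=8, n_cosine=8, seed=0):
    """Estimate occlusion (1/pi) * integral of V(w) cos(theta) dw over the hemisphere
    by combining two sampling strategies with multiple importance sampling."""
    rng = np.random.default_rng(seed)

    def uniform_dirs(n):       # stratified in azimuth, pdf = 1 / (2*pi)
        u = (np.arange(n) + rng.random(n)) / n
        theta = np.arccos(rng.random(n))
        return theta, 2 * np.pi * u, np.full(n, 1 / (2 * np.pi))

    def cosine_dirs(n):        # cosine-weighted, pdf = cos(theta) / pi
        u, v = rng.random(n), rng.random(n)
        theta = np.arccos(np.sqrt(v))
        return theta, 2 * np.pi * u, np.cos(theta) / np.pi

    total = 0.0
    for (theta, phi, pdf), other_pdf_fn, n, m in (
        (uniform_dirs(n_uniform), lambda th: np.cos(th) / np.pi, n_uniform, n_cosine),
        (cosine_dirs(n_cosine), lambda th: np.full_like(th, 1 / (2 * np.pi)), n_cosine, n_uniform),
    ):
        f = occluded(theta, phi) * np.cos(theta) / np.pi
        w = balance_heuristic(pdf, other_pdf_fn(theta), n, m)
        total += np.sum(w * f / pdf) / n
    return total

# toy occluder: everything below 30 degrees elevation is blocked (true occlusion = 0.25)
print(ao_estimate(lambda th, ph: (th > np.deg2rad(60)).astype(float)))
```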
A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI
2017-10-01
AWARD NUMBER: W81XWH-16-1-0726. TITLE: A Randomized Controlled Trial of the Group-Based Modified Story Memory Technique in TBI. Final report, 2017. … The current study addresses this need through a double-blind, placebo-controlled, randomized clinical trial (RCT) of a group …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu; Xiao, Tiqiao, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors’ experimental results demonstrated the validity of the PPCT technique based on the generalized PAD method. In addition, the results show that the authors’ technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
A combination of selected mapping and clipping to increase energy efficiency of OFDM systems
Lee, Byung Moo; Rim, You Seung
2017-01-01
We propose an energy efficient combination design for OFDM systems based on selected mapping (SLM) and clipping peak-to-average power ratio (PAPR) reduction techniques, and present the related energy efficiency (EE) performance analysis. The combination of two different PAPR reduction techniques can provide a significant benefit in increasing EE, because it can take advantage of both techniques. For the combination, we choose the clipping and SLM techniques, since the former technique is quite simple and effective, and the latter technique does not cause any signal distortion. We provide the structure and the systematic operating method, and present various analyses to derive the EE gain based on the combined technique. Our analysis shows that the combined technique increases the EE by 69% compared to no PAPR reduction, and by 19.34% compared to only using the SLM technique. PMID:29023591
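A compact sketch of the combination: choose the lowest-PAPR phase-rotated SLM candidate, then clip the survivor at a fixed ratio. The candidate count, phase alphabet, and clipping ratio are illustrative, not the paper's parameters.

```python
import numpy as np

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def slm_then_clip(symbols, n_candidates=8, clip_ratio_db=5.0, seed=0):
    """SLM: pick the lowest-PAPR phase-rotated candidate; then clip what remains."""
    rng = np.random.default_rng(seed)
    best, best_papr, best_idx = None, np.inf, -1
    for i in range(n_candidates):
        phases = np.exp(1j * rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], len(symbols)))
        x = np.fft.ifft(symbols * phases)          # OFDM time-domain candidate
        p = papr_db(x)
        if p < best_papr:
            best, best_papr, best_idx = x, p, i    # index i is the side information
    amp = np.sqrt(np.mean(np.abs(best) ** 2)) * 10 ** (clip_ratio_db / 20)
    clipped = np.where(np.abs(best) > amp, amp * best / np.abs(best), best)
    return clipped, best_idx, best_papr, papr_db(clipped)

bits = 2 * np.random.default_rng(1).integers(0, 2, (2, 256)) - 1
symbols = (bits[0] + 1j * bits[1]) / np.sqrt(2)    # QPSK subcarrier symbols
print(slm_then_clip(symbols)[2:])                  # PAPR before and after clipping
```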
Investigation of laser Doppler anemometry in developing a velocity-based measurement technique
NASA Astrophysics Data System (ADS)
Jung, Ki Won
2009-12-01
Acoustic properties, such as the characteristic impedance and the complex propagation constant, of porous materials have been traditionally characterized based on pressure-based measurement techniques using microphones. Although the microphone techniques have evolved since their introduction, the most general form of the microphone technique employs two microphones in characterizing the acoustic field for one continuous medium. The shortcomings of determining the acoustic field based on only two microphones can be overcome by using numerous microphones. However, the use of a number of microphones requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique which can resolve issues that microphone techniques have: First, it is based on a single sensor, thus the calibration is unnecessary when only overall ratio of the acoustic field is required for the characterization of a system. This includes the measurements of the characteristic impedance and the complex propagation constant of a system. Second, it can handle multiple positional measurements without calibrating the signal at each position. Third, it can measure three dimensional components of velocity even in a system with a complex geometry. Fourth, it has a flexible adaptability which is not restricted to a certain type of apparatus only if the apparatus is transparent. LDA is known to possess several disadvantages, such as the requirement of a transparent apparatus, high cost, and necessity of seeding particles. The technique based on LDA combined with a curvefitting algorithm is validated through measurements on three systems. First, the complex propagation constant of the air is measured in a rigidly terminated cylindrical pipe which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured. These two parameters can be characterized by the ratio of acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the LDA technique proposed. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement with the 100 ppi RVC sample supports the LDA technique in that it can achieve a low uncertainty in the determined quantity. This dissertation concludes with using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode that are driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of the rainfall estimates obtained by the present technique. Moreover, the rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The findings of the present technique indicate a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values. The results show that the new technique assigns 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.
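A minimal sketch of the classification step with scikit-learn, assuming synthetic pixel features and three rain classes in place of the SEVIRI-derived optical, microphysical, and textural parameters and the radar-derived labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# toy feature table: four cloud-property proxies per pixel, with synthetic
# rain classes 0 = no rain, 1 = light rain, 2 = convective rain
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.5, (200, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)                      # SVC handles the multiclass case one-vs-one internally
print(clf.predict(X[:5]))          # per-pixel rain class for the first few pixels
```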
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
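As a concrete example of criterion-based averaging, the sketch below converts AIC values into Akaike weights and forms a weighted prediction with a between-model variance term; BIC- or KIC-based weights follow the same exponential form. The numbers are made up for illustration.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: relative likelihood of each conceptual model, normalised to one."""
    aic = np.asarray(aic, float)
    rel = np.exp(-0.5 * (aic - aic.min()))
    return rel / rel.sum()

def model_averaged_prediction(predictions, weights):
    """Weighted mean prediction plus the between-model variance contribution."""
    predictions, weights = np.asarray(predictions, float), np.asarray(weights, float)
    mean = np.sum(weights * predictions)
    var_between = np.sum(weights * (predictions - mean) ** 2)
    return mean, var_between

aic = [210.3, 212.1, 215.8, 218.0]     # one value per alternative conceptualisation (illustrative)
heads = [12.4, 11.9, 13.1, 12.0]       # predicted head (m) from each model (illustrative)
w = akaike_weights(aic)
print(w, model_averaged_prediction(heads, w))
```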
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
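A small sketch of the selection idea: compute first-order entropy for the original image and for a candidate remapping, and keep whichever leaves less entropy for the coder. The horizontal DPCM remapping here is a simplified version of one of the candidates mentioned.

```python
import numpy as np

def entropy_bits(img):
    """First-order (Shannon) entropy of an 8-bit image in bits per pixel."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def horizontal_dpcm(img):
    """Remapping: differences between horizontally adjacent pixels, wrapped mod 256."""
    out = img.astype(np.int16)
    out[:, 1:] = out[:, 1:] - out[:, :-1]
    return (out % 256).astype(np.uint8)

def choose_remapping(img):
    """Pick whichever candidate remapping leaves the lowest entropy."""
    candidates = {"identity": img, "dpcm": horizontal_dpcm(img)}
    return min(candidates.items(), key=lambda kv: entropy_bits(kv[1]))

# smooth synthetic image: DPCM should win by lowering the entropy
img = np.clip(np.cumsum(np.random.default_rng(0).integers(-2, 3, (64, 64)), axis=1), 0, 255).astype(np.uint8)
name, remapped = choose_remapping(img)
print(name, round(entropy_bits(img), 2), "->", round(entropy_bits(remapped), 2))
```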
Study of fault tolerant software technology for dynamic systems
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Zacharias, G. L.
1985-01-01
The major aim of this study is to investigate the feasibility of using systems-based failure detection, isolation, and compensation (FDIC) techniques in building fault-tolerant software and extending them, whenever possible, to the domain of software fault tolerance. First, it is shown that systems-based FDIC methods can be extended to develop software error detection techniques by using system models for software modules. In particular, it is demonstrated that systems-based FDIC techniques can yield consistency checks that are easier to implement than acceptance tests based on software specifications. Next, it is shown that systems-based failure compensation techniques can be generalized to the domain of software fault tolerance in developing software error recovery procedures. The feasibility of using fault-tolerant software in flight software is then investigated; in particular, possible system and version instabilities, and the functional performance degradation that may occur in N-Version programming applications to flight software, are illustrated. Finally, a comparative analysis of N-Version and recovery block techniques in the context of generic blocks in flight software is presented.
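For readers unfamiliar with N-Version programming, the following toy sketch (not from the study) shows the majority-voting step such schemes rely on; the model-based consistency checks discussed above would replace or augment this vote.

```python
# A toy illustration (not the study's software): majority voting across N
# independently developed versions of the same computation.
from collections import Counter

def n_version_vote(results):
    """Return the majority result; raise if no majority exists (detected failure)."""
    value, votes = Counter(results).most_common(1)[0]
    if votes > len(results) // 2:
        return value
    raise RuntimeError("no majority -- treat as a detected failure")

print(n_version_vote([42, 42, 41]))  # -> 42
```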
Attitudes of Nigerian Secondary School Teachers towards Media-Based Learning.
ERIC Educational Resources Information Center
Ekpo, C. M.
This document presents results of a study assessing the attitudes of secondary school teachers towards media-based learning. The study explores the knowledge of and exposure to media-based learning techniques of a cross section of Nigerian secondary school teachers. Factors that affect the use of media-based learning techniques are sought. Media based…
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of persistent organic pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent-assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Qualitative and quantitative detection of T7 bacteriophages using paper based sandwich ELISA.
Khan, Mohidus Samad; Pande, Tripti; van de Ven, Theo G M
2015-08-01
Viruses cause many infectious diseases and consequently epidemic health threats. Paper-based diagnostics and filters can offer attractive options for detecting and deactivating pathogens. However, due to their infectious characteristics, virus detection using paper diagnostics is more challenging compared to the detection of bacteria, enzymes, DNA or antigens. The major objective of this study was to prepare reliable, degradable and low cost paper diagnostics to detect viruses, without using sophisticated optical or microfluidic analytical instruments. T7 bacteriophage was used as a model virus. A paper-based sandwich ELISA technique was developed to detect and quantify the T7 phages in solution. The paper-based sandwich ELISA detected T7 phage concentrations as low as 100 pfu/mL and as high as 10^9 pfu/mL. The compatibility of the paper-based sandwich ELISA with the conventional titre count was tested using T7 phage solutions of unknown concentrations. The paper-based sandwich ELISA technique is faster and more economical than traditional detection techniques. Therefore, with proper calibration, the right reagents, and adherence to biosafety regulations, the paper-based technique can be considered comparable to, and more economical than, the sophisticated laboratory diagnostic techniques applied to detect pathogenic viruses and other microorganisms. Copyright © 2015 Elsevier B.V. All rights reserved.
Biosensor-based microRNA detection: techniques, design, performance, and challenges.
Johnson, Blake N; Mutharasan, Raj
2014-04-07
The current state of biosensor-based techniques for amplification-free microRNA (miRNA) detection is critically reviewed. Comparison with non-sensor and amplification-based molecular techniques (MTs), such as polymerase-based methods, is made in terms of transduction mechanism, associated protocol, and sensitivity. Challenges associated with miRNA hybridization thermodynamics, which affect assay selectivity and amplification bias, are briefly discussed. Electrochemical, electromechanical, and optical classes of miRNA biosensors are reviewed in terms of transduction mechanism, limit of detection (LOD), time-to-results (TTR), multiplexing potential, and measurement robustness. Current trends suggest that biosensor-based techniques (BTs) for miRNA assay will complement MTs due to the advantages of amplification-free detection, LODs in the femtomolar (fM) to attomolar (aM) range, short TTR, multiplexing capability, and minimal sample preparation requirements. Areas of future importance in miRNA BT development are presented, including a focus on achieving high measurement confidence and multiplexing capabilities.
Anonymity and Historical-Anonymity in Location-Based Services
NASA Astrophysics Data System (ADS)
Bettini, Claudio; Mascetti, Sergio; Wang, X. Sean; Freni, Dario; Jajodia, Sushil
The problem of protecting users' privacy in Location-Based Services (LBS) has been extensively studied recently and several defense techniques have been proposed. In this contribution, we first present a categorization of privacy attacks and related defenses. Then, we consider the class of defense techniques that aim at providing privacy through anonymity, and in particular algorithms achieving “historical k-anonymity” in the case of the adversary obtaining a trace of requests recognized as being issued by the same (anonymous) user. Finally, we investigate the issues involved in the experimental evaluation of anonymity-based defense techniques; we show that user movement simulations based on mostly random movements can lead to overestimating the privacy protection in some cases and to overprotective techniques in other cases. The above results are obtained by comparison to a more realistic simulation with an agent-based simulator, considering a specific deployment scenario.
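As background for the anonymity-based defenses discussed above, the sketch below is a generic spatial-cloaking toy, not the historical k-anonymity algorithms evaluated in the paper: it simply enlarges a query region until it covers at least k users.

```python
# A toy spatial-cloaking sketch (generic k-anonymity idea, not the paper's
# historical k-anonymity algorithm). Assumes at least k users exist overall.
import numpy as np

def cloak(position, other_positions, k, step=0.001):
    """Grow a square box around `position` until it contains >= k users."""
    pts = np.asarray(other_positions, dtype=float)
    pos = np.asarray(position, dtype=float)
    half = step
    while True:
        inside = int(np.all(np.abs(pts - pos) <= half, axis=1).sum())
        if inside + 1 >= k:  # +1 counts the requesting user
            return (pos[0] - half, pos[1] - half, pos[0] + half, pos[1] + half)
        half += step
```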
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The CLAHE technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, quotient-based filtering, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique, such as CLAHE, to fundus images showed good potential for enhancing vasculature segmentation.
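A minimal sketch of the two techniques the study found most useful, the dividing method with a median-filter background estimate followed by CLAHE, is given below; it assumes OpenCV (cv2) is available and uses illustrative parameter values, not the ones from the paper.

```python
# A minimal sketch (assumed parameters, OpenCV-based; not the authors' code):
# dividing illumination correction followed by CLAHE on the green channel.
import cv2
import numpy as np

def correct_and_enhance(bgr, ksize=51):
    green = bgr[:, :, 1]                        # green channel of a fundus image
    background = cv2.medianBlur(green, ksize)   # estimated background illumination
    corrected = green.astype(np.float32) / (background.astype(np.float32) + 1e-6)
    corrected = cv2.normalize(corrected, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(corrected)
```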
Elsyad, Moustafa Abdou; El-Waseef, Fatma Ahmad; Al-Mahdy, Yasmeen Fathy; Fouad, Mohammed Mohammed
2013-08-01
This study aimed to evaluate mandibular denture base deformation with three impression techniques used for implant-retained overdentures. Ten edentulous patients (five men and five women) received two implants in the canine region of the mandible and three duplicate mandibular overdentures constructed with mucostatic, selective pressure, and definitive pressure impression techniques. Ball abutments and their respective gold matrices were used to connect the overdentures to the implants. Six linear strain gauges were bonded to the lingual polished surface of each duplicate overdenture at the midline and implant areas to measure strain during maximal clenching and gum chewing. The strains recorded at the midline were compressive while the strains at the implant areas were tensile. Clenching recorded significantly higher strain than gum chewing for all techniques. The mucostatic technique recorded the highest strain and the definitive pressure technique recorded the lowest. There was no significant difference between the strain recorded with the mucostatic technique and that registered with the selective pressure technique. The highest strain was recorded at the top of the ball abutment with the mucostatic technique during clenching. The definitive pressure impression technique for implant-retained mandibular overdentures is associated with minimal denture deformation during function when compared with the mucostatic and selective pressure techniques. Reinforcement of the denture base over the implants may be recommended to increase resistance to fracture when the mucostatic or selective pressure impression technique is used. © 2012 John Wiley & Sons A/S.
Laser-based direct-write techniques for cell printing
Schiele, Nathan R; Corr, David T; Huang, Yong; Raof, Nurazhani Abdul; Xie, Yubing; Chrisey, Douglas B
2016-01-01
Fabrication of cellular constructs with spatial control of cell location (±5 μm) is essential to the advancement of a wide range of applications including tissue engineering, stem cell and cancer research. Precise cell placement, especially of multiple cell types in co- or multi-cultures and in three dimensions, can enable research possibilities otherwise impossible, such as the cell-by-cell assembly of complex cellular constructs. Laser-based direct writing, a printing technique first utilized in electronics applications, has been adapted to transfer living cells and other biological materials (e.g., enzymes, proteins and bioceramics). Many different cell types have been printed using laser-based direct writing, and this technique offers significant improvements when compared to conventional cell patterning techniques. Most work to date has focused not on applications of the technique but on demonstrating the ability of direct writing to pattern living cells, in a spatially precise manner, while maintaining cellular viability. This paper reviews laser-based additive direct-write techniques for cell printing, and the various cell types successfully laser direct-written that have applications in tissue engineering, stem cell and cancer research are highlighted. Particular attention is paid to process dynamics modeling and process-induced cell injury during laser-based cell direct writing. PMID:20814088
NASA Astrophysics Data System (ADS)
Vidya Sagar, R.; Raghu Prasad, B. K.
2012-03-01
This article presents a review of recent developments in parameter-based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers, including the various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parameter-based AE techniques for concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published, and considerable attention has been given to those publications. Some recent studies, such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams, have also been discussed. The formation of the fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parameter-based AE techniques applied to concrete structures may help researchers and engineers to better understand the failure mechanism of concrete and to develop more useful methods and approaches for diagnostic inspection of structural elements and for failure prediction/prevention of concrete structures.
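To make the b-value analysis mentioned above concrete, the following hedged sketch applies Aki's maximum-likelihood estimator to AE peak amplitudes, treating amplitude_dB/20 as an event magnitude; this is one common parameter-based formulation, not necessarily the exact one used in the cited studies.

```python
# A hedged sketch (one common formulation, not necessarily the cited studies'):
# Aki's maximum-likelihood b-value for AE events above an amplitude threshold.
import numpy as np

def ae_b_value(amplitudes_db, threshold_db):
    """b-value of AE events whose peak amplitude (dB) exceeds threshold_db."""
    amps = np.asarray(amplitudes_db, dtype=float)
    mags = amps[amps >= threshold_db] / 20.0   # amplitude (dB) -> pseudo-magnitude
    mc = threshold_db / 20.0                   # magnitude of completeness
    return np.log10(np.e) / (mags.mean() - mc)
```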
Recent Progress in Optical Biosensors Based on Smartphone Platforms
Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda
2017-01-01
With the rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity of implementation of the platforms. PMID:29068375
Recent Progress in Optical Biosensors Based on Smartphone Platforms.
Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda
2017-10-25
With the rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity of implementation of the platforms.
Issues to Consider in Designing WebQuests: A Literature Review
ERIC Educational Resources Information Center
Kurt, Serhat
2012-01-01
A WebQuest is an inquiry-based online learning technique. This technique has been widely adopted in K-16 education. Therefore, it is important that conditions of effective WebQuest design are defined. Through this article the author presents techniques for improving WebQuest design based on current research. More specifically, the author analyzes…
NASA Astrophysics Data System (ADS)
Chowdhury, D. P.; Pal, Sujit; Parthasarathy, R.; Mathur, P. K.; Kohli, A. K.; Limaye, P. K.
1998-09-01
A thin layer activation (TLA) technique has been developed for Zr-based alloy materials, e.g., zircaloy II, using 40 MeV α-particles from the Variable Energy Cyclotron Centre at Calcutta. A brief description of the methodology of the TLA technique for determining surface wear is presented. The sensitivity of the measurement of surface wear in zircaloy material is found to be 0.22±0.05 μm. The surface wear of zircaloy material used in pressurised heavy water reactors was determined by the TLA technique, and the values have been compared with those obtained by a conventional technique for analytical validation of the TLA technique.
2D DOST based local phase pattern for face recognition
NASA Astrophysics Data System (ADS)
Moniruzzaman, Md.; Alam, Mohammad S.
2017-05-01
A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preliminary preprocessing and a local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is also known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternate pattern recognition techniques such as wavelet or Gabor based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternate time-frequency representation (TFR) based face recognition techniques.
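The local-phase-pattern step can be pictured with the generic sketch below, an LBP-style encoding of an arbitrary phase map with an assumed neighborhood ordering; it is not the authors' exact 2-D DOST pipeline.

```python
# A generic sketch (assumed neighborhood ordering, not the authors' pipeline):
# an LBP-style local phase pattern over a phase map extracted from a 2-D
# time-frequency transform of the face image.
import numpy as np

def local_phase_pattern(phase):
    """8-bit code comparing each pixel's phase with its 8 neighbours."""
    h, w = phase.shape
    centre = phase[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = phase[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= centre).astype(np.int32) << bit
    return codes.astype(np.uint8)
```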
DNA-based cryptographic methods for data hiding in DNA media.
Marwan, Samiha; Shawish, Ahmed; Nagaty, Khaled
2016-12-01
Information security can be achieved using cryptography, steganography or a combination of them, where data is first encrypted using any of the available cryptography techniques and then hidden in a hiding medium. Recently, genomic DNA has been introduced as a hiding medium, known as DNA steganography, due to its notable ability to hide huge data sets with a high level of randomness and hence security. Despite the numerous cryptography techniques, to our knowledge only the Vigenère cipher and the DNA-based Playfair cipher have been combined with DNA steganography, which leaves room for investigating other techniques and developing new improvements. This paper presents a comprehensive analysis of the DNA-based Playfair, Vigenère, RSA and AES ciphers, each combined with a DNA hiding technique. The conducted analysis reports the performance diversity of each combined technique in terms of security, speed, and hiding capacity, in addition to both key size and data size. Moreover, this paper proposes a modification of the current combined DNA-based Playfair cipher technique, which makes it not only simple and fast but also provides a significantly higher hiding capacity and security. The conducted extensive experimental studies confirm such outstanding performance in comparison with all the discussed combined techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
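The encoding step shared by such DNA steganography schemes can be illustrated with the minimal sketch below, an assumed 2-bit-per-base mapping of ciphertext bytes onto A/C/G/T; it is not the paper's full hiding technique.

```python
# A minimal illustration (assumed base mapping, not the paper's full scheme):
# packing ciphertext bytes into a DNA string at 2 bits per base and back.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> shift) & 0b11]
                   for b in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    vals = [BASES.index(c) for c in seq]
    return bytes((vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
                 for i in range(0, len(vals), 4))

assert dna_to_bytes(bytes_to_dna(b"secret")) == b"secret"
```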
NASA Technical Reports Server (NTRS)
Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha
2016-01-01
A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for absolute photometry between instruments or as input parameters for auroral electron transport models.
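The core conversion from raw counts to Rayleighs can be summarized by the hedged sketch below, with assumed array names and a star-derived sensitivity factor; it is not the instrument team's pipeline.

```python
# A hedged sketch (assumed names, not the instrument pipeline): converting
# background-subtracted, flat-fielded counts to Rayleighs with a sensitivity
# factor derived from stars of known photon flux in the field of view.
import numpy as np

def calibrate_frame(raw, dark, flat, counts_per_rayleigh):
    """Return the imager frame in Rayleighs."""
    corrected = (raw - dark) / flat           # background subtraction + flat fielding
    return corrected / counts_per_rayleigh    # star-derived sensitivity [counts/R]
```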
NASA Astrophysics Data System (ADS)
Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo
2017-03-01
Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, a conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase of radiation dose and an inaccuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy as well as reduced radiation dose compared with the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD was slightly improved as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
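For intuition, a simplified (monochromatic) two-material decomposition is sketched below with assumed attenuation coefficients; the paper's polychromatic dual-energy model is more involved, so this is only an illustration of the decomposition idea.

```python
# A simplified illustration (monochromatic approximation with assumed
# coefficients, not the paper's polychromatic model): two-material
# decomposition from low- and high-energy log attenuation measurements.
import numpy as np

def decompose(log_atten_low, log_atten_high, mu):
    """
    log_atten_*: -ln(I/I0) at the low/high energy bin.
    mu: [[mu_contrast_low,  mu_tissue_low],
         [mu_contrast_high, mu_tissue_high]]  (assumed mass attenuation values)
    Returns areal densities of contrast agent and tissue.
    """
    b = np.array([log_atten_low, log_atten_high], dtype=float)
    return np.linalg.solve(np.asarray(mu, dtype=float), b)
```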
Reyes, Camilo; Mason, Eric; Solares, C. Arturo
2014-01-01
Introduction A substantial body of literature has been devoted to the distinct characteristics and surgical options to repair the skull base. However, the skull base is an anatomically challenging location that requires a three-dimensional reconstruction approach. Furthermore, advances in endoscopic skull base surgery encompass a wide range of surgical pathology, from benign tumors to sinonasal cancer. This has resulted in the creation of wide defects that yield a new challenge in skull base reconstruction. Progress in technology and imaging has made this approach an internationally accepted method to repair these defects. Objectives Discuss historical developments and flaps available for skull base reconstruction. Data Synthesis Free grafts in skull base reconstruction are a viable option in small defects and low-flow leaks. Vascularized flaps pose a distinct advantage in large defects and high-flow leaks. When open techniques are used, free flap reconstruction techniques are often necessary to repair large entry wound defects. Conclusions Reconstruction of skull base defects requires a thorough knowledge of surgical anatomy, disease, and patient risk factors associated with high-flow cerebrospinal fluid leaks. Various reconstruction techniques are available, from free tissue grafting to vascularized flaps. Possible complications that can arise after these procedures need to be considered. Although endonasal techniques are being used with increasing frequency, open techniques are still necessary in selected cases. PMID:25992142
Nano-Al Based Energetics: Rapid Heating Studies and a New Preparation Technique
NASA Astrophysics Data System (ADS)
Sullivan, Kyle; Kuntz, Josh; Gash, Alex; Zachariah, Michael
2011-06-01
Nano-Al based thermites have become an attractive alternative to traditional energetic formulations due to their increased energy density and high reactivity. Understanding the intrinsic reaction mechanism has been a difficult task, largely due to the lack of experimental techniques capable of rapidly and uniformly heating a sample (~10^4-10^8 K/s). The current work presents several studies on nano-Al based thermites using rapid heating techniques. A new mechanism, termed a Reactive Sintering Mechanism, is proposed for nano-Al based thermites. In addition, new experimental techniques for nanocomposite thermite deposition onto thin Pt electrodes will be discussed. This combined technique will offer more precise control of the deposition and will serve to further our understanding of the intrinsic reaction mechanism of rapidly heated energetic systems. An improved mechanistic understanding will lead to the development of optimized formulations and architectures. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Efficient Privacy-Enhancing Techniques for Medical Databases
NASA Astrophysics Data System (ADS)
Schartner, Peter; Schaffer, Martin
In this paper, we introduce an alternative to using linkable unique health identifiers: locally generated, system-wide unique digital pseudonyms. The presented techniques are based on a novel technique called collision-free number generation, which is discussed in the introductory part of the article. Afterwards, attention is paid to two specific variants of collision-free number generation: one based on the RSA problem and the other on the Elliptic Curve Discrete Logarithm Problem. Finally, two applications are sketched: centralized medical records and anonymous medical databases.
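As a toy illustration only, and emphatically not the paper's RSA- or ECDLP-based collision-free number generation, the sketch below shows how locally generated identifiers can be made system-wide unique by construction and then hashed into pseudonym form.

```python
# A toy illustration only -- not the paper's cryptographic construction:
# site-prefixed local counters are collision-free by construction; hashing
# with a secret salt gives them a pseudonym-like appearance.
import hashlib

def pseudonym(site_id: int, local_counter: int, secret_salt: bytes) -> str:
    unique = f"{site_id}:{local_counter}".encode()   # system-wide unique by construction
    return hashlib.sha256(secret_salt + unique).hexdigest()
```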
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
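A hedged sketch of the combination step is given below, approximating Bayesian model averaging weights from BIC scores (a standard approximation); the names ernn_pred, qr_pred, bic_ernn and bic_qr are hypothetical placeholders, and this is not the authors' estimation code.

```python
# A hedged sketch (standard BIC approximation, not the authors' code): combining
# two model forecasts with Bayesian-model-averaging weights.
import numpy as np

def bma_weights(bic_scores):
    """Posterior model weights from BIC (smaller BIC -> larger weight)."""
    bic = np.asarray(bic_scores, dtype=float)
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def bma_forecast(forecasts, bic_scores):
    w = bma_weights(bic_scores)
    return np.tensordot(w, np.asarray(forecasts, dtype=float), axes=1)

# e.g. bma_forecast([ernn_pred, qr_pred], [bic_ernn, bic_qr])  # hypothetical names
```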
Wood lens design philosophy based on a binary additive manufacturing technique
NASA Astrophysics Data System (ADS)
Marasco, Peter L.; Bailey, Christopher
2016-04-01
Using additive manufacturing techniques in optical engineering to construct a gradient index (GRIN) optic may overcome a number of limitations of GRIN technology. Such techniques are maturing quickly, yielding additional design degrees of freedom for the engineer. How best to employ these degrees of freedom is not completely clear at this time. This paper describes a preliminary design philosophy, including assumptions, pertaining to a particular printing technique for GRIN optics. It includes an analysis based on simulation and initial component measurement.
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves processing large volumes of data and is often time consuming. In order to speed up ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification allows ZNCC computation to be accelerated by up to 28.8 times compared with directly calculated ZNCC, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
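The projection idea can be sketched as follows: a hedged toy comparing the ZNCC of 1-D row and column projections of two interrogation windows, not the authors' optimized implementation.

```python
# A hedged sketch (not the authors' implementation): correlate 1-D projections
# of two interrogation windows instead of their full 2-D intensity fields.
import numpy as np

def zncc_1d(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def projection_zncc(win_a, win_b):
    """Average ZNCC of the row-sum and column-sum projections."""
    rows = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
    cols = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
    return 0.5 * (rows + cols)
```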
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
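One common spectral decorrelation step, PCA across bands, is sketched below as a generic assumed formulation rather than the paper's specific decorrelation transforms; each decorrelated band would then be handed to the 2-D wavelet coder.

```python
# A minimal sketch (generic PCA decorrelation, not the paper's exact pipeline):
# decorrelate a multispectral cube across the band dimension.
import numpy as np

def spectral_pca(cube):
    """cube: (bands, H, W) -> (decorrelated cube, eigenvectors, band means)."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]  # strongest components first
    Y = eigvecs.T @ Xc
    return Y.reshape(bands, h, w), eigvecs, mean
```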
Ultrafast Laser-Based Spectroscopy and Sensing: Applications in LIBS, CARS, and THz Spectroscopy
Leahy-Hoppa, Megan R.; Miragliotta, Joseph; Osiander, Robert; Burnett, Jennifer; Dikmelik, Yamac; McEnnis, Caroline; Spicer, James B.
2010-01-01
Ultrafast pulsed lasers find application in a range of spectroscopy and sensing techniques including laser induced breakdown spectroscopy (LIBS), coherent Raman spectroscopy, and terahertz (THz) spectroscopy. Whether based on absorption or emission processes, the characteristics of these techniques are heavily influenced by the use of ultrafast pulses in the signal generation process. Depending on the energy of the pulses used, the essential laser interaction process can primarily involve lattice vibrations, molecular rotations, or a combination of excited states produced by laser heating. While some of these techniques are currently confined to sensing at close range, others can be implemented for remote spectroscopic sensing owing principally to the laser pulse duration. We present a review of ultrafast laser-based spectroscopy techniques and discuss their application to current and potential chemical and environmental sensing problems. PMID:22399883
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM utilizes low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed, reconstructing the ROI in fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis and that new criteria are needed.
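The ROI-driven reconstruction whose evaluation is discussed above can be pictured with the hedged sketch below, a simple mask-based blend of a high-quality and a low-complexity demosaicing result; it is not the authors' algorithm.

```python
# A hedged sketch (not the authors' algorithm): blend a high-quality and a
# low-complexity demosaicing result according to a binary ROI mask.
import numpy as np

def roi_blend(high_quality, low_quality, roi_mask):
    """roi_mask: 1 inside regions of interest, 0 elsewhere; images are HxWx3."""
    mask = roi_mask[..., None].astype(high_quality.dtype)
    return mask * high_quality + (1.0 - mask) * low_quality
```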