Sample records for fingerprint image compression

  1. Fingerprint recognition of wavelet-based compressed images by neuro-fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Liu, Ti C.; Mitra, Sunanda

    1996-06-01

    Image compression plays a crucial role in many important and diverse applications requiring efficient storage and transmission. This work mainly focuses on wavelet transform (WT) based compression of fingerprint images and the subsequent classification of the reconstructed images. The algorithm developed involves multiresolution wavelet decomposition, uniform scalar quantization, entropy and run-length encoding/decoding, and K-means clustering of invariant moments as fingerprint features. The performance of the WT-based compression algorithm has been compared with the current JPEG image compression standard. Simulation results show that WT outperforms JPEG in the high compression ratio region and that the reconstructed fingerprint image still yields proper classification.
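
    As a rough illustration of the pipeline described above, the sketch below performs a multiresolution wavelet decomposition followed by uniform scalar quantization using PyWavelets; the wavelet, level count, and step size are illustrative assumptions, not the authors' parameters, and the entropy/run-length coding and K-means stages are omitted.

    ```python
    import numpy as np
    import pywt

    def compress_reconstruct(img, wavelet="bior4.4", levels=4, step=8.0):
        # Multiresolution wavelet decomposition.
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # Uniform scalar quantization; one global step size keeps the sketch
        # short (a real coder would pick a step per subband).
        dequantized = np.round(arr / step) * step
        rec = pywt.waverec2(
            pywt.array_to_coeffs(dequantized, slices, output_format="wavedec2"),
            wavelet,
        )
        return rec[: img.shape[0], : img.shape[1]]

    img = np.random.rand(512, 512) * 255  # stand-in for an 8-bit fingerprint scan
    rec = compress_reconstruct(img)
    mse = np.mean((img - rec) ** 2)
    print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
    ```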

  2. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  3. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  4. Dual Resolution Images from Paired Fingerprint Cards

    National Institute of Standards and Technology Data Gateway

    NIST Dual Resolution Images from Paired Fingerprint Cards (Web, free access)   NIST Special Database 30 is being distributed for use in development and testing of fingerprint compression and fingerprint matching systems. The database allows the user to develop and evaluate data compression algorithms for fingerprint images scanned at both 19.7 ppmm (500 dpi) and 39.4 ppmm (1000 dpi). The data consist of 36 ten-print paired cards with both the rolled and plain images scanned at 19.7 and 39.4 pixels per mm. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  5. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  6. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  7. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  8. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  9. 8-Bit Gray Scale Images of Fingerprint Image Groups

    National Institute of Standards and Technology Data Gateway

    NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (Web, free access)   The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  10. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  11. Tools for quality control of fingerprint databases

    NASA Astrophysics Data System (ADS)

    Swann, B. Scott; Libert, John M.; Lepley, Margaret A.

    2010-04-01

    Integrity of fingerprint data is essential to biometric and forensic applications. Accordingly, the FBI's Criminal Justice Information Services (CJIS) Division has sponsored development of software tools to facilitate quality control functions relative to maintaining its fingerprint data assets inherent to the Integrated Automated Fingerprint Identification System (IAFIS) and Next Generation Identification (NGI). This paper provides an introduction to two such tools. The first FBI-sponsored tool was developed by the National Institute of Standards and Technology (NIST) and examines and detects the spectral signature of the ridge-flow structure characteristic of friction ridge skin. The Spectral Image Validation/Verification (SIVV) utility differentiates fingerprints from non-fingerprints, including blank frames or segmentation failures erroneously included in data; provides a "first look" at image quality; and can identify anomalies in sample rates of scanned images. The SIVV utility might detect errors in individual 10-print fingerprints inaccurately segmented from the flat, multi-finger image acquired by one of the automated collection systems increasing in availability and usage. In such cases, the lost fingerprint can be recovered by re-segmentation from the now compressed multi-finger image record. The second FBI-sponsored tool, CropCoeff, was developed by MITRE and thoroughly tested by NIST. CropCoeff enables cropping of the replacement single print directly from the compressed data file, thus avoiding decompression and recompression of images that might degrade fingerprint features necessary for matching.
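
    The ridge-flow detection behind SIVV rests on the fact that friction ridge skin produces a distinctive peak in the image's power spectrum at the ridge frequency. The sketch below computes an angle-integrated (radial) power spectrum in that spirit; it is a schematic reconstruction, not the SIVV code itself.

    ```python
    import numpy as np

    def radial_spectrum(img, nbins=64):
        # Angle-integrated power spectrum: a genuine fingerprint shows a clear
        # peak at the ridge frequency; blank or non-fingerprint frames do not.
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(f) ** 2
        h, w = img.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2.0, x - w / 2.0)
        idx = np.clip((r / r.max() * nbins).astype(int), 0, nbins - 1)
        spec = np.bincount(idx.ravel(), weights=power.ravel(), minlength=nbins)
        return spec / spec.sum()
    ```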

  12. Enhancing security of fingerprints through contextual biometric watermarking.

    PubMed

    Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M

    2007-07-04

    This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
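
    A minimal sketch of the embedding idea, using quantization-index modulation of strong one-level DWT detail coefficients as a stand-in for the paper's texture-region selection; the Haar wavelet and step size alpha are assumptions for illustration.

    ```python
    import numpy as np
    import pywt

    def embed_watermark(img, bits, alpha=4.0):
        # One-level Haar DWT; hide each bit in a strong (textured)
        # diagonal-detail coefficient by forcing the parity of its
        # quantized value (quantization-index modulation).
        LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), "haar")
        flat = HH.flatten()
        order = np.argsort(-np.abs(flat))[: len(bits)]  # strongest texture first
        for b, i in zip(bits, order):
            q = int(np.round(flat[i] / alpha))
            if q % 2 != b:
                q += 1  # nudge parity to encode the bit
            flat[i] = q * alpha
        return pywt.idwt2((LL, (LH, HL, flat.reshape(HH.shape))), "haar")
    ```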

  13. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually gives a compression ratio of no better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
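
    For reference, the high-rate assumption mentioned above leads to the classic allocation rule in which each subband receives the average rate plus half the log-ratio of its variance to the geometric mean of all subband variances; a small sketch (with made-up variances):

    ```python
    import numpy as np

    def highrate_bit_allocation(variances, avg_rate):
        # High-rate rule: b_k = R + 0.5 * log2(var_k / geometric_mean(var)).
        # Negative allocations are clipped to zero, which is why many of the
        # 64 subbands end up with few or no bits at 0.75 bit/pixel.
        v = np.asarray(variances, dtype=float)
        geo_mean = np.exp(np.mean(np.log(v)))
        return np.clip(avg_rate + 0.5 * np.log2(v / geo_mean), 0.0, None)

    rates = highrate_bit_allocation(np.random.lognormal(0.0, 2.0, 64), 0.75)
    print(f"{np.sum(rates == 0)} of 64 subbands receive no bits")
    ```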

  14. Plain and Rolled Images from Paired Fingerprint Cards

    National Institute of Standards and Technology Data Gateway

    NIST Plain and Rolled Images from Paired Fingerprint Cards (Web, free access)   NIST Special Database 29 is being distributed for use in development and testing of fingerprint matching systems. The data consist of 216 ten-print fingerprint card pairs with both the rolled and plain impressions (from the bottom of the fingerprint card) scanned at 19.7 pixels per mm. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  15. Mated Fingerprint Card Pairs (Volumes 1-5)

    National Institute of Standards and Technology Data Gateway

    NIST Mated Fingerprint Card Pairs (Volumes 1-5) (Web, free access)   The NIST database of mated fingerprint card pairs (Special Database 9) consists of multiple volumes. Currently five volumes have been released. Each volume will be a 3-disk set with each CD-ROM containing 90 mated card pairs of segmented 8-bit gray scale fingerprint images (900 fingerprint image pairs per CD-ROM). A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  16. Fingerprinting with Wow

    NASA Astrophysics Data System (ADS)

    Yu, Eugene; Craver, Scott

    2006-02-01

    Wow, or time warping caused by speed fluctuations in analog audio equipment, provides a wealth of applications in watermarking. Very subtle temporal distortion has been used both to defeat watermarks and as a component of watermarking systems. In the image domain, the analogous warping of an image's canvas has been used to defeat watermarks and has also been proposed as a way to prevent collusion attacks on fingerprinting systems. In this paper, we explore how subliminal levels of wow can be used for steganography and fingerprinting. We present both a low-bitrate robust solution and a higher-bitrate solution intended for steganographic communication. As already observed, such a fingerprinting algorithm naturally discourages collusion by averaging, owing to flanging effects when misaligned audio is averaged. Another advantage of warping is that even when imperceptible, it can be beyond the reach of compression algorithms. We use this opportunity to debunk the common misconception that steganography is impossible under "perfect compression."

  17. Mated Fingerprint Card Pairs 2 (MFCP2)

    National Institute of Standards and Technology Data Gateway

    NIST Mated Fingerprint Card Pairs 2 (MFCP2) (Web, free access)   NIST Special Database 14 is being distributed for use in development and testing of automated fingerprint classification and matching systems on a set of images which approximate a natural horizontal distribution of the National Crime Information Center (NCIC) fingerprint classes. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  18. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    PubMed

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables a straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom, and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 min to 25 min. Our method reduces the dimensions of the dictionary atoms and enables the implementation of any fingerprint compression strategy in the case of multiple transmit channels.
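
    A schematic sketch of the two steps named in the abstract, phase unwinding followed by SVD-based rank truncation; the array shapes and the way the transceiver phase maps are obtained are assumptions for illustration, not the authors' reconstruction code.

    ```python
    import numpy as np

    def unwind_and_compress(frames, phase_maps, rank):
        # frames:     (T, N) complex time frames from the merged k-space data
        # phase_maps: (T, N) transceiver phase estimated from the
        #             corresponding reconstructed images
        unwound = frames * np.exp(-1j * phase_maps)  # remove transmit phase
        U, s, Vh = np.linalg.svd(unwound, full_matrices=False)
        return U[:, :rank] * s[:rank] @ Vh[:rank]    # rank-r approximation
    ```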

  19. Supplemental Fingerprint Card Data (SFCD) for NIST Special Database 9

    National Institute of Standards and Technology Data Gateway

    Supplemental Fingerprint Card Data (SFCD) for NIST Special Database 9 (Web, free access)   NIST Special Database 10 (Supplemental Fingerprint Card Data for Special Database 9 - 8-Bit Gray Scale Images) provides a larger sample of fingerprint patterns that have a low natural frequency of occurrence and transitional fingerprint classes in NIST Special Database 9. The software is the same code used with NIST Special Database 4 and 9. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  20. A novel secret sharing with two users based on joint transform correlator and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhao, Tieyu; Chi, Yingying

    2018-05-01

    Recently, the joint transform correlator (JTC) has been widely applied to image encryption and authentication. This paper presents a novel secret sharing scheme with two users based on JTC. Both users must be present during decryption, which gives the system high security and reliability. In the scheme, the two users encrypt the plaintext with their fingerprints, and they can decrypt only if both provide fingerprints that are successfully authenticated. The linear relationship between the plaintext and ciphertext is broken using compressive sensing, which can resist existing attacks on JTC. The results of the theoretical analysis and numerical simulation confirm the validity of the system.

  1. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  2. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    PubMed

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were ready for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of such data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed with Zlib, an open-source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the above conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio even exceeds 99.36%. The capacity of the formed QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a perfect carrier of the authenticity and quality of P. ginseng information. This study provides a theoretical basis for the development of a quality traceability system of traditional Chinese medicine based on a two-dimensional code.
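
    A toy sketch of the final packing steps, with GATC2Bytes rendered as plain 2-bit base packing and placeholder payload contents; it assumes the third-party qrcode package (with Pillow) for QR generation.

    ```python
    import zlib
    import qrcode  # third-party package, assumed installed (pip install qrcode)

    BASE = {"G": 0, "A": 1, "T": 2, "C": 3}

    def pack_its2(seq):
        # GATC2Bytes sketched as 2-bit base packing, four bases per byte.
        out = bytearray()
        for i in range(0, len(seq), 4):
            b = 0
            for ch in seq[i : i + 4]:
                b = (b << 2) | BASE[ch]
            out.append(b)
        return bytes(out)

    its2 = "GATCGGATTCCAGATC"                 # placeholder sequence
    chem = b"chemical-fingerprint-features"   # placeholder feature bytes
    payload = zlib.compress(pack_its2(its2) + b"|" + chem)
    qrcode.make(payload).save("ginseng_qr.png")
    ```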

  3. SVD compression for magnetic resonance fingerprinting in the time domain.

    PubMed

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
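
    The compression step lends itself to a compact sketch: keep the top singular vectors of the dictionary in the time domain, project both the dictionary and the measured signal onto them, and match by inner product. Normalization of dictionary entries is omitted for brevity.

    ```python
    import numpy as np

    def compress_dictionary(D, k):
        # D: (entries, T) simulated signal evolutions; keep the top-k right
        # singular vectors as the time-domain basis.
        U, s, Vh = np.linalg.svd(D, full_matrices=False)
        Vk = Vh[:k].conj().T          # (T, k) projection basis
        return D @ Vk, Vk

    def match(signal, Dk, Vk):
        # Project the measured signal, then take inner products with every
        # compressed dictionary entry; the argmax indexes the parameter combo.
        y = signal @ Vk
        return int(np.argmax(np.abs(Dk.conj() @ y)))
    ```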

  4. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain

    PubMed Central

    McGivney, Debra F.; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A.

    2016-01-01

    Magnetic resonance fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, one desires a more efficient method to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition (SVD), which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously. PMID:25029380

  5. Chemical imaging of latent fingerprints by mass spectrometry based on laser activated electron tunneling.

    PubMed

    Tang, Xuemei; Huang, Lulu; Zhang, Wenyang; Zhong, Hongying

    2015-03-03

    Identification of endogenous and exogenous chemicals contained in latent fingerprints is important for forensic science in order to acquire evidence of criminal identities and contacts with specific chemicals. Mass spectrometry has emerged as a powerful technique for such applications without any derivatization or fluorescent tags. Among these techniques, MALDI (Matrix-Assisted Laser Desorption Ionization) provides a small beam size but suffers interference from the MALDI matrix materials, which causes ion suppression as well as limited spatial resolution resulting from the uneven distribution of MALDI matrix crystals of different sizes. LAET (Laser Activated Electron Tunneling), described in this work, offers capabilities for chemical imaging through electron-directed soft ionization. A special semiconductor film has been designed for the collection of fingerprints. Nanoparticles of bismuth cobalt zinc oxide were compressed on a conductive metal substrate (Al or Cu sticky tape) under 10 MPa pressure. The resultant uniform thin films provide tight, shiny surfaces on which fingers are impressed. Irradiation of ultraviolet laser pulses (355 nm) on the thin film instantly generates photoelectrons that can be captured by adsorbed organic molecules and subsequently cause electron-directed ionization and fragmentation. Imaging of latent fingerprints is achieved by visualizing the spatial distribution of these molecular ions and structural information-rich fragment ions. Atomic electron emission together with a finely tuned laser beam size improves spatial resolution. With the LAET technique, imaging analysis can not only identify physical shapes but also reveal endogenous metabolites present in females and males, detect contacts with prohibited substances, and resolve overlapped latent fingerprints.

  6. Sparse modeling applied to patient identification for safety in medical physics applications

    NASA Astrophysics Data System (ADS)

    Lewkowitz, Stephanie

    Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet still is a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different, tailored subroutines for fingerprints. In this research, the same exact model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse Modeling is a powerful tool that has already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, due to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a neurally inspired Artificial Neural Network, solves the computationally difficult task of finding the sparse code for the test image. The components of the sparse representation vector are summed by ℓ1 pooling, and correct patient identification is consistently achieved in 100% of 1000 trials, when either the face data or fingerprint data are used as a classification basis. The algorithm also achieves 100% classification when faces and fingerprints are concatenated into multimodal datasets. This suggests that 100% patient identification will be achievable in the clinical setting.
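
    A minimal sketch of the classification idea, with ISTA used as a plain stand-in for the Locally Competitive Algorithm (both minimize the same ℓ1-regularized objective), followed by ℓ1 pooling over each identity's block of dictionary columns.

    ```python
    import numpy as np

    def sparse_code(x, Phi, lam=0.1, iters=200):
        # ISTA for min_a 0.5 * ||x - Phi a||^2 + lam * ||a||_1, used here as a
        # simple stand-in for the LCA, which converges to the same sparse code.
        L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of gradient
        a = np.zeros(Phi.shape[1])
        for _ in range(iters):
            g = a + Phi.T @ (x - Phi @ a) / L
            a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return a

    def classify(a, labels):
        # l1 pooling: sum |coefficients| per identity and pick the largest.
        labels = np.asarray(labels)
        return max(set(labels.tolist()), key=lambda c: np.abs(a[labels == c]).sum())
    ```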

  7. Digital Video of Live-Scan Fingerprint Data

    National Institute of Standards and Technology Data Gateway

    NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase)   NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.

  8. Low-order auditory Zernike moment: a novel approach for robust music identification in the compressed domain

    NASA Astrophysics Data System (ADS)

    Li, Wei; Xiao, Chuan; Liu, Yaduo

    2013-12-01

    Audio identification via fingerprint has been an active research field for years. However, most previously reported methods work on the raw audio format despite the fact that nowadays compressed-format audio, especially MP3 music, has grown into the dominant way to store music on personal computers and/or transmit it over the Internet. It would be attractive if a compressed unknown audio fragment could be recognized directly from the database without first decompressing it into the wave format. So far, very few algorithms run directly in the compressed domain for music information retrieval, and most of them take advantage of the modified discrete cosine transform coefficients or derived cepstrum and energy types of features. As a first attempt, we propose in this paper utilizing the compressed-domain auditory Zernike moment, adapted from image processing techniques, as the key feature to devise a novel robust audio identification algorithm. Such a fingerprint exhibits strong robustness, due to its statistically stable nature, against various audio signal distortions such as recompression, noise contamination, echo adding, equalization, band-pass filtering, pitch shifting, and slight time scale modification. Experimental results show that in a music database composed of 21,185 MP3 songs, a 10-s long music segment is able to identify its original near-duplicate recording, with an average top-5 hit rate of 90% or above even under severe audio signal distortions.

  9. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to the uniqueness and invariance of fingerprints, making it one of the most convenient and reliable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security, and the fingerprint preprocessing algorithm plays an important part in such systems. This article introduces the general steps in fingerprint recognition technology, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with MATLAB 6.5 as the development tool. The result shows that the Gabor filter is effective in fingerprint image enhancement.
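
    A compact sketch of Gabor-bank enhancement with OpenCV; taking the maximum response over a fixed set of orientations is a simplification of orientation-field-guided filtering, and the kernel parameters are assumptions to be tuned per sensor resolution.

    ```python
    import cv2
    import numpy as np

    def gabor_enhance(img, n_orient=8, ksize=31, sigma=4.0, lambd=8.0):
        # Filter with a bank of Gabor kernels at several ridge orientations and
        # keep the strongest response per pixel; lambd approximates the ridge
        # period in pixels.
        img = img.astype(np.float32)
        best = np.full(img.shape, -np.inf, dtype=np.float32)
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            best = np.maximum(best, cv2.filter2D(img, cv2.CV_32F, kern))
        return best
    ```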

  10. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
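
    The core compatibility test can be sketched as follows: an 8x8 block that genuinely came from JPEG decompression has 2D DCT coefficients lying near the lattice defined by the quantization matrix, so a large residue betrays later pixel edits. The tolerance value is illustrative.

    ```python
    import numpy as np
    from scipy.fftpack import dct

    def jpeg_compatible(block, Q, tol=1.0):
        # block: 8x8 pixel block; Q: 8x8 quantization matrix. Coefficients of a
        # genuine decompressed block land near multiples of Q; residues beyond
        # pixel-domain rounding (tol, illustrative) signal later modification.
        c = dct(dct(block - 128.0, axis=0, norm="ortho"), axis=1, norm="ortho")
        residue = np.abs(c - np.round(c / Q) * Q)
        return residue.max() < tol
    ```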

  11. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks in three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison, and footwear sole impressions, using the software Canvas and Corel Draw.

  12. [Physical fingerprint for quality control of traditional Chinese medicine extract powders].

    PubMed

    Zhang, Yi; Xu, Bing; Sun, Fei; Wang, Xin; Zhang, Na; Shi, Xin-Yuan; Qiao, Yan-Jiang

    2016-06-01

    The physical properties of both raw materials and excipients are closely correlated with the quality of traditional Chinese medicine preparations in oral solid dosage forms. In this paper, based on the concept of the chemical fingerprint for quality control of traditional Chinese medicine products, a physical fingerprint method for the quality evaluation of traditional Chinese medicine extract powders is proposed. This novel physical fingerprint is built as a radar map and consists of five primary indexes (stackability, homogeneity, flowability, compressibility, and stability) and 12 secondary indexes (bulk density, tap density, percentage of particles smaller than 50 μm, relative homogeneity index, Hausner ratio, angle of repose, powder flow time, inter-particle porosity, Carr index, cohesion index, loss on drying, and hygroscopicity). Panax notoginseng saponins (PNS) extract was taken as an example. This paper introduces the application of the physical fingerprint in evaluating the source-to-source and batch-to-batch quality consistency of PNS extract powders. Moreover, the physical fingerprint of PNS was built by calculating the index of parameters, the index of parametric profile, and the index of good compressibility, in order to successfully predict the compressibility of the PNS extract powder and relevant formulations containing PNS extract powder and conventional pharmaceutical excipients. The results demonstrate that the proposed method can provide new insights into the development and process control of traditional Chinese medicine solid dosage forms.

  13. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    PubMed Central

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

    A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, the reconstructed fingerprint image may appear truncated and distorted when the finger is swept across the sensor at a non-linear speed. If truncated fingerprint images are enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications decrease significantly. In this paper, a novel and effective methodology with low computational complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to the construction of fingerprint templates. PMID:25835186
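
    A minimal sketch of the SVM rejection stage with scikit-learn; the twelve features and labels are random placeholders standing in for the outputs of the filtering rules described above.

    ```python
    import numpy as np
    from sklearn import svm
    from sklearn.model_selection import train_test_split

    # Placeholder features (e.g. ridge-flow continuity measures along the
    # sweep axis); real X and y would come from the filtering rules.
    X = np.random.rand(400, 12)
    y = np.random.randint(0, 2, 400)   # 1 = truncated, 0 = intact

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = svm.SVC(kernel="rbf", C=1.0)  # structural-risk-minimizing classifier
    clf.fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))
    ```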

  14. Secure fingerprint identification based on structural and microangiographic optical coherence tomography.

    PubMed

    Liu, Xuan; Zaki, Farzana; Wang, Yahui; Huang, Qiongdan; Mei, Xin; Wang, Jiangjun

    2017-03-10

    Optical coherence tomography (OCT) allows noncontact acquisition of fingerprints and hence is a highly promising technology in the field of biometrics. OCT can be used to acquire both structural and microangiographic images of fingerprints. Microangiographic OCT derives its contrast from the blood flow in the vasculature of viable skin tissue, and microangiographic fingerprint imaging is inherently immune to fake-fingerprint attack. Therefore, dual-modality (structural and microangiographic) OCT imaging of fingerprints will enable more secure acquisition of biometric data, which has not been investigated before. Our study on fingerprint identification based on structural and microangiographic OCT imaging is, we believe, highly innovative. In this study, we performed an OCT imaging study for fingerprint acquisition and demonstrated the capability of dual-modality OCT imaging for the identification of fake fingerprints.

  15. Three-dimensional imaging of artificial fingerprint by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Cheng, Yezeng

    2008-03-01

    Fingerprint recognition is one of the most popular methods of biometrics. However, due to the limitation of sensing only surface topography, fingerprint recognition scanners are easily spoofed, e.g., using artificial fingerprint dummies. Thus, biometric fingerprint identification devices need to be more accurate and secure to deal with different fraudulent methods, including dummy fingerprints. Previously, we demonstrated that Optical Coherence Tomography (OCT) images revealed the presence of artificial fingerprints (made from different household materials, such as cement and liquid silicone rubber) at all times, while the artificial fingerprints easily spoofed the commercial fingerprint reader. We also demonstrated that an analysis of the autocorrelation of the OCT images could be used in automatic recognition systems. Here, we exploited three-dimensional (3D) OCT imaging of the artificial fingerprint to generate vivid 3D images of both the artificial fingerprint layer and the real fingerprint layer beneath. With the reconstructed 3D image, one can not only determine whether an artificial material intended to spoof the scanner is present above the real finger, but also recover the hacker's own fingerprint. The results of these studies suggest that Optical Coherence Tomography could be a powerful real-time noninvasive method for accurately distinguishing artificial fingerprints from real ones.

  16. Combining Digital Watermarking and Fingerprinting Techniques to Identify Copyrights for Color Images

    PubMed Central

    Hsieh, Shang-Lin; Chen, Chun-Che; Shen, Wen-Shan

    2014-01-01

    This paper presents a copyright identification scheme for color images that takes advantage of the complementary nature of watermarking and fingerprinting. It utilizes an authentication logo and the extracted features of the host image to generate a fingerprint, which is then stored in a database and also embedded in the host image to produce a watermarked image. When a dispute over the copyright of a suspect image occurs, the image is first processed by watermarking. If the watermark can be retrieved from the suspect image, the copyright can then be confirmed; otherwise, the watermark then serves as the fingerprint and is processed by fingerprinting. If a match in the fingerprint database is found, then the suspect image will be considered a duplicated one. Because the proposed scheme utilizes both watermarking and fingerprinting, it is more robust than those that only adopt watermarking, and it can also obtain the preliminary result more quickly than those that only utilize fingerprinting. The experimental results show that when the watermarked image suffers slight attacks, watermarking alone is enough to identify the copyright. The results also show that when the watermarked image suffers heavy attacks that render watermarking incompetent, fingerprinting can successfully identify the copyright, hence demonstrating the effectiveness of the proposed scheme. PMID:25114966

  17. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images of volunteers have been collected. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS), the authors will test a standard face image format and submit it to organizations such as ICAO.

  18. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC) and developed commercial fingerprint verification units for access control applications. The use of the Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea for improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates the efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
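
    Phase-Only Correlation itself is compact enough to sketch directly: normalize the cross-power spectrum to unit magnitude so that only the Fourier phase contributes, then read the height and location of the correlation peak as the match score and translation estimate.

    ```python
    import numpy as np

    def poc(f, g):
        # Phase-Only Correlation between two same-size images f and g.
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        R = F * np.conj(G)
        R /= np.abs(R) + 1e-12                  # keep phase, discard magnitude
        surface = np.real(np.fft.ifft2(R))
        peak = surface.max()                    # sharp, high peak = good match
        dy, dx = np.unravel_index(surface.argmax(), surface.shape)
        return peak, (dy, dx)                   # score and translation estimate
    ```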

  19. Eliminate background interference from latent fingerprints using ultraviolet multispectral imaging

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Xu, Xiaojing; Wang, Guiqiang

    2014-02-01

    Fingerprints are the most important evidence at a crime scene, and the technology for developing latent fingerprints is one of the most active research areas in forensic science. Recently, multispectral imaging, which has shown great capability in fingerprint development, questioned document examination, and trace evidence examination, has been used in detecting material evidence. This paper studied how to eliminate background interference from latent fingerprints on non-porous and porous surfaces using rotating-filter-wheel ultraviolet multispectral imaging. The results showed that background interference can be cleanly removed from latent fingerprints by using multispectral imaging in the ultraviolet band.

  20. Multispectral imaging for biometrics

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.

    2005-03-01

    Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.

  1. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (containing undesirable information) fingerprint patterns make the identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms" so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows potential application to fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared.

  2. Straightforward fabrication of black nano silica dusting powder for latent fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Komalasari, Isna; Krismastuti, Fransiska Sri Herwahyu; Elishian, Christine; Handayani, Eka Mardika; Nugraha, Willy Cahya; Ketrin, Rosi

    2017-11-01

    Imaging of latent fingerprint patterns (also known as fingermarks) is one of the most important and accurate detection methods in forensic investigation because each individual's fingerprint is distinctive. This detection technique relies on the mechanical adherence of fingerprint powder to the moisture and oily components of the skin left on a surface, and the particle size of the fingerprint powder is one of the critical parameters for obtaining an excellent fingerprint image. This study develops a simple, cheap, and straightforward method to fabricate a nano-sized black dusting fingerprint powder based on nanosilica and applies the powder to visualize latent fingerprints. The nanostructured silica was prepared from tetraethoxysilane (TEOS) and then modified with nanocarbon, methylene blue, and sodium acetate to color the powder. Finally, as a proof of principle, the ability of this black nanosilica dusting powder to image latent fingerprints is successfully demonstrated, and the results show that this fingerprint powder provides a clearer fingerprint pattern than the commercial one, highlighting the potential application of nanostructured silica in forensic science.

  3. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another angle, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments were carried out by acquiring several 3D fingerprint datasets, and the experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

  4. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to recognizing the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD and is on par with SPIHT. Furthermore, SLCCA generally performs best on images with a large proportion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
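
    As a concrete picture of the within-subband clustering that SLCCA exploits, the sketch below decomposes an image with PyWavelets and labels connected components of significant coefficients in each detail subband. The wavelet choice and threshold rule are illustrative assumptions, not the paper's actual encoder.

    ```python
    import numpy as np
    import pywt
    from scipy import ndimage

    def significant_clusters(image, wavelet="bior4.4", level=3, thresh_ratio=0.1):
        """Label connected clusters of significant coefficients per detail subband."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        clusters = []
        for detail_level in coeffs[1:]:            # skip the approximation subband
            for band in detail_level:              # horizontal, vertical, diagonal
                thresh = thresh_ratio * np.abs(band).max()
                labels, n = ndimage.label(np.abs(band) > thresh)  # 4-connectivity
                clusters.append((labels, n))
        return clusters
    ```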

  5. Content Based Image Matching for Planetary Science

    NASA Astrophysics Data System (ADS)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, the PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are searchable only by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on their visual textures. For every image in a database, a series of filters computes the image response to localized frequencies and orientations. The filter responses are reduced to a low-dimensional descriptor vector, generating a 37-dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours, and matches from the database are found in seconds. We have demonstrated this image matching technique on three data sources. The first database consists of 7200 images from the MER Microscopic Imager. The second consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), cropped into 1024×1024 sub-images for consistency. The third consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand-labeling results. The user tests verify a false positive rate of approximately 20% for the top 14 results on the MOC-NA and MER MI data, meaning that typically 10 to 12 of the 14 results match the query image sufficiently. This is a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct matches can also be confirmed by verifying MI images taken in the same z-stack, or MOC image tiles taken from the same image strip. False negatives are difficult to quantify, as doing so would mean finding matches in a database of thousands of images that the algorithm did not detect.
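
    The exact 37-dimensional descriptor is not specified in the abstract; the sketch below shows the general recipe under stated assumptions: pool the magnitudes of oriented band-pass filter responses at several scales into a short vector, then rank database fingerprints by distance to the query. The derivative-of-Gaussian filters are a stand-in for the localized frequency/orientation filters described above.

    ```python
    import numpy as np
    from scipy import ndimage

    def texture_fingerprint(image, sigmas=(1, 2, 4), n_orient=6):
        """Reduce an image to a small vector of oriented band-pass responses."""
        image = image.astype(float)
        feats = [image.mean(), image.std()]
        for sigma in sigmas:
            gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
            gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
            for k in range(n_orient):
                theta = np.pi * k / n_orient
                resp = np.cos(theta) * gx + np.sin(theta) * gy  # oriented response
                feats.append(np.abs(resp).mean())
        return np.asarray(feats)                # here 2 + 3*6 = 20 dimensions

    def query(db, q, top_k=14):
        """Rank database fingerprints (rows of db) by distance to query vector q."""
        return np.argsort(np.linalg.norm(db - q, axis=1))[:top_k]
    ```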

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shimizu, Y; Yoon, Y; Iwase, K

    Purpose: We are developing an image-searching technique to identify misfiled images in a picture archiving and communication system (PACS) server using five biological fingerprints: the whole lung field, cardiac shadow, superior mediastinum, lung apex, and right lower lung. Each biological fingerprint in a chest radiograph includes distinctive anatomical structures for identifying misfiled images. The whole lung field was less effective than the other biological fingerprints for evaluating the similarity between two images, mainly because of variation in patient positioning for chest radiographs. The purpose of this study is to develop new biological fingerprints that reduce the influence of positioning differences in chest radiography. Methods: Two hundred patients were selected randomly from our database (36,212 patients). Each patient had two images (current and previous); the current images served as the misfiled images in this study. A circumscribed rectangular area of the lung and the upper half of that rectangle were selected automatically as new biological fingerprints. These biological fingerprints were matched against all previous images in the database, and the degrees of similarity between image pairs were calculated for same-patient and different-patient cases. The usefulness of the new biological fingerprints for automated patient recognition was examined by receiver operating characteristic (ROC) analysis. Results: The areas under the ROC curve (AUCs) for the circumscribed rectangle of the lung, the upper half of the rectangle, and the whole lung field were 0.980, 0.994, and 0.950, respectively. The new biological fingerprints identified patients more accurately than the whole lung field. Conclusion: We have developed two new biological fingerprints: the circumscribed rectangle of the lung and its upper half. These new biological fingerprints would be useful for an automated patient identification system because they are less affected by positioning differences during imaging.

  7. Raman chemical imaging of explosive-contaminated fingerprints.

    PubMed

    Emmons, E D; Tripathi, A; Guicheteau, J A; Christesen, S D; Fountain, A W

    2009-11-01

    Raman chemical imaging (RCI) has been used to detect and identify explosives in contaminated fingerprints. Bright-field imaging is used to identify regions of interest within a fingerprint, which can then be examined with RCI and fluorescence imaging to determine their chemical composition. Results are presented in which explosives in contaminated fingerprints are identified and their spatial distributions obtained. Explosives are identified with the Pearson cosine cross-correlation technique over the characteristic region (500-1850 cm⁻¹) of the spectrum. This study shows the ability to identify explosives nondestructively, so that the fingerprint remains intact for further biometric analysis. Prospects for forensic examination of contaminated fingerprints are discussed.
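
    The spectral matching step lends itself to a compact sketch. Assuming measured and reference spectra resampled onto a common wavenumber grid over the 500-1850 cm⁻¹ fingerprint region, a Pearson correlation score identifies the best-matching library compound; the library itself is hypothetical here.

    ```python
    import numpy as np

    def pearson_score(spectrum, reference):
        """Pearson correlation between two spectra sampled on the same axis."""
        s = spectrum - spectrum.mean()
        r = reference - reference.mean()
        return float(s @ r / (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12))

    def identify(pixel_spectrum, library):
        """library: dict of compound name -> reference spectrum (same grid)."""
        return max(library, key=lambda name: pearson_score(pixel_spectrum, library[name]))
    ```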

  8. Missing data reconstruction using Gaussian mixture models for fingerprint images

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary

    2016-05-01

    One of the most important areas in biometrics is matching partial fingerprints against fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, dependable reconstruction of fingerprint images remains challenging due to the complexity and ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed, and a new similarity score is presented to evaluate the performance of the reconstructed binary image. The offered fingerprint image identification system can be automated and extended to numerous other security applications, such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.
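
    The abstract names Gaussian mixture models for missing-data reconstruction without giving the estimator, so the sketch below shows one standard formulation: fit a GMM to intact patch vectors, then fill a damaged patch with the mixture's conditional expectation of the missing pixels given the observed ones. All names and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def gmm_impute(gmm, patch, missing_mask):
        """Fill missing entries of a vectorized patch by GMM conditional mean."""
        o, m = ~missing_mask, missing_mask
        x_o = patch[o]
        log_r = np.log(gmm.weights_)
        cond_means = []
        for k in range(gmm.n_components):
            mu, cov = gmm.means_[k], gmm.covariances_[k]
            Soo, Smo = cov[np.ix_(o, o)], cov[np.ix_(m, o)]
            # Responsibility of component k given only the observed pixels.
            log_r = log_r.copy()
            log_r[k] += multivariate_normal.logpdf(x_o, mu[o], Soo, allow_singular=True)
            # E[x_m | x_o, component k] for a jointly Gaussian component.
            cond_means.append(mu[m] + Smo @ np.linalg.solve(Soo, x_o - mu[o]))
        r = np.exp(log_r - log_r.max())
        r /= r.sum()
        filled = patch.copy()
        filled[m] = r @ np.asarray(cond_means)
        return filled

    # Training on vectorized intact patches (hypothetical array `patches`):
    # gmm = GaussianMixture(n_components=8, covariance_type="full").fit(patches)
    ```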

  9. A further study to investigate the detection and enhancement of latent fingerprints using visible absorption and luminescence chemical imaging.

    PubMed

    Payne, Gemma; Reedy, Brian; Lennard, Chris; Comber, Bruce; Exline, David; Roux, Claude

    2005-05-28

    This study investigated the application of chemical imaging to the detection of latent fingerprints using the Condor macroscopic chemical imaging system (ChemImage Corp., Pittsburgh, USA). Methods were developed and optimised for the visualisation of untreated latent fingerprints and fingerprints processed with DFO, ninhydrin, cyanoacrylate, and cyanoacrylate plus rhodamine 6G stain. The results obtained with chemical imaging were compared to the detection achieved using conventional imaging techniques. The Condor significantly improved the detection of many prints, especially those that might be considered poor quality or borderline prints. Prints on newspaper treated with ninhydrin and DFO, and prints on white and yellow paper treated with ninhydrin, benefited the most from chemical imaging detection. In many cases, fingerprints undetectable using conventional imaging techniques could be visualised with chemical imaging. Ridge detail from untreated prints on yellow paper was also detected using the Condor. When prints of high quality were examined, both detection techniques produced quality results. The results of this project demonstrate that chemical imaging offers advantages over conventional visualisation techniques when examining latent fingerprints, especially those that would be considered difficult, such as weak prints or prints on surfaces that produce highly luminescent backgrounds. Standard testing procedures for the detection and enhancement of fingerprints by chemical imaging are presented and discussed.

  10. Toward surface-enhanced Raman imaging of latent fingerprints.

    PubMed

    Connatser, R Maggie; Prokes, Sharka M; Glembocki, Orest J; Schuler, Rebecca L; Gardner, Charles W; Lewis, Samuel A; Lewis, Linda A

    2010-11-01

    Exposure to light or heat, or simply a dearth of fingerprint material, renders some latent fingerprints undetectable by conventional methods. We begin to address such elusive fingerprints using detection that targets photo- and thermally stable fingerprint constituents: surface-enhanced Raman spectroscopy (SERS). SERS can give descriptive vibrational spectra of amino acids, among other robust fingerprint constituents, and good sensitivity can be attained by improving metal-dielectric nanoparticle substrates. With SERS chemical imaging, the intensities of vibrational bands recreate a visual representation of fingerprint topography. The impacts of nanoparticle synthesis route, dispersal methodology and deposition solvent, and laser wavelength are discussed, as are data from enhanced vibrational spectra of fingerprint components. SERS and Raman chemical images of fingerprints and realistic contaminants are shown. To our knowledge, this represents the first SERS imaging of fingerprints. This work progresses toward the ultimate goal of vibrationally detecting latent prints that would otherwise remain undetected by traditional development methods. 2010 American Academy of Forensic Sciences. Published 2010. This article is a U.S. Government work and is in the public domain in the U.S.A.

  11. The use of fingerprints available on the web in false identity documents: Analysis from a forensic intelligence perspective.

    PubMed

    Girelli, Carlos Magno Alves

    2016-05-01

    Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared among themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal, and tonal reversal. The detection of lateral reversals is discussed in this article, and a suggestion is made for reducing errors due to missed HIT decisions between reversed fingerprints. The present work highlights the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, which is very common in Brazil. Besides the intrinsic features of the fingermarks considered at the three levels of detail of the ACE-V methodology, some visual features of fingerprint images can help identify sources of forgeries and modus operandi, such as image limits and contours, failures in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and on the time and location at which the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Fingerprints therefore have the potential to reduce linkage blindness, and the present work suggests analyzing fingerprints when profiling false identity documents, as well as including fingerprint features in the profile of the documents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Forensic applications of chemical imaging: latent fingerprint detection using visible absorption and luminescence.

    PubMed

    Exline, David L; Wallace, Christie; Roux, Claude; Lennard, Chris; Nelson, Matthew P; Treado, Patrick J

    2003-09-01

    Chemical imaging technology is a rapid examination technique that combines molecular spectroscopy and digital imaging, providing information on the morphology, composition, structure, and concentration of a material. Among many other applications, chemical imaging offers an array of novel analytical testing methods that limit sample preparation and provide the high-quality imaging data essential for the detection of latent fingerprints. Luminescence chemical imaging and visible absorbance chemical imaging have been successfully applied to ninhydrin-, DFO-, cyanoacrylate-, and luminescent dye-treated latent fingerprints, demonstrating the potential of this technology to aid forensic investigations. In addition, visible absorption chemical imaging has been applied successfully to visualize untreated latent fingerprints.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connatser, Raynella M; Prokes, Sharka M.; Glembocki, Orest

    Exposure to light or heat, or simply a dearth of fingerprint material, renders some latent fingerprints undetectable by conventional methods. We begin to address such elusive fingerprints using detection that targets photo- and thermally stable fingerprint constituents: surface-enhanced Raman spectroscopy (SERS). SERS can give descriptive vibrational spectra of amino acids, among other robust fingerprint constituents, and good sensitivity can be attained by improving metal-dielectric nanoparticle substrates. With SERS chemical imaging, the intensities of vibrational bands recreate a visual representation of fingerprint topography. The impacts of nanoparticle synthesis route, dispersal methodology and deposition solvent, and laser wavelength are discussed, as are data from enhanced vibrational spectra of fingerprint components. SERS and Raman chemical images of fingerprints and realistic contaminants are shown. To our knowledge, this represents the first SERS imaging of fingerprints. This work progresses toward the ultimate goal of vibrationally detecting latent prints that would otherwise remain undetected by traditional development methods.

  14. Efficiency and Flexibility of Fingerprint Scheme Using Partial Encryption and Discrete Wavelet Transform to Verify User in Cloud Computing.

    PubMed

    Yassin, Ali A

    2014-01-01

    Today, the security of digital images is considered more and more essential, and the fingerprint plays a main role in the world of imaging. Fingerprint recognition is a biometric verification scheme that applies pattern recognition techniques to an individual's fingerprint image. In the cloud environment, an adversary has the ability to intercept information, so communication must be secured from eavesdroppers. Unfortunately, encryption and decryption functions are slow and often complex; fingerprint techniques require extra hardware and software and can be defeated by artificial gummy fingers (spoof attacks). Additionally, when a large number of users are verified at the same time, the mechanism becomes slow. In this paper, we employ partial encryption of the user's fingerprint together with the discrete wavelet transform to obtain a new fingerprint verification scheme. The proposed scheme overcomes these problems: it is inexpensive, reduces the computational requirements for huge volumes of fingerprint images, and resists well-known attacks. Experimental results illustrate that the proposed scheme performs well in verifying users' fingerprints.
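
    A minimal sketch of the partial-encryption idea, under stated assumptions: decompose the fingerprint with a 2-D DWT and encrypt only the low-frequency approximation subband, which carries most of the visual content, so the cipher touches only a fraction of the data. The keyed-XOR stream below is a toy stand-in for a real cipher such as AES and is not secure.

    ```python
    import numpy as np
    import pywt

    def xor_bytes(arr, key):
        """Involutive keyed XOR over an array's raw bytes (toy cipher stand-in)."""
        data = np.frombuffer(arr.tobytes(), dtype=np.uint8)
        ks = np.random.default_rng(key).integers(0, 256, size=data.size, dtype=np.uint8)
        return (data ^ ks).view(arr.dtype).reshape(arr.shape)

    def toggle_partial_encryption(coeffs, key):
        """Encrypt/decrypt only the low-frequency approximation subband."""
        coeffs = list(coeffs)
        coeffs[0] = xor_bytes(np.ascontiguousarray(coeffs[0]), key)
        return coeffs

    # Round trip: decompose, encrypt cA, decrypt cA, reconstruct.
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    c = pywt.wavedec2(img, "haar", level=2)
    enc = toggle_partial_encryption(c, key=1234)
    dec = toggle_partial_encryption(enc, key=1234)   # XOR is its own inverse
    assert np.allclose(pywt.waverec2(dec, "haar"), img)
    ```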

  15. Efficiency and Flexibility of Fingerprint Scheme Using Partial Encryption and Discrete Wavelet Transform to Verify User in Cloud Computing

    PubMed Central

    Yassin, Ali A.

    2014-01-01

    Today, the security of digital images is considered more and more essential, and the fingerprint plays a main role in the world of imaging. Fingerprint recognition is a biometric verification scheme that applies pattern recognition techniques to an individual's fingerprint image. In the cloud environment, an adversary has the ability to intercept information, so communication must be secured from eavesdroppers. Unfortunately, encryption and decryption functions are slow and often complex; fingerprint techniques require extra hardware and software and can be defeated by artificial gummy fingers (spoof attacks). Additionally, when a large number of users are verified at the same time, the mechanism becomes slow. In this paper, we employ partial encryption of the user's fingerprint together with the discrete wavelet transform to obtain a new fingerprint verification scheme. The proposed scheme overcomes these problems: it is inexpensive, reduces the computational requirements for huge volumes of fingerprint images, and resists well-known attacks. Experimental results illustrate that the proposed scheme performs well in verifying users' fingerprints. PMID:27355051

  16. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.
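
    One of the processing routes named above, PCA on the spectroscopic image cube, reduces to a few lines. The sketch below unfolds an FTIR cube to a pixels-by-wavenumbers matrix and returns a principal-component score image in which ridge material can separate from the substrate; the component index is an illustrative choice.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_score_image(cube, component=1):
        """cube: (H, W, n_wavenumbers) FTIR image stack -> one score image."""
        H, W, B = cube.shape
        X = cube.reshape(H * W, B).astype(float)
        scores = PCA(n_components=component + 1).fit_transform(X)
        return scores[:, component].reshape(H, W)
    ```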

  17. Method and apparatus for imaging and documenting fingerprints

    DOEpatents

    Fernandez, Salvador M.

    2002-01-01

    The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.

  18. K-Means Based Fingerprint Segmentation with Sensor Interoperability

    NASA Astrophysics Data System (ADS)

    Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun

    2010-12-01

    A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originating from a particular sensor, so their performance degrades significantly when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, i.e., an algorithm's ability to adapt to raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem and address it by proposing a K-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image using the K-means algorithm, where each fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results, and it is run on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments on a number of fingerprint databases obtained from various sensors.
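
    A minimal sketch of the CMV-plus-K-means recipe described above: compute block-wise coherence, mean, and variance, cluster the blocks into two groups per fingerprint, and take the higher-coherence cluster as foreground. The block size and the omission of the morphological postprocessing step are simplifications.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def cmv_features(image, w=16):
        """Per-block (coherence, mean, variance) features over a w x w grid."""
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        feats, blocks = [], []
        for i in range(0, img.shape[0] - w + 1, w):
            for j in range(0, img.shape[1] - w + 1, w):
                bx, by = gx[i:i+w, j:j+w], gy[i:i+w, j:j+w]
                gxx, gyy, gxy = (bx*bx).sum(), (by*by).sum(), (bx*by).sum()
                coh = np.hypot(gxx - gyy, 2*gxy) / (gxx + gyy + 1e-12)
                blk = img[i:i+w, j:j+w]
                feats.append([coh, blk.mean(), blk.var()])
                blocks.append((i, j))
        return np.asarray(feats), blocks

    def segment(image, w=16):
        feats, blocks = cmv_features(image, w)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        # Foreground = the cluster with the higher mean coherence (ridge flow).
        fg = int(feats[labels == 1, 0].mean() > feats[labels == 0, 0].mean())
        mask = np.zeros(image.shape, dtype=bool)
        for (i, j), lab in zip(blocks, labels):
            mask[i:i+w, j:j+w] = (lab == fg)
        return mask
    ```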

  19. Portable hyperspectral imager with continuous wave green laser for identification and detection of untreated latent fingerprints on walls.

    PubMed

    Nakamura, Atsushi; Okuda, Hidekazu; Nagaoka, Takashi; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Ichikawa, Fumihiko; Torao, Akira; Sota, Takayuki

    2015-09-01

    Untreated latent fingerprints are known to exhibit fluorescence under UV laser excitation. Previously, the hyperspectral imager (HSI) had been evaluated in forensic science primarily for its potential to enhance the sensitivity of latent fingerprint detection after treatment by conventional chemical methods. In this study, however, the usability of the HSI for visualizing and detecting untreated latent fingerprints by measuring their inherent fluorescence under continuous-wave (CW) visible laser excitation was examined, along with its potential for the spectral separation of overlapped fingerprints. The excitation wavelength dependence of the fluorescence images was examined using an untreated palm print on a steel-based wall; green laser excitation proved superior to blue and yellow laser excitation for producing high-contrast fluorescence images. In addition, a spectral separation method for overlapped fingerprints/palm prints on a plaster wall was proposed using new images produced by dividing and subtracting two single-wavelength images constructed from the measured hyperspectral data (HSD). In practical tests, the relative isolation of two overlapped fingerprints/palm prints succeeded in twelve of seventeen cases, and a single fingerprint/palm print was extracted in an additional three cases. These results revealed that the feasibility of spectral separation of overlapped fingerprints/palm prints depends on the difference in the temporal degradation of each fluorescence spectrum. The present results demonstrate that the combination of a portable HSI and a CW green laser has considerable potential for identifying and detecting untreated latent fingerprints/palm prints on the walls studied, while the use of HSD makes it practically possible to separate doubly overlapped fingerprints/palm prints spectrally. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
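
    The band-arithmetic separation described above can be sketched directly. Assuming a hyperspectral cube indexed (row, column, band), dividing or subtracting two single-wavelength slices suppresses whichever print's fluorescence is similar in the two bands; the band indices are illustrative.

    ```python
    import numpy as np

    def separate(cube, band_a, band_b, eps=1e-6):
        """cube: (H, W, n_bands) hyperspectral stack -> ratio and difference images."""
        A = cube[..., band_a].astype(float)
        B = cube[..., band_b].astype(float)
        return A / (B + eps), A - B
    ```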

  20. Study of UV imaging technology for noninvasive detection of latent fingerprints

    NASA Astrophysics Data System (ADS)

    Li, Hong-xia; Cao, Jing; Niu, Jie-qing; Huang, Yun-gang

    2013-09-01

    Using UV imaging technology, and exploiting the distinctive absorption, reflection, scattering, and fluorescence characteristics of the various residues in fingerprints (fatty acid esters, proteins, carboxylic acid salts, etc.) under UV light, the background disturbance can be weakened or eliminated to increase the brightness contrast between the fingerprint and the background. We designed and set up an illumination optical system and a UV imaging system and studied the noninvasive detection of latent fingerprints remaining on various object surfaces. In the illumination optical system, a 266 nm UV Nd:YAG solid-state laser serves as the illumination source; by calculating the optimal conditions for coupling the laser beam into a UV liquid-core fiber and analyzing the beam-transformation characteristics, we designed and set up an optical system that realizes uniform UV illumination. In the UV imaging system, a UV lens is selected as the fingerprint imaging element, and a UV intensified CCD (ICCD), consisting of a second-generation UV image intensifier directly coupled to a CCD by a fiber plate and taper, is used as the imaging sensor. The best imaging conditions for the UV lens with the ICCD were analyzed, and the imaging system was designed and set up. In this study, by analyzing the factors that influence detection and optimizing the design of the illumination and imaging systems, latent fingerprints on the surfaces of painted tin boxes, plastic, smooth paper, notebook paper, and print paper were detected and visualized noninvasively, and the results meet the fingerprint identification requirements of forensic science.

  1. Evaluation of fingerprint deformation using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Gutierrez da Costa, Henrique S.; Maxey, Jessica R.; Silva, Luciano; Ellerbee, Audrey K.

    2014-02-01

    Biometric identification systems have important applications to privacy and security. The most widely used of these, print identification, is based on imaging the patterns present in the fingers, hands, and feet that are formed by the ridges, valleys, and pores of the skin. Most modern print sensors acquire images of the finger pressed against a sensor surface. Unfortunately, this pressure may cause deformations, characterized by changes in the sizes and relative distances of the print patterns, and such changes have been shown to degrade the performance of fingerprint identification algorithms. Optical coherence tomography (OCT) is a novel imaging technique capable of imaging the subsurface of biological tissue; hence, OCT can capture images of subdermal skin structures from which an internal fingerprint can be extracted. The internal fingerprint is very similar in structure to the commonly used external fingerprint and is of increasing interest in investigations of identity fraud. We proposed and tested metrics, based on measurements calculated from external and internal fingerprints, that evaluate the amount of skin deformation. These metrics were used to test hypotheses about differences in deformation between internal and external images, and about variation with finger type and location within the fingerprint.

  2. Infrared Spectroscopic Imaging of Latent Fingerprints and Associated Forensic Evidence

    PubMed Central

    Chen, Tsoching; Schultz, Zachary D.; Levin, Ira W.

    2011-01-01

    Fingerprints reflecting a specific chemical history, such as exposure to explosives, are clearly distinguished from overlapping, and interfering latent fingerprints using infrared spectroscopic imaging techniques and multivariate analysis. PMID:19684917

  3. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
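
    The analysis style described above, regressing an outcome on per-pair image metrics, can be sketched as follows. The feature columns named here (mean intensity, RMS contrast, total print area) are illustrative stand-ins for the paper's derived predictors.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def fit_difficulty_model(X, y):
        """X: one row per print pair, e.g. [mean intensity, RMS contrast, area, ...];
        y: objective difficulty (error rate or rated difficulty)."""
        model = LinearRegression().fit(X, y)
        cv_r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
        return model, cv_r2   # cross-validated R^2 guards against overfitting
    ```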

  4. High Resolution Ultrasonic Method for 3D Fingerprint Representation in Biometrics

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.; Bakulin, E. Y.; Maeva, E. Y.; Severin, F. M.

    Biometrics is an important field that studies different possible means of personal identification. Among the existing biometric techniques, fingerprint recognition stands apart because a very large database of fingerprints has already been acquired. Fingerprints are also important evidence that can be collected at a crime scene. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems the most promising. The ultrasonic method of fingerprint imaging was originally introduced over a decade ago as the mapping of the reflection coefficient at the interface between the finger and a covering plate, and it has shown very good reliability, free from the imperfections of the earlier methods. This work introduces a newer development of ultrasonic fingerprint imaging that focuses on imaging the internal structures of fingerprints (including sweat pores) with a raw acoustic resolution of about 500 dpi (0.05 mm), using a scanning acoustic microscope to obtain images and acoustic data in the form of a 3D data array. C-scans from different depths inside the fingerprint area of several volunteers' fingers showed good contrast of ridge-and-valley patterns and practically exact correspondence to standard ink-and-paper prints of the same areas. An important feature revealed in the acoustic images was the clear appearance of the sweat pores, which could provide an additional means of identification.

  5. Accessible biometrics: A frustrated total internal reflection approach to imaging fingerprints.

    PubMed

    Smith, Nathan D; Sharp, James S

    2017-05-01

    Fingerprints are widely used as a means of identifying persons of interest because of the highly individual nature of the spatial distribution and types of features (or minutiae) found on the surface of a finger. This individuality has led to their wide application in comparing fingerprints found at crime scenes with those taken from known offenders and suspects in custody. However, despite recent advances in machine vision technology and image processing techniques, fingerprint evidence is still widely collected using outdated practices involving ink and paper, a process that can be both time-consuming and expensive. Shrinking forensic service budgets increasingly require that evidence be gathered and processed more rapidly and efficiently, yet many existing digital fingerprint acquisition devices have proven too expensive to roll out on a large scale. As a result, new low-cost imaging technologies are required to increase the quality and throughput of fingerprint evidence processing. Here we describe an inexpensive approach to digital fingerprint acquisition based upon frustrated total internal reflection imaging. The quality and resolution of the images produced are shown to be as good as those currently acquired using ink-and-paper methods. The same imaging technique is also shown to be capable of imaging powdered fingerprints that have been lifted from a crime scene using adhesive tape or gel lifters. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  6. Revealing Individual Lifestyles through Mass Spectrometry Imaging of Chemical Compounds in Fingerprints.

    PubMed

    Hinners, Paige; O'Neill, Kelly C; Lee, Young Jin

    2018-03-26

    Fingerprints, specifically the ridge details within the print, have long been used in forensic investigations for individual identification. Beyond the ridge detail, fingerprints contain useful chemical information. The study of fingerprint chemical information has become of interest, especially with mass spectrometry imaging technologies. Mass spectrometry imaging visualizes the spatial relationship of each compound detected, allowing ridge detail and chemical information in a single analysis. In this work, a range of exogenous fingerprint compounds that may reveal a personal lifestyle were studied using matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI-MSI). Studied chemical compounds include various brands of bug sprays and sunscreens, as well as food oils, alcohols, and citrus fruits. Brand differentiation and source determination were possible based on the active ingredients or exclusive compounds left in fingerprints. Tandem mass spectrometry was performed for the key compounds, so that these compounds could be confidently identified in a single multiplex mass spectrometry imaging data acquisition.

  7. Semi-automated detection of trace explosives in fingerprints on strongly interfering surfaces with Raman chemical imaging.

    PubMed

    Tripathi, Ashish; Emmons, Erik D; Wilcox, Phillip G; Guicheteau, Jason A; Emge, Darren K; Christesen, Steven D; Fountain, Augustus W

    2011-06-01

    We have previously demonstrated the use of wide-field Raman chemical imaging (RCI) to detect and identify the presence of trace explosives in contaminated fingerprints. In this current work we demonstrate the detection of trace explosives in contaminated fingerprints on strongly Raman scattering surfaces such as plastics and painted metals using an automated background subtraction routine. We demonstrate the use of partial least squares subtraction to minimize the interfering surface spectral signatures, allowing the detection and identification of explosive materials in the corrected Raman images. The resulting analyses are then visually superimposed on the corresponding bright field images to physically locate traces of explosives. Additionally, we attempt to address the question of whether a complete RCI of a fingerprint is required for trace explosive detection or whether a simple non-imaging Raman spectrum is sufficient. This investigation further demonstrates the ability to nondestructively identify explosives on fingerprints present on commonly found surfaces such that the fingerprint remains intact for further biometric analysis.

  8. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory required to calculate a low-rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low-rank approximation methods that can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as those considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
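
    A minimal sketch of the dictionary-compression step, assuming a real-valued (e.g., magnitude) MRF dictionary: a randomized SVD yields a rank-r basis, and both dictionary entries and measured signal evolutions are projected into that subspace before inner-product matching. Dictionary contents and the rank are hypothetical.

    ```python
    import numpy as np
    from sklearn.utils.extmath import randomized_svd

    def compress_dictionary(D, rank=25):
        """D: (n_entries, n_timepoints) real-valued MRF dictionary."""
        U, S, Vt = randomized_svd(D, n_components=rank, random_state=0)
        return D @ Vt.T, Vt          # compressed fingerprints and the rank-r basis

    def match(signal, D_low, Vt):
        """Return the index of the best-matching dictionary entry for one voxel."""
        s_low = Vt @ signal          # project the measured evolution to rank r
        scores = (D_low @ s_low) / (np.linalg.norm(D_low, axis=1)
                                    * np.linalg.norm(s_low) + 1e-12)
        return int(np.argmax(scores))
    ```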

  9. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera

    NASA Astrophysics Data System (ADS)

    Auksorius, Egidijus; Boccara, A. Claude

    2017-09-01

    Images recorded below the surface of a finger can show more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprint is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprising a silicon camera and a powerful near-infrared LED light source. The system can record 1.7 cm × 1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal error rate to be ~0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity.

  10. Super fast detection of latent fingerprints with water soluble CdTe quantum dots.

    PubMed

    Cai, Kaiyang; Yang, Ruiqin; Wang, Yanji; Yu, Xuejiao; Liu, Jianjun

    2013-03-10

    A new method based on the use of highly fluorescent water-soluble cadmium telluride (CdTe) quantum dots (QDs) capped with mercaptosuccinic acid (MSA) was explored to develop latent fingerprints. After optimizing the pH value and developing time of the QD method, super-fast detection was achieved: excellent fingerprint images were obtained within 1-3 s of immersing the latent fingerprints in the quantum dot solution, on various non-porous surfaces, i.e., adhesive tape, transparent tape, aluminum foil, and stainless steel. The high sensitivity of the new latent fingerprint development method was demonstrated by successively developing fingerprints pressed on aluminum foil with the same finger. Compared with methyl violet and rhodamine 6G, the MSA-CdTe QDs showed higher development speed and fingerprint image quality. Clear images can be maintained for months by extending the exposure time of the CCD camera, storing fingerprints in a low-temperature condition, and applying secondary development. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Improving the recognition of fingerprint biometric system using enhanced image fusion

    NASA Astrophysics Data System (ADS)

    Alsharif, Salim; El-Saba, Aed; Stripathi, Reshma

    2010-04-01

    Fingerprint recognition systems have been widely used by financial institutions, law enforcement, border control, and visa issuing, to mention just a few. Biometric identifiers can be counterfeited, but they are considered more reliable and secure than traditional ID cards or personal password methods. Fingerprint pattern fusion improves the performance of a fingerprint recognition system in terms of accuracy and security. This paper presents digital enhancement and fusion approaches that improve the biometric performance of a fingerprint recognition system. It is a two-step approach: in the first step, raw fingerprint images are enhanced using high-frequency-emphasis filtering (HFEF); the second step is a simple linear fusion between the raw and HFEF images. It is shown that the proposed approach increases the verification and identification performance of the fingerprint biometric recognition system, with the improvement quantified using the correlation performance metrics of the matching algorithm.
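
    A minimal sketch of the two steps named above: frequency-domain high-frequency-emphasis filtering (HFEF) with a Gaussian high-pass, followed by a simple linear fusion of the raw and enhanced prints. The cutoff, gain, and fusion weights are illustrative.

    ```python
    import numpy as np

    def hfef(image, d0=30.0, k1=0.5, k2=1.5):
        """High-frequency emphasis: apply k1 + k2 * H_hp in the Fourier domain."""
        f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
        H, W = image.shape
        u = np.arange(H) - H / 2
        v = np.arange(W) - W / 2
        D2 = u[:, None] ** 2 + v[None, :] ** 2      # squared distance from DC
        Hhp = 1.0 - np.exp(-D2 / (2 * d0 ** 2))     # Gaussian high-pass
        return np.fft.ifft2(np.fft.ifftshift((k1 + k2 * Hhp) * f)).real

    def fuse(raw, alpha=0.5):
        """Linear fusion of the raw print with its HFEF-enhanced version."""
        return alpha * raw + (1 - alpha) * hfef(raw)
    ```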

  12. Detection of protein deposition within latent fingerprints by surface-enhanced Raman spectroscopy imaging

    NASA Astrophysics Data System (ADS)

    Song, Wei; Mao, Zhu; Liu, Xiaojuan; Lu, Yong; Li, Zhishi; Zhao, Bing; Lu, Lehui

    2012-03-01

    The detection of metabolites is very important for assessing human health. A latent fingerprint contains many constituents and specific contaminants that carry information about the individual, such as health status and drug abuse. For a long time, many efforts have focused on visualizing latent fingerprints, but little attention has been paid to detecting such substances at the same time. In this article, we devise a versatile approach for the ultra-sensitive detection and identification of specific biomolecules deposited within fingerprints via a large-area SERS imaging technique. An antibody bound to Raman-probe-modified silver nanoparticles enables binding to specific proteins within the fingerprint, affording high-definition SERS images of the fingerprint pattern, while the SERS spectra and images of the Raman probes indirectly provide chemical information about the given proteins. By taking advantage of the high sensitivity of the SERS technique and its capability to obtain abundant vibrational signatures of biomolecules, we successfully detected minute quantities of protein present within a latent fingerprint. This technique provides a versatile and effective model for detecting biomarkers within fingerprints for medical diagnostics, criminal investigation, and other fields.

  13. Fast imaging of eccrine latent fingerprints with nontoxic Mn-doped ZnS QDs.

    PubMed

    Xu, Chaoying; Zhou, Ronghui; He, Wenwei; Wu, Lan; Wu, Peng; Hou, Xiandeng

    2014-04-01

    Fingerprints are unique characteristics of an individual, and their imaging and recognition is a top-priority task in forensic science. Fast latent fingerprint (LFP) acquisition can greatly help police in screening potential crime scenes and capturing fingerprint clues. Of the two major types of LFP, eccrine prints are expected to be more representative than sebaceous ones for LFP identification. Here we explored heavy-metal-free Mn-doped ZnS quantum dots (QDs) as a new imaging moiety for eccrine LFPs. To study the effects of different ligands on LFP image quality, we prepared Mn-doped ZnS QDs with various surface-capping ligands, using QDs synthesized in high-temperature organic media as the starting material. The orange fluorescence emission from the Mn-doped ZnS QDs clearly revealed the optical images of eccrine LFPs. Interestingly, N-acetyl-cysteine-capped Mn-doped ZnS QDs could stain eccrine LFPs in as little as 5 s, while the level 2 and level 3 substructures of the fingerprints could be simultaneously and clearly identified. In the absence of QDs, or without rubbing and stamping the finger onto the foil, no fluorescent fingerprint images could be visualized. Besides fresh fingerprints, aged (5, 10, and 50 days) and incomplete eccrine LFPs could also be successfully stained with N-acetyl-cysteine-capped Mn-doped ZnS QDs, demonstrating the analytical potential of this method in real-world applications. The method was also robust for imaging eccrine LFPs on a series of nonporous surfaces, such as aluminum foil, compact discs, glass, and black plastic bags.

  14. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812

  15. Convolution Comparison Pattern: An Efficient Local Image Descriptor for Fingerprint Liveness Detection

    PubMed Central

    Gottschlich, Carsten

    2016-01-01

    We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544
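
    The core of the CCP descriptor can be sketched as follows, assuming patches that have already been rotation-normalized using the segmentation and orientation field: take the 2-D DCT of each patch, turn a fixed set of coefficient comparisons into one binary code per patch, and histogram the codes. The particular coefficient pairs chosen here are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn

    # Eight illustrative coefficient pairs -> 2^8 = 256 possible binary patterns.
    PAIRS = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (0, 1)), ((1, 2), (2, 1)),
             ((0, 1), (0, 2)), ((1, 0), (2, 0)), ((1, 1), (2, 2)), ((0, 2), (1, 1))]

    def ccp_histogram(patches):
        """patches: iterable of rotation-normalized 2-D arrays, at least 3x3."""
        hist = np.zeros(2 ** len(PAIRS))
        for patch in patches:
            C = dctn(patch.astype(float), norm="ortho")
            code = 0
            for a, b in PAIRS:                  # one bit per coefficient comparison
                code = (code << 1) | int(C[a] > C[b])
            hist[code] += 1
        return hist / max(hist.sum(), 1.0)      # relative pattern frequencies
    ```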

  16. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera.

    PubMed

    Auksorius, Egidijus; Boccara, A Claude

    2017-09-01

    Images recorded below the surface of a finger can show more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprint is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprising a silicon camera and a powerful near-infrared LED light source. The system can record 1.7 cm × 1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal error rate to be ~0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  17. Expertise with unfamiliar objects is flexible to changes in task but not changes in class

    PubMed Central

    Tangen, Jason M.

    2017-01-01

    Perceptual expertise is notoriously specific and bound by familiarity; generalizing to novel or unfamiliar images, objects, identities, and categories often comes at some cost to performance. In forensic and security settings, however, examiners are faced with the task of discriminating unfamiliar images of unfamiliar objects within their general domain of expertise (e.g., fingerprints, faces, or firearms). The job of a fingerprint expert, for instance, is to decide whether two unfamiliar fingerprint images were left by the same unfamiliar finger (e.g., Smith’s left thumb), or two different unfamiliar fingers (e.g., Smith and Jones’s left thumb). Little is known about the limits of this kind of perceptual expertise. Here, we examine fingerprint experts’ and novices’ ability to distinguish fingerprints compared to inverted faces in two different tasks. Inverted face images serve as an ideal comparison because they vary naturally between and within identities, as do fingerprints, and people tend to be less accurate or more novice-like at distinguishing faces when they are presented in an inverted or unfamiliar orientation. In Experiment 1, fingerprint experts outperformed novices in locating categorical fingerprint outliers (i.e., a loop pattern in an array of whorls), but not inverted face outliers (i.e., an inverted male face in an array of inverted female faces). In Experiment 2, fingerprint experts were more accurate than novices at discriminating matching and mismatching fingerprints that were presented very briefly, but not so for inverted faces. Our data show that perceptual expertise with fingerprints can be flexible to changing task demands, but there can also be abrupt limits: fingerprint expertise did not generalize to an unfamiliar class of stimuli. We interpret these findings as evidence that perceptual expertise with unfamiliar objects is highly constrained by one’s experience. PMID:28574998

  18. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis.

    PubMed

    Cheng, Yezeng; Larin, Kirill V

    2006-12-20

    Fingerprint recognition is one of the most widely used biometric methods. It relies on the surface topography of a finger and is thus potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used to spoof fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.
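
    The autocorrelation analysis can be sketched with the Wiener-Khinchin theorem: real skin produces a layered, quasi-periodic depth profile whose autocorrelation differs from that of a thin dummy overlay. The sketch below computes a normalized autocorrelation of a single OCT A-scan; any decision threshold on the result would be an assumption.

    ```python
    import numpy as np

    def autocorrelation(signal):
        """Normalized autocorrelation of a 1-D depth profile via the FFT."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()
        n = len(x)
        spec = np.abs(np.fft.rfft(x, 2 * n)) ** 2   # zero-pad to avoid wrap-around
        ac = np.fft.irfft(spec)[:n]
        return ac / ac[0]                           # ac[0] is the signal energy
    ```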

  20. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Yezeng; Larin, Kirill V.

    2006-12-01

    Fingerprint recognition is one of the most widely used biometric methods. It relies on the surface topography of a finger and is thus potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used to spoof fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.

  1. Fingerprint enhancement using a multispectral sensor

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Nixon, Kristin A.

    2005-03-01

    The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.

  2. A fingerprint classification algorithm based on combination of local and global information

    NASA Astrophysics Data System (ADS)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply reduce the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular-point detection methods commonly consider only local information, these classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of a fingerprint. First, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Then, a global orientation model is adopted to measure the reliability of the singular-point group. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  3. Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader

    2006-09-01

    In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.

  4. Detection of protein deposition within latent fingerprints by surface-enhanced Raman spectroscopy imaging.

    PubMed

    Song, Wei; Mao, Zhu; Liu, Xiaojuan; Lu, Yong; Li, Zhishi; Zhao, Bing; Lu, Lehui

    2012-04-07

    The detection of metabolites is very important for estimating the health of human beings. A latent fingerprint contains many constituents and specific contaminants, which provide much information about the individual, such as health status, drug abuse, etc. For a long time, many efforts have been focused on visualizing latent fingerprints, but little attention has been paid to the simultaneous detection of such substances. In this article, we have devised a versatile approach for the ultra-sensitive detection and identification of specific biomolecules deposited within fingerprints via a large-area SERS imaging technique. The antibody bound to the Raman-probe-modified silver nanoparticles enables binding to specific proteins within the fingerprints to afford high-definition SERS images of the fingerprint pattern. The SERS spectra and images of the Raman probes indirectly provide chemical information regarding the given proteins. By taking advantage of the high sensitivity and the capability of the SERS technique to obtain abundant vibrational signatures of biomolecules, we have successfully detected minute quantities of protein present within a latent fingerprint. This technique provides a versatile and effective model for detecting biomarkers within fingerprints for medical diagnostics, criminal investigation, and other fields. This journal is © The Royal Society of Chemistry 2012

  5. The application of UV multispectral technology in extracting trace evidence

    NASA Astrophysics Data System (ADS)

    Guo, Jingjing; Xu, Xiaojing; Li, Zhihui; Xu, Lei; Xie, Lanchi

    2015-11-01

    Multispectral imaging is becoming more and more important in the field of examination of material evidence, especially ultraviolet spectral imaging. Fingerprint development, questioned document detection, and trace evidence examination can all make use of it. This paper introduces a UV multispectral device, developed by BITU & IFSC, that can extract trace evidence such as fingerprints. The results showed that this technology can develop latent sweat-sebum mixed fingerprints on photos and ID cards, as well as blood fingerprints on steel. We used the UV spectral data analysis system to make the UV spectral images clearer for identification and analysis.

  6. Line-scanning Raman imaging spectroscopy for detection of fingerprints.

    PubMed

    Deng, Sunan; Liu, Le; Liu, Zhiyi; Shen, Zhiyuan; Li, Guohua; He, Yonghong

    2012-06-10

    Fingerprints are the best form of personal identification for criminal investigation purposes. We present a line-scanning Raman imaging system and use it to detect fingerprints composed of β-carotene and fish oil on different substrates. Although the line-scanning Raman system has been used to map the distribution of materials such as polystyrene spheres and minerals within geological samples, this is the first time to our knowledge that the method has been used in imaging fingerprints. Two Raman peaks of β-carotene (501.2, 510.3 nm) are detected, and the results demonstrate that both peaks can generate excellent images with little difference between them. The system operates at a spectral resolution of about 0.4 nm and can detect β-carotene signals in petroleum ether solution with a limit of detection of 3.4×10⁻⁹ mol/L. The results show that the line-scanning Raman imaging spectroscopy system we have built has high accuracy and can be used for the detection of latent fingerprints in the future.

  7. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the ciphertext by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains possesses much potential for information security.

  8. DWI-based neural fingerprinting technology: a preliminary study on stroke analysis.

    PubMed

    Ye, Chenfei; Ma, Heather Ting; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool to assess neural physiological changes under stroke, through techniques such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI). Quantitative analysis of MRI images can help medical doctors localize the stroke area in diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure the physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder has unique physiological characteristics, which can be reflected by DWI images acquired at different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed from the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI classified the subtypes with 100% accuracy. However, classification accuracy was worse when using semiautomatic and automatic methods for ROI segmentation. The preliminary results show the promising potential of DWI-based neural fingerprinting technology in stroke subtype classification. Further studies will be carried out to enhance the fingerprinting accuracy and extend its application to other clinical practices.
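
    The fingerprint construction described here (ROI intensity stacked across gradients, then matched against reference profiles) is simple enough to sketch. The following Python/NumPy snippet is a hedged illustration with simulated data; the subtype names and the correlation-based matching rule are assumptions, not the paper's exact procedure.

        # Build ROI fingerprints across diffusion gradients; classify by correlation.
        import numpy as np

        rng = np.random.default_rng(2)
        n_gradients = 12

        # Hypothetical reference fingerprints for two stroke subtypes.
        refs = {"subtype_A": rng.random(n_gradients),
                "subtype_B": rng.random(n_gradients)}

        # A new ROI fingerprint: subtype A plus measurement noise.
        probe = refs["subtype_A"] + 0.05 * rng.standard_normal(n_gradients)

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        best = max(refs, key=lambda k: corr(refs[k], probe))
        print("classified as:", best)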

  9. Fast 3D magnetic resonance fingerprinting for a whole-brain coverage.

    PubMed

    Ma, Dan; Jiang, Yun; Chen, Yong; McGivney, Debra; Mehta, Bhairav; Gulani, Vikas; Griswold, Mark

    2018-04-01

    The purpose of this study was to accelerate the acquisition and reconstruction time of 3D magnetic resonance fingerprinting scans. A 3D magnetic resonance fingerprinting scan was accelerated by using a single-shot spiral trajectory with an undersampling factor of 48 in the x-y plane, and an interleaved sampling pattern with an undersampling factor of 3 through plane. Further acceleration came from reducing the waiting time between neighboring partitions. The reconstruction time was accelerated by applying singular value decomposition compression in k-space. Finally, a 3D premeasured B1 map was used to correct for the B1 inhomogeneity. The T1 and T2 values of the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology MRI phantom showed a good agreement with the standard values, with an average concordance correlation coefficient of 0.99, and coefficient of variation of 7% in the repeatability scans. The results from in vivo scans also showed high image quality in both transverse and coronal views. This study applied a fast acquisition scheme for a fully quantitative 3D magnetic resonance fingerprinting scan with a total acceleration factor of 144 as compared with the Nyquist rate, such that 3D T1, T2, and proton density maps can be acquired with whole-brain coverage at clinical resolution in less than 5 min. Magn Reson Med 79:2190-2197, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Photogrammetric fingerprint unwrapping

    NASA Astrophysics Data System (ADS)

    Paar, Gerhard; del Pilar Caballo Perucha, Maria; Bauer, Arnold; Nauschnegg, Bernhard

    2008-04-01

    Fingerprints are important biometric cues. Compared to conventional fingerprint sensors the use of contact-free stereoscopic image acquisition of the front-most finger segment has a set of advantages: Finger deformation is avoided, the entire relevant area for biometric use is covered, some technical aspects like sensor maintenance and cleaning are facilitated, and access to a three-dimensional reconstruction of the covered area is possible. We describe a photogrammetric workflow for nail-to-nail fingerprint reconstruction: A calibrated sensor setup with typically 5 cameras and dedicated illumination acquires adjacent stereo pairs. Using the silhouettes of the segmented finger a raw cylindrical model is generated. After preprocessing (shading correction, dust removal, lens distortion correction), each individual camera texture is projected onto the model. Image-to-image matching on these pseudo ortho images and dense 3D reconstruction obtains a textured cylindrical digital surface model with radial distances around the major axis and a grid size in the range of 25-50 µm. The model allows for objective fingerprint unwrapping and novel fingerprint matching algorithms since 3D relations between fingerprint features are available as additional cues. Moreover, covering the entire region with relevant fingerprint texture is particularly important for establishing a comprehensive forensic database. The workflow has been implemented in portable C and is ready for industrial exploitation. Further improvement issues are code optimization, unwrapping method, illumination strategy to avoid highlights and to improve the initial segmentation, and the comparison of the unwrapping result to conventional fingerprint acquisition technology.

  11. Optimization of illuminating system to detect optical properties inside a finger

    NASA Astrophysics Data System (ADS)

    Sano, Emiko; Shikai, Masahiro; Shiratsuki, Akihide; Maeda, Takuji; Matsushita, Masahito; Sasakawa, Koichi

    2007-01-01

    Biometrics performs personal authentication using individual bodily features, including fingerprints, faces, etc. These technologies have been studied and developed for many years. In particular, fingerprint authentication has evolved over many years, and fingerprinting is currently one of the world's most established biometric authentication techniques. Not long ago this technique was only used for personal identification in criminal investigations and high-security facilities. In recent years, however, various biometric authentication techniques have appeared in everyday applications. While providing great convenience, they have also produced a number of technical issues concerning operation. Generally, fingerprint authentication comprises a number of component technologies: (1) sensing technology for detecting the fingerprint pattern; (2) image processing technology for converting the captured pattern into feature data that can be used for verification; and (3) verification technology for comparing the feature data with a reference and determining whether they match. Current fingerprint authentication issues, revealed in research results, originate with fingerprint sensing technology. Sensing methods for detecting a person's fingerprint pattern for image processing are particularly important because they affect overall fingerprint authentication performance. The current problems with sensing methods are that some fingers are difficult for conventional sensors to detect, and that fingerprint patterns are easily affected by the finger's surface condition: noise such as discontinuities and thin spots can appear in fingerprint patterns obtained from wrinkled or sweaty fingers. To address these problems, we proposed a novel fingerprint sensor based on new scientific knowledge. A characteristic of this new method is that the obtained fingerprint patterns are not easily affected by the finger's surface condition, because it detects the fingerprint pattern inside the finger using transmitted light. We examined optimization of the illumination system of this novel fingerprint sensor to obtain high-contrast fingerprint patterns over a wide area and to improve the image processing in step (2).

  12. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.
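
    The kind of throughput question the HSFE model answers can be approximated with a bottleneck calculation. The sketch below (Python; every rate, availability figure, and stage name is invented for illustration, not drawn from the report) shows the basic reasoning: a serial pipeline runs at its slowest stage, derated by availability.

        # Back-of-the-envelope throughput model for a serial scanning pipeline.
        cards_per_hour = {"feeder": 1200, "scanner": 900, "image_qc": 1000}
        availability = 0.85     # assumed fraction of scheduled time actually running
        shift_hours = 16        # assumed two-shift operation

        bottleneck = min(cards_per_hour, key=cards_per_hour.get)
        effective_rate = cards_per_hour[bottleneck] * availability
        print("bottleneck stage:", bottleneck)
        print("expected daily throughput: %.0f cards" % (effective_rate * shift_hours))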

  13. Visualization of latent fingerprints beneath opaque electrical tapes by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Kangkang; Zhang, Ning; Meng, Li; Li, Zhigang; Xu, Xiaojing

    2018-03-01

    Electrical tape is an important type of trace evidence found at crime scenes. For example, it is very frequently used to insulate wires in explosive devices in many criminal cases. The fingerprints of suspects are often left on the adhesive side of the tapes, which can provide very useful clues for the investigation and make individual identification possible. The most commonly used method to detect and visualize those latent fingerprints is to peel off each layer of the tape first and then use chemical methods to develop the fingerprints. However, the peeling-off and chemical development process can degrade and contaminate the fingerprints and thus adversely affect the accuracy of identification. Optical coherence tomography (OCT) is a novel forensic imaging modality based on low-coherence interferometry, which has the advantages of non-destructiveness, micrometer-level high resolution, and cross-sectional imaging. In this study, a fiber-based spectral-domain OCT (SD-OCT) system with a resolution of about 6 μm was employed to obtain the image of a fingerprint sandwiched between two opaque electrical tapes without any pre-processing procedure such as peeling-off. Three-dimensional (3D) OCT reconstruction was performed and the subsurface image was produced to visualize the latent fingerprints. The results demonstrate that OCT is a promising tool for recovering latent fingerprints hidden beneath opaque electrical tape non-destructively and rapidly.

  14. Altered fingerprints: analysis and detection.

    PubMed

    Yoon, Soweon; Feng, Jianjiang; Jain, Anil K

    2012-03-01

    The widespread deployment of Automated Fingerprint Identification Systems (AFIS) in law enforcement and border control applications has heightened the need for ensuring that these systems are not compromised. While several issues related to fingerprint system security have been investigated, including the use of fake fingerprints for masquerading identity, the problem of fingerprint alteration or obfuscation has received very little attention. Fingerprint obfuscation refers to the deliberate alteration of the fingerprint pattern by an individual for the purpose of masking his identity. Several cases of fingerprint obfuscation have been reported in the press. Fingerprint image quality assessment software (e.g., NFIQ) cannot always detect altered fingerprints since the implicit image quality due to alteration may not change significantly. The main contributions of this paper are: 1) compiling case studies of incidents where individuals were found to have altered their fingerprints for circumventing AFIS, 2) investigating the impact of fingerprint alteration on the accuracy of a commercial fingerprint matcher, 3) classifying the alterations into three major categories and suggesting possible countermeasures, 4) developing a technique to automatically detect altered fingerprints based on analyzing orientation field and minutiae distribution, and 5) evaluating the proposed technique and the NFIQ algorithm on a large database of altered fingerprints provided by a law enforcement agency. Experimental results show the feasibility of the proposed approach in detecting altered fingerprints and highlight the need to further pursue this problem.
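
    The orientation field analysis mentioned in contribution 4 starts from a standard gradient-based estimate. Below is a minimal Python/NumPy sketch of block-wise ridge orientation (the least-squares formulation common in the fingerprint literature); the block size and random stand-in image are illustrative, and real altered-print detection would build its modeling on top of this field.

        # Block-wise least-squares ridge orientation field of a gray image.
        import numpy as np

        def orientation_field(img, block=16):
            gy, gx = np.gradient(img.astype(float))
            h, w = img.shape
            theta = np.zeros((h // block, w // block))
            for i in range(h // block):
                for j in range(w // block):
                    sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
                    sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
                    num = 2 * np.sum(sx * sy)
                    den = np.sum(sx**2 - sy**2)
                    # Ridge direction is perpendicular to the gradient direction.
                    theta[i, j] = 0.5 * np.arctan2(num, den) + np.pi / 2
            return theta

        img = np.random.default_rng(3).random((128, 128))  # stand-in for a print
        print(orientation_field(img).shape)                # 8 x 8 block orientations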

  15. Sensor-oriented feature usability evaluation in fingerprint segmentation

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yin, Yilong; Yang, Gongping

    2013-06-01

    Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired with four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed by feature extraction is dramatically reduced with the selected feature(s).
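
    The decision-tree evaluation step can be sketched briefly. The snippet below uses scikit-learn's tree with the entropy criterion as a stand-in for C4.5 (scikit-learn actually implements CART, so this is only an approximation); the candidate feature names and simulated labels are invented for illustration.

        # Rank candidate segmentation features by decision-tree importance.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(4)
        n = 1000
        features = np.column_stack([
            rng.random(n),   # hypothetical feature: block mean gray level
            rng.random(n),   # hypothetical feature: block variance
            rng.random(n),   # hypothetical feature: orientation coherence
        ])
        # Pretend foreground blocks are driven by variance plus coherence.
        labels = ((features[:, 1] + features[:, 2]) > 1.0).astype(int)

        tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)
        tree.fit(features, labels)
        for name, imp in zip(["mean", "variance", "coherence"],
                             tree.feature_importances_):
            print(name, round(imp, 2))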

  16. Acquiring a 2D rolled equivalent fingerprint image from a non-contact 3D finger scan

    NASA Astrophysics Data System (ADS)

    Fatehpuria, Abhishika; Lau, Daniel L.; Hassebrook, Laurence G.

    2006-04-01

    The use of fingerprints as a biometric is both the oldest mode of computer aided personal identification and the most relied-upon technology in use today. But current fingerprint scanning systems have some challenging and peculiar difficulties. Often skin conditions and imperfect acquisition circumstances cause the captured fingerprint image to be far from ideal. Also some of the acquisition techniques can be slow and cumbersome to use and may not provide the complete information required for reliable feature extraction and fingerprint matching. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To attain a fast-capture, non-contact, fingerprint scanning technology, we are developing a scanning system that employs structured light illumination as a means for acquiring a 3-D scan of the finger with sufficiently high resolution to record ridge-level details. In this paper, we describe the postprocessing steps used for converting the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image.

  17. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with defined core points. The proposed method was tested on two data sets, collected from 13 subjects in controlled and uncontrolled environments. In the controlled environment, the proposed method achieved a detection rate of 82.98%; in the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected image database and outperforms an existing method.
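
    The wavelet step described above can be illustrated with PyWavelets. This is a hedged sketch, not the paper's algorithm: it merely shows how a 2-D DWT separates low-frequency shading from the high-frequency detail subbands in which ridge (and hence core-point) structure lives; the wavelet choice and the energy map are assumptions.

        # Extract a ridge-energy map from DWT detail subbands.
        import numpy as np
        import pywt

        img = np.random.default_rng(5).random((256, 256))  # stand-in gray image
        approx, (horiz, vert, diag) = pywt.dwt2(img, "db2")

        ridge_energy = horiz**2 + vert**2 + diag**2
        i, j = np.unravel_index(np.argmax(ridge_energy), ridge_energy.shape)
        print("strongest ridge response near subband coords:", (i, j))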

  18. Performance analysis of three-dimensional ridge acquisition from live finger and palm surface scans

    NASA Astrophysics Data System (ADS)

    Fatehpuria, Abhishika; Lau, Daniel L.; Yalla, Veeraganesh; Hassebrook, Laurence G.

    2007-04-01

    Fingerprints are one of the most commonly used and relied-upon biometric technologies. But often the captured fingerprint image is far from ideal due to imperfect acquisition techniques that can be slow and cumbersome to use without providing complete fingerprint information. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To overcome these difficulties we have been developing a noncontact scanning system for acquiring a 3-D scan of a finger with sufficiently high resolution, which is then converted into a 2-D rolled equivalent image. In this paper, we describe certain quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Out of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and the 2D images obtained by unrolling the 3D scans, and the quality of the acquired scans is quantified using the metrics.

  19. Chemical Visualization of Sweat Pores in Fingerprints Using GO-Enhanced TOF-SIMS.

    PubMed

    Cai, Lesi; Xia, Meng-Chan; Wang, Zhaoying; Zhao, Ya-Bin; Li, Zhanping; Zhang, Sichun; Zhang, Xinrong

    2017-08-15

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) has been used in imaging of small molecules (<500 Da) in fingerprints, such as gunshot residues and illicit drugs. However, identifying and mapping relatively high mass molecules are quite difficult owing to the insufficient ion yield of their molecular ions. In this report, graphene oxide (GO)-enhanced TOF-SIMS was used to detect and image relatively high mass molecules, such as poisonous alkaloids (>600 Da), controlled drugs, and antibiotics (>700 Da), in fingerprints. Detailed features of fingerprints, such as the number and distribution of sweat pores in a ridge and even the delicate morphology of a single pore, were clearly revealed in SIMS images of relatively high mass molecules. These detailed features, combined with the identified chemical composition, were sufficient to establish a human identity and link a suspect to a crime scene. The wide detectable mass range and high spatial resolution make GO-enhanced TOF-SIMS a promising tool for accurate and fast analysis of fingerprints, especially in fragmentary fingerprint analysis.

  20. Recognizable-image selection for fingerprint recognition with a mobile-device camera.

    PubMed

    Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie

    2008-02-01

    This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image which includes characteristics that sufficiently discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way, because the obtained images may not be adequate for fingerprint recognition even if they are properly focused. A recognizable image has to meet the following two conditions: First, the valid region in a recognizable image should be sufficiently large compared with that of nonrecognizable images. Here, a valid region is a well-focused part in which ridges are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using secondary partial derivatives and a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching degrees of a finger measured from the camera plane should be within some limit for a recognizable image. The position of a core point and the contour of the finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting the recognizable images.
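
    The two image checks described above, second-derivative focus and gradient coherence, map onto standard formulas. The following Python/NumPy sketch implements a modified-Laplacian focus measure and a structure-tensor coherence score on simulated data; the thresholds for "recognizable" would have to be tuned on real captures, and none of this reuses the paper's exact parameters.

        # Focus (modified Laplacian) and gradient coherence of an image patch.
        import numpy as np

        def modified_laplacian(img):
            lx = np.abs(2*img[1:-1, :] - img[:-2, :] - img[2:, :]).mean()
            ly = np.abs(2*img[:, 1:-1] - img[:, :-2] - img[:, 2:]).mean()
            return lx + ly

        def coherence(img):
            gy, gx = np.gradient(img)
            gxx, gyy, gxy = (gx*gx).sum(), (gy*gy).sum(), (gx*gy).sum()
            root = np.sqrt((gxx - gyy)**2 + 4*gxy**2)
            return root / (gxx + gyy + 1e-12)  # 1 = strongly oriented, 0 = isotropic

        img = np.random.default_rng(6).random((64, 64))
        print("focus:", modified_laplacian(img), "coherence:", coherence(img))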

  1. Contactless optical scanning of fingerprints with 180 degrees view.

    PubMed

    Palma, J; Liessner, C; Mil'shtein, S

    2006-01-01

    Fingerprint recognition technology is an integral part of criminal investigations. It is the basis for the design of numerous security systems in both the private and public sectors. In a recent study emulating the fingerprinting procedure with widely used optical scanners, it was found that, on average, the distance between ridges decreases about 20% when a finger is positioned on a scanner. Using calibrated silicon pressure sensors, the authors scanned the distribution of pressure across a finger, pixel by pixel, and also generated maps of the average pressure distribution during fingerprinting. Controlled loading of a finger demonstrated that it is impossible to reproduce the same distribution of pressure across a given finger during repeated fingerprinting procedures. Based on this study, a novel method of scanning the fingerprint with more than a 180 degrees view was developed. Using a camera rotated around the finger, small slices of the entire image of the finger were acquired. Equal sized slices of the image were processed with a special program assembling a more than 180 degrees view of the finger. Comparison of two images of the same fingerprint, namely the registered and actual images, could be performed by a new algorithm based on the symmetry of the correlation function. The novel method is the first contactless optical scanning technique to view 180 degrees of a fingerprint without moving the finger. In a machine which is under design, it is expected that the full view of one finger would be acquired in about a second.

  2. An image analysis of TLC patterns for quality control of saffron based on soil salinity effect: A strategy for data (pre)-processing.

    PubMed

    Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar

    2018-01-15

    Quality of saffron, a valuable food additive, could considerably affect the consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin layer chromatographic (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images such as compression, inversion, elimination of general baseline (using asymmetric least squares (AsLS)), removing spots shift and concavity (by correlation optimization warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis including principal component analysis (PCA) and k-means clustering was utilized to investigate the soil salinity effect, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
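
    Of the preprocessing steps listed, the AsLS baseline removal is the most self-contained to sketch. Below is a minimal Python implementation of Eilers-style asymmetric least squares on a synthetic chromatogram; the smoothing and asymmetry parameters are common textbook defaults, not the values used in the paper.

        # Asymmetric least squares (AsLS) baseline estimation.
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
            n = len(y)
            D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))
            w = np.ones(n)
            for _ in range(n_iter):
                W = sparse.diags(w)
                z = spsolve(W + lam * D.T @ D, w * y)
                w = p * (y > z) + (1 - p) * (y <= z)  # asymmetric weights
            return z

        x = np.linspace(0, 10, 500)
        y = np.exp(-(x - 5)**2) + 0.3 * x             # peak on a drifting baseline
        base = asls_baseline(y)
        print("baseline spans %.2f to %.2f" % (base.min(), base.max()))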

  3. Fingerprint imaging from the inside of a finger with full-field optical coherence tomography

    PubMed Central

    Auksorius, Egidijus; Boccara, A. Claude

    2015-01-01

    Imaging below the fingertip surface might be a useful alternative to traditional fingerprint sensing, since internal finger features are more reliable than external ones. One of the most promising subsurface imaging techniques is optical coherence tomography (OCT), which, however, has to acquire 3-D data even when a single en face image is required. This makes OCT inherently slow for en face imaging and produces unnecessarily large data sets. Here we demonstrate that full-field optical coherence tomography (FF-OCT) can be used to produce en face images of sweat pores and internal fingerprints, which can be used for identification purposes. PMID:26601009

  4. Image encryption using fingerprint as key based on phase retrieval algorithm and public key cryptography

    NASA Astrophysics Data System (ADS)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2015-09-01

    In this paper, a novel image encryption system with a fingerprint used as a secret key is proposed, based on a phase retrieval algorithm and the RSA public key algorithm. In the system, the encryption keys include the fingerprint and the public key of the RSA algorithm, while the decryption keys are the fingerprint and the private key of the RSA algorithm. If the users share the fingerprint, the system satisfies the basic requirements of asymmetric cryptography. The system is also applicable to information authentication. The fingerprint is used as a secret key in both the encryption and decryption processes, so that the receiver can verify the authenticity of the ciphertext by using the fingerprint in the decryption process. Finally, simulation results show the validity of the encryption scheme and its high robustness against attacks based on the phase retrieval technique.

  5. Effect of Aging and Surface Interactions on the Diffusion of Endogenous Compounds in Latent Fingerprints Studied by Mass Spectrometry Imaging.

    PubMed

    O'Neill, Kelly C; Lee, Young Jin

    2018-05-01

    The ability to determine the age of fingerprints would be immeasurably beneficial in criminal investigations. We explore the possibility of determining the age of fingerprints by analyzing various compounds as they diffuse from the ridges to the valleys of fingerprints using matrix-assisted laser desorption/ionization mass spectrometry imaging. The diffusion of two classes of endogenous fingerprint compounds, fatty acids and triacylglycerols (TGs), was studied in fresh and aged fingerprints on four surfaces. We expected higher molecular weight TGs would diffuse slower than fatty acids and allow us to determine the age of older fingerprints. However, we found interactions between endogenous compounds and the surface have a much stronger impact on diffusion than molecular weight. For example, diffusion of TGs is faster on hydrophilic plain glass or partially hydrophilic stainless steel surfaces, than on a hydrophobic Rain-x treated surface. This result further complicates utilizing a diffusion model to age fingerprints. © 2017 American Academy of Forensic Sciences.

  6. Study of noninvasive detection of latent fingerprints using UV laser

    NASA Astrophysics Data System (ADS)

    Li, Hong-xia; Cao, Jing; Niu, Jie-qing; Huang, Yun-gang; Mao, Lin-jie; Chen, Jing-rong

    2011-06-01

    Latent fingerprints present a considerable challenge in forensics, and a noninvasive procedure that captures a digital image of latent fingerprints is significant in the field of criminal investigation. The capability of photography technologies using a 266 nm UV Nd:YAG solid state laser as the excitation light source to provide detailed images of unprocessed latent fingerprints is demonstrated. Unprocessed latent fingerprints were developed on various non-absorbent and absorbing substrates. The specific absorption, reflection, scattering, and fluorescence characteristics under UV light of the various residues in fingerprints (fatty acid esters, proteins, carboxylic acid salts, etc.) were exploited to weaken or eliminate the background disturbance and increase the brightness contrast between the fingerprint and the background. Using the 266 nm UV laser as the excitation light source, fresh and old latent fingerprints on the surfaces of four types of non-absorbent objects (magazine cover, glass, back of a cellphone, wood desktop paintwork) and two types of absorbing objects (manila envelope, notebook paper) were noninvasively detected and visualized through reflection photography and fluorescence photography, and the results meet the fingerprint identification requirements in forensic science.

  7. Evaluation of C60 secondary ion mass spectrometry for the chemical analysis and imaging of fingerprints.

    PubMed

    Sisco, Edward; Demoranville, Leonard T; Gillen, Greg

    2013-09-10

    The feasibility of using C60(+) cluster primary ion bombardment secondary ion mass spectrometry (C60(+) SIMS) for the analysis of the chemical composition of fingerprints is evaluated. It was found that C60(+) SIMS could be used to detect and image the spatial localization of a number of sebaceous and eccrine components in fingerprints. These analyses were also found not to be hindered by the use of common latent print powder development techniques. Finally, the ability to monitor the depth distribution of fingerprint constituents was found to be possible - a capability which has not been shown using other chemical imaging techniques. This paper illustrates a number of strengths and potential weaknesses of C60(+) SIMS as an additional or complementary technique for the chemical analysis of fingerprints. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Air Land Sea Bulletin. Issue No. 2013-1

    DTIC Science & Technology

    2013-01-01

    face, fingerprint, iris, DNA, and palm print. Biometric capabilities may achieve enabling effects such as the ability to separate, identify... to obtain forensic-quality fingerprints, latent fingerprints, iris images, photos, and other biometric data... logical and biographical contextual data of POIs and matches fingerprints and iris images against an internal biometrics enrollment database.

  9. Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog

    NASA Astrophysics Data System (ADS)

    Rosshidi, H. T.; Hadi, A. R.

    2009-06-01

    This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. This work demonstrates the application of the Gabor filter technique to enhance fingerprint images. The incoming signal, in the form of image pixels, is convolved with the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver implemented on a Field Programmable Gate Array (FPGA). The main characteristic of the proposed approach is the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
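
    The convolution the paper moves onto an FPGA is easy to prototype in software first. The sketch below (Python rather than the paper's Verilog; kernel size, orientation, and frequency are illustrative) builds an oriented Gabor kernel and convolves it with a stand-in image to emphasize ridges at a known orientation.

        # Oriented Gabor filtering of a fingerprint-like image.
        import numpy as np
        from scipy.signal import convolve2d

        def gabor_kernel(size=15, theta=0.0, freq=0.1, sigma=4.0):
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            return envelope * np.cos(2 * np.pi * freq * xr)

        img = np.random.default_rng(7).random((64, 64))  # stand-in fingerprint
        enhanced = convolve2d(img, gabor_kernel(theta=np.pi / 4), mode="same")
        print(enhanced.shape)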

  10. Audio fingerprint extraction for content identification

    NASA Astrophysics Data System (ADS)

    Shiu, Yu; Yeh, Chia-Hung; Kuo, C. C. J.

    2003-11-01

    In this work, we present an audio content identification system that identifies unknown audio material by comparing its fingerprint with those extracted off-line and saved in a music database. We describe in detail the procedure to extract audio fingerprints and demonstrate that they are robust to noise and content-preserving manipulations. The main feature in the proposed system is the zero-crossing rate extracted with an octave-band filter bank. The zero-crossing rate can be used to describe the dominant frequency in each subband at a very low computational cost. The audio fingerprint is small and can be efficiently stored along with the compressed files in the database. It is also robust to many modifications such as tempo change and time-alignment distortion. In addition, the octave-band filter bank is used to enhance robustness to distortion, especially distortion localized in certain frequency regions.
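
    The core feature, a per-subband zero-crossing rate, can be sketched directly. The snippet below uses Butterworth band-pass filters as a stand-in for the paper's octave-band filter bank; the band edges, filter order, and white-noise test signal are all illustrative.

        # Zero-crossing rate per octave band of an audio signal.
        import numpy as np
        from scipy.signal import butter, sosfilt

        fs = 16000
        audio = np.random.default_rng(8).standard_normal(fs)  # 1 s stand-in audio

        edges = [125, 250, 500, 1000, 2000, 4000]             # octave band edges, Hz
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            band = sosfilt(sos, audio)
            zcr = np.mean(np.abs(np.diff(np.sign(band))) > 0)
            print("%4d-%-4d Hz: ZCR = %.3f" % (lo, hi, zcr))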

  11. Separation and sequence detection of overlapped fingerprints: experiments and first results

    NASA Astrophysics Data System (ADS)

    Kärgel, Rainer; Giebel, Sascha; Leich, Marcus; Dittmann, Jana

    2011-11-01

    Latent fingerprints provide vital information in modern crime scene investigation. On frequently touched surfaces, fingerprints may overlap, which poses a major problem for forensic analysis. In order to make such overlapping fingerprints available for analysis, they have to be separated. An additional evaluation of the sequence in which the fingerprints were deposited on the surface can help to reconstruct the progression of events. Advances in both tasks can considerably aid crime investigation agencies and are the subject of this work. Here, a statistical approach, initially devised by Tonazzini et al. [1] for the separation of overlapping text patterns, is employed to separate overlapping fingerprints. The method involves a maximum a posteriori estimation of the single fingerprints and the mixing coefficients, computed by an expectation-maximization algorithm. A fingerprint age determination feature based on corrosion is evaluated for sequence estimation. The approaches are evaluated using 30 samples of overlapping latent fingerprints on two different substrates. The fingerprint images are acquired with a non-destructive chromatic white light surface measurement device, each sample containing exactly two fingerprints that overlap in the center of the image. Since forensic investigations rely on manual assessment of acquired fingerprints by forensics experts, a subjective scale ranging from 0 to 8 is used to rate the separation results. Our results indicate that the chosen method can separate overlapped fingerprints which exhibit strong differences in contrast, since results gradually improve with growing contrast difference between the overlapping fingerprints. Investigating the effects of corrosion leads to a reliable determination of the fingerprints' sequence as the timespan between their deposition increases.

  12. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images with digital image processing tools is presented in this work. When fingerprints have been taken without care and are blurred, or in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  13. Soft-landing ion mobility of silver clusters for small-molecule matrix-assisted laser desorption ionization mass spectrometry and imaging of latent fingerprints.

    PubMed

    Walton, Barbara L; Verbeck, Guido F

    2014-08-19

    Matrix-assisted laser desorption ionization (MALDI) imaging is gaining popularity, but matrix effects such as mass spectral interference and damage to the sample limit its applications. Replacing traditional matrices with silver particles capable of equivalent or increased photon energy absorption from the incoming laser has proven to be beneficial for low mass analysis. Not only can silver clusters be advantageous for low mass compound detection, but they can be used for imaging as well. Conventional matrix application methods can obstruct samples, such as fingerprints, rendering them useless after mass analysis. The ability to image latent fingerprints without causing damage to the ridge pattern is important as it allows for further characterization of the print. The application of silver clusters by soft-landing ion mobility allows for enhanced MALDI and preservation of fingerprint integrity.

  14. Advanced Technologies for Touchless Fingerprint Recognition

    NASA Astrophysics Data System (ADS)

    Parziale, Giuseppe; Chen, Yi

    A fingerprint capture consists of touching or rolling a finger onto a rigid sensing surface. During this act, the elastic skin of the finger deforms. The quantity and direction of the pressure applied by the user, the skin conditions, and the projection of an irregular 3D object (the finger) onto a 2D flat plane introduce distortions, noise, and inconsistencies in the captured fingerprint image. Due to these negative effects, the representation of the same fingerprint changes every time the finger is placed on the sensor platen, increasing the complexity of fingerprint matching and negatively influencing system performance. Recently, a new approach to capturing fingerprints has been proposed. This approach, referred to as touchless or contactless fingerprinting, tries to overcome the above-cited problems. Because of the lack of contact between the finger and any rigid surface, the skin does not deform during the capture and the repeatability of the measurement is largely ensured. However, this technology introduces new challenges. Finger positioning, illumination, image contrast adjustment, data format compatibility, and user convenience are key in the design and development of touchless fingerprint systems. In addition, the vulnerability of some touchless fingerprint systems to spoofing attacks must be addressed.

  15. Sensor noise camera identification: countering counter-forensics

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Chen, Mo

    2010-01-01

    In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim, but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
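
    The underlying fingerprint machinery can be sketched in toy form: estimate a PRNU-like pattern by averaging noise residuals, then correlate a query image's residual against it. Everything below is simulated; a Gaussian blur stands in for a proper denoising filter, and no part of the triangle test itself is implemented.

        # Toy PRNU-style sensor fingerprint estimation and detection.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(9)
        prnu = 0.02 * rng.standard_normal((64, 64))       # hidden sensor pattern

        def shoot(scene):                                  # camera: scene*(1+K)+noise
            return scene * (1 + prnu) + 0.01 * rng.standard_normal((64, 64))

        def residual(img):
            return img - gaussian_filter(img, 2)           # crude noise residual

        # Fingerprint estimate from several flat-field shots.
        K_hat = np.mean([residual(shoot(np.ones((64, 64)))) for _ in range(20)], axis=0)

        query = shoot(rng.random((64, 64)))
        rho = np.corrcoef(K_hat.ravel(), residual(query).ravel())[0, 1]
        print("correlation with fingerprint: %.3f" % rho)  # high => same "camera"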

  16. Detection of illicit substances in fingerprints by infrared spectral imaging.

    PubMed

    Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian

    2009-08-01

    FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
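
    The best-performing combination reported above, SAM on second-derivative spectra, reduces to a few lines. The sketch below (Python/NumPy, synthetic spectra) computes the spectral angle between a reference band and a noisy pixel spectrum after second differentiation; a small angle indicates a match.

        # Spectral angle mapper (SAM) on second-derivative spectra.
        import numpy as np

        def sam(a, b):
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos, -1.0, 1.0))      # small angle = match

        def second_derivative(spectrum):
            return np.gradient(np.gradient(spectrum))

        x = np.linspace(0, 1, 200)
        reference = np.exp(-(x - 0.4)**2 / 0.002)          # synthetic reference band
        pixel = 0.7 * reference + 0.05 * np.random.default_rng(10).standard_normal(200)

        angle = sam(second_derivative(reference), second_derivative(pixel))
        print("SAM angle: %.3f rad" % angle)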

  17. Optical Methods in Fingerprint Imaging for Medical and Personality Applications

    PubMed Central

    Wang, Jing-Wein; Lin, Ming-Hsun; Chang, Yao-Lang; Kuo, Chia-Ming

    2017-01-01

    Over the years, analysis and induction of personality traits has been a topic for individual subjective conjecture or speculation, rather than a focus of inductive scientific analysis. This study proposes a novel framework for analysis and induction of personality traits. First, 14 personality constructs based on the “Big Five” personality factors were developed. Next, a new fingerprint image algorithm was used for classification, and the fingerprints were classified into eight types. The relationship between personality traits and fingerprint type was derived from the results of the questionnaire survey. After comparison of pre-test and post-test results, this study determined the induction ability of personality traits from fingerprint type. Experimental results showed that the left/right thumbprint type of a majority of subjects was left loop/right loop and that the personalities of individuals with this fingerprint type were moderate with no significant differences in the 14 personality constructs. PMID:29065556

  18. Optical Methods in Fingerprint Imaging for Medical and Personality Applications.

    PubMed

    Wang, Chia-Nan; Wang, Jing-Wein; Lin, Ming-Hsun; Chang, Yao-Lang; Kuo, Chia-Ming

    2017-10-23

    Over the years, analysis and induction of personality traits has been a topic for individual subjective conjecture or speculation, rather than a focus of inductive scientific analysis. This study proposes a novel framework for analysis and induction of personality traits. First, 14 personality constructs based on the "Big Five" personality factors were developed. Next, a new fingerprint image algorithm was used for classification, and the fingerprints were classified into eight types. The relationship between personality traits and fingerprint type was derived from the results of the questionnaire survey. After comparison of pre-test and post-test results, this study determined the induction ability of personality traits from fingerprint type. Experimental results showed that the left/right thumbprint type of a majority of subjects was left loop/right loop and that the personalities of individuals with this fingerprint type were moderate with no significant differences in the 14 personality constructs.

  19. A preliminary study of DTI Fingerprinting on stroke analysis.

    PubMed

    Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, quantitative measurement of the physiological variation among subtypes of ischemic stroke is not available. An automatic quantitative method for DTI analysis would enhance the application of DTI in clinics. In this study, we propose a DTI Fingerprinting technology to quantitatively analyze white matter tissue, applied here to stroke classification. The TBSS (Tract Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, lesion ROIs (Regions of Interest) were manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. They indicate that DTI Fingerprinting can identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of DTI data. Further development should be carried out to improve DTI Fingerprinting technology for clinical use.

  20. Influence of Skin Diseases on Fingerprint Recognition

    PubMed Central

    Drahansky, Martin; Dolezel, Michal; Urbanek, Jaroslav; Brezinova, Eva; Kim, Tai-hoon

    2012-01-01

    There are many people who suffer from skin diseases. These diseases have a strong influence on the process of fingerprint recognition. People with fingerprint diseases are often unable to use fingerprint scanners, which discriminates against them, since they cannot use their fingerprints for authentication purposes. In this paper, the various diseases which might influence the functionality of fingerprint-based systems are first introduced, mainly from the medical point of view. This overview is followed by examples of fingerprints from diseased fingers, acquired both from dactyloscopic cards and electronic sensors. At the end of the paper, the proposed fingerprint image enhancement algorithm is described. PMID:22654483

  1. Influence of skin diseases on fingerprint recognition.

    PubMed

    Drahansky, Martin; Dolezel, Michal; Urbanek, Jaroslav; Brezinova, Eva; Kim, Tai-hoon

    2012-01-01

    There are many people who suffer from skin diseases. These diseases have a strong influence on the process of fingerprint recognition. People with fingerprint diseases are often unable to use fingerprint scanners, which discriminates against them, since they cannot use their fingerprints for authentication purposes. In this paper, the various diseases which might influence the functionality of fingerprint-based systems are first introduced, mainly from the medical point of view. This overview is followed by examples of fingerprints from diseased fingers, acquired both from dactyloscopic cards and electronic sensors. At the end of the paper, the proposed fingerprint image enhancement algorithm is described.

  2. General fusion approaches for the age determination of latent fingerprint traces: results for 2D and 3D binary pixel feature fusion

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-03-01

    Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving it would provide criminal investigators with the specific time a fingerprint trace was left on a surface, enabling them to link potential suspects to the time a crime took place, to reconstruct the sequence of events, or to eliminate irrelevant fingerprints to satisfy privacy constraints. Transferring imaging techniques from other application areas, such as 3D image acquisition, surface measurement, and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an emerging trend in forensics. Such non-destructive sensor devices might help solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them with pattern recognition techniques and statistical methods on digitized 2D, 3D, and chemical data, rather than with classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we suggested a feature called binary pixel, a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time and exhibits a characteristic logarithmic aging tendency for both 2D-intensity and 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography, and spectroscopy to increase the accuracy and reliability of a potential future age determination scheme. Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach that might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine-learning-based classification approach seems feasible if an adequate number of features can be provided.
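
    The reported logarithmic aging tendency can be illustrated with a short curve-fitting sketch. The functional form f(t) = a*ln(t) + b and the synthetic time series below are assumptions consistent with the abstract, not the authors' data or code.

    ```python
    # Fitting a logarithmic aging curve f(t) = a*ln(t) + b to a binary-pixel
    # feature time series, sketching the aging tendency described above.
    import numpy as np

    t = np.array([1.0, 2, 4, 8, 16, 32, 64])                    # scan times (hours, illustrative)
    f = np.array([0.90, 0.77, 0.66, 0.52, 0.41, 0.30, 0.19])    # feature values (synthetic)

    # Linear least squares in ln(t): f ~ a*ln(t) + b
    a, b = np.polyfit(np.log(t), f, deg=1)
    print(f"aging slope a={a:.3f}, offset b={b:.3f}")

    # Inverting the fitted curve would give a crude formula-based age estimate;
    # the paper argues such estimates need more precise data than currently
    # available, which favors a machine-learning classification approach.
    ```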

  3. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones uniformly rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing cost, but also provides only static identification and limited authentication. Hence, as the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device is embedded in a display pixel and detects the light reflected from the finger touch, enabling high-resolution, high-fidelity, and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.

  4. Detection of visible and latent fingerprints using micro-X-ray fluorescence elemental imaging.

    PubMed

    Worley, Christopher G; Wiltshire, Sara S; Miller, Thomasin C; Havrilla, George J; Majidi, Vahid

    2006-01-01

    Using micro-X-ray fluorescence (MXRF), a novel means of detecting fingerprints was examined in which the prints were imaged based on their elemental composition. MXRF is a nondestructive technique. Although this method requires a priori knowledge about the approximate location of a print, it offers a new and complementary means for detecting fingerprints that are also left pristine for further analysis (including potential DNA extraction) or archiving purposes. Sebaceous fingerprints and those made after perspiring were detected based on elements such as potassium and chlorine present in the print residue. Unique prints were also detected including those containing lotion, saliva, banana, or sunscreen. This proof-of-concept study demonstrates the potential for visualizing fingerprints by MXRF on surfaces that can be problematic using current methods.

  5. Simple multispectral imaging approach for determining the transfer of explosive residues in consecutive fingerprints.

    PubMed

    Lees, Heidi; Zapata, Félix; Vaher, Merike; García-Ruiz, Carmen

    2018-07-01

    This novel investigation focused on the transfer of explosive residues (TNT, HMTD, PETN, ANFO, dynamite, black powder, NH4NO3, KNO3, NaClO3) in ten consecutive fingerprints to two different surfaces, cotton fabric and polycarbonate plastic, using multispectral imaging (MSI). Imaging was performed with a reflex camera in a purpose-built photo studio. Images were processed in MATLAB to select the most discriminating frame, the one providing the sharpest contrast between the explosive and the material in the red-green-blue (RGB) visible region. The amount of explosive residue transferred in each fingerprint was determined as the number of pixels containing explosive particles. First, the pattern of PETN transfer by ten different persons in successive fingerprints was studied. No significant differences in the pattern of PETN transfer between subjects were observed, which was confirmed by multivariate analysis of variance (MANOVA). Then, the transfer of traces of the nine explosives above in ten consecutive fingerprints to cotton fabric and polycarbonate plastic was investigated. The results demonstrated that the amount of explosive residue deposited in successive fingerprints tended to undergo a power-law or exponential decrease, with the exception of the inorganic salts (NH4NO3, KNO3, NaClO3) and ANFO (which consists of 90% NH4NO3). Copyright © 2018 Elsevier B.V. All rights reserved.
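
    The pixel-count quantification and the reported exponential decrease can be sketched as follows; the binary masks and the decay rate below are synthetic stand-ins, not the study's data.

    ```python
    # Counting explosive-particle pixels in consecutive fingerprints and
    # fitting an exponential decay, mirroring the transfer trend reported.
    import numpy as np

    # masks: one boolean image per consecutive fingerprint (synthetic here)
    rng = np.random.default_rng(1)
    masks = [rng.random((100, 100)) < 0.30 * np.exp(-0.4 * k) for k in range(10)]

    counts = np.array([m.sum() for m in masks], dtype=float)
    n = np.arange(1, len(counts) + 1)

    # Exponential model counts ~ c0 * exp(-r * n), fitted in log space.
    slope, log_c0 = np.polyfit(n, np.log(counts), deg=1)
    print(f"decay rate per print: {-slope:.2f}")
    ```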

  6. Filter Design and Performance Evaluation for Fingerprint Image Segmentation

    PubMed Central

    Thai, Duy Hoang; Huckemann, Stephan; Gottschlich, Carsten

    2016-01-01

    Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: ‘true’ foreground can be labeled as background and features like minutiae can be lost, or conversely ‘true’ background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available. PMID:27171150
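
    The soft-thresholding that the FDB method interweaves with directional bandpass filtering is the standard shrinkage operator; a minimal sketch follows (the DHBB filter bank itself is not reproduced here, and the threshold is illustrative).

    ```python
    # Soft-thresholding of filter responses: the shrinkage step interwoven
    # with directional bandpass filtering in the FDB segmentation method.
    import numpy as np

    def soft_threshold(x: np.ndarray, thr: float) -> np.ndarray:
        """Shrink coefficients toward zero: sign(x) * max(|x| - thr, 0)."""
        return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

    responses = np.array([-2.5, -0.3, 0.1, 0.8, 3.0])
    print(soft_threshold(responses, thr=0.5))  # small responses shrink to zero
    ```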

  7. Study on internal to surface fingerprint correlation using optical coherence tomography and internal fingerprint extraction

    NASA Astrophysics Data System (ADS)

    Darlow, Luke Nicholas; Connan, James

    2015-11-01

    Surface fingerprint scanners are limited to a two-dimensional representation of the fingerprint topography and are thus vulnerable to fingerprint damage, distortion, and counterfeiting. Optical coherence tomography (OCT) scanners are able to image, in three dimensions, the internal structure of the fingertip skin. Techniques for obtaining the internal fingerprint from OCT scans have since been developed. This research presents an internal fingerprint extraction algorithm designed to extract high-quality internal fingerprints from touchless OCT fingertip scans. Furthermore, it serves as a correlation study between surface and internal fingerprints. Provided the scanned region contains sufficient fingerprint information, correlation with the surface topography is shown to be good (74% have true matches). The cross-correlation of internal fingerprints (96% have true matches) is high enough that internal fingerprints can constitute a fingerprint database in their own right. The internal fingerprints' performance was also compared with that of cropped surface counterparts, to eliminate bias owing to the level of information present, showing that the internal fingerprints' performance is superior 63.6% of the time.

  8. Recent advances in photoluminescence detection of fingerprints.

    PubMed

    Menzel, E R

    2001-10-02

    Photoluminescence detection of latent fingerprints has over the last quarter century brought about a new level of fingerprint detection sensitivity. The current state of the art is briefly reviewed to set the stage for upcoming new fingerprint processing strategies. These are designed for suppression of background fluorescence from articles holding latent prints, an often serious problem. The suppression of the background involves time-resolved imaging, which is dealt with from the perspective of instrumentation as well as the design of fingerprint treatment strategies. These focus on lanthanide chelates, nanocrystals, and nanocomposites functionalized to label fingerprints.

  9. Nucleic-acid-programmed Ag-nanoclusters as a generic platform for visualization of latent fingerprints and exogenous substances.

    PubMed

    Ran, Xiang; Wang, Zhenzhen; Zhang, Zhijun; Pu, Fang; Ren, Jinsong; Qu, Xiaogang

    2016-01-11

    We present a nucleic-acid-controlled AgNC platform for latent fingerprint visualization. The versatile emission of aptamer-modified AgNCs was regulated by the nearby DNA regions. Multi-color images for simultaneous visualization of fingerprints and exogenous components were successfully obtained. A quantitative detection strategy for exogenous substances in fingerprints was also established.

  10. Capturing latent fingerprints from metallic painted surfaces using UV-VIS spectroscope

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Scheidat, Tobias; Vielhauer, Claus

    2015-03-01

    In digital crime scene forensics, contactless non-destructive detection and acquisition of latent fingerprints by means of optical devices, such as a high-resolution digital camera, confocal microscope, or chromatic white-light sensor, is the initial step prior to destructive chemical development. The applicability of an optical sensor for digitizing latent fingerprints primarily depends on the reflection properties of the substrate. Metallic painted surfaces, for instance, pose a problem for conventional sensors that make use of visible light. Since metallic paint is a semi-transparent layer on top of the surface, visible light penetrates it and is reflected off the metallic flakes randomly dispersed in the paint. Fingerprint residues do not impede the light beams, making the ridges invisible. Latent fingerprints can be revealed, however, using ultraviolet light, which does not penetrate the paint. We apply a UV-VIS spectroscope capable of capturing images within the range from 163 to 844 nm using 2048 discrete levels. We empirically show that latent fingerprints left behind on metallic painted surfaces become clearly visible within the range from 205 to 385 nm. Our proposed streakiness score feature, which determines the proportion of ridge-valley pattern in an image, is applied for the automatic assessment of a fingerprint's visibility and for distinguishing between fingerprint and empty regions. The experiments are carried out with 100 fingerprint and 100 non-fingerprint samples.
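
    The abstract does not define the streakiness score precisely; the sketch below is a hypothetical frequency-domain proxy that measures the share of Fourier energy in an assumed ridge-frequency band, in the same spirit of scoring how ridge-like a region is.

    ```python
    # A hypothetical proxy for a streakiness score: the fraction of spectral
    # energy in the typical ridge-frequency band of a 500 dpi print
    # (roughly 1/12 to 1/5 cycles per pixel). Band limits are assumptions.
    import numpy as np

    def streakiness(patch: np.ndarray, f_lo=1/12, f_hi=1/5) -> float:
        spec = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
        h, w = patch.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        band = (np.hypot(fy, fx) >= f_lo) & (np.hypot(fy, fx) <= f_hi)
        return spec[band].sum() / spec.sum()

    # Higher values suggest a ridge-valley pattern; thresholding the score
    # would separate fingerprint regions from empty background.
    ```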

  11. Privacy protection schemes for fingerprint recognition systems

    NASA Astrophysics Data System (ADS)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

    The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.
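
    As an illustration of the cancelable-template idea reviewed above, here is a minimal sketch using a keyed random projection, one common transformation family; the reviewed schemes are not limited to this construction, and the dimensions are arbitrary.

    ```python
    # A cancelable-template sketch: a keyed random projection of a fixed-length
    # fingerprint feature vector. Re-issuing a template = choosing a new key.
    import numpy as np

    def make_template(features: np.ndarray, key: int, out_dim: int = 64) -> np.ndarray:
        rng = np.random.default_rng(key)            # key = per-application secret
        proj = rng.standard_normal((out_dim, features.size))
        return proj @ features                      # many-to-one for out_dim < input dim

    feat = np.random.default_rng(7).standard_normal(128)   # stand-in feature vector
    t_app_a = make_template(feat, key=1234)
    t_app_b = make_template(feat, key=5678)         # different app, unlinkable template

    # Matching happens in the transformed domain (cosine similarity here),
    # so the original biometric never needs to be stored.
    probe = feat + 0.05 * np.random.default_rng(8).standard_normal(128)
    pt = make_template(probe, key=1234)
    score = pt @ t_app_a / (np.linalg.norm(pt) * np.linalg.norm(t_app_a))
    print(f"match score: {score:.3f}")
    ```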

  12. 3D matching techniques using OCT fingerprint point clouds

    NASA Astrophysics Data System (ADS)

    Gutierrez da Costa, Henrique S.; Silva, Luciano; Bellon, Olga R. P.; Bowden, Audrey K.; Czovny, Raphael K.

    2017-02-01

    Optical Coherence Tomography (OCT) makes viable acquisition of 3D fingerprints from both dermis and epidermis skin layers and their interfaces, exposing features that can be explored to improve biometric identification such as the curvatures and distinctive 3D regions. Scanned images from eleven volunteers allowed the construction of the first OCT 3D fingerprint database, to our knowledge, containing epidermal and dermal fingerprints. 3D dermal fingerprints can be used to overcome cases of Failure to Enroll (FTE) due to poor ridge image quality and skin alterations, cases that affect 2D matching performance. We evaluate three matching techniques, including the well-established Iterative Closest Points algorithm (ICP), Surface Interpenetration Measure (SIM) and the well-known KH Curvature Maps, all assessed using a 3D OCT fingerprint database, the first one for this purpose. Two of these techniques are based on registration techniques and one on curvatures. These were evaluated, compared and the fusion of matching scores assessed. We applied a sequence of steps to extract regions of interest named (ROI) minutiae clouds, representing small regions around distinctive minutia, usually located at ridges/valleys endings or bifurcations. The obtained ROI is acquired from the epidermis and dermis-epidermis interface by OCT imaging. A comparative analysis of identification accuracy was explored using different scenarios and the obtained results shows improvements for biometric identification. A comparison against 2D fingerprint matching algorithms is also presented to assess the improvements.

  13. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted to different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality, and consistency. The proposed system extracts basic features and generates next-level features applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Meanwhile, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632
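
    One of the base features mentioned, orientation certainty, is commonly computed from the eigenvalues of a block's gradient covariance matrix. A minimal sketch of that classical formulation follows; the paper's modified and combined features are not reproduced.

    ```python
    # Orientation certainty level (OCL) of an image block, computed from the
    # eigenvalues of the gradient covariance matrix: (l1 - l2) / (l1 + l2).
    import numpy as np

    def ocl(block: np.ndarray) -> float:
        gy, gx = np.gradient(block.astype(float))
        cov = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                        [np.mean(gx * gy), np.mean(gy * gy)]])
        l1, l2 = sorted(np.linalg.eigvalsh(cov), reverse=True)
        return (l1 - l2) / (l1 + l2) if (l1 + l2) > 0 else 0.0

    # A strongly oriented ridge-like block scores near 1, noise near 0;
    # such block scores would then feed a classifier such as an SVM.
    x = np.linspace(0, 6 * np.pi, 32)
    ridges = np.sin(x)[None, :].repeat(32, axis=0)   # synthetic ridge pattern
    noise = np.random.default_rng(0).random((32, 32))
    print(f"ridge block OCL: {ocl(ridges):.2f}, noise OCL: {ocl(noise):.2f}")
    ```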

  14. A novel hand-type detection technique with fingerprint sensor

    NASA Astrophysics Data System (ADS)

    Abe, Narishige; Shinzaki, Takashi

    2013-05-01

    In large-scale biometric authentication systems such as US-VISIT (USA), a ten-fingerprint scanner that simultaneously captures four fingerprints is used. In traditional systems, specific hand-types (left or right) are indicated, but it is difficult to detect the hand-type due to hand rotation and the opening and closing of fingers. In this paper, we evaluated features extracted from hand images (captured by a general optical scanner) that are considered effective for detecting hand-type. Furthermore, we extended this knowledge to real fingerprint images and evaluated the accuracy of hand-type detection. We obtained an accuracy of about 80% with only three fingers (index, middle, and ring finger).

  15. Nanoplasmonic imaging of latent fingerprints with explosive RDX residues.

    PubMed

    Peng, Tianhuan; Qin, Weiwei; Wang, Kun; Shi, Jiye; Fan, Chunhai; Li, Di

    2015-09-15

    Explosive detection is a critical element in preventing terrorist attacks, especially in crowded and high-profile areas. It is arguably even more important to establish a connection between an explosive load and a carrier's personal identity. In the present work, we introduce fingerprinting as physical personal identification and develop a nondestructive nanoplasmonic method for the imaging of latent fingerprints. We further integrate the nanoplasmonic response of the catalytic growth of Au NPs with the NADH-mediated reduction of 1,3,5-trinitro-1,3,5-triazinane (RDX) for the quantitative analysis of RDX explosive residues in latent fingerprints. This generic nanoplasmonic strategy is expected to be used in forensic investigations to identify those who carry explosives.

  16. Magnetic Resonance Fingerprinting - a promising new approach to obtain standardized imaging biomarkers from MRI.

    PubMed

    2015-04-01

    Current routine MRI examinations rely on the acquisition of qualitative images that have a contrast "weighted" for a mixture of (magnetic) tissue properties. Recently, a novel method was introduced, MR Fingerprinting (MRF), with a completely different approach to data acquisition, post-processing, and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution, or 'fingerprint', that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern recognition algorithm that matches the fingerprints to a predefined dictionary of predicted signal evolutions. These can then be translated into quantitative maps of the magnetic parameters of interest. MRF could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points: • MR Fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density, and diffusion. • MRF may offer multiparametric imaging with high reproducibility and high potential for multicenter/multivendor studies.
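
    The dictionary-matching step can be illustrated compactly. A minimal sketch follows, assuming normalized inner-product matching (the commonly used rule) and a synthetic dictionary; actual MRF dictionaries are simulated from the Bloch equations.

    ```python
    # MRF pattern-matching sketch: find the dictionary entry whose simulated
    # signal evolution best matches a measured voxel signal (max normalized
    # inner product), then report that entry's (T1, T2) parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    n_entries, n_timepoints = 1000, 500
    dictionary = rng.standard_normal((n_entries, n_timepoints))   # stand-in evolutions
    params = rng.uniform([200, 20], [2000, 300], size=(n_entries, 2))  # (T1, T2) in ms

    # Normalize rows so matching reduces to a dot product.
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    measured = dictionary[42] + 0.1 * rng.standard_normal(n_timepoints)  # noisy voxel
    measured /= np.linalg.norm(measured)

    best = np.argmax(dictionary @ measured)
    print(f"matched entry {best}: T1={params[best, 0]:.0f} ms, T2={params[best, 1]:.0f} ms")
    ```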

  17. In vivo metabolic fingerprinting of neutral lipids with hyperspectral stimulated Raman scattering microscopy.

    PubMed

    Fu, Dan; Yu, Yong; Folick, Andrew; Currie, Erin; Farese, Robert V; Tsai, Tsung-Huang; Xie, Xiaoliang Sunney; Wang, Meng C

    2014-06-18

    Metabolic fingerprinting provides valuable information on the physiopathological states of cells and tissues. Traditional imaging mass spectrometry and magnetic resonance imaging are unable to probe the spatial-temporal dynamics of metabolites at the subcellular level due to either lack of spatial resolution or inability to perform live cell imaging. Here we report a complementary metabolic imaging technique that is based on hyperspectral stimulated Raman scattering (hsSRS). We demonstrated the use of hsSRS imaging in quantifying two major neutral lipids: cholesteryl ester and triacylglycerol in cells and tissues. Our imaging results revealed previously unknown changes of lipid composition associated with obesity and steatohepatitis. We further used stable-isotope labeling to trace the metabolic dynamics of fatty acids in live cells and live Caenorhabditis elegans with hsSRS imaging. We found that unsaturated fatty acid has preferential uptake into lipid storage while saturated fatty acid exhibits toxicity in hepatic cells. Simultaneous metabolic fingerprinting of deuterium-labeled saturated and unsaturated fatty acids in living C. elegans revealed that there is a lack of interaction between the two, unlike previously hypothesized. Our findings provide new approaches for metabolic tracing of neutral lipids and their precursors in living cells and organisms, and could potentially serve as a general approach for metabolic fingerprinting of other metabolites.

  18. Non-invasive detection of superimposed latent fingerprints and inter-ridge trace evidence by infrared spectroscopic imaging.

    PubMed

    Bhargava, Rohit; Perlman, Rebecca Schwartz; Fernandez, Daniel C; Levin, Ira W; Bartick, Edward G

    2009-08-01

    Current latent print and trace evidence collection technologies are usually invasive and can be destructive to the original deposits. We describe a non-invasive vibrational spectroscopic approach that resolves latent fingerprints that are overlaid on top of one another or that may contain trace evidence that needs to be distinguished from the print. Because of the variation in the distribution of chemical composition within the fingerprint, we demonstrate that linear unmixing applied to the spectral content of the data can be used to provide images that reveal superimposed fingerprints. In addition, we demonstrate that the chemical composition of trace evidence located in the region of the print can potentially be identified by its infrared spectrum. Thus, trace evidence found at a crime scene that previously could not be directly related to an individual now has the potential to be directly related by its presence in individual-identifying fingerprints.
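
    Linear unmixing of this kind models each pixel spectrum as a nonnegative mixture of component spectra. A minimal sketch with nonnegative least squares on synthetic spectra follows; the paper's actual endmembers and solver are not specified in the abstract.

    ```python
    # Per-pixel linear unmixing sketch: solve spectrum ~ E @ a with a >= 0,
    # where the columns of E are component spectra (print A, print B, substrate).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_bands = 200
    E = np.abs(rng.standard_normal((n_bands, 3)))     # synthetic endmember spectra

    true_abund = np.array([0.7, 0.3, 0.1])            # overlapping prints at this pixel
    pixel = E @ true_abund + 0.01 * rng.standard_normal(n_bands)

    abund, residual = nnls(E, pixel)
    print(f"recovered abundances: {np.round(abund, 2)}")
    # Mapping abund[0] and abund[1] over all pixels yields separate images
    # of the two superimposed fingerprints.
    ```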

  19. Ultrafast fingerprint indexing for embedded systems

    NASA Astrophysics Data System (ADS)

    Zhou, Ru; Sin, Sang Woo; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki

    2011-10-01

    A novel core-based fingerprint indexing scheme for embedded systems is presented in this paper. Our approach is enabled by our new precise and fast core-detection algorithm using the direction map. It introduces the CMP (core minutiae pair) feature, which describes the coordinates of minutiae and the direction of the ridges associated with them in the uniquely defined core coordinate frame. Since each CMP is invariant to shift and rotation of the fingerprint image, the CMP comparison between a template and an input image can be performed without any alignment. The proposed CMP-based indexing algorithm is suitable for embedded systems because it achieves a tremendous speed-up and memory reduction. In fact, experiments with the fingerprint database FVC2002 show that identification becomes about 40 times faster than with conventional approaches, even though the database includes fingerprints with no core.
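
    The shift/rotation invariance of the CMP feature comes from expressing each minutia in a core-anchored frame. A minimal sketch of that normalization follows, with illustrative field names; the paper's exact encoding is not reproduced.

    ```python
    # Shift/rotation-invariant minutia encoding relative to a detected core,
    # in the spirit of the core minutiae pair (CMP) feature described above.
    import math

    def to_core_frame(mx, my, mtheta, cx, cy, ctheta):
        """Express a minutia (position mx,my and ridge direction mtheta)
        in the core's frame (core at cx,cy with reference direction ctheta)."""
        dx, dy = mx - cx, my - cy
        r = math.hypot(dx, dy)                       # invariant distance
        phi = math.atan2(dy, dx) - ctheta            # angle relative to core axis
        theta = mtheta - ctheta                      # ridge direction, de-rotated
        return r, phi % (2 * math.pi), theta % (2 * math.pi)

    # Two scans of the same finger, shifted and rotated, yield (almost) the
    # same (r, phi, theta) triples, so templates compare without alignment.
    print(to_core_frame(120, 80, 1.0, 100, 100, 0.3))
    ```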

  20. Performance characterization of structured light-based fingerprint scanner

    NASA Astrophysics Data System (ADS)

    Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.

    2013-05-01

    Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.

  1. Fingerprint Change: Not Visible, But Tangible.

    PubMed

    Negri, Francesca V; De Giorgi, Annamaria; Bozzetti, Cecilia; Squadrilli, Anna; Petronini, Pier Giorgio; Leonardi, Francesco; Bisogno, Luigi; Garofano, Luciano

    2017-09-01

    Hand-foot syndrome, a chemotherapy-induced cutaneous toxicity, can cause an alteration in fingerprints causing a setback for cancer patients due to the occurrence of false rejections. A colon cancer patient was fingerprinted after not having been able to use fingerprint recognition devices after 6 months of adjuvant chemotherapy. The fingerprint images were digitally processed to improve fingerprint definition without altering the papillary design. No evidence of skin toxicity was present. Two months later, the situation returned to normal. The fingerprint evaluation conducted on 15 identification points highlighted the quantitative and qualitative fingerprint alteration details detected after the end of chemotherapy and 2 months later. Fingerprint alteration during chemotherapy has been reported, but to our knowledge, this particular case is the first ever reported without evident clinical signs. Alternative fingerprint identification methods as well as improved biometric identification systems are needed in case of unexpected situations. © 2017 American Academy of Forensic Sciences.

  2. An effective one-dimensional anisotropic fingerprint enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Ye, Zhendong; Xie, Mei

    2012-01-01

    Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the fingerprint orientation at each pixel. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well while requiring less time.
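
    The structure-tensor orientation estimation mentioned above has a standard closed form. A minimal blockwise sketch follows; the block size and the pi/2 ridge-direction offset are the usual conventions, assumed here rather than taken from the paper.

    ```python
    # Blockwise ridge-orientation estimation via the structure tensor:
    # theta = 0.5 * atan2(2*<GxGy>, <Gx^2> - <Gy^2>), offset by pi/2 so the
    # angle refers to the ridge direction rather than the gradient direction.
    import numpy as np

    def block_orientation(block: np.ndarray) -> float:
        gy, gx = np.gradient(block.astype(float))
        gxx, gyy, gxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
        return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2

    x = np.linspace(0, 4 * np.pi, 16)
    vertical_ridges = np.sin(x)[None, :].repeat(16, axis=0)
    print(f"estimated orientation: {block_orientation(vertical_ridges):.2f} rad")
    ```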

  3. An effective one-dimensional anisotropic fingerprint enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Ye, Zhendong; Xie, Mei

    2011-12-01

    Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the fingerprint orientation at each pixel. Finally, we propose a novel algorithm that combines the advantages of the one-dimensional Gabor filtering method and the anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well while requiring less time.

  4. Piezoelectric micromachined ultrasonic transducers for fingerprint sensing

    NASA Astrophysics Data System (ADS)

    Lu, Yipeng

    Fingerprint identification is the most prevalent biometric technology due to its uniqueness, universality and convenience. Over the past two decades, a variety of physical mechanisms have been exploited to capture an electronic image of a human fingerprint. Among these, capacitive fingerprint sensors are the ones most widely used in consumer electronics because they are fabricated using conventional complementary metal oxide semiconductor (CMOS) integrated circuit technology. However, capacitive fingerprint sensors are extremely sensitive to finger contamination and moisture. This thesis will introduce an ultrasonic fingerprint sensor using a PMUT array, which offers a potential solution to this problem. In addition, it has the potential to increase security, as it allows images to be collected at various depths beneath the epidermis, providing images of the sub-surface dermis layer and blood vessels. Firstly, PMUT sensitivity is maximized by optimizing the layer stack and electrode design, and the coupling coefficient is doubled via series transduction. Moreover, a broadband PMUT with 97% fractional bandwidth is achieved by utilizing a thinner structure excited at two adjacent mechanical vibration modes with overlapping bandwidth. In addition, we proposed waveguide PMUTs, which function to direct acoustic waves, confine acoustic energy, and provide mechanical protection for the PMUT array. Furthermore, PMUT arrays were fabricated with different processes to form the membrane, including front-side etching with a patterned sacrificial layer, front-side etching with additional anchor, cavity SOI wafers and eutectic bonding. Additionally, eutectic bonding allows the PMUT to be integrated with CMOS circuits. PMUTs were characterized in the mechanical, electrical and acoustic domains. Using transmit beamforming, a narrow acoustic beam was achieved, and high-resolution (sub-100 μm) and short-range (~1 mm) pulse-echo ultrasonic imaging was demonstrated using a steel phantom. Finally, a novel ultrasonic fingerprint sensor was demonstrated using a 24×8 array of 22 MHz PMUTs with 100 μm pitch, fully integrated with 180 nm CMOS circuitry through eutectic wafer bonding. Each PMUT is directly bonded to a dedicated CMOS receive amplifier, minimizing electrical parasitics and eliminating the need for through-silicon vias. Pulse-echo imaging of a 1D steel grating is demonstrated using electronic scanning of a 20×8 sub-array, resulting in 300 mV maximum received amplitude and 5:1 contrast ratio. Because the small size of this array limits the maximum image size, mechanical scanning was used to image a 2D PDMS fingerprint phantom (10 mm by 8 mm) at a 1.2 mm distance from the array.

  5. A review of state-of-the-art speckle reduction techniques for optical coherence tomography fingertip scans

    NASA Astrophysics Data System (ADS)

    Darlow, Luke N.; Akhoury, Sharat S.; Connan, James

    2015-02-01

    Standard surface fingerprint scanners are vulnerable to counterfeiting attacks and to failure due to skin damage and distortion. Thus, a high-security and damage-resistant means of fingerprint acquisition is needed, providing scope for new approaches and technologies. Optical Coherence Tomography (OCT) is a high-resolution imaging technology that can be used to image the human fingertip and allows for the extraction of a subsurface fingerprint. Being robust to spoofing and damage, the subsurface fingerprint is an attractive solution. However, the nature of the OCT scanning process induces speckle: a correlated and multiplicative noise. Six speckle-reducing filters for the digital enhancement of OCT fingertip scans were evaluated. The optimized Bayesian non-local means algorithm improved the structural similarity between processed and reference images by 34%, increased the signal-to-noise ratio, and yielded the most promising visual results. An adaptive wavelet approach, originally designed for ultrasound imaging, and a speckle-reducing anisotropic diffusion approach also yielded promising results. A reformulation of these in future work, with an OCT-specific speckle model, may improve their performance.
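
    As a stand-in illustration of non-local-means speckle filtering, here is a minimal sketch using scikit-image's generic NLM; this is not the optimized Bayesian NLM evaluated in the paper, and the multiplicative speckle model below is an assumption.

    ```python
    # Speckle reduction on a synthetic OCT-like image with non-local means,
    # scored with the structural similarity index (SSIM) as in the paper.
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(0)
    clean = np.tile(np.sin(np.linspace(0, 8 * np.pi, 128)), (128, 1)) * 0.5 + 0.5
    speckled = clean * rng.gamma(4.0, 0.25, clean.shape)   # multiplicative speckle

    sigma = float(np.mean(estimate_sigma(speckled)))
    denoised = denoise_nl_means(speckled, h=0.8 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6)

    rng_val = float(np.ptp(speckled))
    print(f"SSIM noisy:    {structural_similarity(clean, speckled, data_range=rng_val):.3f}")
    print(f"SSIM denoised: {structural_similarity(clean, denoised, data_range=rng_val):.3f}")
    ```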

  6. A joint FED watermarking system using spatial fusion for verifying the security issues of teleradiology.

    PubMed

    Viswanathan, P; Krishna, P Venkata

    2014-05-01

    Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. Remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, that is, a joint fingerprint/encryption/dual watermarking system, is proposed for addressing these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to provide access to the outcomes of medical images with confidentiality, availability, integrity, and provenance. Watermarking, encryption, and fingerprint enrollment are conducted jointly at the protection stage, so that extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the embedding region using a threshold derived from the image in which the encrypted patient data are embedded, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.

  7. A Fingerprint Encryption Scheme Based on Irreversible Function and Secure Authentication

    PubMed Central

    Yu, Jianping; Zhang, Peng; Wang, Shulan

    2015-01-01

    A fingerprint encryption scheme based on an irreversible function is designed in this paper. Since the fingerprint template includes almost all the information in a user's fingerprint, personal authentication can be determined by the fingerprint features alone. This paper proposes an irreversible transforming function (using an improved SHA1 algorithm) to transform the original minutiae extracted from the thinned fingerprint image. Then, the Chinese remainder theorem is used to obtain the biokey from the integration of the transformed minutiae and the private key. The results show that the scheme performs better in terms of security and efficiency compared with other irreversible-function schemes. PMID:25873989

  8. A fingerprint encryption scheme based on irreversible function and secure authentication.

    PubMed

    Yang, Yijun; Yu, Jianping; Zhang, Peng; Wang, Shulan

    2015-01-01

    A fingerprint encryption scheme based on an irreversible function is designed in this paper. Since the fingerprint template includes almost all the information in a user's fingerprint, personal authentication can be determined by the fingerprint features alone. This paper proposes an irreversible transforming function (using an improved SHA1 algorithm) to transform the original minutiae extracted from the thinned fingerprint image. Then, the Chinese remainder theorem is used to obtain the biokey from the integration of the transformed minutiae and the private key. The results show that the scheme performs better in terms of security and efficiency compared with other irreversible-function schemes.
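
    The biokey construction, hashing minutiae irreversibly and combining the result with a private key via the Chinese remainder theorem, can be sketched as follows. SHA-256 stands in for the paper's improved SHA-1, and the moduli and minutiae values are illustrative.

    ```python
    # Combining a hashed-minutiae value with a private key via the Chinese
    # remainder theorem, sketching the biokey construction described above.
    import hashlib

    def crt_pair(r1: int, m1: int, r2: int, m2: int) -> int:
        """Unique x mod m1*m2 with x = r1 (mod m1) and x = r2 (mod m2),
        for coprime m1, m2."""
        x = r1 + m1 * (((r2 - r1) * pow(m1, -1, m2)) % m2)
        return x % (m1 * m2)

    minutiae = [(103, 57, 12), (88, 140, 97)]            # (x, y, angle) examples
    digest = hashlib.sha256(repr(minutiae).encode()).digest()
    transformed = int.from_bytes(digest[:8], "big")      # irreversible minutiae value

    private_key = 0x1D2C3B4A
    M1, M2 = (1 << 61) - 1, (1 << 31) - 1                # coprime Mersenne primes

    biokey = crt_pair(transformed % M1, M1, private_key % M2, M2)
    print(hex(biokey))
    ```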

  9. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    PubMed

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image-capturing and processing devices and algorithms, and advances in the development of novel stationary phases and separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on HPTLC chromatograms of plant resins. Variables such as the gray intensities of pixels along the solvent front, peak areas, and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis, baseline removal, denoising, target peak alignment, and normalization, are pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and can be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. High resolution imaging of latent fingerprints by localized corrosion on brass surfaces.

    PubMed

    Goddard, Alex J; Hillman, A Robert; Bond, John W

    2010-01-01

    The Atomic Force Microscope (AFM) is capable of imaging fingerprint ridges on polished brass substrates at an unprecedented level of detail. While exposure to elevated humidity at ambient or slightly raised temperatures does not change the image appreciably, subsequent brief heating in a flame results in complete loss of the sweat deposit and the appearance of pits and trenches. Localized elemental analysis (using EDAX, coupled with SEM imaging) shows the presence of the constituents of salt in the initial deposits. Together with water and atmospheric oxygen, and with thermal enhancement, these are capable of driving a surface corrosion process. This process is sufficiently localized that it has the potential to generate a durable negative topographical image of the fingerprint. AFM examination of surface regions between ridges revealed small deposits (probably microscopic "spatter" of sweat components or transferred particulates) that may ultimately limit the level of ridge detail analysis.

  11. Electrochromic enhancement of latent fingerprints by poly(3,4-ethylenedioxythiophene).

    PubMed

    Brown, Rachel M; Hillman, A Robert

    2012-06-28

    Spatially selective electrodeposition of poly-3,4-ethylenedioxythiophene (PEDOT) thin films on metallic surfaces is shown to be an effective means of visualizing latent fingerprints. The technique exploits the fingerprint deposit as an insulating mask, such that electrochemical processes (here, polymer deposition) may only take place on deposit-free areas of the surface between the ridges of the fingerprint deposit; the end result is a negative image of the fingermark. Use of a surfactant (sodium dodecylsulphate, SDS) to solubilise the EDOT monomer allows the use of an aqueous electrolyte. Electrochemical (coulometric) data provide a total assay of deposited material, yielding spatially averaged film thicknesses, which are commensurate with substantive filling of the trenches between fingerprint deposit ridges, but not overfilling to the extent that the ridge detail is covered. This is confirmed by optical microscopy and AFM images, which show continuous polymer deposition within the trenches and good definition at the ridge edges. Stainless steel substrates treated in this manner and transferred to background electrolyte (aqueous sulphuric acid) showed enhanced fingerprints when the contrast between the polymer background and fingerprint deposit was optimised using the electrochromic properties of the PEDOT films. The facility of the method to reveal fingerprints of various ages and subjected to plausible environmental histories was demonstrated. Comparison of this enhancement methodology with commonly used fingerprint enhancement methods (dusting with powder, application of wet powder suspensions and cyanoacrylate fuming) showed promising performance in selected scenarios of practical interest.

  12. Known plaintext attack on double random phase encoding using fingerprint as key and a method for avoiding the attack.

    PubMed

    Tashima, Hideaki; Takeda, Masafumi; Suzuki, Hiroyuki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-06-21

    We have shown that the application of double random phase encoding (DRPE) to biometrics enables the use of biometrics as cipher keys for binary data encryption. However, DRPE is reported to be vulnerable to known-plaintext attacks (KPAs) using a phase recovery algorithm. In this study, we investigated the vulnerability of DRPE using fingerprints as cipher keys to the KPAs. By means of computational experiments, we estimated the encryption key and restored the fingerprint image using the estimated key. Further, we propose a method for avoiding the KPA on the DRPE that employs the phase retrieval algorithm. The proposed method makes the amplitude component of the encrypted image constant in order to prevent the amplitude component of the encrypted image from being used as a clue for phase retrieval. Computational experiments showed that the proposed method not only avoids revealing the cipher key and the fingerprint but also serves as a sufficiently accurate verification system.
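
    DRPE itself is compact enough to sketch: multiply the image by one random phase mask in the spatial domain and by a second in the Fourier domain. The sketch also illustrates the countermeasure idea of keeping only the phase so the ciphertext amplitude is constant; this is a schematic reading of the abstract, not the authors' code.

    ```python
    # Double random phase encoding (DRPE) of an image, plus the idea of
    # forcing the encrypted amplitude to be constant to resist KPAs.
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                           # stand-in plaintext image

    phase1 = np.exp(2j * np.pi * rng.random(img.shape))  # spatial-domain mask (key 1)
    phase2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-domain mask (key 2)

    encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

    # Countermeasure sketch: keep only the phase so the ciphertext amplitude
    # is constant and cannot seed a phase-retrieval (KPA) attack.
    encrypted_const_amp = np.exp(1j * np.angle(encrypted))

    # Decryption inverts the steps with the conjugate masks.
    decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
    print(f"max reconstruction error: {np.max(np.abs(decrypted - img)):.2e}")
    ```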

  13. Stand-off detection of explosive particles by imaging Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Nordberg, Markus; Åkeson, Madeleine; Östmark, Henric; Carlsson, Torgny E.

    2011-06-01

    A multispectral imaging technique has been developed to detect and identify explosive particles, e.g. from a fingerprint, at stand-off distances using Raman spectroscopy. When handling IEDs as well as other explosive devices, residues can easily be transferred via fingerprints onto other surfaces, e.g. car handles, gear sticks, and suitcases. By imaging the surface using the multispectral imaging Raman technique, the explosive particles can be identified and displayed using color-coding. The technique has been demonstrated by detecting fingerprints containing significant amounts of 2,4-dinitrotoluene (DNT), 2,4,6-trinitrotoluene (TNT), and ammonium nitrate at a distance of 12 m in less than 90 seconds (22 images × 4 seconds). For each measurement, a sequence of images, one image per wave number, is recorded. The spectral data from each pixel are compared with reference spectra of the substances to be detected. The pixels are marked with different colors corresponding to the substances detected in the fingerprint. The system has now been further developed to be less complex and thereby less sensitive to environmental effects such as temperature fluctuations. The optical resolution has been improved to better than 70 μm measured at a 546 nm wavelength. The total detection time ranges from less than one minute to around five minutes, depending on the size of the particles and on how confident the identification should be. The results indicate a great potential for multispectral imaging Raman spectroscopy as a stand-off technique for detection of single explosive particles.
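
    The per-pixel identification step, comparing each pixel's spectrum with reference spectra and color-coding matches, can be sketched with simple correlation matching; the spectra, threshold, and substance set below are synthetic assumptions.

    ```python
    # Per-pixel identification sketch: correlate each pixel's Raman spectrum
    # with reference spectra and label the pixel with the best-matching
    # substance when the correlation clears a threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands = 300
    refs = {name: rng.random(n_bands) for name in ("TNT", "DNT", "AN")}  # synthetic

    cube = rng.random((32, 32, n_bands))                # synthetic image stack
    cube[10:14, 10:14] = refs["TNT"] + 0.05 * rng.standard_normal(n_bands)

    def classify(spec, refs, thr=0.9):
        scores = {n: np.corrcoef(spec, r)[0, 1] for n, r in refs.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= thr else None    # None -> no color assigned

    labels = [[classify(cube[i, j], refs) for j in range(32)] for i in range(32)]
    print(labels[11][11], labels[0][0])                 # likely 'TNT' vs None
    ```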

  14. On relative distortion in fingerprint comparison.

    PubMed

    Kalka, Nathan D; Hicklin, R Austin

    2014-11-01

    When fingerprints are deposited, non-uniform pressure in conjunction with the inherent elasticity of friction ridge skin often causes linear and non-linear distortions in the ridge and valley structure. The effects of these distortions must be considered during analysis of fingerprint images. Even when individual prints are not notably distorted, relative distortion between two prints can have a serious impact on comparison. In this paper we discuss several metrics for quantifying and visualizing linear and non-linear fingerprint deformations, and software tools to assist examiners in accounting for distortion in fingerprint comparisons. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Compact touchless fingerprint reader based on digital variable-focus liquid lens

    NASA Astrophysics Data System (ADS)

    Tsai, C. W.; Wang, P. J.; Yeh, J. A.

    2014-09-01

    Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning. Simply placing a finger in front of the camera fulfills the fingerprint ID process. The prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 seconds. From the multiple images at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans the floating finger at preset focus planes. The distance, rolling angle, and pitching angle of the finger are stored as crucial parameters for the matching of fingerprint minutiae. This innovative compact touchless fingerprint reader could be packed into a minute size of 9.8 × 9.8 × 5 mm once the optical design and the multiple focus-plane scan function are optimized.

  16. The non-contact detection and identification of blood stained fingerprints using visible wavelength reflectance hyperspectral imaging: Part 1.

    PubMed

    Cadd, Samuel; Li, Bo; Beveridge, Peter; O'Hare, William T; Campbell, Andrew; Islam, Meez

    2016-05-01

    Blood is one of the most commonly encountered types of biological evidence found at scenes of violent crime and one of the most commonly observed fingerprint contaminants. Current visualisation methods rely on presumptive tests or chemical enhancement methods. Although these can successfully visualise ridge detail, they are destructive, do not confirm the presence of blood, and can have a negative impact on DNA sampling. A novel application of visible wavelength reflectance hyperspectral imaging (HSI) has been used for the detection and positive identification of blood-stained fingerprints in a non-contact and non-destructive manner on white ceramic tiles. The identification of blood was based on the unique visible absorption spectrum of haemoglobin between 400 and 500 nm. HSI has been used to successfully visualise ridge detail in blood-stained fingerprints to the ninth depletion. Ridge detail was still detectable in blood diluted up to 20-fold. Latent blood stains were detectable at up to 15,000-fold dilutions. Ridge detail was detectable for fingerprints up to 6 months old. HSI was also able to conclusively distinguish blood-stained fingerprints from fingerprints made in six paints and eleven other red/brown media, with zero false positives. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Can the RUVIS reflected UV imaging system visualize fingerprint corrosion on brass cartridge casings postfiring?

    PubMed

    Leintz, Rachel; Bond, John W

    2013-05-01

    Comparisons are made between the visualization of fingerprint corrosion ridge detail on fired brass cartridge casings, where fingerprint sweat was deposited prefiring, using both ultraviolet (UV) and visible (natural daylight) light sources. A reflected ultraviolet imaging system (RUVIS), normally used for visualizing latent fingerprint sweat deposits, is compared with optical interference and digital color mapping of visible light, the latter using apparatus constructed to easily enable selection of the optimum viewing angle. Results show that reflected UV, with a monochromatic UV source of 254 nm, was unable to visualize fingerprint ridge detail on any of 12 casings analyzed, whereas optical interference and digital color mapping using natural daylight yielded ridge detail on three casings. Reasons for the lack of success with RUVIS are discussed in terms of the variation in thickness of the thin film of metal oxide corrosion and absorption wavelengths for the corrosion products of brass. © 2013 American Academy of Forensic Sciences.

  18. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques in digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods in the literature; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  19. An investigation of fake fingerprint detection approaches

    NASA Astrophysics Data System (ADS)

    Ahmad, Asraful Syifaa'; Hassan, Rohayanti; Othman, Razib M.

    2017-10-01

    Fingerprint recognition, the most reliable biometric technology, is widely used for security due to its permanence and uniqueness. However, it is vulnerable to certain types of attacks, including the presentation of fake fingerprints to the sensor, which requires the development of new and efficient protection measures. The aim here is to identify the most recent literature related to fake fingerprint recognition, focusing only on software-based approaches. A systematic review was performed by analyzing 146 primary studies from a gross collection of 34 research papers to determine the taxonomy, approaches, public online databases, and limitations of fake fingerprints. Fourteen software-based approaches are briefly described, four limitations of fake fingerprint images are revealed, and two known fake fingerprint databases are briefly addressed in this review. This work thus provides an insight into the current understanding of fake fingerprint recognition and identifies future research possibilities.

  20. Transformation optics with windows

    NASA Astrophysics Data System (ADS)

    Oxburgh, Stephen; White, Chris D.; Antoniou, Georgios; Orife, Ejovbokoghene; Courtial, Johannes

    2014-09-01


  1. Silver Coating for High-Mass-Accuracy Imaging Mass Spectrometry of Fingerprints on Nanostructured Silicon.

    PubMed

    Guinan, Taryn M; Gustafsson, Ove J R; McPhee, Gordon; Kobus, Hilton; Voelcker, Nicolas H

    2015-11-17

    Nanostructure imaging mass spectrometry (NIMS) using porous silicon (pSi) is a key technique for molecular imaging of exogenous and endogenous low molecular weight compounds from fingerprints. However, high-mass-accuracy NIMS can be difficult to achieve as time-of-flight (ToF) mass analyzers, which dominate the field, cannot sufficiently compensate for shifts in measured m/z values. Here, we show internal recalibration using a thin layer of silver (Ag) sputter-coated onto functionalized pSi substrates. NIMS peaks for several previously reported fingerprint components were selected and mass accuracy was compared to theoretical values. Mass accuracy was improved by more than an order of magnitude in several cases. This straightforward method should form part of the standard guidelines for NIMS studies for spatial characterization of small molecules.

  2. Automatic mapping of event landslides at basin scale in Taiwan using a Montecarlo approach and synthetic land cover fingerprints

    NASA Astrophysics Data System (ADS)

    Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi

    2017-12-01

    We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess each pixel's land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent set of training images. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. The procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
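
    The decision rule can be sketched as follows, assuming Gaussian class models fitted to the synthetic spectral fingerprints; the weighting of the landslide membership by the susceptibility model follows the description above, while the names and model details are illustrative.

      import numpy as np

      def gaussian_log_likelihood(x, mean, cov):
          # log-likelihood of a pixel's spectral vector under one land cover class
          d = x - mean
          sign, logdet = np.linalg.slogdet(cov)
          return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + x.size * np.log(2 * np.pi))

      def classify_pixel(x, class_models, susceptibility, landslide_idx=0):
          # class_models: list of (mean, cov) pairs, one per land cover class
          scores = np.array([gaussian_log_likelihood(x, m, c) for m, c in class_models])
          probs = np.exp(scores - scores.max())
          probs /= probs.sum()
          probs[landslide_idx] *= susceptibility  # susceptibility model as a weight
          return int(np.argmax(probs))            # landslide only if it still dominates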

  3. Visualization of latent fingerprint corrosion of metallic surfaces.

    PubMed

    Bond, John W

    2008-07-01

    Chemical reactions between latent fingerprints and a variety of metal surfaces are investigated by heating the metal to temperatures of approximately 600 degrees C after deposition of the fingerprint. Ionic salts present in the fingerprint residue corrode the metal surface to produce an image of the fingerprint that is both durable and resistant to cleaning of the metal. The degree of fingerprint enhancement appears independent of the elapsed time between deposition and heating but is strongly dependent on both the composition of the metal and the level of salt secretion by the fingerprint donor. Results are presented that show practical applications for the enhancement of fingerprints deposited at arson crime scenes, contaminated by spray painting, or deposited on brass cartridge cases prior to discharge. The corrosion of the metal surface is further exploited by demonstrating a novel technique for fingerprint enhancement based on electrostatic charging of the metal followed by the preferential adherence of a metallic powder to the corroded part of the metal surface.

  4. High-speed biometrics ultrasonic system for 3D fingerprint imaging

    NASA Astrophysics Data System (ADS)

    Maev, Roman G.; Severin, Fedar

    2012-10-01

    The objective of this research is to develop a new robust fingerprint identification technology based upon forming surface-subsurface (under-skin) ultrasonic 3D images of the finger pads. The presented work aims to create specialized ultrasonic scanning methods for biometric purposes. Preliminary research has demonstrated the applicability of acoustic microscopy to fingerprint reading. The additional information from internal skin layers and dermis structures contained in the scan can substantially improve confidence in the identification. Advantages of this system include high resolution and quick scanning time; operating in pulse-echo mode provides spatial resolution up to 0.05 mm. Advantages of the proposed technology are the following: • Full-range scanning of the fingerprint area "nail to nail" (2.5 x 2.5 cm) can be done in less than 5 sec with a resolution of up to 1000 dpi. • Information about the in-depth structure of the fingerprint is collected by a set of spherically focused 50 MHz acoustic lenses providing a resolution of ~0.05 mm or better. • In addition to fingerprints, this technology can identify sweat pores at the surface and under the skin. • No sensitivity to contamination of the finger's surface. • Detection of blood velocity using the Doppler effect can be implemented to distinguish living specimens. • Utilization as a polygraph device. • Simple connectivity to fingerprint databases obtained with other techniques. • The digitally interpolated images can be enhanced, allowing for greater resolution. • The method can be applied to fingernails and underlying tissues, providing more information. A laboratory prototype of the biometric system based on the described principles was designed, built, and tested; it is the first step toward a practical implementation of this technique.

  5. High Resolution Ultrasonic Method for 3D Fingerprint Recognizable Characteristics in Biometrics Identification

    NASA Astrophysics Data System (ADS)

    Maev, R. Gr.; Bakulin, E. Yu.; Maeva, A.; Severin, F.

    Biometrics is a rapidly evolving scientific and applied discipline that studies possible ways of personal identification by means of unique biological characteristics. Such identification is important in various situations requiring restricted access to certain areas, information, and personal data, and in cases of medical emergency. A number of automated biometric techniques have been developed, including fingerprint, hand shape, eye and facial recognition, thermographic imaging, etc. All these techniques differ in the recognizable parameters, usability, accuracy, and cost. Among these, fingerprint recognition stands alone, since a very large database of fingerprints has already been acquired. Also, fingerprints are key evidence left at a crime scene and can be used to identify suspects. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. We introduce a new development in ultrasonic fingerprint imaging. The proposed method obtains a scan only once and then varies the C-scan gate position and width to visualize acoustic reflections from any appropriate depth inside the skin. B-scans and A-scans can also be recreated from any position in the data array, which gives control over the visualization options. By setting the C-scan gate deeper inside the skin, the distribution of the sweat pores (which are located along the ridges) can easily be visualized. This distribution should be unique to each individual, so it provides a means of personal identification that is not affected by any changes (accidental or intentional) of the fingers' surface conditions. This paper discusses different setups, acoustic parameters of the system, signal and image processing options, and possible ways of 3-dimensional visualization that could be used as a recognizable characteristic in biometric identification.

  6. Abnormal Connectional Fingerprint in Schizophrenia: A Novel Network Analysis of Diffusion Tensor Imaging Data

    PubMed Central

    Edwin Thanarajah, Sharmili; Han, Cheol E.; Rotarska-Jagiela, Anna; Singer, Wolf; Deichmann, Ralf; Maurer, Konrad; Kaiser, Marcus; Uhlhaas, Peter J.

    2016-01-01

    The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T of chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of their relative connection probabilities between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder. PMID:27445870
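
    A sketch of the core quantity, under the assumption that tractography yields a streamline count matrix: the connectional fingerprint of an area is its row of relative connection probabilities, and fingerprints can be compared across subjects with a simple similarity measure.

      import numpy as np

      def connectional_fingerprints(streamline_counts):
          # streamline_counts: (n_areas, n_areas) probabilistic tractography counts
          counts = streamline_counts.astype(np.float64)
          np.fill_diagonal(counts, 0.0)                 # ignore self-connections
          totals = counts.sum(axis=1, keepdims=True)
          return counts / np.where(totals == 0, 1.0, totals)  # each row sums to 1

      def fingerprint_similarity(fp_a, fp_b):
          # cosine similarity between the fingerprints of one area in two subjects
          denom = np.linalg.norm(fp_a) * np.linalg.norm(fp_b) + 1e-12
          return float(fp_a @ fp_b / denom)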

  7. Abnormal Connectional Fingerprint in Schizophrenia: A Novel Network Analysis of Diffusion Tensor Imaging Data.

    PubMed

    Edwin Thanarajah, Sharmili; Han, Cheol E; Rotarska-Jagiela, Anna; Singer, Wolf; Deichmann, Ralf; Maurer, Konrad; Kaiser, Marcus; Uhlhaas, Peter J

    2016-01-01

    The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T of chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of their relative connection probabilities between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to frontal, limbic, and subcortical areas. These findings are in line with previous studies that reported abnormalities in striatal-frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.

  8. New Horizons for Ninhydrin: Colorimetric Determination of Gender from Fingerprints.

    PubMed

    Brunelle, Erica; Huynh, Crystal; Le, Anh Minh; Halámková, Lenka; Agudelo, Juliana; Halámek, Jan

    2016-02-16

    In the past century, forensic investigators have universally accepted fingerprinting as a reliable identification method via pictorial comparison. One of the most traditional detection methods uses ninhydrin, a chemical that reacts with amino acids in the fingerprint content to produce the blue-purple color known as Ruhemann's purple. It has recently been demonstrated that the amino acid content in fingerprints can be used to differentiate between male and female fingerprints. Here, we present a modified approach to the traditional ninhydrin method. This new approach for using ninhydrin is combined with an optimized extraction protocol and the concept of determining gender from fingerprints. In doing so, we are able to focus on the biochemical material rather than exclusively the physical image.

  9. Detection of latent fingerprint hidden beneath adhesive tape by optical coherence tomography.

    PubMed

    Zhang, Ning; Wang, Chengming; Sun, Zhenwen; Li, Zhigang; Xie, Lanchi; Yan, Yuwen; Xu, Lei; Guo, Jingjing; Huang, Wei; Li, Zhihui; Xue, Jing; Liu, Huan; Xu, Xiaojing

    2018-06-01

    Adhesive tape is a common type of item encountered in criminal cases involving rape, murder, kidnapping, and explosives. It is often the case that a suspect deposits latent fingerprints on the sticky side of adhesive tape when tying up victims, manufacturing improvised explosive devices, or packaging illegal goods. However, the adhesive tapes found at crime scenes are usually stuck together or attached to a substrate, and thus the latent fingerprints may be hidden beneath the tapes. Current methods to detect a latent fingerprint hidden beneath adhesive tape require peeling the tape off first and then applying physical or chemical methods to develop the fingerprint; these involve complicated procedures and can affect the original condition of the latent print. Optical coherence tomography (OCT) is a novel technique applied in forensics that acquires cross-sectional structure with the advantages of being non-invasive, in-situ, high-resolution, and high-speed. In this paper, a custom-built spectral-domain OCT (SD-OCT) system with a hand-held probe was employed to detect fingerprints hidden beneath different types of adhesive tapes. Three-dimensional (3D) OCT reconstructions were performed and the en face images were presented to reveal the hidden fingerprints. The results demonstrate that OCT is a promising tool for rapidly detecting and recovering high-quality images of latent fingerprints hidden beneath adhesive tape without any change to their original state, preserving the integrity of the evidence. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. A network identity authentication system based on Fingerprint identification technology

    NASA Astrophysics Data System (ADS)

    Xia, Hong-Bin; Xu, Wen-Bo; Liu, Yuan

    2005-10-01

    Fingerprint verification is one of the most reliable personal identification methods. However, most automatic fingerprint identification systems (AFIS) do not run in an Internet/intranet environment, as today's growing electronic commerce requirements demand. This paper describes the design and implementation of a prototype identity authentication system based on fingerprint biometrics that can run in an Internet environment. In our system, COM and ASP technology are used to integrate fingerprint technology with Web database technology; the fingerprint image preprocessing algorithms are programmed into COM components deployed on the Internet information server. The system's design and structure are proposed, and the key points are discussed. The prototype fingerprint-based identity authentication system has been successfully tested and evaluated on our university's distance education applications in an Internet environment.

  11. Accurate, fast, and secure biometric fingerprint recognition system utilizing sensor fusion of fingerprint patterns

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Alsharif, Salim; Jagapathi, Rajendarreddy

    2011-04-01

    Fingerprint recognition is one of the first techniques used for automatically identifying people, and today it is still one of the most popular and effective biometric techniques. With this increase in fingerprint biometric use, issues related to accuracy, security, and processing time are major challenges facing fingerprint recognition systems. Previous work has shown that polarization enhancement-encoding of fingerprint patterns increases the accuracy and security of fingerprint systems without burdening the processing time. This is mainly because polarization enhancement-encoding is inherently a hardware process and does not introduce a detrimental time delay into the overall process. Unpolarized images, however, possess high visual contrast, and when fused properly (without digital enhancement) with polarized ones, they are shown to increase the recognition accuracy and security of the biometric system without any significant processing time delay.
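
    As a rough illustration of the fusion idea (the weighting scheme is an assumption, not taken from the paper), the two modalities can be combined pixel-wise without any digital enhancement:

      import numpy as np

      def fuse_fingerprints(unpolarized, polarized, alpha=0.5):
          # unpolarized: high-visual-contrast image; polarized: enhancement-encoded
          # image; both scaled to [0, 1]. alpha is a free mixing parameter.
          return np.clip(alpha * unpolarized + (1.0 - alpha) * polarized, 0.0, 1.0)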

  12. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.

    PubMed

    Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko

    2016-09-01

    The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59 % with a false acceptance rate of 0.023 % and a false rejection rate of 3.15 %, an equal error rate of 1.37 %, and a rank-one identification rate of 98.6 %. Our method makes it possible to verify the identity of a patient using only existing medical images, without additional equipment. It will also contribute to the management of patient misidentification errors caused by human error.
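
    The verification step reduces to a correlation test between current and prior biological fingerprint images; a minimal sketch follows, with an arbitrary threshold standing in for one tuned against the reported FAR/FRR trade-off.

      import numpy as np

      def correlation_value(a, b):
          # zero-mean, unit-variance normalized correlation of two MPR images
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return float((a * b).mean())

      def same_patient(current_img, prior_img, threshold=0.9):
          # threshold is a placeholder; in practice it is chosen from FAR/FRR curves
          return correlation_value(current_img, prior_img) >= threshold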

  13. Detection and Rectification of Distorted Fingerprints.

    PubMed

    Si, Xuanbin; Feng, Jianjiang; Zhou, Jie; Luo, Yuxuan

    2015-03-01

    Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications, in which malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or, equivalently, distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and corresponding distortion fields is built in the offline stage; in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints, namely FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
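
    The online rectification stage can be sketched as a nearest-neighbour lookup in the reference database; the feature encoding (registered orientation and period maps flattened into a vector) and the array shapes are assumptions for illustration.

      import numpy as np

      def nearest_distortion_field(query_feature, ref_features, ref_fields):
          # ref_features: (n, d) flattened orientation/period features per reference
          # ref_fields:   (n, h, w, 2) displacement field stored with each reference
          distances = np.linalg.norm(ref_features - query_feature, axis=1)
          return ref_fields[int(np.argmin(distances))]  # field used to rectify the input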

  14. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different degrees. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  15. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
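
    A compact sketch of the two ideas, with implementation details assumed: the noise level is measured robustly from pixel differences, and a floating-point image is then quantized to scaled integers so that only a few noise bits survive, after which any lossless codec compresses far better.

      import numpy as np

      def estimate_sigma(img):
          # robust noise estimate from horizontal first differences (MAD-based)
          d = np.diff(img, axis=1).ravel()
          return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)

      def quantize_to_ints(img, noise_bits=4):
          # keep ~noise_bits bits of noise per pixel; the rest is incompressible
          step = estimate_sigma(img) / (2 ** noise_bits)
          return np.round(img / step).astype(np.int32), step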

  16. Multiplex mass spectrometry imaging for latent fingerprints.

    PubMed

    Yagnik, Gargey B; Korte, Andrew R; Lee, Young Jin

    2013-01-01

    We have previously developed in-parallel data acquisition of orbitrap mass spectrometry (MS) and ion trap MS and/or MS/MS scans for matrix-assisted laser desorption/ionization MS imaging (MSI) to obtain rich chemical information in less data acquisition time. In the present study, we demonstrate a novel application of this multiplex MSI methodology for latent fingerprints. In a single imaging experiment, we could obtain chemical images of various endogenous and exogenous compounds, along with simultaneous MS/MS images of a few selected compounds. This work confirms the usefulness of multiplex MSI to explore chemical markers when the sample specimen is very limited. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Biometric template revocation

    NASA Astrophysics Data System (ADS)

    Arndt, Craig M.

    2004-08-01

    Biometrics is a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric of a user or subject. The image is then converted mathematically into a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.); this is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric stays with the person for life. The problem, then, is to develop biometric templates which can be easily revoked and reissued, which are also unique to the user, and which can be easily used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis-set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valid and unique ways. The verification and biometric matching system would be presented with a given template and a revocation code. The code then indicates where in the sequence of n-dimensional vectors to start the recognition.
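
    One way to make the idea concrete (a sketch of cancelable biometrics in general, not the author's exact basis-set construction): derive a random orthogonal basis from the revocation code and project the feature vector onto it. Distances are preserved, so matching still works, and reissuing a template only requires a new code.

      import numpy as np

      def revocable_template(features, revocation_code):
          # features: 1-D biometric feature vector; revocation_code: integer seed
          rng = np.random.default_rng(revocation_code)
          a = rng.standard_normal((features.size, features.size))
          q, _ = np.linalg.qr(a)    # random orthogonal basis tied to the code
          return q @ features       # orthogonal, so pairwise distances are preserved

      # a compromised template is revoked by reissuing with a fresh code:
      # new_template = revocable_template(feature_vector, revocation_code=987654)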

  18. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
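
    A stdlib-only sketch of the DPCM-then-dictionary-coding pipeline described above, using zlib (a Lempel-Ziv variant) in place of LZW: each pixel is predicted from its left neighbour, and the resulting differential image usually compresses losslessly much better than the raw pixels for smooth radiologic images.

      import zlib
      import numpy as np

      def dpcm_compress(img):
          # img: 2-D uint8 image; differences wrap modulo 256, so this is reversible
          diff = np.empty_like(img)
          diff[:, 0] = img[:, 0]
          diff[:, 1:] = img[:, 1:] - img[:, :-1]
          return zlib.compress(diff.tobytes(), level=9)

      def dpcm_decompress(blob, shape):
          diff = np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
          # cumulative sum modulo 256 undoes the left-neighbour prediction
          return np.cumsum(diff, axis=1, dtype=np.uint16).astype(np.uint8)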

  19. Longitudinal study of fingerprint recognition.

    PubMed

    Yoon, Soweon; Jain, Anil K

    2015-07-14

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.

  20. Longitudinal study of fingerprint recognition

    PubMed Central

    Yoon, Soweon; Jain, Anil K.

    2015-01-01

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject’s age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis. PMID:26124106

  1. Holistic processing of fingerprints by expert forensic examiners.

    PubMed

    Vogelsang, Macgregor D; Palmeri, Thomas J; Busey, Thomas A

    2017-01-01

    Holistic processing is often characterized as a process by which objects are perceived as a whole rather than a compilation of individual features. This mechanism may play an important role in the development of perceptual expertise because it allows for rapid integration across image regions. The present work explores whether holistic processing is present in latent fingerprint examiners, who compare fingerprints collected from crime scenes against a set of standards taken from a suspect. We adapted a composite task widely used in the face recognition and perceptual expertise literatures, in which participants were asked to match only a particular half of a fingerprint with a previous image while ignoring the other half. We tested both experts and novices, using both upright and inverted fingerprints. For upright fingerprints, we found weak evidence for holistic processing, but with no differences between experts and novices with respect to holistic processing. For inverted fingerprints, we found stronger evidence of holistic processing, with weak evidence for differences between experts and novices. These relatively weak holistic processing effects contrast with robust evidence for holistic processing with faces and with objects in other domains of perceptual expertise. The data constrain models of holistic processing by demonstrating that latent fingerprint experts and novices may not substantively differ in terms of the amount of holistic processing and that inverted stimuli actually produced more evidence for holistic processing than upright stimuli. Important differences between the present fingerprint stimuli and those in the literature include the lack of verbal labels for experts and the absence of strong vertical asymmetries, both of which might contribute to stronger holistic processing signatures in other stimulus domains.

  2. Optical coherence tomography used for internal biometrics

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Sherif, Sherif; Mao, Youxin; Flueraru, Costel

    2007-06-01

    Traditional biometric technologies used for security and person identification essentially deal with fingerprints, hand geometry, and face images. However, because all these technologies use external features of the human body, they can easily be fooled or tampered with by distorting, modifying, or counterfeiting these features. Nowadays, internal biometrics, which detects internal ID features of an object, is becoming increasingly important. Being capable of exploring under-skin structure, an optical coherence tomography (OCT) system can be used as a powerful tool for internal biometrics. We have applied fiber-optic and full-field OCT systems to detect multiple-layer 2D images and 3D profiles of fingerprints, which results in higher discrimination than traditional 2D recognition methods. More importantly, OCT-based fingerprint recognition can easily distinguish artificial fingerprint dummies by analyzing the extracted layered surfaces. Experiments show that our OCT systems successfully detected a dummy made of plasticine that had been used to bypass a commercially available fingerprint scanning system with a false accept rate (FAR) of 100%.

  3. Case study of 3D fingerprints applications

    PubMed Central

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects. More information is available if three-dimensional (3D) fingerprints can be captured rather than two-dimensional (2D) fingerprints. Thus, this paper first collects 3D finger point cloud data by the structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can easily be implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. It is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition. PMID:28399141

  4. Case study of 3D fingerprints applications.

    PubMed

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects. More information is available if three-dimensional (3D) fingerprints can be captured rather than two-dimensional (2D) fingerprints. Thus, this paper first collects 3D finger point cloud data by the structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can easily be implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. It is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition.

  5. MALDI TOF Imaging of Latent Fingerprints a Novel Biosignature Tool

    DTIC Science & Technology

    2010-04-23

    old man have been lightly coated with ointment containing tocopherol and imprinted on stainless-steel MALDI plate. Application of low-concentrated... tocopherol allows efficient laser ionization without use of matrixes or additional treatment of the fingerprint. The result of the MS imaging scan...resolution and contrast. Interestingly, MS method optimized for molecular peak and main fragments of tocopherol (395 m/z) gave signal increase of over

  6. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body scans and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
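
    The global quality measurement used in the dissertation is straightforward to state; a small sketch of the NMSE computation on the difference image:

      import numpy as np

      def nmse(original, reconstructed):
          # normalized mean-square error of the difference image
          original = original.astype(np.float64)
          reconstructed = reconstructed.astype(np.float64)
          return float(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))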

  7. A humming retrieval system based on music fingerprint

    NASA Astrophysics Data System (ADS)

    Han, Xingkai; Cao, Baiyu

    2011-10-01

    In this paper, we propose an improved music information retrieval method utilizing the music fingerprint. The goal of this method is to represent the music with compressed musical information. Based on selected MIDI files, which are generated automatically as our music target database, we evaluate the accuracy, effectiveness, and efficiency of this method. In this research we not only extract a feature sequence that can represent the file effectively from the query and melody database, but also make it possible to retrieve the results in an innovative way. We also investigate the influence of noise on the performance of our system. As the experimental results show, the retrieval accuracy reaches up to 91% without noise, which is quite good.

  8. Rapid discrimination of different Apiaceae species based on HPTLC fingerprints and targeted flavonoids determination using multivariate image analysis.

    PubMed

    Shawky, Eman; Abou El Kheir, Rasha M

    2018-02-11

    Species of Apiaceae are used in folk medicine as spices and in officinal medicinal preparations of drugs. They are an excellent source of phenolics exhibiting antioxidant activity, which are of great benefit to human health. Discrimination among Apiaceae medicinal herbs remains an intricate challenge due to their morphological similarity. In this study, a combined "untargeted" and "targeted" approach to investigate different Apiaceae plants species was proposed by using the merging of high-performance thin layer chromatography (HPTLC)-image analysis and pattern recognition methods which were used for fingerprinting and classification of 42 different Apiaceae samples collected from Egypt. Software for image processing was applied for fingerprinting and data acquisition. HPTLC fingerprint assisted by principal component analysis (PCA) and hierarchical cluster analysis (HCA)-heat maps resulted in a reliable untargeted approach for discrimination and classification of different samples. The "targeted" approach was performed by developing and validating an HPTLC method allowing the quantification of eight flavonoids. The combination of quantitative data with PCA and HCA-heat-maps allowed the different samples to be discriminated from each other. The use of chemometrics tools for evaluation of fingerprints reduced expense and analysis time. The proposed method can be adopted for routine discrimination and evaluation of the phytochemical variability in different Apiaceae species extracts. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  10. On non-invasive 2D and 3D Chromatic White Light image sensors for age determination of latent fingerprints.

    PubMed

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-10-10

    The feasibility of 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. Conducting numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. Main influence factors are shown to be the sweat composition, temperature, humidity, wind, UV-radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. Such influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. Performing three different experiments for the classification of fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa=0.46) is achieved for a general case, which is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is manually shown and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that such a method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Nanoplasmonic imaging of latent fingerprints and identification of cocaine.

    PubMed

    Li, Kun; Qin, Weiwei; Li, Fan; Zhao, Xingchun; Jiang, Bowei; Wang, Kun; Deng, Suhui; Fan, Chunhai; Li, Di

    2013-10-25

    Search for traces: Aptamer-bound Au nanoparticles (Au NPs) were used to provide high-resolution dark-field microscopy images of latent fingerprints (LFPs) with level 2 and level 3 details. Furthermore, the cocaine-induced aggregation of Au NPs results in a true green-to-red color change of the scattered light, providing a quasi-quantitative method to identify cocaine loadings in LFPs. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG 2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
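
    A hedged sketch of the three-step scenario, narrowed to one codec and one metric for brevity (JPEG quality as the parameter and PSNR as the IQ measure; the paper also covers JPEG 2000, BPG, TIFF, SSIM, and RMSE): compress at several parameter settings, regress the parameter against the measured IQ, then invert the model to hit a specified target.

      import io
      import numpy as np
      from PIL import Image

      def psnr(a, b):
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return 10.0 * np.log10(255.0 ** 2 / mse)

      def fit_quality_model(img, qualities=range(10, 96, 5)):
          # steps 1-2: compress at varying parameters and regress parameter vs IQ
          original = np.asarray(img)
          points = []
          for q in qualities:
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=q)
              buf.seek(0)
              points.append((q, psnr(original, np.asarray(Image.open(buf)))))
          qs, iqs = map(np.array, zip(*points))
          return np.polyfit(iqs, qs, deg=2)  # JPEG quality as a function of PSNR

      def quality_for_target(model, target_psnr):
          # step 3: the parameter predicted to reach the specified IQ
          return int(np.clip(np.polyval(model, target_psnr), 1, 95))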

  13. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate a image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
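
    The rate-distortion optimization at the heart of this patent can be miniaturized to a single DCT coefficient band. The sketch below is a simplification, not the patented dynamic-programming procedure over all 64 bands: it tabulates distortion and an entropy-based rate estimate for each candidate quantization step and picks the step minimizing the Lagrangian cost D + lambda * R.

      import numpy as np

      def entropy_bits(symbols):
          # empirical entropy of the quantized coefficients, in bits per symbol
          _, counts = np.unique(symbols, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def best_quantization_step(coeffs, candidate_steps, lam=0.1):
          # coeffs: gathered DCT statistics for one frequency band
          best_cost, best_step = np.inf, None
          for step in candidate_steps:
              q = np.round(coeffs / step)
              distortion = float(np.mean((coeffs - q * step) ** 2))
              rate = entropy_bits(q.astype(np.int64))
              cost = distortion + lam * rate
              if cost < best_cost:
                  best_cost, best_step = cost, step
          return best_step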

  14. Secure Fingerprint Identification of High Accuracy

    DTIC Science & Technology

    2014-01-01

    secure) solution of complexity O(n^3) based on Gaussian elimination. When it is applied to biometrics X and Y with mX and mY minutiae, respectively... collections of biometric data in use today include, for example, fingerprint, face, and iris images collected by the US Department of Homeland Security... work we focus on fingerprint data due to popularity and good accuracy of this type of biometry. We formulate the problem of private, or secure, finger

  15. Fingerprint Changes in Coeliac Disease

    PubMed Central

    David, T. J.; Ajdukiewicz, A. B.; Read, A. E.

    1970-01-01

    Study of the fingerprints of 73 patients with coeliac disease, taken carefully, showed changes varying between moderate epidermal ridge atrophy and actual loss of fingerprint patterns. Of the patients 63 had these abnormalities, compared with 3 out of 485 controls. A high degree of correlation existed between ridge atrophy and changes in the clinical state of patients with coeliac disease. PMID:5488703

  16. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
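
    The detection loop can be sketched with the paper's tetrolet covering index replaced by a plain per-block change test (a simplification for illustration): recompress the suspect image at each quality factor, record the fraction of 4x4 blocks that differ, and look for a local minimum in the resulting p-curve.

      import io
      import numpy as np
      from PIL import Image

      def p_curve(img, qualities=range(50, 101)):
          ref = np.asarray(img.convert("L"), dtype=np.int16)
          h, w = (ref.shape[0] // 4) * 4, (ref.shape[1] // 4) * 4
          ref = ref[:h, :w]
          fractions = []
          for q in qualities:
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=q)
              buf.seek(0)
              rec = np.asarray(Image.open(buf).convert("L"), dtype=np.int16)[:h, :w]
              blocks = np.abs(ref - rec).reshape(h // 4, 4, w // 4, 4)
              fractions.append(float((blocks.max(axis=(1, 3)) > 0).mean()))
          # a dip at some quality factor suggests the image was previously
          # JPEG-compressed at that quality
          return list(qualities), fractions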

  17. Experimental evaluation of fingerprint verification system based on double random phase encoding

    NASA Astrophysics Data System (ADS)

    Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi

    2006-03-01

    We previously proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual decreases when the fingerprint is shifted significantly. In this paper, a review of the proposed system is presented and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; the results show that the false rejection rate is improved.

  18. Fingerprint Recognition with Identical Twin Fingerprints

    PubMed Central

    Yang, Xin; Tian, Jie

    2012-01-01

    Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing in identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database containing 83 twin pairs, 4 fingers per individual, and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than the single fingerprint identification method of previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis was conducted, showing the probability distribution of fingerprint types for corresponding fingers of identical twins that have the same fingerprint type. (5) A novel analysis was conducted, showing which finger of identical twins has the higher probability of having the same fingerprint type. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that fingerprints from identical twins have the same type is 0.7440, compared with 0.3215 for non-identical twins. (c) For corresponding fingers of identical twins that have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution over all fingers' fingerprint types. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar. PMID:22558204

  19. Fingerprint recognition with identical twin fingerprints.

    PubMed

    Tao, Xunqiang; Chen, Xinjian; Yang, Xin; Tian, Jie

    2012-01-01

    Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing in identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database containing 83 twin pairs, 4 fingers per individual, and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than the single fingerprint identification method of previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis was conducted, showing the probability distribution of fingerprint types for corresponding fingers of identical twins that have the same fingerprint type. (5) A novel analysis was conducted, showing which finger of identical twins has the higher probability of having the same fingerprint type. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that fingerprints from identical twins have the same type is 0.7440, compared with 0.3215 for non-identical twins. (c) For corresponding fingers of identical twins that have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution over all fingers' fingerprint types. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.

  20. Separation of high-resolution samples of overlapping latent fingerprints using relaxation labeling

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Schott, Maik; Schöne, Werner; Hildebrandt, Mario

    2012-06-01

    The analysis of latent fingerprint patterns generally requires clearly recognizable friction ridge patterns. Currently, overlapping latent fingerprints pose a major problem for traditional crime scene investigation, because such fingerprints usually have very similar optical properties; consequently, distinguishing two or more overlapping fingerprints from each other is not trivially possible. While chemical imaging can be employed to separate overlapping fingerprints, the corresponding methods require sophisticated acquisition procedures and are not compatible with conventional forensic fingerprint data. A separation technique based purely on the local orientation of the ridge patterns of overlapping fingerprints was proposed by Chen et al. and quantitatively evaluated using off-the-shelf fingerprint matching software, mostly with artificially composed overlapping fingerprint samples, a choice motivated by the scarce availability of authentic test samples. The work described in this paper adapts the approach of Chen et al. for application to authentic high-resolution fingerprint samples acquired by a contactless measurement device based on a Chromatic White Light (CWL) sensor. An evaluation of the work is also given, including an analysis of all adapted parameters. Additionally, the separability requirement proposed by Chen et al. is evaluated for practical feasibility. Our results show promising tendencies for applying this approach to high-resolution data, yet the separability requirement still poses a further challenge.
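
    The separation here rests on estimating the local orientation of the ridge patterns. Purely as an illustration (not the authors' implementation), the sketch below estimates block-wise ridge orientation from image gradients with the classic structure-tensor formula; the function name and block size are assumptions, and a grayscale fingerprint array is expected.

    ```python
    import numpy as np

    def block_orientations(img, block=16):
        """Estimate the dominant ridge orientation per block from image gradients.

        Classic least-squares (structure-tensor) estimate:
        theta = 0.5 * atan2(2 * sum(gx*gy), sum(gx^2 - gy^2)).
        """
        gy, gx = np.gradient(img.astype(float))      # pixel-wise gradients
        h, w = img.shape
        thetas = np.zeros((h // block, w // block))
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                bx = gx[i:i + block, j:j + block]
                by = gy[i:i + block, j:j + block]
                num = 2.0 * np.sum(bx * by)
                den = np.sum(bx * bx - by * by)
                # ridge orientation is defined modulo pi (ridges have no direction)
                thetas[i // block, j // block] = 0.5 * np.arctan2(num, den)
        return thetas
    ```

    Separation then amounts to assigning each block's orientation estimate (or estimates, where two ridge flows mix) to one of the component fingerprints, which is where the relaxation labeling of Chen et al. comes in.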

  1. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  2. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio), which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms, to perform the actual compression and uncompression of the FITS images.
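
    A minimal sketch of the tiled-compression idea follows (the FITS binary-table packaging itself is omitted); zlib's GZIP-style coding stands in for the selectable algorithms, and the band-per-tile geometry mirrors fpack's default row-by-row tiling. The function names are illustrative.

    ```python
    import zlib
    import numpy as np

    def pack_tiles(image, tile_rows=1):
        """Compress a 2-D image band by band so any tile decodes independently."""
        tiles = []
        for r in range(0, image.shape[0], tile_rows):
            band = np.ascontiguousarray(image[r:r + tile_rows])
            tiles.append(zlib.compress(band.tobytes(), 6))
        return tiles

    def unpack_tiles(tiles, shape, dtype):
        """Losslessly restore the image from its compressed tiles."""
        raw = b"".join(zlib.decompress(t) for t in tiles)
        return np.frombuffer(raw, dtype=dtype).reshape(shape)

    img = np.random.default_rng(0).integers(0, 4096, (64, 64)).astype(np.int16)
    assert np.array_equal(img, unpack_tiles(pack_tiles(img), img.shape, img.dtype))
    ```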

  3. In vivo laser confocal microscopy findings in patients with map-dot-fingerprint (epithelial basement membrane) dystrophy.

    PubMed

    Kobayashi, Akira; Yokogawa, Hideaki; Sugiyama, Kazuhisa

    2012-01-01

    The purpose of this study was to investigate pathological changes of the corneal cell layer in patients with map-dot-fingerprint (epithelial basement membrane) dystrophy by in vivo laser corneal confocal microscopy. Two patients were evaluated using a cornea-specific in vivo laser scanning confocal microscope (Heidelberg Retina Tomograph 2 Rostock Cornea Module, HRT 2-RCM). The affected corneal areas of both patients were examined. Image analysis was performed to identify corneal epithelial and stromal deposits correlated with this dystrophy. Variously shaped (linear, multilaminar, curvilinear, ring-shape, geographic) highly reflective materials were observed in the "map" area, mainly in the basal epithelial cell layer. In "fingerprint" lesions, multiple linear and curvilinear hyporeflective lines were observed. Additionally, in the affected corneas, infiltration of possible Langerhans cells and other inflammatory cells was observed as highly reflective Langerhans cell-like or dot images. Finally, needle-shaped materials were observed in one patient. HRT 2-RCM laser confocal microscopy is capable of identifying corneal microstructural changes related to map-dot-fingerprint corneal dystrophy in vivo. The technique may be useful in elucidating the pathogenesis and natural course of map-dot-fingerprint corneal dystrophy and other similar basement membrane abnormalities.

  4. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search within the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.

  5. From template to image: reconstructing fingerprints from minutiae points.

    PubMed

    Ross, Arun; Shah, Jidnya; Jain, Anil K

    2007-04-01

    Most fingerprint-based biometric systems store the minutiae template of a user in the database. It has been traditionally assumed that the minutiae template of a user does not reveal any information about the original fingerprint. In this paper, we challenge this notion and show that three levels of information about the parent fingerprint can be elicited from the minutiae template alone, viz., 1) the orientation field information, 2) the class or type information, and 3) the friction ridge structure. The orientation estimation algorithm determines the direction of local ridges using the evidence of minutiae triplets. The estimated orientation field, along with the given minutiae distribution, is then used to predict the class of the fingerprint. Finally, the ridge structure of the parent fingerprint is generated using streamlines that are based on the estimated orientation field. Line Integral Convolution is used to impart texture to the ensuing ridges, resulting in a ridge map resembling the parent fingerprint. The salient feature of this noniterative method to generate ridges is its ability to preserve the minutiae at specified locations in the reconstructed ridge map. Experiments using a commercial fingerprint matcher suggest that the reconstructed ridge structure bears close resemblance to the parent fingerprint.

  6. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high-quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing use of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed, based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts a comparative analysis on a publicly available dataset to improve the two-tier approach, additionally proposing three feature interpretation methods based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  7. Fingerprint recognition system by use of graph matching

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics for identifying or verifying persons by physiological characteristics, and has found wide application in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal processing steps in our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system has been implemented and tested on many fingerprint images, and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without a prior coarse category classification. This system works well for both one-to-few and one-to-many problems.

  8. Task-oriented lossy compression of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  9. In vivo microcirculation imaging of the subsurface fingertip using correlation mapping optical coherence tomography (cmOCT)

    NASA Astrophysics Data System (ADS)

    Dsouza, Roshan I.; Zam, Azhar; Subhash, Hrebesh M.; Larin, Kirill V.; Leahy, Martin

    2013-02-01

    We describe a novel application of correlation mapping optical coherence tomography (cmOCT) for subsurface fingerprint biometric identification. Fingerprint biometrics, including automated fingerprint identification systems, are commonly used to recognise fingerprints, since they constitute simple, effective and valuable physical evidence. Spoofing of biometric fingerprint devices is easy because of the limited information obtained from the surface topography. To overcome this limitation, a potentially more secure source of information is required for biometric identification applications. In this study, we retrieve the microcirculation map of the subsurface fingertip using the cmOCT technique. To increase the probing depth into the subsurface microcirculation, an optical clearing agent composed of 75% glycerol in aqueous solution was applied topically and kept in contact for 15 min. OCT intensity images were acquired with a commercial research-grade swept-source OCT system (model OCT1300SS, Thorlabs Inc., USA). A 3D OCT scan of the fingertip was acquired over an area of 5x5 mm using 1024x1024 A-scans in approximately 70 s. The resulting volume was then processed using the cmOCT technique with a 7x7 kernel to provide a microcirculation map. We believe these results demonstrate an enhanced security level over artificial fingertips. To the best of our knowledge, this is the first demonstration of imaging the microcirculation map of the subsurface fingertip.
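
    The correlation map at the heart of cmOCT is a windowed comparison of successive B-scans: static tissue stays correlated between frames, while flowing scatterers decorrelate. The sketch below uses plain zero-mean normalized cross-correlation over a 7x7 kernel; the paper's exact formula and preprocessing may differ, and the frame arrays and skipped borders are assumptions.

    ```python
    import numpy as np

    def cm_oct(frame_a, frame_b, k=7):
        """Inter-frame correlation map; high output = decorrelation = flow."""
        h, w = frame_a.shape
        r = k // 2
        cmap = np.zeros((h, w))
        for i in range(r, h - r):
            for j in range(r, w - r):
                a = frame_a[i - r:i + r + 1, j - r:j + r + 1].astype(float)
                b = frame_b[i - r:i + r + 1, j - r:j + r + 1].astype(float)
                a -= a.mean()
                b -= b.mean()
                denom = np.sqrt((a * a).sum() * (b * b).sum())
                cmap[i, j] = (a * b).sum() / denom if denom else 0.0
        return 1.0 - np.abs(cmap)
    ```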

  10. Detection of microscopic particles present as contaminants in latent fingerprints by means of synchrotron radiation-based Fourier transform infra-red micro-imaging.

    PubMed

    Banas, A; Banas, K; Breese, M B H; Loke, J; Heng Teo, B; Lim, S K

    2012-08-07

    Synchrotron radiation-based Fourier transform infra-red (SR-FTIR) micro-imaging has been developed as a rapid, direct and non-destructive technique. This method, taking advantage of the high brightness and small effective source size of synchrotron light, is capable of exploring the molecular chemistry within the microstructures of microscopic particles at high spatial resolution without destroying them. This is in contrast to traditional "wet" chemical methods, which often destroy the original samples during processing for analysis. In the present study, we demonstrate the potential of SR-FTIR micro-imaging as an effective way to accurately identify microscopic particles deposited within latent fingerprints. These particles are present from residual amounts of materials left on a person's fingers after handling such materials. Fingerprints contaminated with various types of powders, creams, medications and high explosive materials (3-nitrooxy-2,2-bis(nitrooxymethyl)propyl nitrate (PETN), 1,3,5-trinitro-1,3,5-triazinane (RDX), 2-methyl-1,3,5-trinitrobenzene (TNT)) deposited on various everyday substrates have been analysed herein without any further sample preparation. A non-destructive method for the transfer of contaminated fingerprints from hard-to-reach areas of the substrates to the place of analysis is also presented. This method could have a significant impact on forensic science and could dramatically enhance the amount of information that can be obtained from the study of fingerprints.

  11. Machine-assisted verification of latent fingerprints: first results for nondestructive contact-less optical acquisition techniques with a CWL sensor

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Krapyvskyy, Dmytro; Dittmann, Jana; Vielhauer, Claus; Leich, Marcus

    2011-11-01

    A machine-assisted analysis of traces from crime scenes might be possible with the advent of new high-resolution non-destructive contact-less acquisition techniques for latent fingerprints. This requires reliable techniques for the automatic extraction of fingerprint features from latent and exemplar fingerprints for matching purposes using pattern recognition approaches. Therefore, we evaluate the NIST Biometric Image Software for the feature extraction and verification of contact-lessly acquired latent fingerprints to determine potential error rates. Our exemplary test setup includes 30 latent fingerprints from 5 people in two test sets that are acquired from different surfaces using a chromatic white light sensor. The first test set includes 20 fingerprints on two different surfaces and is used to determine the feature extraction performance. The second test set includes one latent fingerprint on 10 different surfaces and an exemplar fingerprint to determine the verification performance. The utilized sensing technique does not require a physical or chemical visibility enhancement of the fingerprint residue, so the original trace remains unaltered for further investigations. No particular feature extraction and verification techniques have yet been applied to such data. Hence, we see the need for appropriate algorithms suitable to support forensic investigations.

  12. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
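
    A rough sketch of this encode/decode pipeline using Pillow, assuming a grayscale input image; unsharp masking stands in for the patent's sharpening step (whose specific techniques are not reproduced here), and the decimation factor and quality values are illustrative.

    ```python
    import io
    from PIL import Image, ImageFilter

    def compress_reduced(img, factor=2, quality=75):
        """Encode side: decimate in two dimensions, then JPEG-compress."""
        w, h = img.size
        small = img.resize((w // factor, h // factor), Image.LANCZOS)
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=quality)  # needs "L" or "RGB" mode
        return buf.getvalue(), (w, h)

    def decompress_expanded(data, size):
        """Decode side: reverse JPEG, interpolate to the original array size,
        then sharpen edges to enhance perceptual quality."""
        small = Image.open(io.BytesIO(data))
        big = small.resize(size, Image.BICUBIC)
        return big.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
    ```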

  13. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  14. High-definition Fourier transform infrared spectroscopic imaging of prostate tissue

    NASA Astrophysics Data System (ADS)

    Wrobel, Tomasz P.; Kwak, Jin Tae; Kajdacsy-Balla, Andre; Bhargava, Rohit

    2016-03-01

    Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier-Transform Infrared (FT-IR) imaging is an emerging vibrational spectroscopic imaging technique, especially in a High-Definition (HD) format, that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. While it has been shown for standard imaging that IR absorption by tissue creates a strong signal where the spectrum at each pixel is a quantitative "fingerprint" of the molecular composition of the sample, here we show that this fingerprint also enables direct digital pathology without the need for stains or dyes for HD imaging. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.

  15. Online fingerprint verification.

    PubMed

    Upendra, K; Singh, S; Kumar, V; Verma, H K

    2007-01-01

    As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints. Local ridge structures cannot be completely characterized by minutiae. Also, it is difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter-bank-based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results show that this system can be used effectively for secure online verification applications.

  16. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. To compare radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography. An experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. Prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.

  17. Colony fingerprint for discrimination of microbial species based on lensless imaging of microcolonies

    PubMed Central

    Maeda, Yoshiaki; Dobashi, Hironori; Sugiyama, Yui; Saeki, Tatsuya; Lim, Tae-kyu; Harada, Manabu; Matsunaga, Tadashi; Yoshino, Tomoko

    2017-01-01

    Detection and identification of microbial species are crucial in a wide range of industries, including production of beverages, foods, cosmetics, and pharmaceuticals. Traditionally, colony formation and its morphological analysis (e.g., size, shape, and color) with the naked eye have been employed for this purpose. However, such a conventional method is time consuming, labor intensive, and not very reproducible. To overcome these problems, we propose a novel method that detects microcolonies (diameter 10–500 μm) using a lensless imaging system. When comparing colony images of five microorganisms from different genera (Escherichia coli, Salmonella enterica, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans), the images showed obviously different features. Being closely related species, S. aureus and S. epidermidis resembled each other, but the imaging analysis could extract substantial information (colony fingerprints) including morphological and physiological features, and linear discriminant analysis of the colony fingerprints distinguished these two species with 100% accuracy. Because this system may offer many advantages such as high-throughput testing, lower costs, more compact equipment, and ease of automation, it holds promise for microbial detection and identification in various academic and industrial areas. PMID:28369067

  18. High excimer-state emission of perylene bisimides and recognition of latent fingerprints.

    PubMed

    Wang, Ke-Rang; Yang, Zi-Bo; Li, Xiao-Liu

    2015-04-07

    High excimer-state emission in the H-type aggregate of a novel asymmetric perylene bisimide derivative, 6, with triethyleneglycol chains and lactose functionalization was achieved in water. Furthermore, its application for enhancing the visualization of latent fingerprints transferred from glass slides to a poly(vinylidene fluoride) (PVDF) membrane was explored, showing clear images of the latent fingerprint in daylight and under 365 nm ultraviolet illumination.

  19. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.
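
    The neural predictor and block structure are specific to AIC, but the lossless predict-then-encode skeleton it builds on can be sketched with a trivial left-neighbor predictor and zlib standing in for the entropy coder; this assumes 8-bit input and is not the paper's method.

    ```python
    import zlib
    import numpy as np

    def encode(img):
        """Predict each pixel from its left neighbor, entropy-code the residuals."""
        img = img.astype(np.int16)                # headroom for signed residuals
        pred = np.zeros_like(img)
        pred[:, 1:] = img[:, :-1]                 # trivial stand-in predictor
        residual = img - pred                     # small, highly compressible values
        return zlib.compress(residual.tobytes()), img.shape

    def decode(blob, shape):
        """Invert the prediction column by column for an exact reconstruction."""
        residual = np.frombuffer(zlib.decompress(blob), np.int16).reshape(shape)
        out = np.zeros(shape, np.int16)
        out[:, 0] = residual[:, 0]
        for j in range(1, shape[1]):
            out[:, j] = out[:, j - 1] + residual[:, j]
        return out
    ```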

  20. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  1. Optical cryptography with biometrics for multi-depth objects.

    PubMed

    Yan, Aimin; Wei, Yang; Hu, Zhijuan; Zhang, Jingtao; Tsang, Peter Wai Ming; Poon, Ting-Chung

    2017-10-11

    We propose an optical cryptosystem for encrypting images of multi-depth objects based on the combination of the optical heterodyne technique and fingerprint keys. Optical heterodyning requires two optical beams to be mixed. For encryption, each optical beam is modulated by an optical mask containing the fingerprint of either the person sending or the person receiving the image. The pair of optical masks is taken as the encryption keys. Subsequently, the two beams are used to scan over a multi-depth 3-D object to obtain an encrypted hologram. During decryption, each sectional image of the 3-D object is recovered by convolving its encrypted hologram (through numerical computation) with the encrypted hologram of a pinhole image positioned at the same depth as the sectional image. Our proposed method has three major advantages. First, the lost-key situation can be avoided by using fingerprints as the encryption keys. Second, the method can be applied to encrypt 3-D images for subsequent decryption of sectional images. Third, since optical heterodyne scanning is employed to encrypt the 3-D object, the optical system is incoherent, resulting in a negligible amount of speckle noise upon decryption. To the best of our knowledge, this is the first time optical cryptography of 3-D object images has been demonstrated in an incoherent optical system with biometric keys.

  2. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  3. Fingerprint Recognition

    DTIC Science & Technology

    2006-06-01

    Master's thesis by Graig T. Diefenderfer, June 2006. Thesis Advisor: Monique P. Fargues; Second Reader: Roberto Cristi; Department Chairman: Jeffrey B. Knorr. The remainder of this record is fragmentary front matter and reference text, e.g.: "... matching for low-quality fingerprints. Proceedings of IEEE International Conference on Image Processing, 2, 33-36. Jain, A., Hong, L., & Bolle ...".

  4. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  5. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
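
    The first method's premise, that predictively coded data is itself a textural description of the image, can be sketched as follows; the residual histogram and L1 comparison are illustrative assumptions rather than the paper's exact features.

    ```python
    import numpy as np

    def residual_signature(img, bins=64):
        """Texture signature from prediction residuals. In a lossless predictive
        codec these residuals are what is stored, so no full decompression is
        needed to obtain them."""
        img = img.astype(np.int16)
        residual = (img[:, 1:] - img[:, :-1]).ravel()   # left-neighbor error
        hist, _ = np.histogram(residual, bins=bins, range=(-255, 255), density=True)
        return hist

    def signature_distance(a, b):
        """L1 distance between residual histograms (illustrative retrieval metric)."""
        return float(np.abs(residual_signature(a) - residual_signature(b)).sum())
    ```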

  6. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  7. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the wide use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.

  8. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed with the selected compression algorithm. Second, a mixed image is formed from the probe and a gallery image and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
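
    A minimal sketch of the CPB matching loop, with zlib standing in for the selected compressor; the composite-compression-ratio formula below is a plausible paraphrase built from the three stated compression ratios, not the paper's exact definition, and equal-sized 8-bit face arrays are assumed.

    ```python
    import zlib
    import numpy as np

    def csize(arr):
        """Compressed size in bytes (zlib as the stand-in compressor)."""
        return len(zlib.compress(arr.tobytes(), 9))

    def ccr_match(probe, gallery):
        """Return the index of the gallery face with the largest composite
        compression ratio: shared structure makes the probe+gallery mixture
        compress better than the two images compressed separately."""
        scores = []
        for g in gallery:
            mixed = np.hstack([probe, g])        # form the mixed image
            scores.append((csize(probe) + csize(g)) / csize(mixed))
        return int(np.argmax(scores))
    ```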

  9. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  10. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.
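
    Stripped of the three-dimensional modeling and registration machinery, the differencing core of the process looks as follows; zlib stands in for the "conventional compression algorithms," and the inputs are assumed to be pre-registered 8-bit arrays of equal shape.

    ```python
    import zlib
    import numpy as np

    def compress_differenced(subject, reference):
        """Difference the registered subject against the reference, then compress."""
        diff = subject.astype(np.int16) - reference.astype(np.int16)
        return zlib.compress(diff.tobytes())

    def reconstruct(blob, reference):
        """Add the decompressed difference back onto the reference image."""
        diff = np.frombuffer(zlib.decompress(blob), np.int16).reshape(reference.shape)
        return (reference.astype(np.int16) + diff).astype(np.uint8)
    ```

    The compression win comes entirely from the residual being near zero wherever the subject matches the reference anatomy, which is why registration quality drives the achievable ratio.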

  11. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  12. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  13. Ballistics projectile image analysis for firearm identification.

    PubMed

    Li, Dongguang

    2006-10-01

    This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it will be possible to identify not only the type and model of a firearm, but also each and every individual weapon just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. This paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments discussed in this paper are performed on images acquired from 16 various weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely through digitizing and analyzing the fired projectiles specimens.

  14. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
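
    A toy sketch of the compress-then-encrypt pipeline: spectrum cutting after a DCT provides the compression, and a chaos-driven coefficient permutation stands in for the discrete fractional random transform; the logistic map below likewise replaces the paper's high-dimensional hyper-chaotic system, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def logistic_sequence(n, x0, mu=3.99):
        """Chaotic key stream (logistic map as a simple stand-in)."""
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = mu * x * (1.0 - x)
            xs[i] = x
        return xs

    def compress_encrypt(img, keep=64, x0=0.3761):
        """Cut the DCT spectrum to its low-frequency corner, then scramble it."""
        spec = dctn(img.astype(float), norm="ortho")[:keep, :keep]
        perm = np.argsort(logistic_sequence(keep * keep, x0))   # key-dependent
        return spec.ravel()[perm]

    def decrypt_decompress(cipher, shape, keep=64, x0=0.3761):
        """Regenerate the permutation from the key, unscramble, inverse DCT."""
        perm = np.argsort(logistic_sequence(keep * keep, x0))
        block = np.empty(keep * keep)
        block[perm] = cipher                     # invert the permutation
        spec = np.zeros(shape)
        spec[:keep, :keep] = block.reshape(keep, keep)
        return idctn(spec, norm="ortho")         # approximate original image
    ```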

  15. Video-Based Fingerprint Verification

    PubMed Central

    Qin, Wei; Yin, Yilong; Liu, Lili

    2013-01-01

    Conventional fingerprint verification systems use only static information. In this paper, fingerprint videos, which contain dynamic information, are utilized for verification. Fingerprint videos are acquired by the same capture device that acquires conventional fingerprint images, and the user experience of providing a fingerprint video is the same as that of providing a single impression. After preprocessing and aligning processes, “inside similarity” and “outside similarity” are defined and calculated to take advantage of both dynamic and static information contained in fingerprint videos. Match scores between two matching fingerprint videos are then calculated by combining the two kinds of similarity. Experimental results show that the proposed video-based method leads to a relative reduction of 60 percent in the equal error rate (EER) in comparison to the conventional single impression-based method. We also analyze the time complexity of our method when different combinations of strategies are used. Our method still outperforms the conventional method, even if both methods have the same time complexity. Finally, experimental results demonstrate that the proposed video-based method can lead to better accuracy than the multiple impressions fusion method, and the proposed method has a much lower false acceptance rate (FAR) when the false rejection rate (FRR) is quite low. PMID:24008283

  16. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
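
    The curvature step can be sketched as a local surface fit: ordinary least squares below, where the paper uses moving least squares (a distance-weighted variant), followed by the principal curvatures at the patch center; candidate ridge or valley points are those where one principal curvature is large in magnitude across the friction ridge. All names here are illustrative.

    ```python
    import numpy as np

    def paraboloid_curvatures(xy, z):
        """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a neighborhood
        centered at (0, 0) and return the principal curvatures there."""
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
        zx, zy, zxx, zxy, zyy = d, e, 2 * a, b, 2 * c
        g = 1 + zx * zx + zy * zy
        K = (zxx * zyy - zxy * zxy) / g**2                    # Gaussian curvature
        H = ((1 + zy * zy) * zxx - 2 * zx * zy * zxy
             + (1 + zx * zx) * zyy) / (2 * g**1.5)            # mean curvature
        disc = max(H * H - K, 0.0) ** 0.5
        return H + disc, H - disc                             # principal curvatures
    ```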

  17. Fingerprint identification: advances since the 2009 National Research Council report

    PubMed Central

    Champod, Christophe

    2015-01-01

    This paper will discuss the major developments in the area of fingerprint identification that followed the publication of the National Research Council (NRC, of the US National Academies of Sciences) report in 2009 entitled: Strengthening Forensic Science in the United States: A Path Forward. The report portrayed an image of a field of expertise used for decades without the necessary scientific research-based underpinning. The advances since the report and the needs in selected areas of fingerprinting will be detailed. It includes the measurement of the accuracy, reliability, repeatability and reproducibility of the conclusions offered by fingerprint experts. The paper will also pay attention to the development of statistical models allowing assessment of fingerprint comparisons. As a corollary of these developments, the next challenge is to reconcile a traditional practice dominated by deterministic conclusions with the probabilistic logic of any statistical model. There is a call for greater candour and fingerprint experts will need to communicate differently on the strengths and limitations of their findings. Their testimony will have to go beyond the blunt assertion of the uniqueness of fingerprints or the opinion delivered ipse dixit. PMID:26101284

  18. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
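
    The quantize-with-dither idea fits in a few lines; the uniform subtractive dither and seed handling below are a simplified stand-in for fpack's exact scheme, with `q` the intensity-level width chosen from the image noise.

    ```python
    import numpy as np

    def quantize_with_dither(data, q, seed=0):
        """Scale float pixels into integer levels of width q, adding a
        reproducible uniform dither so the quantization error behaves like
        noise rather than banding (subtractive dithering)."""
        dither = np.random.default_rng(seed).random(data.shape)   # in [0, 1)
        return np.floor(data / q + dither).astype(np.int32)

    def dequantize(levels, q, seed=0):
        """Subtract the same dither (same seed) for an unbiased reconstruction
        with error bounded by q/2 per pixel."""
        dither = np.random.default_rng(seed).random(levels.shape)
        return (levels - dither + 0.5) * q
    ```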

  19. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2017-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369

  20. Optimal experiment design for magnetic resonance fingerprinting.

    PubMed

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
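
    For reference, the Cramér-Rao bound both records build on has the generic form below, stated here under the complex-Gaussian noise model commonly assumed in MR rather than quoted from the papers: any unbiased estimator of the tissue parameters theta obeys

    ```latex
    % CRB: covariance of any unbiased estimator is bounded by the inverse Fisher information
    \operatorname{cov}(\hat{\theta}) \succeq I(\theta)^{-1},
    \qquad
    I(\theta) = \frac{2}{\sigma^{2}}\,
    \operatorname{Re}\!\left\{ J(\theta)^{\mathsf{H}} J(\theta) \right\},
    \qquad
    J_{ij}(\theta) = \frac{\partial s_{i}(\theta)}{\partial \theta_{j}},
    ```

    where s(theta) is the noiseless signal predicted by the acquisition model and sigma^2 the noise variance; the experiment design then chooses flip angles and repetition times so that the diagonal of I(theta)^{-1}, normalized by acquisition time, is as small as possible.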

  1. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  2. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper will discuss the properties of watermarking medical images. We will also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We will then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the Region of Non-Interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the watermark extracted will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).

  3. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.

  4. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8 bit depth H.264/AVC codecs can achieve results similar to a 16 bit depth HEVC codec.
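
    The byte-plane mapping the abstract describes is straightforward; a minimal sketch, assuming a uint16 infrared frame, is shown below. Each plane can then be handed to any codec with an 8 bits per pixel input format.

    ```python
    import numpy as np

    def split_planes(img16: np.ndarray):
        msb = (img16 >> 8).astype(np.uint8)       # most significant bytes
        lsb = (img16 & 0xFF).astype(np.uint8)     # least significant bytes
        return msb, lsb

    def merge_planes(msb: np.ndarray, lsb: np.ndarray) -> np.ndarray:
        return (msb.astype(np.uint16) << 8) | lsb

    frame = np.random.randint(0, 2**16, (480, 640), dtype=np.uint16)
    msb, lsb = split_planes(frame)
    assert np.array_equal(merge_planes(msb, lsb), frame)
    ```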

  5. Waveform Fingerprinting for Efficient Seismic Signal Detection

    NASA Astrophysics Data System (ADS)

    Yoon, C. E.; O'Reilly, O. J.; Beroza, G. C.

    2013-12-01

    Cross-correlating an earthquake waveform template with continuous waveform data has proven a powerful approach for detecting events missing from earthquake catalogs. If templates do not exist, it is possible to divide the waveform data into short overlapping time windows, then identify window pairs with similar waveforms. Applying these approaches to earthquake monitoring in seismic networks has tremendous potential to improve the completeness of earthquake catalogs, but because effort scales quadratically with time, it rapidly becomes computationally infeasible. We develop a fingerprinting technique to identify similar waveforms, using only a few compact features of the original data. The concept is similar to human fingerprints, which utilize key diagnostic features to identify people uniquely. Analogous audio-fingerprinting approaches have accurately and efficiently found similar audio clips within large databases; example applications include identifying songs and finding copyrighted content within YouTube videos. In order to fingerprint waveforms, we compute a spectrogram of the time series, and segment it into multiple overlapping windows (spectral images). For each spectral image, we apply a wavelet transform, and retain only the sign of the maximum magnitude wavelet coefficients. This procedure retains just the large-scale structure of the data, providing both robustness to noise and significant dimensionality reduction. Each fingerprint is a high-dimensional, sparse, binary data object that can be stored in a database without significant storage costs. Similar fingerprints within the database are efficiently searched using locality-sensitive hashing. We test this technique on waveform data from the Northern California Seismic Network that contains events not detected in the catalog. We show that this algorithm successfully identifies similar waveforms and detects uncataloged low magnitude events in addition to cataloged events, while running to completion faster than a comparison waveform autocorrelation code.
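
    A simplified sketch of this pipeline is given below, assuming scipy is available. A single-level Haar transform stands in for the full wavelet decomposition, top_k is a tunable parameter, and the locality-sensitive-hashing search is omitted.

    ```python
    import numpy as np
    from scipy.signal import spectrogram

    def haar2(a):
        """One separable 2-D Haar step: averages and differences along each axis."""
        cols = np.hstack(((a[:, ::2] + a[:, 1::2]) / 2, (a[:, ::2] - a[:, 1::2]) / 2))
        return np.vstack(((cols[::2] + cols[1::2]) / 2, (cols[::2] - cols[1::2]) / 2))

    def fingerprints(x, fs, win=64, top_k=200):
        _, _, S = spectrogram(x, fs)                 # spectrogram of the trace
        S = S[: (S.shape[0] // 2) * 2]               # even number of rows for haar2
        prints = []
        for i in range(0, S.shape[1] - win + 1, win // 2):   # overlapping spectral images
            c = haar2(S[:, i:i + win]).ravel()
            keep = np.argsort(np.abs(c))[-top_k:]
            fp = np.zeros(c.size, dtype=np.int8)
            fp[keep] = np.sign(c[keep])              # keep only signs of top coefficients
            prints.append(fp)                        # sparse, binary fingerprint
        return prints
    ```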

  6. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientist in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  7. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, which has area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
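
    The two on-board degradation steps can be illustrated with plain block averaging. This is only a toy sketch, assuming a cube of shape (bands, rows, cols) and integer reduction factors; a real system would use the sensor's own point spread and spectral response functions.

    ```python
    import numpy as np

    def spatial_degrade(cube, f):
        """Low resolution hyperspectral product: average f x f spatial blocks."""
        b, r, c = cube.shape
        v = cube[:, :r - r % f, :c - c % f]
        return v.reshape(b, r // f, f, c // f, f).mean(axis=(2, 4))

    def spectral_degrade(cube, f):
        """High resolution multispectral product: average groups of f bands."""
        b = cube.shape[0] - cube.shape[0] % f
        return cube[:b].reshape(b // f, f, *cube.shape[1:]).mean(axis=1)

    cube = np.random.rand(96, 128, 128)
    print(spatial_degrade(cube, 4).shape, spectral_degrade(cube, 8).shape)
    # (96, 32, 32) (12, 128, 128) -- the two products sent to the ground
    ```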

  8. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.

  10. Detecting the fingerprints of complex land management practices in a tallgrass prairie site using phenocam and satellite images, and the eddy covariance technique

    USDA-ARS?s Scientific Manuscript database

    Burning, grazing, and baling (hay harvesting) are common management practices for tallgrass prairie. However, the impacts of these management practices on grassland phenology and carbon uptake are not well understood. Utilizing multiple observations to detect fingerprints of various management pract...

  11. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  12. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.

  13. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  14. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  15. Artifacts in slab average-intensity-projection images reformatted from JPEG 2000 compressed thin-section abdominal CT data sets.

    PubMed

    Kim, Bohyoung; Lee, Kyoung Ho; Kim, Kil Joong; Mantiuk, Rafal; Kim, Hye-ri; Kim, Young Hoon

    2008-06-01

    The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-section, and thin-section images in terms of PSNR (p < 0.0001), HDR-VDP (p < 0.0001), and the readers' grading (p < 0.01 at 6:1 or higher compressions). At 6:1 and 8:1, distinguishable pairs (grades 1-3) tended to increase in the order of AIP, thick-section, and thin-section images. Visually lossless threshold for the compression varied between images but decreased in the order of AIP, thick-section, and thin-section images (p < 0.0001). Compression artifacts in thin sections are significantly attenuated in AIP images. On the premise that thin sections are typically reviewed using an AIP technique, it is justifiable to compress them to a compression level currently accepted for thick sections.

  16. Electronic fingerprints of DNA bases on graphene.

    PubMed

    Ahmed, Towfiq; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T; Rehr, John J; Balatsky, Alexander V

    2012-02-08

    We calculate the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T), deposited on graphene. We observe significant base-dependent features in the LDOS in an energy range within a few electronvolts of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases in scanning tunneling spectroscopy (STS) experiments that perform image and site dependent spectroscopy on biomolecules. Thus the fingerprints of DNA-graphene hybrid structures may provide an alternative route to DNA sequencing using STS. © 2012 American Chemical Society

  17. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality produced by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and their characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.

  18. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to reduce significantly the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method based on differential pulse code modulation (DPCM) is presented.
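
    As a minimal illustration of the DPCM idea, assuming 8-bit samples and the simplest previous-sample predictor: the encoder stores prediction residuals, which cluster near zero and therefore compress well, and the decoder inverts the prediction by accumulation.

    ```python
    import numpy as np

    def dpcm_encode(samples):
        """Residuals of a previous-sample predictor; these feed the entropy coder."""
        res = samples.astype(np.int16)
        res[1:] -= samples[:-1].astype(np.int16)
        return res

    def dpcm_decode(res):
        # Cumulative sum of residuals reverses the prediction loop exactly.
        return np.cumsum(res, dtype=np.int16).astype(np.uint8)

    row = np.array([100, 102, 101, 105, 110], dtype=np.uint8)
    assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)
    ```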

  19. Spectral Phasor approach for fingerprinting of photo-activatable fluorescent proteins Dronpa, Kaede and KikGR

    PubMed Central

    Cutrale, Francesco; Salih, Anya; Gratton, Enrico

    2013-01-01

    The phasor global analysis algorithm is common for fluorescence lifetime applications, but has only been recently proposed for spectral analysis. Here the phasor representation and fingerprinting is exploited in its second harmonic to determine the number and spectra of photo-activated states as well as their conversion dynamics. We follow the sequence of photo-activation of proteins over time by rapidly collecting multiple spectral images. The phasor representation of the cumulative images provides easy identification of the spectral signatures of each photo-activatable protein. PMID:24040513

  20. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and lossy data compression methods, we have evaluated subjectively the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, the analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  1. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
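
    A hedged sketch of the core operation, assuming scipy is available. The matrix Q below is the familiar JPEG luminance table, used here only as an example weighting; the patent derives an image-adapted, visually weighted matrix in its place.

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    Q = np.array([[16,11,10,16,24,40,51,61],  [12,12,14,19,26,58,60,55],
                  [14,13,16,24,40,57,69,56],  [14,17,22,29,51,87,80,62],
                  [18,22,37,56,68,109,103,77],[24,35,55,64,81,104,113,92],
                  [49,64,78,87,103,121,120,101],[72,92,95,98,112,100,103,99]])

    def quantize_block(block):
        """2-D DCT of an 8x8 block, quantized entry-wise by the matrix Q."""
        c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
        return np.round(c / Q).astype(np.int32)

    def dequantize_block(q):
        c = (q * Q).astype(float)
        return idct(idct(c, axis=1, norm='ortho'), axis=0, norm='ortho') + 128.0
    ```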

  2. Analyzing Personalized Policies for Online Biometric Verification

    PubMed Central

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.

    2014-01-01

    Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752

  3. Analyzing personalized policies for online biometric verification.

    PubMed

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M

    2014-01-01

    Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.
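
    The likelihood-ratio test at the heart of both abstracts reduces to a few lines once the genuine and impostor score densities have been fitted offline. The Gaussian densities and threshold below are purely illustrative assumptions, not the authors' fitted joint model.

    ```python
    import numpy as np
    from scipy.stats import norm

    def accept_as_genuine(scores, mu_gen, sd_gen, mu_imp, sd_imp, tau=1.0):
        """Sum of per-score log-likelihood ratios, compared with log(tau)."""
        llr = np.sum(norm.logpdf(scores, mu_gen, sd_gen)
                     - norm.logpdf(scores, mu_imp, sd_imp))
        return llr > np.log(tau)          # accept when the likelihood ratio exceeds tau

    print(accept_as_genuine(np.array([0.80, 0.75]), 0.8, 0.1, 0.3, 0.15))  # True
    ```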

  4. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of tantamount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  5. Evidence acquisition tools for cyber sex crimes investigations

    NASA Astrophysics Data System (ADS)

    Novotny, Jon M.; Meehan, A.; Schulte, D.; Manes, Gavin W.; Shenoi, Sujeet

    2002-08-01

    Sexually explicit Internet chat rooms are increasingly used by pedophiles to reach potential victims. Logging and linking suspects to chat room conversations and e-mails exchanged with undercover detectives are crucial to prosecuting travelers, i.e., pedophiles who travel across state lines to engage in sexual acts with minors. This paper describes two tools, a chat room monitor and a remote fingerprinter, for acquiring and preserving evidence. The chat room monitor logs online communications as well as screen images and keystrokes of the undercover detective; these records are stored to allow the chronological reconstruction and replay of the investigation. The remote fingerprinter uses sophisticated scanning techniques to capture and preserve a unique fingerprint of the suspect's computer over the Internet. Once the suspect's computer is seized, it is scanned again; matching this new fingerprint with the remotely acquired fingerprint establishes that the suspect's computer was used to communicate with the detective.

  6. Collusion-Resistant Audio Fingerprinting System in the Modulated Complex Lapped Transform Domain

    PubMed Central

    Garcia-Hernandez, Jose Juan; Feregrino-Uribe, Claudia; Cumplido, Rene

    2013-01-01

    The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite the billion-dollar losses in the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals and the corresponding parameters and operation conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirated audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer machines, it is shown that the proposed system is suitable for real-world scenarios. PMID:23762455

  7. Forensic Identification of Gender from Fingerprints.

    PubMed

    Huynh, Crystal; Brunelle, Erica; Halámková, Lenka; Agudelo, Juliana; Halámek, Jan

    2015-11-17

    In the past century, forensic investigators have universally accepted fingerprinting as a reliable identification method, which relies mainly on pictorial comparisons. Despite developments to software systems in order to increase the probability and speed of identification, there has been limited success in the efforts that have been made to move away from the discipline's absolute dependence on the existence of a prerecorded matching fingerprint. Here, we have revealed that an information-rich latent fingerprint has not been used to its full potential. In our approach, the content present in the sweat left behind, namely the amino acids, can be used to determine physical attributes such as the gender of the originator. As a result, we were able to focus on the biochemical content in the fingerprint using a biocatalytic assay, coupled with a specially designed extraction protocol, for determining gender rather than focusing solely on the physical image.

  8. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  9. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
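
    A rough sketch of the split described above, assuming Pillow is available and that a simple intensity threshold is enough to separate tissue from empty slide background; a real pipeline would use a proper segmentation step and a tiled, pyramidal container format.

    ```python
    import numpy as np
    from PIL import Image

    def compress_roi(path, thresh=240):
        img = np.asarray(Image.open(path).convert('L'))
        tissue = img < thresh                       # tissue is darker than bare glass
        # Lossless PNG for the diagnostically relevant pixels.
        Image.fromarray(np.where(tissue, img, 0).astype(np.uint8)).save('roi.png')
        # Aggressive lossy JPEG for the empty background.
        Image.fromarray(np.where(tissue, 0, img).astype(np.uint8)).save('bg.jpg', quality=25)
        np.save('mask.npy', tissue)                 # mask needed to recombine the two
    ```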

  10. Benford's Law based detection of latent fingerprint forgeries on the example of artificial sweat printed fingerprints captured by confocal laser scanning microscopes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Dittmann, Jana

    2015-03-01

    The possibility of forging latent fingerprints at crime scenes has been known for a long time. Ever since, it has been stated that an expert is capable of recognizing the presence of multiple identical latent prints as an indicator of forgery. With the possibility of printing fingerprint patterns onto arbitrary surfaces using affordable ink-jet printers equipped with artificial sweat, it is rather simple to create a multitude of fingerprints with slight variations to avoid raising any suspicion. Such artificially printed fingerprints are often hard to detect during the analysis procedure. Moreover, the visibility of particular detection properties might be decreased depending on the utilized enhancement and acquisition technique. In previous work, such detection properties are primarily used in combination with non-destructive high-resolution sensory and pattern recognition techniques to detect fingerprint forgeries. In this paper we apply Benford's Law in the spatial domain to differentiate between real latent fingerprints and printed fingerprints. This technique has been successfully applied in media forensics to detect image manipulations. We use the differences between Benford's Law and the distribution of the most significant digit of the intensity and topography data from a confocal laser scanning microscope as features for a pattern recognition based detection of printed fingerprints. Our evaluation, based on 3000 printed and 3000 latent print samples, shows a very good detection performance of up to 98.85% using WEKA's Bagging classifier in a 10-fold stratified cross-validation.
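
    The Benford feature itself is easy to reproduce: compare the observed most-significant-digit distribution of the intensity (or topography) values with Benford's prediction. The sketch below is a generic version of that idea, not the authors' exact feature set.

    ```python
    import numpy as np

    def benford_feature(values):
        """Deviation of the first-digit distribution from Benford's Law."""
        v = np.abs(np.asarray(values, dtype=float))
        v = v[v > 0]
        msd = (v / 10 ** np.floor(np.log10(v))).astype(int)   # first digit, 1..9
        observed = np.bincount(msd, minlength=10)[1:10] / msd.size
        benford = np.log10(1 + 1 / np.arange(1, 10))          # P(d) = log10(1 + 1/d)
        return observed - benford       # nine-dimensional feature for a classifier
    ```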

  11. Compression of regions in the global advanced very high resolution radiometer 1-km data set

    NASA Technical Reports Server (NTRS)

    Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.

    1994-01-01

    The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.

  12. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device

    NASA Astrophysics Data System (ADS)

    Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande

    2018-04-01

    Partial fingerprint identification technology, which is mainly used in devices with small sensor areas such as cellphones, USB flash drives and computers, has attracted more attention in recent years with its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique which utilizes ridges as features to deal with partial fingerprint images and combines the modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.

  13. A new method of artificial latent fingerprint creation using artificial sweat and inkjet printer.

    PubMed

    Hong, Sungwook; Hong, Ingi; Han, Aleum; Seo, Jin Yi; Namgung, Juyoung

    2015-12-01

    In order to study fingerprinting in the field of forensic science, it is very important to have two or more latent fingerprints with identical chemical composition and intensity. However, it is impossible to obtain identical fingerprints, in reality, because fingerprinting comes out slightly differently every time. A previous research study had proposed an artificial fingerprint creation method in which inkjet ink was replaced with amino acids and sodium chloride solution: the components of human sweat. But, this method had some drawbacks: divalent cations were not added while formulating the artificial sweat solution, and diluted solutions were used for creating weakly deposited latent fingerprint. In this study, a method was developed for overcoming the drawbacks of the methods used in the previous study. Several divalent cations were added in this study because the amino acid-ninhydrin (or some of its analogues) complex is known to react with divalent cations to produce a photoluminescent product; and, similarly, the amino acid-1,2-indanedione complex is known to be catalyzed by a small amount of zinc ions to produce a highly photoluminescent product. Also, in this study, a new technique was developed which enables to adjust the intensity when printing the latent fingerprint patterns. In this method, image processing software is used to control the intensity of the master fingerprint patterns, which adjusts the printing intensity of the latent fingerprints. This new method opened the way to produce a more realistic artificial fingerprint in various strengths with one artificial sweat working solution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract regions of interest (ROIs) and assign them different compression ratios. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, experimental results and quantitative and qualitative analysis in the paper show that the method performs well when compared with other traditional color image compression approaches.

  15. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observer image quality qualitative assessment and to compare with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using a readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P < or =.05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 >0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  16. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  17. Comparison of two SVD-based color image compression schemes.

    PubMed

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR.

  18. Comparison of two SVD-based color image compression schemes

    PubMed Central

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli

    2017-01-01

    Color image compression is a commonly used process to represent image data as few bits as possible, which removes redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternion are very common in recent years. In this paper, we propose a color image compression scheme, based on the real SVD, named real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD for C. Then we select several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with quaternion compression scheme by performing quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation amount, assignment number, operation speed, PSNR and CR. The experimental results show that with the same numbers of selected singular values, the real compression scheme offers higher CR, much less operation time, but a little bit smaller PSNR than the quaternion compression scheme. When these two schemes have the same CR, the real compression scheme shows more prominent advantages both on the operation time and PSNR. PMID:28257451
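
    One plausible reading of the real compression scheme is sketched below: stack the three channels into a single real matrix, truncate its SVD at rank k, and reconstruct. The stacking choice is an assumption, since the abstract does not specify exactly how the matrix C is formed.

    ```python
    import numpy as np

    def svd_compress(rgb, k):
        """Rank-k approximation of the stacked-channel matrix C."""
        h, w, _ = rgb.shape
        C = rgb.transpose(2, 0, 1).reshape(3 * h, w).astype(float)   # stack R, G, B
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        C_k = (U[:, :k] * s[:k]) @ Vt[:k]            # keep k largest singular values
        out = C_k.reshape(3, h, w).transpose(1, 2, 0)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```

    Storing U[:, :k], s[:k] and Vt[:k] instead of the image gives a compression ratio of roughly 3hw / (k(3h + w + 1)).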

  19. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km, AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
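
    A toy version of the quadtree idea used for the masked regions, assuming a binary mask with power-of-two dimensions: uniform blocks terminate the recursion with a single symbol, which is what makes the 80 per cent non-land area so cheap to store.

    ```python
    import numpy as np

    def quadtree(mask):
        """Encode a binary mask: a leaf for uniform blocks, else four sub-blocks."""
        if mask.min() == mask.max():
            return int(mask.flat[0])                 # uniform block: one symbol
        r, c = mask.shape
        return [quadtree(mask[:r//2, :c//2]), quadtree(mask[:r//2, c//2:]),
                quadtree(mask[r//2:, :c//2]), quadtree(mask[r//2:, c//2:])]

    mask = np.zeros((8, 8), dtype=np.uint8)
    mask[:4, :4] = 1
    print(quadtree(mask))                            # [1, 0, 0, 0]
    ```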

  20. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user-specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  1. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  2. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.

  3. High efficient optical remote sensing images acquisition for nano-satellite-framework

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi

    2017-09-01

    It is more difficult and challenging to implement Nano-satellite (NanoSat) based optical Earth observation missions than conventional satellites because of the limitations of volume, weight and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space, since it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat based optical Earth observation applications. The entire process of image acquisition and compression can be integrated in the photodetector array chip, so that the output data of the chip are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume, and weight consumed by common onboard image compression units can be largely saved. The advantages of the proposed framework are: the image acquisition and image compression are combined into a single step; it can be easily built in a CMOS architecture; quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of CS based methods. The framework holds promise to be widely used in the future.

  4. Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation.

    PubMed

    Wang, Z Jane; Wu, Min; Zhao, Hong Vicky; Trappe, Wade; Liu, K J Ray

    2005-06-01

    Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders K_max are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images.
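
    A minimal numerical sketch of the setting analyzed above: near-orthogonal Gaussian fingerprints are embedded in a host signal, several colluders average their copies, and a correlation detector compares each user's fingerprint against the forgery. The sizes, embedding strength, and threshold are illustrative assumptions, not the paper's parameters.

        # Gaussian fingerprints with orthogonal modulation under an
        # averaging collusion attack; correlation-based detection.
        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n = 8, 4096
        host = rng.normal(0, 10, n)              # stand-in for host features

        # (Near-)orthogonal unit-norm Gaussian fingerprints, one per user.
        W = rng.normal(0, 1, (n_users, n))
        W /= np.linalg.norm(W, axis=1, keepdims=True)

        strength = 3.0
        copies = host + strength * W             # each row: one user's copy

        colluders = [1, 4, 6]
        forgery = copies[colluders].mean(axis=0) # averaging collusion attack

        # Colluders retain ~strength/len(colluders) fingerprint energy.
        stats = W @ (forgery - host)
        threshold = 0.5 * strength / len(colluders)
        accused = np.flatnonzero(stats > threshold)
        print(accused)   # ideally recovers {1, 4, 6}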

  5. Automated mapping of explosives particles in composition C-4 fingerprints.

    PubMed

    Verkouteren, Jennifer R; Coleman, Jessica L; Cho, Inho

    2010-03-01

    A method is described to perform automated mapping of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) particles in C-4 fingerprints. The method employs polarized light microscopy and image analysis to map the entire fingerprint and the distribution of RDX particles. This method can be used to evaluate a large number of fingerprints to aid in the development of threat libraries that can be used to determine performance requirements of explosive trace detectors. A series of 50 C-4 fingerprints were characterized, and results show that the number of particles varies significantly from print to print, and within a print. The particle size distributions can be used to estimate the mass of RDX in the fingerprint. These estimates were found to be within ±26% relative of the results obtained from dissolution gas chromatography/micro-electron capture detection for four of six prints, which is quite encouraging for a particle counting approach. By evaluating the average mass and frequency of particles with respect to size for this series of fingerprints, we conclude that particles 10-20 μm in diameter could be targeted to improve detection of traces of C-4 explosives.

  6. Identification of recently handled materials by analysis of latent human fingerprints using infrared spectromicroscopy.

    PubMed

    Grant, Ashleigh; Wilkinson, T J; Holman, Derek R; Martin, Michael C

    2005-09-01

    Analysis of fingerprints has predominantly focused on matching the pattern of ridges to a specific person as a form of identification. The present work focuses on identifying extrinsic materials that are left within a person's fingerprint after recent handling of such materials. Specifically, we employed infrared spectromicroscopy to locate and positively identify microscopic particles from a mixture of common materials in the latent human fingerprints of volunteer subjects. We were able to find and correctly identify all test substances based on their unique infrared spectral signatures. Spectral imaging is demonstrated as a method for automating recognition of specific substances in a fingerprint. We also demonstrate the use of attenuated total reflectance (ATR) and synchrotron-based infrared spectromicroscopy for obtaining high-quality spectra from particles that were too thick or too small, respectively, for reflection/absorption measurements. We believe the application of this rapid, nondestructive analytical technique to the forensic study of latent human fingerprints has the potential to add a new layer of information available to investigators. Using fingerprints to not only identify who was present at a crime scene, but also to link who was handling key materials, will be a powerful investigative tool.

  7. Strategies for potential age dating of fingerprints through the diffusion of sebum molecules on a nonporous surface analyzed using time-of-flight secondary ion mass spectrometry.

    PubMed

    Muramoto, Shin; Sisco, Edward

    2015-08-18

    Age dating of fingerprints could have a significant impact in forensic science, as it has the potential to facilitate the judicial process by assessing the relevance of a fingerprint found at a crime scene. However, no method currently exists that can reliably predict the age of a latent fingerprint. In this manuscript, time-of-flight secondary ion mass spectrometry (TOF-SIMS) imaging was used to measure the diffusivity of saturated fatty acid molecules from a fingerprint on a silicon wafer. It was found that their diffusion from relatively fresh fingerprints (t ≤ 96 h) could be modeled using an error function, with diffusivities (mm²/h) that followed a power function when plotted against molecular weight. The equation x = 0.02·t^0.5 was obtained for palmitic acid, which can be used to find its position in millimeters (where the concentration is 50% of its initial value, or c₀/2) as a function of time in hours. The results show that on a clean silicon substrate, the age of a fingerprint (t ≤ 96 h) could reliably be obtained through the extent of diffusion of palmitic acid.
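
    The reported relation is simple enough to use directly: the sketch below evaluates x = 0.02·t^0.5 for the palmitic-acid diffusion front and inverts it to estimate print age from a measured diffusion distance. This assumes the paper's conditions (clean silicon substrate, t ≤ 96 h) and is only a back-of-the-envelope illustration.

        # Forward and inverse use of x = 0.02 * t**0.5 (x in mm, t in h),
        # valid per the abstract only for clean silicon and t <= 96 h.
        def front_position_mm(t_hours):
            return 0.02 * t_hours ** 0.5

        def age_hours(x_mm):
            return (x_mm / 0.02) ** 2

        print(front_position_mm(96))   # ~0.196 mm after four days
        print(age_hours(0.1))          # ~25 h for a 0.1 mm diffusion front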

  8. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image Forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with Digital Watermarking. Benford's Law was first introduced to analyse the probability distribution of the 1st digits (1-9) of natural data, and has since been applied to Accounting Forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rates of an image can also be derived and verified with the help of a divergence factor, which measures the deviation of the observed probabilities from Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the results of 1st digit probability and divergence among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected difference among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
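
    The core test is straightforward to reproduce in sketch form: collect the first digits of DWT detail coefficients, compare their empirical distribution with Benford's prediction log10(1 + 1/d), and summarize the gap with a divergence score. The wavelet, decomposition level, and chi-square-style divergence below are illustrative choices; the paper's exact generalised-Benford formulation may differ.

        # First-digit distribution of DWT coefficients vs Benford's Law.
        import numpy as np
        import pywt

        def first_digits(values):
            v = np.abs(values)
            v = v[v > 1e-9]
            # Shift each magnitude into [1, 10) and take the leading digit.
            return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

        def digit_distribution(img):
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db1")
            d = first_digits(np.concatenate([c.ravel() for c in (cH, cV, cD)]))
            return np.bincount(d, minlength=10)[1:10] / d.size

        benford = np.log10(1 + 1 / np.arange(1, 10))
        img = np.random.rand(256, 256) ** 2          # stand-in test image
        p = digit_distribution(img)
        divergence = np.sum((p - benford) ** 2 / benford)  # chi-square-style
        print(p, divergence)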

  9. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    PubMed

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. The LR approximation also improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to an LR approximation alone and to an alternating direction method of multipliers approach without an LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to the other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
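
    The low-rank idea at the heart of the method can be sketched in a few lines: truncate the SVD of a dictionary of simulated signal evolutions and represent every voxel's time course by its coefficients in the retained temporal subspace, so that only a handful of singular images need to be reconstructed. The sizes, synthetic dictionary, and rank below are illustrative; the ADMM solver and the Fourier/coil operators of the actual reconstruction are not shown.

        # Truncated-SVD temporal subspace for MRF signal evolutions.
        import numpy as np

        n_timepoints, n_atoms, rank = 500, 2000, 10
        t = np.linspace(0, 1, n_timepoints)[:, None]
        rates = np.random.uniform(1.0, 20.0, (1, n_atoms))
        D = np.exp(-t * rates)            # smooth decays: nearly low rank

        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        Ur = U[:, :rank]                  # retained temporal subspace basis

        # A signal evolution y is represented by r = Ur^T y; the solver
        # then reconstructs `rank` images instead of n_timepoints images.
        y = D[:, 123]
        r = Ur.T @ y
        y_approx = Ur @ r
        print(np.linalg.norm(y - y_approx) / np.linalg.norm(y))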

  10. Feature hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Currently, research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and poorly scalable; hence we need to pay close attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval which not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, combining the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make image representation low in complexity, efficient, and scalable to large-scale databases. Experimental results show the good performance of our approach.

  11. Spectroscopically Enhanced Method and System for Multi-Factor Biometric Authentication

    NASA Astrophysics Data System (ADS)

    Pishva, Davar

    This paper proposes a spectroscopic method and system for preventing spoofing of biometric authentication. One of its focuses is to enhance biometric authentication with a spectroscopic method in a multifactor manner, such that a person's unique ‘spectral signatures’ or ‘spectral factors’ are recorded and compared in addition to a non-spectroscopic biometric signature, to reduce the likelihood of an imposter being authenticated. By using the ‘spectral factors’ extracted from reflectance spectra of real fingers and employing cluster analysis, it shows how an authentic fingerprint image presented by a real finger can be distinguished from an authentic fingerprint image embossed on an artificial finger, or molded on a fingertip cover worn by an imposter. This paper also shows how to augment two widely used biometric systems (fingerprint and iris recognition devices) with spectral biometrics capabilities in a practical manner, without creating much overhead or inconveniencing their users.

  12. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
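
    The two reproducible enhancements named above have common open-source counterparts; the sketch below applies scikit-image's global and adaptive histogram equalization as stand-ins for the IDL routines. The clip limit is an illustrative choice, and the Lévy-stable fractional-diffusion smoothing stage is not reproduced here.

        # Global and adaptive histogram equalization as reproducible
        # latent-print enhancements (scikit-image stand-ins for IDL).
        import numpy as np
        from skimage import exposure

        def enhance(latent_print):
            # latent_print: 2-D float array in [0, 1]
            he = exposure.equalize_hist(latent_print)          # global HE
            ahe = exposure.equalize_adapthist(latent_print,    # adaptive HE
                                              clip_limit=0.03) # illustrative
            return he, ahe

        img = np.random.rand(256, 256)   # stand-in for a scanned latent print
        he, ahe = enhance(img)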

  13. A Framework for Reproducible Latent Fingerprint Enhancements

    PubMed Central

    Carasso, Alfred S.

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  14. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and the rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, thereby diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM), and high compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis; namely, the ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis results enable a comparison of the compression ratios achieved using JPEG and JPEG2000 for 3-D US images, and the results of this study indicate the feasible bit rates of JPEG and JPEG2000 for 3-D breast US images.

  15. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  16. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed, in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communications between the remote terminal and the central server of the telemedicine system.

  17. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids and Jupiter's rings. The approach involved fifteen volunteer subjects representing twelve institutions involved with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization stepsize. They were requested to select which image had the highest overall quality, to support them in carrying out their visual evaluations of image content, and then rated both images on a scale from one to five according to their judged degree of usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject, based upon the results of a previously administered survey of their image preferences. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and from acceptable compression ratios; and (3) atmospheric images of Jupiter seem to allow compression ratios 4 to 5 times those of some clear-surface satellite images.

  18. Impact of Finger Type in Fingerprint Authentication

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Bours, Patrick; Yang, Bian; Busch, Christoph

    Nowadays the fingerprint verification system is the most widespread and accepted biometric technology; it exploits various features of the human fingers for this purpose. In general, every normal person has 10 fingers of different sizes. Although it is claimed that recognition performance with the little fingers can be less accurate than with other finger types, to the best of our knowledge this has not been investigated yet. This paper presents our study of the influence of finger type on fingerprint recognition performance. For the analysis we employ two fingerprint verification software packages (one public and one commercial). We conduct tests on the GUC100 multi-sensor fingerprint database, which contains fingerprint images of all 10 fingers from 100 subjects. Our analysis indeed confirms that performance with the small fingers is less accurate than performance with the other fingers of the hand. It also appears that the best performance is obtained with the thumb or index fingers. For example, performance deterioration from the best fingers (i.e. index or thumb) to the worst fingers (i.e. the small ones) can be in the range of 184%-1352%.

  19. Magnetic fingerprint of individual Fe4 molecular magnets under compression by a scanning tunnelling microscope

    NASA Astrophysics Data System (ADS)

    Burgess, Jacob A. J.; Malavolti, Luigi; Lanzilotto, Valeria; Mannini, Matteo; Yan, Shichao; Ninova, Silviya; Totti, Federico; Rolf-Pissarczyk, Steffen; Cornia, Andrea; Sessoli, Roberta; Loth, Sebastian

    2015-09-01

    Single-molecule magnets (SMMs) present a promising avenue to develop spintronic technologies. Addressing individual molecules with electrical leads in SMM-based spintronic devices remains a ubiquitous challenge: interactions with metallic electrodes can drastically modify the SMM's properties by charge transfer or through changes in the molecular structure. Here, we probe electrical transport through individual Fe4 SMMs using a scanning tunnelling microscope at 0.5 K. Correlation of topographic and spectroscopic information permits identification of the spin excitation fingerprint of intact Fe4 molecules. Building from this, we find that the exchange coupling strength within the molecule's magnetic core is significantly enhanced. First-principles calculations support the conclusion that this is the result of confinement of the molecule in the two-contact junction formed by the microscope tip and the sample surface.

  20. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  1. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
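
    The H-transform mentioned above is, at each level, a 2 × 2 sum/difference transform applied recursively to the sum band. The sketch below implements one exactly reversible integer level; it is an unnormalized, simplified variant for illustration, not the authors' full method, which also recurses, quantizes and entropy-codes the bands.

        # One level of an integer 2x2 sum/difference (H-transform-style)
        # decomposition; exactly reversible in integer arithmetic.
        import numpy as np

        def h_level(img):
            a = img[0::2, 0::2].astype(np.int64)
            b = img[0::2, 1::2].astype(np.int64)
            c = img[1::2, 0::2].astype(np.int64)
            d = img[1::2, 1::2].astype(np.int64)
            s  = a + b + c + d          # smooth (sum) band
            hx = a + b - c - d          # horizontal detail
            hy = a - b + c - d          # vertical detail
            hd = a - b - c + d          # diagonal detail
            return s, hx, hy, hd

        def h_level_inverse(s, hx, hy, hd):
            a = (s + hx + hy + hd) // 4
            b = (s + hx - hy - hd) // 4
            c = (s - hx + hy - hd) // 4
            d = (s - hx - hy + hd) // 4
            out = np.empty((2 * s.shape[0], 2 * s.shape[1]), dtype=np.int64)
            out[0::2, 0::2], out[0::2, 1::2] = a, b
            out[1::2, 0::2], out[1::2, 1::2] = c, d
            return out

        img = np.random.randint(0, 65536, (512, 512))
        assert np.array_equal(img, h_level_inverse(*h_level(img)))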

  2. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning-by-screening operation to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.

  3. Wavelet subband coding of computer simulation output using the A++ array class library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data, and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers, in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.

  4. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, and they require sufficient image quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy-compressed JPEG images. One subject could tell the difference between original and 1:20 lossy-compressed JPEG images, although the latter were still acceptable. Thus, ratios of 1:10 to 1:20 reduce data volume and cost while maintaining sufficient quality for clinical review.

  5. Dual Contrast - Magnetic Resonance Fingerprinting (DC-MRF): A Platform for Simultaneous Quantification of Multiple MRI Contrast Agents.

    PubMed

    Anderson, Christian E; Donnola, Shannon B; Jiang, Yun; Batesole, Joshua; Darrah, Rebecca; Drumm, Mitchell L; Brady-Kalnay, Susann M; Steinmetz, Nicole F; Yu, Xin; Griswold, Mark A; Flask, Chris A

    2017-08-16

    Injectable Magnetic Resonance Imaging (MRI) contrast agents have been widely used to provide critical assessments of disease in both clinical and basic science imaging research studies. The scope of available MRI contrast agents has expanded over the years with the emergence of molecular imaging contrast agents specifically targeted to biological markers. Unfortunately, the synergistic application of more than a single molecular contrast agent has been limited by MRI's ability to dynamically measure only a single agent at a time. In this study, a new Dual Contrast - Magnetic Resonance Fingerprinting (DC-MRF) methodology is described that can detect and independently quantify the local concentrations of multiple MRI contrast agents following simultaneous administration. This "multi-color" MRI methodology provides the opportunity to monitor multiple molecular species simultaneously and provides a practical, quantitative imaging framework for the eventual clinical translation of molecular imaging contrast agents.

  6. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
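
    The experimental loop described above is easy to emulate: re-classify an image after JPEG round-trips at several quality settings and measure pixelwise agreement with the classification of the original. The minimum-distance classifier, class centroids, and random test image below are illustrative stand-ins for the paper's four images and three classifiers.

        # Classification stability under increasing JPEG compression.
        import io
        import numpy as np
        from PIL import Image

        def min_distance_classify(img, centroids):
            # Assign each pixel to the nearest class centroid (grey level).
            return np.argmin(np.abs(img[..., None] - centroids), axis=-1)

        def jpeg_roundtrip(img, quality):
            buf = io.BytesIO()
            Image.fromarray(img).save(buf, format="JPEG", quality=quality)
            return np.asarray(Image.open(buf))

        img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
        centroids = np.array([40.0, 110.0, 200.0])   # illustrative class means
        baseline = min_distance_classify(img.astype(float), centroids)

        for q in (95, 75, 50, 25, 10):
            labels = min_distance_classify(
                jpeg_roundtrip(img, q).astype(float), centroids)
            agreement = (labels == baseline).mean()
            print(q, round(100 * agreement, 1))      # % pixels unchanged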

  7. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We previously reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. In that study, the results indicated that adequate compression ratios vary according to the lesion, and that images could be compressed to between 31 and 99 times smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered the maximal tolerable image size for downloading on the WWW.

  8. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking artifacts at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and the quality of the decompressed image, the latter determined from rate-distortion data obtained on a database of realistic test images. The discussion also includes issues such as the robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  9. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for possible diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and a Lossless Hadamard Transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.

  10. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in radiology have provided the motivation for the development of picture archiving and communication systems (PACS), in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, has created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While in the former there is no loss of information, the latter presents concerns, since information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive on the image storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. A survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard addressing image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  11. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on the hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression: the latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking, assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.
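
    One concrete way to make embedding survive an anticipated JPEG quality factor, in the spirit of the joint design described above, is to derive the embedding step from the JPEG quantization step at that quality and apply quantization index modulation (QIM) to a mid-frequency DCT coefficient. The sketch below does exactly that; QIM and the base-table entry are illustrative choices, not necessarily the authors' scheme.

        # Quantization-aware DCT-domain embedding of one watermark bit.
        import numpy as np
        from scipy.fftpack import dct, idct

        def jpeg_qstep(base_q, quality):
            # Standard JPEG scaling of a base-table entry by quality factor.
            scale = 5000 / quality if quality < 50 else 200 - 2 * quality
            return np.maximum(np.floor((base_q * scale + 50) / 100), 1)

        def embed_bit(block, bit, step, pos=(2, 3)):
            coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
            c = coeffs[pos]
            # QIM: snap c to an even or odd multiple of `step` depending on
            # the bit; the bit survives quantization errors up to step/2.
            k = np.round(c / step)
            if int(k) % 2 != bit:
                k += 1 if c >= k * step else -1
            coeffs[pos] = k * step
            return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

        base_entry = 24.0                      # illustrative base-table value
        step = jpeg_qstep(base_entry, quality=75)
        block = np.random.randint(0, 256, (8, 8)).astype(float)
        marked = embed_bit(block, bit=1, step=step)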

  12. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented that compresses the images from their original 2,000 × 2,000 (2K) matrix size and then decompresses the image data for display at optimal resolution, matching the spatial frequency characteristics of image objects using a 4,000 × 4,000 (4K) matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement of the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces as well as normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve the originals.

  13. Environmental impact to multimedia systems on the example of fingerprint aging behavior at crime scenes

    NASA Astrophysics Data System (ADS)

    Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja

    2012-06-01

    In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe-marks, tire-impressions, palm-prints or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky-tape lifting, ninhydrine bathing or cyanoacrylate fuming, and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to a much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, by the optical and non-destructive lifting of forensic evidence, traces are not destroyed and can therefore be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and perhaps determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge for forensic experts for decades, we evaluate the influences of different environmental conditions as well as different types of sweat, and their implications for the capturing sensor, preprocessing methods and feature extraction. We use a Chromatic White Light (CWL) sensor as an example of such a new optical and contactless measurement device, and investigate the influence of 16 different environmental conditions, 8 different sweat types and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.

  14. Ultrasonic Fingerprint Sensor With Transmit Beamforming Based on a PMUT Array Bonded to CMOS Circuitry.

    PubMed

    Jiang, Xiaoyue; Tang, Hao-Yen; Lu, Yipeng; Ng, Eldwin J; Tsai, Julius M; Boser, Bernhard E; Horsley, David A

    2017-09-01

    In this paper, we present a single-chip 65 × 42 element ultrasonic pulse-echo fingerprint sensor with transmit (TX) beamforming based on piezoelectric micromachined ultrasonic transducers directly bonded to a CMOS readout application-specific integrated circuit (ASIC). The readout ASIC was realized in a standard 180-nm CMOS process with a 24-V high-voltage transistor option. Pulse-echo measurements are performed column-by-column in sequence using either one column or five columns to TX the ultrasonic pulse at 20 MHz. TX beamforming is used to focus the ultrasonic beam at the imaging plane where the finger is located, increasing the ultrasonic pressure and narrowing the 3-dB beamwidth to [Formula: see text], a factor of 6.4 narrower than nonbeamformed measurements. The surface of the sensor is coated with a poly-dimethylsiloxane (PDMS) layer to provide good acoustic impedance matching to skin. Scanning laser Doppler vibrometry of the PDMS surface was used to map the ultrasonic pressure field at the imaging surface, demonstrating the expected increase in pressure and reduction in beamwidth. Imaging experiments were conducted using both PDMS phantoms and real fingerprints. The average image contrast is increased by a factor of 1.5 when beamforming is used.
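
    The transmit-focusing step described above amounts to delaying each element so that all wavefronts arrive at the focal point simultaneously; a minimal delay calculation for a small linear aperture is sketched below. The pitch, focal depth, and sound speed are illustrative values, not the device's actual parameters.

        # Transmit-focus delays for a small linear aperture: elements
        # farther from the focus fire earlier so wavefronts coincide.
        import numpy as np

        def tx_focus_delays(element_x_um, focus_depth_um, c_um_per_s=1.5e9):
            # Path length from each element to the on-axis focal point.
            path = np.sqrt(element_x_um ** 2 + focus_depth_um ** 2)
            return (path.max() - path) / c_um_per_s  # far elements: delay 0

        pitch_um = 50.0                     # illustrative, ~pixel pitch above
        x = (np.arange(5) - 2) * pitch_um   # five TX columns centred on axis
        print(tx_focus_delays(x, focus_depth_um=500.0) * 1e9)  # delays in ns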

  15. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats; therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after the covering algorithm is utilized to train and recognize the visual words and build the initial classification model, incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate while costing less recognition time in the compressed domain.

  16. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the resulting ROC curves. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. With each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  17. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which can lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  18. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
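
    The preprocessing idea is easy to sketch: arrange S-EMG segments as the columns of an image, then reorder the columns so that neighbouring columns are highly correlated, increasing the 2-D redundancy that JPEG2000 or H.264/AVC intra coding can exploit. The greedy ordering below is an illustrative choice, not necessarily the authors' exact sorting procedure.

        # Correlation-sorting preprocessing for 2-D S-EMG compression.
        import numpy as np

        def correlation_sort(columns):
            # columns: (n_samples, n_segments) array, one S-EMG segment each.
            n = columns.shape[1]
            corr = np.corrcoef(columns.T)
            order, remaining = [0], set(range(1, n))
            while remaining:
                last = order[-1]
                # Append the remaining column most correlated with the last.
                nxt = max(remaining, key=lambda j: corr[last, j])
                order.append(nxt)
                remaining.remove(nxt)
            return columns[:, order], order

        signal = np.random.randn(512 * 64)        # stand-in S-EMG recording
        img = signal.reshape(512, 64, order="F")  # 64 columns of 512 samples
        sorted_img, order = correlation_sort(img) # then feed to JPEG2000/H.264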

  19. Transparent Fingerprint Sensor System for Large Flat Panel Display.

    PubMed

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee

    2018-01-19

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed by the cover glass material between a human finger and the electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate a human finger's ridges and valleys through the fingerprint sensor array.

  20. Transparent Fingerprint Sensor System for Large Flat Panel Display

    PubMed Central

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk

    2018-01-01

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods for estimating the self-capacitance are reviewed. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate a human finger's ridges and valleys through the fingerprint sensor array. PMID:29351218

  1. Injection and injection-compression moulding replication capability for the production of polymer lab-on-a-chip with nano structures

    NASA Astrophysics Data System (ADS)

    Calaon, M.; Tosello, G.; Garnaes, J.; Hansen, H. N.

    2017-10-01

    The manufacturing precision and accuracy in the production of polymer lab-on-a-chip components with 100-130 nm deep nanochannels are evaluated using a metrological approach. Replication fidelity of corresponding process-fingerprint test nanostructures on different substrates (nickel tool and polymer part) is quantified through traceable atomic force microscope measurements. Dimensions of injection moulded (IM) and injection-compression moulded (ICM) thermoplastic cyclic olefin copolymer nanofeatures are characterized as functions of the process parameters and of four different feature positions on a 30 × 80 mm2 area. The replication capabilities of the IM and ICM technologies are quantified, and the products' tolerances at the nanometre scale are verified.

  2. A block-based JPEG-LS compression technique with lossless region of interest

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that demand high fidelity, owing to its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion, because of the context dependence of the algorithm, and it has a low compression rate compared to lossy compression. In this paper, we first divide the image into two parts: ROI regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include all or part of the region of interest (ROI), and JPEG-LS near-lossless compression to the image blocks contained in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
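
    The block/ROI logic can be sketched as follows (illustrative only: encode_block is a crude zlib stand-in rather than a real JPEG-LS codec, and the block size and NEAR value are assumptions); in JPEG-LS, NEAR = 0 is lossless and NEAR > 0 bounds the per-pixel reconstruction error:

    ```python
    import zlib
    import numpy as np

    def encode_block(block, near):
        # Crude stand-in for a JPEG-LS codec: NEAR = 0 keeps the block exact;
        # NEAR > 0 quantizes values so the per-pixel error is at most NEAR.
        if near > 0:
            block = ((block + near) // (2 * near + 1)) * (2 * near + 1)
        return zlib.compress(block.astype(np.uint8).tobytes())

    def compress_with_roi(image, roi_mask, block=64, near_non_roi=3):
        # Blocks overlapping the ROI are coded losslessly; all others
        # near-losslessly. Blocking also confines any error diffusion.
        h, w = image.shape
        streams = []
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = image[y:y + block, x:x + block]
                near = 0 if roi_mask[y:y + block, x:x + block].any() else near_non_roi
                streams.append(((y, x), near, encode_block(tile, near)))
        return streams

    img = np.random.randint(0, 256, (256, 256))
    roi = np.zeros((256, 256), dtype=bool)
    roi[100:140, 100:140] = True          # hypothetical region of interest
    streams = compress_with_roi(img, roi)
    ```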

  3. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is increasingly demanding in terms of image compression performance. The instrument resolution, agility, and swath of Earth-observation satellites are continuously increasing, multiplying by 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a leader in the market of combined compression-and-memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth-observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, and very high speed image compression ASIC potentially relevant for the compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  4. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
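
    The DCT-plus-quantization-matrix step the patent builds on can be sketched as below; the matrix Q here is the standard JPEG luminance table, used purely as a placeholder, whereas the invention derives the matrix from luminance and contrast masking models:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    # Standard JPEG luminance quantization table (a placeholder for the
    # perceptually optimized matrix the patent describes).
    Q = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

    def quantize_block(block, Q):
        coeffs = dctn(block - 128.0, norm="ortho")  # forward 2-D DCT
        return np.round(coeffs / Q)                 # coarser Q -> fewer bits

    def dequantize_block(qcoeffs, Q):
        return idctn(qcoeffs * Q, norm="ortho") + 128.0

    block = np.random.randint(0, 256, (8, 8)).astype(float)
    recon = dequantize_block(quantize_block(block, Q), Q)
    print("max reconstruction error:", float(np.abs(recon - block).max()))
    ```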

  5. Fluorescence development of fingerprints by combining conjugated polymer nanoparticles with cyanoacrylate fuming.

    PubMed

    Chen, Hong; Ma, Rong-Liang; Fan, Zhinan; Chen, Yun; Wang, Zizheng; Fan, Li-Juan

    2018-05-23

    Selecting appropriate developing methods/reagents, or combinations of them, to enhance fingerprint development is of great significance for practical forensic investigation. Ethyl-2-cyanoacrylate ester (superglue) fuming is a popular method for developing fingerprints "in situ" in forensic science, followed on some occasions by fluorescence staining to enhance the contrast of the fingerprint image. In this study, a series of fluorescent poly(p-phenylene vinylene) (PPV) nanoparticles (NPs) in colloidal solution were successfully prepared, and the emission color was tuned in a simple way. The fuming process was carried out using a home-made device. The staining was accomplished by immersing a piece of absorbent cotton in the NP solution and then gently applying it to the fumed fingerprints several times. The PPV NPs were found to have a better developing effect than Rhodamine 6G when excited by a 365 nm UV lamp. The different emission colors of the NPs are advantageous for developing fingerprints on various substrates. A mechanism study suggested that the NPs were embedded in the porous structure of the superglue resin. In all, the combination of the fuming method with staining by conjugated polymer NPs has been demonstrated to be successful for fluorescent fingerprint development and promising for further practical forensic applications. Copyright © 2018. Published by Elsevier Inc.

  6. A framework of multitemplate ensemble for fingerprint verification

    NASA Astrophysics Data System (ADS)

    Yin, Yilong; Ning, Yanbin; Ren, Chunxiao; Liu, Li

    2012-12-01

    How to improve the performance of an automatic fingerprint verification system (AFVS) is always a big challenge in the biometric verification field. Recently, it has become popular to improve AFVS performance using ensemble learning to fuse related fingerprint information. In this article, we propose a novel framework for fingerprint verification based on the multitemplate ensemble method. The framework consists of three stages. In the first, enrollment, stage, we adopt an effective template selection method to select the fingerprints which best represent a finger; a polyhedron is then created from the matching results of the multiple template fingerprints, and a virtual centroid of the polyhedron is derived. In the second, verification, stage, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule is used to choose a proper distance from a distance set. Experimental results on the FVC2004 database demonstrate the improved effectiveness of the new framework for fingerprint verification. With a minutiae-based matching method, the average EER over the four FVC2004 databases drops from 10.85 to 0.88, and with a ridge-based matching method, the average EER over these four databases decreases from 14.58 to 2.51.

  7. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma

    2016-11-01

    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method improves the quality of large images compared with conventional ghost imaging, achieves imaging of large-sized images, greatly reduces the amount of transmitted data thanks to the combination of compressive sensing and the FFT, and improves the security of ghost imaging against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  8. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by alternating minimization. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
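
    A generic alternating-minimization loop of the kind the abstract describes might look like the sketch below (an illustration under assumed dimensions and update rules, not the paper's exact solver): measurements Y = Phi·D·S are fit by alternating a thresholded gradient step on the sparse codes S with a least-squares refit of the dictionary D.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k, T = 64, 32, 48, 200      # signal dim, measurements, atoms, signals
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # known measurement matrix

    # Synthetic ground truth: sparse codes over an unknown dictionary.
    D0 = rng.normal(size=(n, k))
    S0 = np.where(rng.random((k, T)) < 0.05, rng.normal(size=(k, T)), 0.0)
    Y = Phi @ D0 @ S0                 # only these measurements are observed

    D = rng.normal(size=(n, k))       # dictionary estimate, random init
    S = np.zeros((k, T))
    thresh = 0.02

    for it in range(100):
        # (1) Sparse-code step: gradient descent on ||Y - Phi D S||_F^2,
        #     then soft-thresholding to enforce sparsity.
        A = Phi @ D
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # keeps the step stable
        S = S + step * A.T @ (Y - A @ S)
        S = np.sign(S) * np.maximum(np.abs(S) - thresh, 0.0)
        # (2) Dictionary step: least-squares fit of Phi@D to Y, lifted back
        #     through the pseudo-inverse of Phi (a minimum-norm shortcut).
        D = np.linalg.pinv(Phi) @ (Y @ np.linalg.pinv(S))

    resid = np.linalg.norm(Y - Phi @ D @ S) / np.linalg.norm(Y)
    print("relative fit residual:", float(resid))
    ```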

  9. Anti-spoof touchless 3D fingerprint recognition system using single shot fringe projection and biospeckle analysis

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amit; Bhatia, Vimal; Prakash, Shashi

    2017-08-01

    A fingerprint is a unique, unalterable, and easily collected biometric of a human being. Although it is a 3D biological characteristic, traditional methods are designed to provide only a 2D image. This touch-based mapping of a 3D shape to a 2D image loses information and leads to nonlinear distortions. Moreover, as only topographic details are captured, conventional systems are potentially vulnerable to spoofing materials (e.g., artificial fingers, dead fingers, false prints, etc.). In this work, we demonstrate an anti-spoof touchless 3D fingerprint detection system using a combination of single-shot fringe projection and biospeckle analysis. For fingerprint detection using fringe projection, light from a low-power LED source illuminates a finger through a sinusoidal grating. The fringe pattern, modulated by the features on the fingertip, is captured using a CCD camera. Fourier-transform-based frequency filtering is used for the reconstruction of the 3D fingerprint from the captured fringe pattern. In the next step, for spoof detection using biospeckle analysis, a visuo-numeric algorithm based on a modified structural function and a non-normalized histogram is proposed. High-activity biospeckle patterns are generated by the interaction of collimated laser light with the internal fluid flow of a real finger. This activity drops abruptly for layered fake prints and is almost absent in dead or fake fingers. Furthermore, the proposed setup is fast, low-cost, involves no mechanical scanning, and is highly stable.
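
    The Fourier-transform reconstruction step admits a compact sketch (generic Fourier-transform profilometry with assumed carrier and filter parameters, not the authors' exact pipeline): one carrier lobe of the fringe spectrum is isolated, shifted to baseband, and the wrapped phase, which encodes the fingertip height, is read from the inverse FFT.

    ```python
    import numpy as np

    def fringe_phase(fringe, carrier_col, half_width):
        # Isolate the +1 carrier lobe of a vertical fringe pattern and
        # return the wrapped phase (proportional to surface height).
        F = np.fft.fftshift(np.fft.fft2(fringe))
        mask = np.zeros_like(F)
        c0 = F.shape[1] // 2 + carrier_col
        mask[:, c0 - half_width:c0 + half_width] = 1.0
        lobe = np.roll(F * mask, -carrier_col, axis=1)  # carrier removal
        return np.angle(np.fft.ifft2(np.fft.ifftshift(lobe)))

    # Synthetic vertical fringes modulated by a smooth "fingertip" bump.
    x, y = np.meshgrid(np.arange(256), np.arange(256))
    height = 3.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 4000.0)
    fringe = 128 + 100 * np.cos(2 * np.pi * x / 16 + height)

    phase = fringe_phase(fringe, carrier_col=16, half_width=8)
    # 'phase' approximates 'height' up to 2*pi wrapping and would need
    # unwrapping in general.
    ```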

  10. Angstrom-Resolution Magnetic Resonance Imaging of Single Molecules via Wave-Function Fingerprints of Nuclear Spins

    NASA Astrophysics Data System (ADS)

    Ma, Wen-Long; Liu, Ren-Bao

    2016-08-01

    Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are the highest challenges in magnetic microscopy. Recent development in dynamical-decoupling- (DD) enhanced diamond quantum sensing has enabled single-nucleus NMR and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the "frequency fingerprints" of target nuclear spins. The frequency fingerprints by their nature cannot resolve different nuclear spins that have the same noise frequency or differentiate different types of correlations in nuclear-spin clusters, which limit the resolution of single-molecule MRI. Here we show that this limitation can be overcome by using "wave-function fingerprints" of target nuclear spins, which is much more sensitive than the frequency fingerprints to the weak hyperfine interaction between the targets and a sensor under resonant DD control. We demonstrate a scheme of angstrom-resolution MRI that is capable of counting and individually localizing single nuclear spins of the same frequency and characterizing the correlations in nuclear-spin clusters. A nitrogen-vacancy-center spin sensor near a diamond surface, provided that the coherence time is improved by surface engineering in the near future, may be employed to determine with angstrom resolution the positions and conformation of single molecules that are isotope labeled. The scheme in this work offers an approach to breaking the resolution limit set by the "frequency gradients" in conventional MRI and to reaching the angstrom-scale resolution.

  11. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that, at approximately the same compression ratio, the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) are increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.

  12. Detection of Fingerprints Based on Elemental Composition Using Micro-X-Ray Fluorescence.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worley, C. G.; Wiltshire, S.; Miller, T. C.

    A method was developed to detect fingerprints using a technique known as micro-X-ray fluorescence. The traditional method of detecting fingerprints involves treating the sample with certain powders, liquids, or vapors to add color to the fingerprint so that it can be easily seen and photographed for forensic purposes. This is known as contrast enhancement, and a multitude of chemical processing methods have been developed in the past century to render fingerprints visible. However, fingerprints present on certain substances such as fibrous papers and textiles, wood, leather, plastic, adhesives, and human skin can sometimes be difficult to detect by contrast enhancement. Children's fingerprints are also difficult to detect due to the absence of sebum on their skin, and detection of prints left on certain colored backgrounds can sometimes be problematic. Micro-X-ray fluorescence (MXRF) was studied here as a method to detect fingerprints based on chemical elements present in fingerprint residue. For example, salts such as sodium chloride and potassium chloride excreted in sweat are sometimes present in detectable quantities in fingerprints. We demonstrated that MXRF can be used to detect the sodium, potassium, and chlorine from such salts. Furthermore, using MXRF, each of these elements (and many other elements if present) can be detected as a function of location on a surface, so we were able to 'see' a fingerprint because these salts are deposited mainly along the patterns present in a fingerprint (traditionally called friction ridges in forensic science). MXRF is not a panacea for detecting all fingerprints; some prints will not contain enough detectable material to be 'seen'. However, determining an effective means of coloring a fingerprint with traditional contrast enhancement methods can sometimes be an arduous process with limited success. Thus, MXRF offers a possible alternative for detecting fingerprints, and it does not require any additional chemical treatment steps, which can be time consuming and can permanently alter the sample. Additionally, MXRF is noninvasive, so a fingerprint analyzed by this method is left pristine for examination by other methods (e.g., DNA extraction). To the best of the authors' knowledge, no studies have been published to date concerning the detection of fingerprints by micro-X-ray fluorescence. Some studies have been published in which other spectroscopic methods were employed to examine the chemical composition of fingerprints (e.g., IR, SEM/EDX, and Auger), but very few papers discuss the actual detection and imaging of a complete fingerprint by any spectroscopic method. Thus, this work is unique.

  13. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) showed small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images of low complexity and could be an efficient method for storing these images. PMID:18755997

  14. Highly Efficient Photothermal Semiconductor Nanocomposites for Photothermal Imaging of Latent Fingerprints.

    PubMed

    Cui, Jiabin; Xu, Suying; Guo, Chang; Jiang, Rui; James, Tony D; Wang, Leyu

    2015-11-17

    Optical imaging of latent fingerprints (LFPs) has been widely used in forensic science and for antiterrorist applications, but it suffers from interference from autofluorescence and the substrate's background color. Cu7S4 nanoparticles (NPs) with excellent photothermal properties were synthesized using a new strategy and then fabricated into amphiphilic nanocomposites (NCs) via polymerization of allyl mercaptan coated on the Cu7S4 NPs to provide good affinity toward LFPs. Here, we develop a facile and versatile photothermal LFP imaging method based on the high photothermal conversion efficiency (52.92% at 808 nm) of the Cu7S4 NCs and demonstrate its effectiveness for imaging LFPs left on different substrates (with various background colors), which will be extremely useful for crime scene investigations. Furthermore, by fabricating Cu7S4-CdSe@ZnS NCs, a fluorescent-photothermal dual-mode imaging strategy was used to detect trinitrotoluene (TNT) in LFPs while still maintaining a complete photothermal image of the LFP.

  15. A synthesis of fluorescent starch based on carbon nanoparticles for fingerprints detection

    NASA Astrophysics Data System (ADS)

    Li, Hongren; Guo, Xingjia; Liu, Jun; Li, Feng

    2016-10-01

    A pyrolysis method for synthesizing carbon nanoparticles (CNPs) was developed using malic acid and ammonium oxalate as raw materials. The incorporation of a minor amount of carbon nanoparticles into starch powder imparts remarkable color-tunability. Based on this phenomenon, an environmentally friendly fluorescent starch powder for detecting latent fingerprints on non-porous surfaces was prepared. Fingerprints on different non-porous surfaces developed with this powder showed very good fluorescent images under ultraviolet excitation. The method, which uses the fluorescent starch powder as the fluorescent marker, is simple, rapid, and green. Experimental results illustrate the effectiveness of the proposed method, enabling its practical application in forensic science.

  16. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes into a Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at practical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
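
    The KL decorrelation step can be sketched in a few lines (an illustration with assumed plane counts and synthetic data, not the paper's fast low-memory algorithm): diagonalize the covariance of the color planes and compress the resulting decorrelated planes, allocating the largest losses to the weakest components.

    ```python
    import numpy as np

    def kl_transform(planes):
        # planes: (n_planes, height, width), e.g. 4 planes for CMYK.
        n, h, w = planes.shape
        X = planes.reshape(n, -1).astype(float)
        mean = X.mean(axis=1, keepdims=True)
        cov = np.cov(X - mean)                    # n x n covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        basis = eigvecs[:, np.argsort(eigvals)[::-1]]  # strongest first
        kl_planes = (basis.T @ (X - mean)).reshape(n, h, w)
        return kl_planes, basis, mean             # basis/mean invert the map

    rng = np.random.default_rng(2)
    base = rng.random((256, 256))
    cmyk = np.stack([base + 0.05 * rng.random((256, 256)) for _ in range(4)])
    kl_planes, basis, mean = kl_transform(cmyk)
    # Most of the energy now sits in kl_planes[0]; the later planes can be
    # coded with larger losses, which is what optimal loss allocation uses.
    print([round(float(np.var(p)), 4) for p in kl_planes])
    ```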

  17. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) criterion to optimize the compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and the compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  18. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.

  19. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the development of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.

  20. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
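
    The filling step can be sketched with a simplified solver (plain Jacobi iteration stands in for the patent's multi-grid method, and the crude gradient-based edge detector and array sizes are assumptions): edge pixels keep their image values while all other pixels relax toward the solution of Laplace's equation.

    ```python
    import numpy as np

    def fill_from_edges(image, edge_mask, n_iter=2000):
        # Jacobi relaxation of Laplace's equation with edge pixels pinned
        # to their image values (the patent uses a multi-grid solver).
        filled = np.where(edge_mask, image, image.mean()).astype(float)
        for _ in range(n_iter):
            smooth = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                             np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
            filled = np.where(edge_mask, image, smooth)
        return filled

    rng = np.random.default_rng(3)
    image = np.cumsum(rng.random((64, 64)), axis=1)   # smooth-ish test image
    grad = np.hypot(*np.gradient(image))
    edge_mask = grad > np.percentile(grad, 90)        # crude edge detector

    filled = fill_from_edges(image, edge_mask)
    difference = image - filled   # edge file and difference array are then
                                  # compressed separately, per the patent
    ```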

  1. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  2. Fingerprint Identification Using SIFT-Based Minutia Descriptors and Improved All Descriptor-Pair Matching

    PubMed Central

    Zhou, Ru; Zhong, Dexing; Han, Jiuqiang

    2013-01-01

    The performance of conventional minutiae-based fingerprint authentication algorithms degrades significantly when dealing with low-quality fingerprints with many cuts or scratches. A similar degradation of minutiae-based algorithms is observed when only small overlapping areas are available because of the narrow width of the sensors. Building on minutiae detection, Scale Invariant Feature Transform (SIFT) descriptors are employed for verification in the above difficult scenarios. However, the original SIFT algorithm is not suitable for fingerprints because of (1) the similar patterns of parallel ridges and (2) its high computational resource consumption. To enhance the efficiency and effectiveness of the algorithm for fingerprint verification, we propose a SIFT-based Minutia Descriptor (SMD) that improves on the SIFT algorithm through image processing, descriptor extraction, and matching. A fast two-step matcher, named improved All Descriptor-Pair Matching (iADM), is also proposed to perform 1:N identification in real time. Fingerprint Identification using SMD and iADM (FISiA) achieved a significant improvement in accuracy on representative databases compared with the conventional minutiae-based method. The speed of FISiA also meets real-time requirements. PMID:23467056

  3. Hyperspectral data compression using a Wiener filter predictor

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Stocker, Alan D.

    2013-09-01

    The application of compression to hyperspectral image data is a significant technical challenge. A primary bottleneck in disseminating data products to the tactical user community is the limited communication bandwidth between the airborne sensor and the ground station receiver. This report summarizes the newly developed "Z-Chrome" algorithm for lossless compression of hyperspectral image data. A Wiener filter prediction framework is used as a basis for modeling new image bands from already-encoded bands. The resulting residual errors are then compressed using available state-of-the-art lossless image compression functions. Compression performance is demonstrated on a large number of test data sets collected over a wide variety of scene content from six different airborne and spaceborne sensors.
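
    The prediction framework can be illustrated with a least-squares sketch (assumed sizes and a single previous band; this is the generic idea, not Z-Chrome's actual filter design): each new band is predicted as an affine combination of already-encoded bands, and only the small integer residual is passed to the lossless coder.

    ```python
    import numpy as np

    def predict_band(prev_bands, new_band):
        # prev_bands: (n_prev, H, W); new_band: (H, W).
        A = prev_bands.reshape(len(prev_bands), -1).T  # pixels x n_prev
        A = np.hstack([A, np.ones((A.shape[0], 1))])   # affine offset term
        w, *_ = np.linalg.lstsq(A, new_band.ravel().astype(float), rcond=None)
        prediction = (A @ w).reshape(new_band.shape)
        residual = new_band - np.round(prediction)     # low-entropy residual
        return residual, w

    rng = np.random.default_rng(4)
    band0 = rng.integers(0, 1024, (128, 128)).astype(float)
    band1 = 0.9 * band0 + 20 + rng.normal(0, 2, (128, 128))  # correlated band
    residual, w = predict_band(band0[None], band1)
    print("residual std:", float(residual.std()),
          "vs band std:", float(band1.std()))
    ```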

  4. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group (JPEG) compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three preselected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, producing a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that, for compression ratios of 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R² = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.

  5. Legally compatible design of digital dactyloscopy in future surveillance scenarios

    NASA Astrophysics Data System (ADS)

    Pocs, Matthias; Schott, Maik; Hildebrandt, Mario

    2012-06-01

    Innovation in multimedia systems impacts our society. For example, surveillance camera systems combine video and audio information. Currently, a new sensor for capturing fingerprint traces is being researched. It combines greyscale images, to determine the intensity of the image signal, with topographic information, to determine fingerprint texture on a variety of surface materials. This research proposes new application areas, which are analyzed from a technical-legal viewpoint. It assesses how technology design can promote the legal criteria of German and European privacy and data protection. For this we focus on one technology goal as an example.

  6. Ultrasonic fingerprint sensor using a piezoelectric micromachined ultrasonic transducer array integrated with complementary metal oxide semiconductor electronics

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Tang, H.; Fung, S.; Wang, Q.; Tsai, J. M.; Daneman, M.; Boser, B. E.; Horsley, D. A.

    2015-06-01

    This paper presents an ultrasonic fingerprint sensor based on a 24 × 8 array of 22 MHz piezoelectric micromachined ultrasonic transducers (PMUTs) with 100 μm pitch, fully integrated with 180 nm complementary metal oxide semiconductor (CMOS) circuitry through eutectic wafer bonding. Each PMUT is directly bonded to a dedicated CMOS receive amplifier, minimizing electrical parasitics and eliminating the need for through-silicon vias. The array frequency response and vibration mode shape were characterized using laser Doppler vibrometry and verified via finite element method simulation. The array's acoustic output was measured with a hydrophone to be ˜14 kPa with a 28 V input, in reasonable agreement with the prediction from analytical calculation. Pulse-echo imaging of a 1D steel grating is demonstrated using electronic scanning of a 20 × 8 sub-array, resulting in a 300 mV maximum received amplitude and a 5:1 contrast ratio. Because the small size of this array limits the maximum image size, mechanical scanning was used to image a 2D polydimethylsiloxane fingerprint phantom (10 mm × 8 mm) at a 1.2 mm distance from the array.

  7. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
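
    The automatic-labeling idea can be sketched as follows (an illustration of joint entropy as a distortion score on synthetic data; the exact mapping from entropy to the quality labels used by the authors is not specified in this record):

    ```python
    import numpy as np

    def joint_entropy(img_a, img_b, bins=64):
        # Joint entropy of the (reference, distorted) pixel-pair histogram;
        # it rises as artifacts decorrelate the two images.
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(5)
    original = rng.random((128, 128))
    lightly_distorted = original + 0.01 * rng.standard_normal((128, 128))
    heavily_distorted = original + 0.20 * rng.standard_normal((128, 128))
    print(joint_entropy(original, lightly_distorted))   # lower value
    print(joint_entropy(original, heavily_distorted))   # higher value
    ```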

  8. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.

  9. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy from the detected microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, which may have important clinical implications for deciding on acceptable levels of compression in a fully automated eye-screening programme.

  10. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored, or memorized. The input image is divided into 4 blocks for compression and encryption, and the pixels of adjacent blocks are then exchanged randomly using random matrices. The measurement matrices in compressive sensing are constructed from circulant matrices, with the original row vectors of the circulant matrices controlled by a logistic map. The random matrices used for the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
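
    The key-controlled construction can be sketched compactly (parameter values and the row-selection rule are assumptions for illustration, not the paper's exact construction): a logistic map seeded by the secret key generates the first row of a circulant matrix, a subset of whose rows serves as the measurement matrix.

    ```python
    import numpy as np
    from scipy.linalg import circulant

    def logistic_sequence(x0, mu, length, burn_in=100):
        # Chaotic logistic map x <- mu*x*(1-x); the key is the seed x0.
        x = x0
        for _ in range(burn_in):          # discard the transient
            x = mu * x * (1 - x)
        seq = np.empty(length)
        for i in range(length):
            x = mu * x * (1 - x)
            seq[i] = x
        return seq

    def measurement_matrix(key_x0, n, m, mu=3.99):
        row = logistic_sequence(key_x0, mu, n) - 0.5   # zero-mean row
        return circulant(row)[:m, :] / np.sqrt(m)      # keep m of n rows

    Phi = measurement_matrix(key_x0=0.3729, n=256, m=64)
    x = np.zeros(256); x[[10, 50, 200]] = 1.0          # sparse test signal
    y = Phi @ x      # samples are simultaneously compressed and key-dependent
    ```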

  11. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    NASA Technical Reports Server (NTRS)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is done on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.

  12. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have always been applied to intra-plane images, and coding fidelity is routinely used to measure the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multispectral images, such as color images. In this paper, a novel approach to multispectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multispectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and strong compression.

  13. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the classical Shannon rate, but it still requires substantial time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the unique additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments are carried out to verify the superiority of block compressive sensing in reducing imaging time and providing real-time display in SICM imaging.

  14. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  15. RSA Key Development Using Fingerprint Image on Text Message

    NASA Astrophysics Data System (ADS)

    Rahman, Sayuti; Triana, Indah; Khairani, Sumi; Yasir, Amru; Sundari, Siti

    2017-12-01

    With the development of technology today, people can easily access information and communicate through various media, including the Internet. However, messages sent as plain text are not guaranteed to be secure. A sender may wish to send a secret message to a recipient, yet the message can be read by unauthorized people, so a message that should be known only to the recipient becomes known to others. It is therefore necessary to secure the message using the RSA algorithm, with a fingerprint image used to generate the RSA key. To enrich the security of a message in this way, the fingerprint image must first be processed by feature extraction before the RSA keys are generated.
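
    One way to picture the idea is the sketch below (every name and the seeding scheme are assumptions, and deriving keys this directly from biometric data is not secure practice; it only illustrates how a feature-derived seed can drive deterministic RSA key generation):

    ```python
    import hashlib
    import random
    from math import gcd
    from sympy import nextprime

    def rsa_key_from_features(feature_bytes, bits=512):
        # Hash the extracted fingerprint features into a PRNG seed, then
        # draw the two RSA primes deterministically from that seed.
        seed = int.from_bytes(hashlib.sha256(feature_bytes).digest(), "big")
        prng = random.Random(seed)
        p = nextprime(prng.getrandbits(bits // 2))
        q = nextprime(prng.getrandbits(bits // 2))
        n, phi, e = p * q, (p - 1) * (q - 1), 65537
        while q == p or gcd(e, phi) != 1:   # regenerate q in rare collisions
            q = nextprime(prng.getrandbits(bits // 2))
            n, phi = p * q, (p - 1) * (q - 1)
        d = pow(e, -1, phi)                 # modular inverse (Python >= 3.8)
        return (n, e), (n, d)               # public key, private key

    features = b"hypothetical minutiae feature vector"
    public, private = rsa_key_from_features(features)
    cipher = pow(42, public[1], public[0])
    assert pow(cipher, private[1], private[0]) == 42
    ```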

  16. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and by the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, based on the H-transform, is performed in order to assess the reliability of the NNCTC.

  17. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information, such as covered text or fingerprints, in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) with a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters interactively by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent, as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.

  18. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  19. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

    [Only report-documentation fields survive of this record: the report title, "Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique," an author fragment ("Todd C…"), and blank contract/grant-number fields; no abstract is recoverable.]

  20. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  1. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  2. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary Jo; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  3. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    Project title: Real-Time Aggressive Image Data Compression. Principal investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression, implemented with higher degrees of modularity, concurrency, and machine intelligence, thereby providing higher data-throughput rates. [Reconstructed from fragments of the DTIC record; the rest of the abstract is truncated.]

  4. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
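
    A rough sketch of the ICER-3D front end described above, under stated assumptions: PyWavelets stands in for ICER-3D's actual wavelet filters, the cube layout is (bands, rows, cols), and the spatially low-pass subband has its per-plane means subtracted before sign-magnitude conversion.

```python
import numpy as np
import pywt

cube = np.random.rand(16, 64, 64)            # hypothetical hyperspectral cube
coeffs = pywt.wavedecn(cube, "haar", level=2)

lowpass = coeffs[0]                          # spatially low-pass subband
plane_means = lowpass.mean(axis=(1, 2), keepdims=True)
lowpass -= plane_means                       # subtract mean per spatial plane

sign = np.sign(lowpass)                      # sign-magnitude form, ready for
magnitude = np.abs(lowpass)                  # progressive bit-plane coding
```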

  5. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    [Only fragments of this DTIC record are recoverable: the work involves automatic fingerprint identification, in which automated methods are used to save time in fingerprint matching; a grid of adjustable widths and lengths was useful in producing an accurate canvas for creating sample training images; and with freely available computer tools it was simple to design bitmap data files visually on a canvas. The full abstract is not available.]

  6. In vivo laser confocal microscopy findings in patients with map-dot-fingerprint (epithelial basement membrane) dystrophy

    PubMed Central

    Kobayashi, Akira; Yokogawa, Hideaki; Sugiyama, Kazuhisa

    2012-01-01

    Background: The purpose of this study was to investigate pathological changes of the corneal cell layer in patients with map-dot-fingerprint (epithelial basement membrane) dystrophy by in vivo laser corneal confocal microscopy. Methods: Two patients were evaluated using a cornea-specific in vivo laser scanning confocal microscope (Heidelberg Retina Tomograph 2 Rostock Cornea Module, HRT 2-RCM). The affected corneal areas of both patients were examined. Image analysis was performed to identify corneal epithelial and stromal deposits correlated with this dystrophy. Results: Variously shaped (linear, multilaminar, curvilinear, ring-shaped, geographic) highly reflective materials were observed in the “map” area, mainly in the basal epithelial cell layer. In “fingerprint” lesions, multiple linear and curvilinear hyporeflective lines were observed. Additionally, in the affected corneas, infiltration of possible Langerhans cells and other inflammatory cells was observed as highly reflective Langerhans cell-like or dot images. Finally, needle-shaped materials were observed in one patient. Conclusion: HRT 2-RCM laser confocal microscopy is capable of identifying corneal microstructural changes related to map-dot-fingerprint corneal dystrophy in vivo. The technique may be useful in elucidating the pathogenesis and natural course of map-dot-fingerprint corneal dystrophy and other similar basement membrane abnormalities. PMID:22888214

  7. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  8. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. Here, to assess the capacity of the MIOCE method, we evaluate the influence of the number of target images. This analysis allows us to determine the performance limits of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. Then, the different spectral areas are merged in a single spectrum plane. By choosing specific areas, we can compress together 38 images instead of 26 with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean-square-error (MSE) criterion.

  9. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
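
    The quadtree significance test is the core primitive that SPIHT-style coders, including the extension described above, build on. A minimal sketch, assuming the usual dyadic parent-child layout in which the children of coefficient (i, j) sit at (2i, 2j), (2i, 2j+1), (2i+1, 2j), and (2i+1, 2j+1):

```python
import numpy as np

def subtree_significant(c: np.ndarray, i: int, j: int, n: int) -> bool:
    """True if any |coefficient| in the quadtree rooted at (i, j) >= 2**n."""
    rows, cols = c.shape
    if i >= rows or j >= cols:
        return False
    if abs(c[i, j]) >= 2 ** n:
        return True
    if i == 0 and j == 0:     # the DC root has no dyadic descendants here
        return False
    # recurse into the four children of the dyadic quadtree
    return any(subtree_significant(c, 2 * i + di, 2 * j + dj, n)
               for di in (0, 1) for dj in (0, 1))
```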

  10. The spectroscopic detection of exogenous material in fingerprints after development with powders and recovery with adhesive lifters.

    PubMed

    West, Matthew J; Went, Michael J

    2008-01-15

    The application of powders to fingerprints has long been established as an effective and reliable method for developing latent fingerprints. The powders adhere to the ridge pattern of the fingerprint only, thus allowing the image to be visualised. Fingerprints developed in situ at a crime scene routinely undergo lifting with specialist tapes to facilitate subsequent laboratory analysis. As with all recovered evidence these samples would be stored in evidence bags to allow secure transit from the scene to the laboratory and also to preserve the chain of evidence. In this paper, the application of Raman spectroscopy for the analysis of exogenous material in latent fingerprints is reported for contaminated fingerprints that had been treated with powders and also subsequently lifted with adhesive tapes. A selection of over the counter (OTC) analgesics were used as samples for the analysis and contaminated fingerprints were deposited on clean glass slides. The application of aluminium or iron based powders to contaminated fingerprints did not interfere with the Raman spectra obtained for the contaminants. In most cases background fluorescence attributed to the sebaceous content of the latent fingerprint was reduced by the application of the powder thus reducing spectral interference. Contaminated fingerprints developed with powders and then lifted with lifting tapes were also examined. The combination of these two techniques did not interfere with the successful analysis of exogenous contaminants by Raman spectroscopy. The lifting process was repeated using hinge lifters. As the hinge lifters exhibited strong Raman bands the spectroscopic analysis was more complex and an increase in the number of exposures to the detector allowed for improved clarification. Raman spectra of developed and lifted fingerprints recorded through evidence bags were obtained and it was found that the detection process was not compromised in any way. Although the application of powders did not interfere with the detection process the time taken to locate the contaminant was increased due to the physical presence of more material within the fingerprint. The presence of interfering Raman bands from lifting tapes is another potential complication. This, however, could be removed by spectral subtraction or by the choice of lifting tapes that have only weak Raman bands.

  11. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741

  12. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, the Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt the low-level authentication. Applying Orthogonal Matching Pursuit CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform results in a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
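
    A sketch of just the CS recovery step named above, using scikit-learn's Orthogonal Matching Pursuit on a synthetic sparse signal; the interferometry, Arnold-transform, and Fresnel stages are omitted.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                    # signal size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))         # measurement matrix
y = A @ x                               # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_rec = omp.coef_                       # recovered sparse signal
```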

  13. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  14. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images which will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using a lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data requires expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set and vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
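
    A minimal sketch of the VQ pipeline the abstract describes, assuming 4x4 pixel blocks as vectors: k-means training builds the codebook, encoding maps each block to its nearest codeword, and decoding is the cheap table lookup noted above.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def to_blocks(img: np.ndarray, b: int = 4) -> np.ndarray:
    """Split an image (dims divisible by b) into rows of b*b block vectors."""
    h, w = img.shape
    v = img.reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return v.reshape(-1, b * b)

img = np.random.rand(128, 128)                # hypothetical image
vectors = to_blocks(img)
codebook, _ = kmeans(vectors, 64)             # train a 64-entry codebook
indices, _ = vq(vectors, codebook)            # encode: nearest codeword id
decoded = codebook[indices]                   # decode: pure table lookup
restored = decoded.reshape(32, 32, 4, 4).swapaxes(1, 2).reshape(128, 128)
```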

  15. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential to handle a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit allocation using the cosine transform technique developed during the last few years has been proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs with five observers each demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and the compressed images with a compression ratio as high as 20:1.

  16. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  17. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  18. Minutia Tensor Matrix: A New Strategy for Fingerprint Matching

    PubMed Central

    Fu, Xiang; Feng, Jufu

    2015-01-01

    Establishing correspondences between two minutia sets is a fundamental issue in fingerprint recognition. This paper proposes a new tensor matching strategy. First, the concept of the minutia tensor matrix (simplified as MTM) is proposed. It describes the first-order features and second-order features of a matching pair. In the MTM, diagonal elements indicate similarities of minutia pairs and non-diagonal elements indicate pairwise compatibilities between minutia pairs. Correct minutia pairs are likely to establish both large similarities and large compatibilities, so they form a dense sub-block. Minutia matching is then formulated as recovering the dense sub-block in the MTM. This is a new tensor matching strategy for fingerprint recognition. Second, as fingerprint images show both local rigidity and global nonlinearity, we design two different kinds of MTMs: a local MTM and a global MTM. Meanwhile, a two-level matching algorithm is proposed. At the local matching level, the local MTM is constructed and a novel local similarity calculation strategy is proposed; it makes full use of local rigidity in fingerprints. At the global matching level, the global MTM is constructed to calculate similarities of entire minutia sets; it makes full use of global compatibility in fingerprints. The proposed method has stronger descriptive ability and better robustness to noise and nonlinearity. Experiments conducted on the Fingerprint Verification Competition databases (FVC2002 and FVC2004) demonstrate its effectiveness and efficiency. PMID:25822489
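
    One standard way to recover a dense sub-block of a nonnegative similarity/compatibility matrix, not necessarily the authors' exact solver, is power iteration for the principal eigenvector, keeping the minutia pairs with the largest entries:

```python
import numpy as np

def recover_dense_subblock(mtm: np.ndarray, n_pairs: int) -> np.ndarray:
    """Indices of likely correct pairs; mtm is symmetric and nonnegative."""
    v = np.full(mtm.shape[0], 1.0 / np.sqrt(mtm.shape[0]))
    for _ in range(100):                 # power iteration
        v = mtm @ v
        v /= np.linalg.norm(v)
    return np.argsort(v)[-n_pairs:]      # largest eigenvector entries
```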

  19. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
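
    A minimal Golomb-Rice encoder, the entropy-coding building block named above: a nonnegative residual n is split into a unary quotient and a k-bit binary remainder (k >= 1 here); adaptive variants choose k from local statistics.

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice code of nonnegative n with parameter k, as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # unary(q) + binary(r)

# e.g. rice_encode(9, 2) -> "110" + "01" == "11001"
```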

  20. JPEG vs. JPEG 2000: an objective comparison of image encoding quality

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan

    2004-11-01

    This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
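
    For reference, the PSNR used in the comparison above, in a minimal form for 8-bit images (MAX = 255):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```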

  1. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.
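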

  2. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG International Standards. The algorithm for the 2-D DCT computation uses integer operations (register shifts and additions/subtractions only); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as a part of the Mars-96 International Space Project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
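
    For reference, the 2-D DCT at the heart of the scheme, shown with SciPy's floating-point routine; the paper's contribution is an integer approximation of this transform using only register shifts and additions/subtractions.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8)              # one 8x8 image block
coeffs = dctn(block, norm="ortho")        # forward 2-D DCT
restored = idctn(coeffs, norm="ortho")    # inverse recovers the block
assert np.allclose(block, restored)
```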

  3. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Leister, M., "Lossy Lempel - Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression ," IEEE Proceedings, v.I, pp. 225-228, September...1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master’s Thesis, Naval...NAVAL POSTGRADUATE SCHOOL Monterey, California THESIS PSYCHOPHYSICAL COMPARISONS IN IMAGE COMPRESSION ALGORITHMS by % Christopher J. Bodine • March

  4. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced, but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by different levels of JPEG compression and what the implications of these alterations are for automated nuclear counts; it further develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in the TIFF images and in those of the different compression levels, using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
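
    A sketch of the correction idea with hypothetical measurements: roundness (4*pi*area/perimeter^2) is computed on matched TIFF and JPEG versions of the same nuclei, and a linear model maps JPEG roundness back to the TIFF scale.

```python
import numpy as np

def roundness(area: np.ndarray, perimeter: np.ndarray) -> np.ndarray:
    """Roundness = 4*pi*A / P**2; 1.0 for a perfect circle."""
    return 4 * np.pi * area / perimeter ** 2

tiff_round = np.array([0.82, 0.74, 0.91, 0.66, 0.88])   # hypothetical values
jpeg_round = np.array([0.78, 0.70, 0.86, 0.61, 0.85])   # hypothetical values

slope, intercept = np.polyfit(jpeg_round, tiff_round, 1)
corrected = slope * jpeg_round + intercept   # roundness on the TIFF scale
```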

  5. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  6. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    NASA Technical Reports Server (NTRS)

    Novik, Dmitry A.; Tilton, James C.

    1993-01-01

    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  7. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of an LR method in forensic evidence evaluation. They present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports for forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, unlike the LRs, which are shared.

  8. JPEG2000 Image Compression on Solar EUV Images

    NASA Astrophysics Data System (ADS)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke

    2017-01-01

    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
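
    The two metrics used above, computed here with scikit-image on stand-in arrays; data_range must be given for float images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref = np.random.rand(256, 256)                  # stand-in for an AIA image
comp = ref + 0.01 * np.random.randn(256, 256)   # stand-in for its compressed version

psnr = peak_signal_noise_ratio(ref, comp, data_range=1.0)
mssim = structural_similarity(ref, comp, data_range=1.0)
```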

  9. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  10. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution, acting as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
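
    A sketch of the two stages, with a logistic map standing in for the paper's hyper-chaotic system: a 2-D measurement Y = Phi1 X Phi2^T compresses in both directions, then each row of Y is cycle-shifted by a chaos-driven amount.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 64))                  # plaintext image
Phi1 = rng.standard_normal((32, 64))      # row-direction measurement matrix
Phi2 = rng.standard_normal((32, 64))      # column-direction measurement matrix
Y = Phi1 @ X @ Phi2.T                     # compress + first-stage encrypt

x = 0.37                                  # logistic-map seed (part of the key)
for i in range(Y.shape[0]):               # second stage: chaotic cycle shift
    x = 3.99 * x * (1 - x)
    Y[i] = np.roll(Y[i], int(x * Y.shape[1]))
```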

  11. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  12. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When telemedicine is viewed as an adjunct to biomedical device hardware, one highly important consideration, aside from the transfer rate and speed, is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate, and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  13. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometer and contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser printed film, in each of the five compressed modes as well as in the non-compressed mode. For comparison purposes, observers also read the same cases with reduced pixel resolutions of 200 micrometer and 400 micrometer. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometer, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance, and other published studies, is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  14. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    Many applications require images to be transmitted and stored. The smaller the image, the lower the cost associated with transmission and storage, so data compression techniques are often applied to reduce the storage space consumed by an image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors it into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, reversible or non-reversible. Compression ratio and mean square error are used as performance metrics.
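
    A minimal sketch of rank-k SVD compression as described above, reporting the two metrics the abstract uses:

```python
import numpy as np

def svd_compress(img: np.ndarray, k: int):
    """Rank-k approximation of an image with compression ratio and MSE."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    m, n = img.shape
    ratio = m * n / (k * (m + n + 1))    # original values vs. stored values
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse
```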

  15. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels’ least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels’ LSBs preserve the image’s diagnostic and perceptual quality. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared on bit reduction and compression ratio. LZW was found to be better than the others and was used in the development of the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI’s performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
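
    A sketch of the embedding step, with zlib standing in for the LZW compression of the watermark (ROI bytes plus hash); one watermark bit goes into the least significant bit of each RONI pixel. The function name is illustrative, and the image is assumed to be 8-bit.

```python
import hashlib
import zlib
import numpy as np

def embed_watermark(roni: np.ndarray, roi_bytes: bytes) -> np.ndarray:
    """Embed compressed (ROI + SHA-256 hash) into LSBs of a uint8 RONI."""
    payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest())
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = roni.flatten()                       # copy; input left untouched
    if bits.size > flat.size:
        raise ValueError("RONI too small for the watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(roni.shape)
```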

  16. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT), and Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need a huge amount of storage, thereby necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome in transmitting them efficiently. In addition, recent studies have raised concerns about low-dose mammography risk in high-risk patients. Our preliminary studies indicate that bringing the compression before the analog-to-digital conversion (ADC) stage is more efficient than compression techniques applied after the ADC. The linearity of compressive sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research on the role of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms that efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  17. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed in such a way that they are computationally simple and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and bi-level image coding methods will efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for designing other intelligent and efficient algorithms which are computationally simpler and provide a better compression rate than bi-level image coding. Change coding is one such algorithm: it is computationally simple (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame further reduces the information amount in the change frame. But if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We propose to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three. In this way the compression performance of BVC will never be worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
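
    A sketch of the BVC decision rule described above: encode the bi-level frame three ways, as a whole frame, as an XOR change frame, or as the bounding-box ROI of the change frame, and keep whichever bit stream is smallest. zlib stands in for the underlying bi-level coder, and the ROI offsets needed for decoding are omitted.

```python
import zlib
import numpy as np

def bvc_encode(frame: np.ndarray, prev: np.ndarray) -> tuple[str, bytes]:
    """Pick the smallest of image/change/ROI codes; frames are 0/1 uint8."""
    change = frame ^ prev                        # change coding: XOR frame
    ys, xs = np.nonzero(change)
    if ys.size:                                  # ROI: bounding box of changes
        roi = change[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    else:
        roi = change[:0, :0]                     # nothing changed
    candidates = {
        "image":  zlib.compress(np.packbits(frame).tobytes()),
        "change": zlib.compress(np.packbits(change).tobytes()),
        "roi":    zlib.compress(np.packbits(roi).tobytes()),
    }
    return min(candidates.items(), key=lambda kv: len(kv[1]))
```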

  18. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
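
    A sketch of the set-redundancy idea behind the MARS-style methods, on synthetic data: predict each image in a set of similar images by the pixelwise median of the set and losslessly code only the residuals, which concentrate near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
base = rng.integers(0, 256, (64, 64)).astype(np.int16)
stack = np.stack([np.clip(base + rng.integers(-3, 4, base.shape), 0, 255)
                  for _ in range(5)])            # five similar images

median = np.median(stack, axis=0)                # shared set prediction
residuals = stack - median                       # small values, cheap to code
originals = residuals + median                   # exact (lossless) recovery
assert np.array_equal(originals, stack)
```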

  19. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
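
    A sketch of the factored representation the patent describes, using an SVD in place of the full Principal Components Analysis: a (pixels x channels) data matrix is stored as scores and loadings for the f most significant factors, and analyses can run on the factors instead of the raw data.

```python
import numpy as np

data = np.random.rand(64 * 64, 128)        # hypothetical multivariate image
f = 10                                     # number of retained factors

mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
scores = U[:, :f] * s[:f]                  # spatial factors (pixels x f)
loadings = Vt[:f]                          # spectral factors (f x channels)
approx = scores @ loadings + mean          # compressed reconstruction
```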

  20. Imaging-based molecular barcoding with pixelated dielectric metasurfaces

    NASA Astrophysics Data System (ADS)

    Tittl, Andreas; Leitis, Aleksandrs; Liu, Mingkai; Yesilkoy, Filiz; Choi, Duk-Yong; Neshev, Dragomir N.; Kivshar, Yuri S.; Altug, Hatice

    2018-06-01

    Metasurfaces provide opportunities for wavefront control, flat optics, and subwavelength light focusing. We developed an imaging-based nanophotonic method for detecting mid-infrared molecular fingerprints and implemented it for the chemical identification and compositional analysis of surface-bound analytes. Our technique features a two-dimensional pixelated dielectric metasurface with a range of ultrasharp resonances, each tuned to a discrete frequency; this enables molecular absorption signatures to be read out at multiple spectral points, and the resulting information is then translated into a barcode-like spatial absorption map for imaging. The signatures of biological, polymer, and pesticide molecules can be detected with high sensitivity, covering applications such as biosensing and environmental monitoring. Our chemically specific technique can resolve absorption fingerprints without the need for spectrometry, frequency scanning, or moving mechanical parts, thereby paving the way toward sensitive and versatile miniaturized mid-infrared spectroscopy devices.

  1. Imaging Fibrosis and Separating Collagens using Second Harmonic Generation and Phasor Approach to Fluorescence Lifetime Imaging

    PubMed Central

    Ranjit, Suman; Dvornikov, Alexander; Stakic, Milka; Hong, Suk-Hyun; Levi, Moshe; Evans, Ronald M.; Gratton, Enrico

    2015-01-01

    In this paper we have used second harmonic generation (SHG) and the phasor approach to autofluorescence lifetime imaging (FLIM) to obtain fingerprints of different collagens, and then used these fingerprints to observe bone marrow fibrosis in the mouse femur. This is a label-free approach towards fast, automatable detection of fibrosis in tissue samples. FLIM has previously been used as a method of contrast in different tissues, and in this paper the phasor approach to FLIM is used to separate collagen I from collagen III, the markers of fibrosis, one of the largest groups of disorders that are often without any effective therapy. Fibrosis is often characterized by an increase in the collagen content of the corresponding tissue, and the samples are usually visualized by histochemical staining, which is pathologist-dependent and cannot be automated. PMID:26293987
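
    A sketch of the phasor transform used above, on a hypothetical mono-exponential decay: each pixel's decay I(t) maps to coordinates (g, s) at the laser repetition frequency, and species such as collagen I and III separate as clusters in the phasor plot.

```python
import numpy as np

def phasor(decay: np.ndarray, t: np.ndarray, omega: float):
    """First-harmonic phasor coordinates (g, s) of a fluorescence decay."""
    g = np.trapz(decay * np.cos(omega * t), t) / np.trapz(decay, t)
    s = np.trapz(decay * np.sin(omega * t), t) / np.trapz(decay, t)
    return g, s

t = np.linspace(0, 12.5e-9, 256)             # one 80 MHz repetition period
decay = np.exp(-t / 2.5e-9)                  # hypothetical 2.5 ns lifetime
g, s = phasor(decay, t, omega=2 * np.pi * 80e6)
```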

  2. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  3. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  4. A High Performance Image Data Compression Technique for Space Applications

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Venbrux, Jack

    2003-01-01

    A high-performance image data compression technique is currently being developed for space science applications under the requirement of high-speed and pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bitplane encoding; this results in an embedded bit string with the exact compression rate specified by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes resulting from hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development. The implementation is being designed to compress data in excess of 20 Msamples/sec and support quantization from 2 to 16 bits. This paper presents the algorithm, its applications and the status of development.
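
    The embedded property of such a bit string comes from emitting coefficient bit-planes from most to least significant, so the stream can be truncated at any point to meet an exact user-specified rate. The toy numpy sketch below illustrates the idea on a handful of coefficients; it is not the flight algorithm.

    ```python
    # Toy bit-plane coding: decoding more planes progressively refines values.
    import numpy as np

    coeffs = np.array([37, -5, 12, 0, -20, 3, 1, -9])
    signs = coeffs < 0
    mags = np.abs(coeffs).astype(np.uint8)

    planes = []
    for b in range(7, -1, -1):               # most significant plane first
        planes.append((mags >> b) & 1)

    # Decode from only the first n planes: coarse values refine as n grows.
    for n in (2, 4, 8):
        mags_hat = sum(p.astype(int) << (7 - i) for i, p in enumerate(planes[:n]))
        rec = np.where(signs, -mags_hat, mags_hat)
        print(n, "planes:", rec)
    ```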

  5. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this paired-film observer study was designed to test whether observers could detect the induced errors at all. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers; they indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.
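
    A minimal sketch of the error-characterization step: compute the power spectrum of a difference image and its radial average with numpy. The images here are synthetic stand-ins, not the study's radiographs.

    ```python
    # Characterize compression error via the power spectrum of the difference image.
    import numpy as np

    original = np.random.rand(256, 256)
    compressed = original + 0.01 * np.random.randn(256, 256)  # stand-in for codec error

    diff = compressed - original
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(diff))) ** 2

    # Radially averaged power shows which spatial frequencies carry the error.
    y, x = np.indices(diff.shape)
    r = np.hypot(y - diff.shape[0] / 2, x - diff.shape[1] / 2).astype(int)
    radial = np.bincount(r.ravel(), spectrum.ravel()) / np.bincount(r.ravel())
    print(radial[:10])
    ```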

  6. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  7. 32 CFR 161.7 - ID card life-cycle procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provide two fingerprint biometric scans and a facial image, to assist with authenticating the applicant's... manner: (i) A digitized, full-face passport-type photograph will be captured for the facial image and stored in DEERS and shall have a plain white or off-white background. No flags, posters, or other images...

  8. SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING

    PubMed Central

    Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin

    2018-01-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has the potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594

  9. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    PubMed

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has the potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.

  10. The Central Italy Seismic Sequence (2016): Spatial Patterns and Dynamic Fingerprints

    NASA Astrophysics Data System (ADS)

    Suteanu, Cristian; Liucci, Luisa; Melelli, Laura

    2018-01-01

    The paper investigates spatio-temporal aspects of the seismic sequence that started in Central Italy (Amatrice, Lazio region) in August 2016, causing hundreds of fatalities and producing major damage to settlements. On one hand, scaling properties of the landscape topography are identified and related to geomorphological processes, supporting the identification of preferential spatial directions in tectonic activity and confirming the role of past tectonic periods and ongoing processes in driving the geomorphological evolution of the area. On the other hand, relations between the spatio-temporal evolution of the sequence and the seismogenic fault systems are studied. The dynamic fingerprints of seismicity are established with the help of events thread analysis (ETA), which characterizes anisotropy in spatio-temporal earthquake patterns. ETA confirms that the direction of the seismogenic normal faults, oriented (N)NW-(S)SE, is characterized by persistent seismic activity. More importantly, it also highlights the role in the stress transfer of the pre-existing compressive structures, the Neogenic thrust and transpressive regional fronts, with a trend oriented (N)NE-(S)SW. Both the fractal features of the topographic surface and the dynamic fingerprint of the recent seismic sequence point to the hypothesis of an active interaction between the Quaternary fault systems and the pre-existing compressional structures.

  11. Ultra-Low Power Dynamic Knob in Adaptive Compressed Sensing Towards Biosignal Dynamics.

    PubMed

    Wang, Aosen; Lin, Feng; Jin, Zhanpeng; Xu, Wenyao

    2016-06-01

    Compressed sensing (CS) is an emerging sampling paradigm in data acquisition. Its integrated analog-to-information structure can perform simultaneous data sensing and compression with low-complexity hardware. To date, most of the existing CS implementations have a fixed architectural setup, which lacks flexibility and adaptivity for efficient dynamic data sensing. In this paper, we propose a dynamic knob (DK) design to effectively reconfigure the CS architecture by recognizing the biosignals. Specifically, the dynamic knob design is a template-based structure that comprises a supervised learning module and a look-up table module. We model the DK performance in a closed analytic form and optimize the design via a dynamic programming formulation. We present the design on a 130 nm process, with a 0.058 mm² fingerprint and a 187.88 nJ/event energy consumption. Furthermore, we benchmark the design performance using a publicly available dataset. Given the energy constraint in wireless sensing, the adaptive CS architecture can consistently improve the signal reconstruction quality by more than 70%, compared with the traditional CS. The experimental results indicate that the ultra-low power dynamic knob can provide an effective adaptivity and improve the signal quality in compressed sensing towards biosignal dynamics.

  12. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG-compressed bitmap images, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
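
    A sketch of the first step of such an analysis: collect block-DCT coefficients, extract their first significant digits, and compare the empirical distribution with the classic Benford curve (the paper fits a generalized form, p(d) = N*log10(1 + 1/(s + d^q)), whose parameters depend on the compression settings). The image here is synthetic, and scipy's dctn stands in for a JPEG pipeline.

    ```python
    # First-digit statistics of block-DCT AC coefficients vs. the Benford curve.
    import numpy as np
    from scipy.fft import dctn

    rng = np.random.default_rng(3)
    img = rng.random((256, 256)) * 255
    blocks = img.reshape(32, 8, 32, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)
    coef = np.stack([dctn(b, norm='ortho') for b in blocks]).reshape(len(blocks), -1)
    ac = coef[:, 1:].ravel()                       # drop each block's DC term

    mag = np.abs(ac[np.abs(ac) > 1e-6])
    first = (mag / 10.0 ** np.floor(np.log10(mag))).astype(int)  # first digit 1..9
    hist = np.bincount(first, minlength=10)[1:10] / first.size
    benford = np.log10(1 + 1 / np.arange(1, 10))   # classic law, q=1, s=0 case
    print(np.round(hist, 3), np.round(benford, 3), sep="\n")
    ```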

  13. Ultrasonic fingerprint sensor using a piezoelectric micromachined ultrasonic transducer array integrated with complementary metal oxide semiconductor electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Y.; Fung, S.; Wang, Q.

    2015-06-29

    This paper presents an ultrasonic fingerprint sensor based on a 24 × 8 array of 22 MHz piezoelectric micromachined ultrasonic transducers (PMUTs) with 100 μm pitch, fully integrated with 180 nm complementary metal oxide semiconductor (CMOS) circuitry through eutectic wafer bonding. Each PMUT is directly bonded to a dedicated CMOS receive amplifier, minimizing electrical parasitics and eliminating the need for through-silicon vias. The array frequency response and vibration mode-shape were characterized using laser Doppler vibrometry and verified via finite element method simulation. The array's acoustic output was measured using a hydrophone to be ∼14 kPa with a 28 V input, in reasonable agreement with prediction from analytical calculation. Pulse-echo imaging of a 1D steel grating is demonstrated using electronic scanning of a 20 × 8 sub-array, resulting in 300 mV maximum received amplitude and a 5:1 contrast ratio. Because the small size of this array limits the maximum image size, mechanical scanning was used to image a 2D polydimethylsiloxane fingerprint phantom (10 mm × 8 mm) at a 1.2 mm distance from the array.

  14. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images.

    PubMed

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.

  15. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  16. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for long coded transmit waveforms, it can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging significantly depends on the magnitude of the nonlinear responses of the bubbles and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
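
    For readers unfamiliar with coded excitation, the sketch below shows ideal linear pulse compression: a linear FM chirp is transmitted and the received echo is compressed with a matched filter. The bubble nonlinearity discussed in the paper is precisely what degrades this ideal picture. The sampling rate, bandwidth, and noise level are illustrative assumptions.

    ```python
    # Ideal pulse compression of a linear FM chirp with a matched filter.
    import numpy as np

    fs = 50e6                                  # sampling rate, Hz (assumed)
    t = np.arange(0, 10e-6, 1 / fs)            # 10 us transmit waveform
    chirp = np.sin(2 * np.pi * (2e6 * t + 0.5 * (2e6 / 10e-6) * t ** 2))  # 2-4 MHz

    echo = np.zeros(2048)
    echo[600:600 + chirp.size] += chirp        # ideal point reflector
    echo += 0.05 * np.random.randn(echo.size)  # receiver noise

    compressed = np.correlate(echo, chirp, mode='same')
    print(np.argmax(np.abs(compressed)))       # peak near the reflector position
    ```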

  17. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels, was 195:1 (0.041 bpp), and with an RMS error of 3.6 pixels it was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
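
    A miniature version of the vector quantization step: every pixel's 7-channel spectrum is treated as one vector and a codebook is trained with plain k-means (numpy only; the study additionally applied lossless coding to the index stream). The data and codebook size are made up for illustration.

    ```python
    # Multispectral vector quantization with a k-means codebook.
    import numpy as np

    rng = np.random.default_rng(0)
    pixels = rng.random((2000, 7))                 # toy 7-channel spectra

    def kmeans(data, k, iters=20):
        codebook = data[rng.choice(len(data), k, replace=False)].copy()
        for _ in range(iters):
            idx = np.argmin(((data[:, None] - codebook) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(idx == j):
                    codebook[j] = data[idx == j].mean(axis=0)
        return codebook, idx

    codebook, idx = kmeans(pixels, k=64)
    rms = np.sqrt(((pixels - codebook[idx]) ** 2).mean())
    print(f"codebook 64, RMS error {rms:.4f}")     # 6 index bits per 7-channel vector
    ```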

  18. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  19. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.

  20. A comparison of visible wavelength reflectance hyperspectral imaging and Acid Black 1 for the detection and identification of blood stained fingerprints.

    PubMed

    Cadd, Samuel; Li, Bo; Beveridge, Peter; O'Hare, William T; Campbell, Andrew; Islam, Meez

    2016-07-01

    Bloodstains are often encountered at scenes of violent crime and have significant forensic value for criminal investigations. Blood is one of the most commonly encountered types of biological evidence and is the most commonly observed fingerprint contaminant. Presumptive tests are used to test blood stains, and blood stained fingerprints are targeted with chemical enhancement methods such as acid stains, including Acid Black 1, Acid Violet 17 or Acid Yellow 7. Although these techniques successfully visualise ridge detail, they are destructive, do not confirm the presence of blood and can have a negative impact on DNA sampling. A novel application of visible wavelength hyperspectral imaging (HSI) is used for the non-contact, non-destructive detection and identification of blood stained fingerprints on white tiles both before and after wet chemical enhancement using Acid Black 1. The identification was obtained in a non-contact and non-destructive manner, based on the unique visible absorption spectrum of haemoglobin between 400 and 500 nm. Results from the exploration of the selectivity of the setup to detect blood against ten other non-blood protein contaminants are also presented. A direct comparison of the effectiveness of HSI with chemical enhancement using Acid Black 1 on white tiles is also shown. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  1. The non-contact detection and identification of blood stained fingerprints using visible wavelength hyperspectral imaging: Part II effectiveness on a range of substrates.

    PubMed

    Cadd, Samuel; Li, Bo; Beveridge, Peter; O'Hare, William T; Campbell, Andrew; Islam, Meez

    2016-05-01

    Biological samples, such as blood, are regularly encountered at violent crime scenes and successful identification is critical for criminal investigations. Blood is one of the most commonly encountered fingerprint contaminants, and current identification methods involve presumptive tests or wet chemical enhancement. These are destructive, however; they can affect subsequent DNA sampling and do not confirm the presence of blood, meaning they are susceptible to false positives. A novel application of visible wavelength reflectance hyperspectral imaging (HSI) has been used for the non-contact, non-destructive detection and identification of blood stained fingerprints across a range of coloured substrates of varying porosities. The identification of blood was based on the Soret γ band absorption of haemoglobin between 400 nm and 500 nm. Ridge detail was successfully visualised to the third depletion across light coloured substrates and the stain detected to the tenth depletion on both porous and non-porous substrates. A higher resolution setup for blood stained fingerprints on black tiles detected ridge detail to the third depletion and the stain to the tenth depletion, demonstrating considerable advancements from previous work. Diluted blood stains at 1500- and 1000-fold dilutions for wet and dry stains respectively were also detected on pig skin as a replica for human skin. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  2. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01

    Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [20] A. Gurbuz, J. McClellan, and W. Scott, Jr., "Compressive sensing for subsurface imaging using...SciTech Publishing, 2010, pp. 922- 938. [45] A. C. Gurbuz, J. H. McClellan, and W. R. Scott, Jr., "Compressive sensing for subsurface imaging using

  3. Compression of high-density EMG signals for trapezius and gastrocnemius muscles.

    PubMed

    Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto

    2014-03-10

    New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article aims at the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained, while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. CONCLUSIONS: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
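
    The conclusion about temporal differencing is easy to reproduce in miniature: arranging channels as image rows and differencing along time turns a smooth signal into low-entropy residuals that compress better. In the sketch below, zlib stands in for the image codec used in the paper, and the signal is a synthetic random walk.

    ```python
    # Lossless compression of a channels-by-time "image", with and without
    # temporal differencing.
    import numpy as np, zlib

    rng = np.random.default_rng(1)
    emg = np.cumsum(rng.normal(size=(64, 4096)), axis=1)   # toy 64-electrode recording
    img = np.round(emg).astype(np.int16)

    raw = zlib.compress(img.tobytes(), 9)
    diff = np.diff(img, axis=1, prepend=img[:, :1])        # temporal differences
    dif = zlib.compress(diff.tobytes(), 9)
    print(len(img.tobytes()), len(raw), len(dif))          # differencing shrinks further
    ```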

  4. Compression of high-density EMG signals for trapezius and gastrocnemius muscles

    PubMed Central

    2014-01-01

    Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article aims at the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques. Methods: HD EMG signals were placed in image rows, according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals as well as their differences in time were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Results: Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained, while keeping the signal-to-noise ratio (SNR) at 21.19 dB. For a similar FSR, higher contraction forces corresponded to higher SNR. Conclusions: The computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles. PMID:24612604

  5. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.

  6. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

    In line with the development of remote sensing science and technology and its application requirements, quantum remote sensing is proposed. First, the background of quantum remote sensing, its theory and information mechanism, and the status of imaging experiments and principle-prototype research, including related work at home and abroad, are briefly introduced. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of the compressed quantum light field for remote sensing image compression experiments and optical imaging, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is then put forward, mainly comprising the composition and working principle of the quantum remote sensing spaceborne active imaging system, the device for preparing and injecting compressed light for active imaging, and the quantum noise amplification device. Finally, a summary of the past 15 years of quantum remote sensing research and its future development are presented.

  7. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), via the addition of encryption in the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
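
    For context, an IWT maps integers to integers reversibly, which is what makes such a scheme lossless. Below is a minimal 1-D sketch of the reversible 5/3 lifting wavelet (a standard choice of IWT); the boundary handling is deliberately simplistic, and this is not the paper's exact transform.

    ```python
    # Reversible 5/3 integer lifting wavelet: predict then update, all in integers.
    import numpy as np

    def iwt53_forward(x):
        """Forward 5/3 lifting transform of an even-length integer array."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        even_r = np.append(even[1:], even[-1])       # simple right extension
        d = odd - (even + even_r) // 2               # predict step (high band)
        d_l = np.insert(d[:-1], 0, d[0])             # simple left extension
        s = even + (d_l + d + 2) // 4                # update step (low band)
        return s, d

    def iwt53_inverse(s, d):
        d_l = np.insert(d[:-1], 0, d[0])
        even = s - (d_l + d + 2) // 4                # undo update
        even_r = np.append(even[1:], even[-1])
        odd = d + (even + even_r) // 2               # undo predict
        x = np.empty(s.size + d.size, dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.random.randint(0, 256, size=64)
    s, d = iwt53_forward(x)
    assert np.array_equal(iwt53_inverse(s, d), x)    # perfect (lossless) reconstruction
    print(s[:4], d[:4])
    ```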

  8. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required for a Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
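
    The encoder/decoder pair can be sketched in a few lines: random block measurement y = Phi*x at the encoder, and a single matrix multiply at the decoder, with the projection matrix learned from training data by the MMSE criterion (P = Rxy * Ryy^-1). Everything below, including the toy signal model, is illustrative rather than the paper's exact design.

    ```python
    # Block CS measurement plus a linear MMSE decoder learned from training data.
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 64, 16                                  # 8x8 block, 4:1 measurement rate
    # Training blocks with a decaying "spectrum" (i.e., compressible signals).
    train = rng.normal(size=(5000, n)) @ np.diag(1 / np.arange(1, n + 1.0))
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix

    Y = train @ Phi.T
    Rxy = train.T @ Y / len(train)                 # cross-covariance estimate
    Ryy = Y.T @ Y / len(train)                     # measurement covariance estimate
    P = Rxy @ np.linalg.inv(Ryy)                   # MMSE linear decoder

    x = rng.normal(size=n) / np.arange(1, n + 1.0) # test block from the same model
    x_hat = P @ (Phi @ x)                          # decoding is one matrix multiply
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```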

  9. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.

  10. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we will refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and, after filtering, compressed by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
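
    A sketch of the compression-by-thresholding step with Daubechies-12 wavelets, assuming the PyWavelets package (pywt), which is our choice for illustration rather than the authors' implementation. The image is a synthetic Poisson-noise slice, and the threshold rule is likewise illustrative.

    ```python
    # Wavelet decomposition and coefficient thresholding of a noisy slice.
    import numpy as np
    import pywt

    img = np.random.poisson(50, size=(128, 128)).astype(float)  # noisy PET-like slice

    coeffs = pywt.wavedec2(img, 'db12', level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = 3 * np.median(np.abs(arr))            # illustrative threshold choice
    arr_t = pywt.threshold(arr, thresh, mode='hard')

    kept = np.count_nonzero(arr_t) / arr_t.size    # surviving-coefficient fraction
    rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices,
                                             output_format='wavedec2'), 'db12')
    rec = rec[:128, :128]
    print(f"kept {kept:.3f} of coefficients; MSE {((rec - img) ** 2).mean():.2f}")
    ```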

  11. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote-sensing and security areas.

  12. Quantization Distortion in Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
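
    The generic block-transform pipeline can be stated compactly: blockify, transform, quantize, and (at the decoder) dequantize and inverse-transform; quantization shrinks the symbol alphabet, and the reduced entropy is what the encoder exploits. A sketch using scipy's DCT with a single uniform step size (real codecs such as JPEG use per-frequency steps):

    ```python
    # Generic block-transform quantization in miniature.
    import numpy as np
    from scipy.fft import dctn, idctn

    img = np.random.rand(64, 64) * 255
    blocks = img.reshape(8, 8, 8, 8).transpose(0, 2, 1, 3).reshape(-1, 8, 8)

    step = 16.0                                     # one uniform step for simplicity
    q = np.round(np.stack([dctn(b, norm='ortho') for b in blocks]) / step)
    rec = np.stack([idctn(b * step, norm='ortho') for b in q])

    mse = ((blocks - rec) ** 2).mean()
    print(f"MSE {mse:.2f}; distinct symbols: {np.unique(q).size}")  # small alphabet
    ```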

  13. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and the visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.

  14. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment. The experiment includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging parameters. The gathered optically sampled images, with the tested imaging parameters, are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental data sets can be validated against each other. The main conclusions are: image post-processing can improve image quality; post-processing can improve image quality even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and, with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  15. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than general-purpose image-data-compression algorithms do. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data, for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.

  16. 32 CFR 161.6 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... photocopying of DoD ID cards to facilitate medical care processing, check cashing, voting, tax matters... support CAC issuance, which includes fingerprints and facial images specified in FIPS Publication 201-1... the Office of the USD(AT&L), implement the capability to obtain two segmented images (primary and...

  17. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an on-board image compression system for a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the on-board data storage and the downlink bandwidth, while avoiding additional, more complex levels of DWT decomposition. This method also succeeded in maintaining the reconstructed satellite image quality through replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.

  18. Breast compression in mammography: how much is enough?

    PubMed

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression, ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated films. The results indicated that 24% of women did not show a difference in breast thickness when the compression was reduced. This is an important new finding, because the aim of breast compression is to reduce breast thickness: if breast thickness is not reduced when compression force is applied, then discomfort is increased with no benefit to image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  19. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
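
    For reference, the PSNR and SSIM metrics that the VDM was compared against can be computed with scikit-image as below; the VDM itself is a proprietary vision model and is not reproduced here. The image pair is synthetic.

    ```python
    # PSNR and SSIM distortion metrics, assuming scikit-image is available.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    ref = np.random.rand(256, 256)
    test = np.clip(ref + 0.02 * np.random.randn(256, 256), 0, 1)  # stand-in for codec loss

    print("PSNR:", peak_signal_noise_ratio(ref, test, data_range=1.0))
    print("SSIM:", structural_similarity(ref, test, data_range=1.0))
    ```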

  20. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data volumes in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
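
    The grayscale mapping implied by the "Universal Trauma Window" (level -200 HU, width 1,500 HU) is a simple clamp-and-rescale applied before movie encoding. A numpy sketch, with the encoding step itself omitted:

    ```python
    # CT window/level mapping to 8-bit grayscale before video encoding.
    import numpy as np

    def apply_window(hu, level=-200.0, width=1500.0):
        lo, hi = level - width / 2, level + width / 2
        return np.clip((hu - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

    ct_slice = np.random.randint(-1000, 2000, size=(512, 512))  # toy HU values
    frame = apply_window(ct_slice)
    print(frame.min(), frame.max())
    ```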

  1. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  2. Beyond the ridge pattern: multi-informative analysis of latent fingermarks by MALDI mass spectrometry.

    PubMed

    Francese, S; Bradshaw, R; Ferguson, L S; Wolstenholme, R; Clench, M R; Bleay, S

    2013-08-07

    After over a century, fingerprints are still one of the most powerful means of biometric identification. The conventional forensic workflow for suspect identification consists of (i) recovering latent marks from crime scenes using the appropriate enhancement technique and (ii) obtaining an image of the mark to compare either against known suspect prints and/or to search in a Fingerprint Database. The suspect is identified through matching the ridge pattern and local characteristics of the ridge pattern (minutiae). However successful, there are a number of scenarios in which this process may fail; they include the recovery of partial, distorted or smudged marks, poor quality of the image resulting from inadequacy of the enhancement technique applied, extensive scarring/abrasion of the fingertips or absence of suspect's fingerprint records in the database. In all of these instances it would be very desirable to have a technology able to provide additional information from a fingermark exploiting its endogenous and exogenous chemical content. This opportunity could potentially provide new investigative leads, especially when the fingermark comparison and match process fails. We have demonstrated that Matrix Assisted Laser Desorption Ionisation Mass Spectrometry and Mass Spectrometry Imaging (MALDI MSI) can provide multiple images of the same fingermark in one analysis simultaneous with additional intelligence. Here, a review on the pioneering use and development of MALDI MSI for the analysis of latent fingermarks is presented along with the latest achievements on the forensic intelligence retrievable.

  3. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate for k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
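
    The shot-coil compression above is specific to this work, but the underlying idea of coil compression can be sketched generically: the leading left singular vectors of the coil-by-sample data matrix define a small set of virtual coils. The following is an assumed, minimal illustration of that generic SVD step, not the authors' method:

      import numpy as np

      def coil_compress(kspace, n_virtual):
          """Compress multi-coil k-space data into fewer virtual coils.
          kspace: (n_coils, n_samples) complex array."""
          u, s, vh = np.linalg.svd(kspace, full_matrices=False)
          a = u[:, :n_virtual]             # virtual-coil combination weights
          return a.conj().T @ kspace, a    # compressed data plus weights

      rng = np.random.default_rng(1)
      data = rng.standard_normal((32, 4096)) + 1j * rng.standard_normal((32, 4096))
      compressed, weights = coil_compress(data, n_virtual=8)
      print(compressed.shape)              # (8, 4096)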

  4. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area varied widely both between Asian women [relative standard deviation (RSD)≥21.0%] and within them (p<0.0001). The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  5. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming the brightness dimming of the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost in bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround, combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  6. Optical security verification for blurred fingerprints

    NASA Astrophysics Data System (ADS)

    Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.

    1998-12-01

    Optical fingerprint security verification is gaining popularity, as it has the potential to perform correlation at the speed of light. With advancement in optical security verification techniques, the authentication process can be made almost foolproof and reliable for financial transactions, banking, etc. In law enforcement, when a fingerprint is obtained from a crime scene, it may be blurred and thus a poor candidate for correlation purposes. Therefore, the blurred fingerprint needs to be clarified before it is used in the correlation process. There are several different types of blur, such as linear motion blur and defocus blur, induced by aberrations of the imaging system. In addition, we may or may not know the blur function. In this paper, we propose non-singularity inverse filtering in the frequency/power domain for deblurring known motion-induced blur in fingerprints. This filtering process is incorporated with the power spectrum subtraction technique, a uniqueness comparison scheme, and the separated target and reference planes method in the joint transform correlator. The proposed hardware implementation is a hybrid electronic-optical correlator system. The performance of the proposed system is verified with computer simulations for both cases: with and without additive random noise corruption.
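
    A minimal sketch of frequency-domain inverse filtering for a known motion blur follows; the eps floor is a simple stand-in for the paper's non-singularity handling, whose exact formulation is not reproduced here, and all names are illustrative:

      import numpy as np

      def deblur_known_psf(blurred, psf, eps=1e-3):
          """Regularized inverse filter for a known blur kernel; the eps
          floor avoids division by near-zero frequency components."""
          H = np.fft.fft2(psf, s=blurred.shape)       # blur transfer function
          H_safe = np.where(np.abs(H) < eps, eps, H)  # avoid singularities
          return np.real(np.fft.ifft2(np.fft.fft2(blurred) / H_safe))

      # Example: horizontal 9-pixel linear motion blur on a random image.
      img = np.random.default_rng(2).random((64, 64))
      psf = np.zeros((64, 64)); psf[0, :9] = 1.0 / 9.0
      blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
      restored = deblur_known_psf(blurred, psf)
      # residual error is dominated by the clamped near-zero frequencies
      print(np.abs(restored - img).max())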

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, Ashleigh; Wilkinson, T.J.; Holman, Thomas

    Analysis of fingerprints has predominantly focused on matching the pattern of ridges to a specific person as a form of identification. The present work focuses on identifying extrinsic materials that are left within a person's fingerprint after recent handling of such materials. Specifically, we employed infrared spectromicroscopy to locate and positively identify microscopic particles from a mixture of common materials in the latent human fingerprints of volunteer subjects. We were able to find and correctly identify all test substances based on their unique infrared spectral signatures. Spectral imaging is demonstrated as a method for automating recognition of specific substances in a fingerprint. We also demonstrate the use of Attenuated Total Reflectance (ATR) and synchrotron-based infrared spectromicroscopy for obtaining high-quality spectra from particles that were too thick or too small, respectively, for reflection/absorption measurements. We believe the application of this rapid, non-destructive analytical technique to the forensic study of latent human fingerprints has the potential to add a new layer of information available to investigators. Using fingerprints not only to identify who was present at a crime scene, but also to link who was handling key materials, will be a powerful investigative tool.

  8. Automatic Construction of Wi-Fi Radio Map Using Smartphones

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Li, Qingquan; Zhang, Xing

    2016-06-01

    Indoor positioning can provide interesting services and applications. As one of the most popular indoor positioning methods, location fingerprinting determines the location of mobile users by matching the received signal strength (RSS), which is location dependent. However, fingerprinting-based indoor positioning requires calibration and updating of the fingerprints, which is labor-intensive and time-consuming. In this paper, we propose a visual approach to the construction of radio maps for unknown indoor environments without any prior knowledge. This approach collects multi-sensor data, e.g. video, accelerometer, gyroscope, Wi-Fi signals, etc., while people (with smartphones) walk freely in indoor environments. It then uses the multi-sensor data to restore the trajectories of people based on an integrated structure from motion (SFM) and image matching method, and finally estimates the locations of sampling points on the trajectories and constructs the Wi-Fi radio map. Experimental results show that the average location error of the fingerprints is about 0.53 m.
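
    Once a radio map exists, the fingerprint-matching step mentioned above is commonly a weighted k-nearest-neighbour search over RSS vectors; the sketch below illustrates that generic matching step under assumed data layouts (it is not the paper's SFM-based map construction):

      import numpy as np

      def knn_locate(rss, fingerprints, positions, k=3):
          """Weighted k-nearest-neighbour position estimate from RSS.
          rss: (n_aps,) measurement; fingerprints: (n_ref, n_aps) radio
          map; positions: (n_ref, 2) reference-point coordinates."""
          dist = np.linalg.norm(fingerprints - rss, axis=1)
          nearest = np.argsort(dist)[:k]
          w = 1.0 / (dist[nearest] + 1e-6)   # closer references weigh more
          return (positions[nearest] * w[:, None]).sum(axis=0) / w.sum()

      # Tiny radio map: 4 reference points, 3 access points (dBm values).
      fp = np.array([[-40., -70., -60.], [-45., -65., -62.],
                     [-70., -40., -55.], [-65., -45., -58.]])
      pos = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
      print(knn_locate(np.array([-42., -68., -61.]), fp, pos))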

  9. Laser speckle decorrelation for fingerprint acquisition

    NASA Astrophysics Data System (ADS)

    Schirripa Spagnolo, Giuseppe; Cozzella, Lorenzo

    2012-09-01

    Biometry is gaining popularity as a physical security approach in situations where a high level of security is necessary. Currently, biometric solutions are embedded in a very large and heterogeneous group of applications. One of the most sensitive is airport security access to boarding gates. More airports are introducing biometric solutions based on face, fingerprint or iris recognition for passenger identification. In particular, fingerprints are the most widely used biometric, and they are mandatorily included in electronic identification documents. One important issue, which is difficult to address in traditional fingerprint acquisition systems, is preventing contact between subsequent users: sebum left on the sensor can be a potential vector for contagious diseases. Currently, non-contact devices are used to overcome this problem. In this paper, a new contact device based on laser speckle decorrelation is presented. Our system has the advantage of being compact and low-cost compared with actual contactless systems, allowing enhancement of the sebum pattern imaging contrast in a simple and low-cost way. Furthermore, it avoids the spreading of contagious diseases.

  10. Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera

    NASA Astrophysics Data System (ADS)

    Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto

    2017-02-01

    Terahertz (THz) imaging has strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection-type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified by their fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained in only a few seconds. Our method can be applied in various important fields such as security and biomedicine.

  11. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform (ICT), a derivative of the JPEG image compression standard. Two known a posteriori enhancement techniques are considered here and adapted to this setting.
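
    For readers unfamiliar with block transform coding, the sketch below shows the generic scheme in floating point using the ordinary DCT (assuming SciPy); the Galileo ICT replaces the DCT with an integer approximation, and the actual quantization tables are mission-specific and omitted here:

      import numpy as np
      from scipy.fft import dctn, idctn

      def block_transform_code(img, q=20.0, bs=8):
          """JPEG-style block coding: per-block 2D DCT, uniform quantization
          with step q, then dequantization and inverse DCT."""
          out = np.empty_like(img, dtype=float)
          for i in range(0, img.shape[0], bs):
              for j in range(0, img.shape[1], bs):
                  coeff = dctn(img[i:i+bs, j:j+bs], norm='ortho')
                  coeff_q = np.round(coeff / q) * q        # the lossy step
                  out[i:i+bs, j:j+bs] = idctn(coeff_q, norm='ortho')
          return out

      img = np.random.default_rng(3).integers(0, 256, (64, 64)).astype(float)
      recon = block_transform_code(img, q=20.0)
      print(np.abs(recon - img).mean())                    # mean distortion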

  12. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

    Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data owing to the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.

  13. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). Given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively, providing a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected from the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.
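
    A minimal sketch of the sparsity-estimation idea: take the 2D DCT, sort coefficient energies in descending order, and count how many coefficients are needed to reach the given energy threshold (names and normalization are assumptions, not the authors' code):

      import numpy as np
      from scipy.fft import dctn

      def estimate_sparsity(img, energy_threshold=0.99):
          """Fraction of 2D-DCT coefficients needed to retain a given
          share of the total signal energy."""
          e = np.sort((dctn(img, norm='ortho') ** 2).ravel())[::-1]
          cum = np.cumsum(e) / e.sum()        # normalized cumulative energy
          k = int(np.searchsorted(cum, energy_threshold)) + 1
          return k / e.size                   # dominant-coefficient ratio

      img = np.random.default_rng(4).random((64, 64))
      print(estimate_sparsity(img, energy_threshold=0.99))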

  14. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage and transmission of objects and data in 2D and 3D form, and with their reconstruction. Wavelet-based methods compress standard images at high ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, direct application of wavelets does not achieve high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is studied, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on the reconstruction of images from the compressed holograms are performed, and a comparative analysis of the applicability of various wavelets and methods of additional coefficient compression is given. Optimal compression parameters for these methods can be estimated. The size of the holographic data was reduced by up to 190 times.
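
    The thresholding-plus-quantization of wavelet coefficients can be sketched as follows for a single hologram component (assuming PyWavelets; the zero/twin-order elimination and Fourier-domain preprocessing are omitted, and all parameter values are illustrative):

      import numpy as np
      import pywt

      def wavelet_compress(channel, wavelet='db4', level=3,
                           threshold=0.05, n_bits=6):
          """Decompose, hard-threshold small coefficients, then uniformly
          quantize the survivors before reconstruction."""
          coeffs = pywt.wavedec2(channel, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          arr[np.abs(arr) < threshold * np.abs(arr).max()] = 0.0
          step = np.abs(arr).max() / (2 ** (n_bits - 1))
          arr_q = np.round(arr / step) * step
          coeffs_q = pywt.array_to_coeffs(arr_q, slices,
                                          output_format='wavedec2')
          return pywt.waverec2(coeffs_q, wavelet)

      field = np.random.default_rng(5).random((128, 128))
      recon = wavelet_compress(field)
      print(np.abs(recon - field).mean())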

  15. Low rank magnetic resonance fingerprinting.

    PubMed

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
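
    The low-rank projection at the heart of the iterative scheme is the truncated-SVD step sketched below (Eckart-Young); the gradient step, sampling operator, and dictionary matching of the full method are omitted, and the toy data are illustrative:

      import numpy as np

      def low_rank_project(x, rank):
          """Project a (space x time) image series onto its best rank-r
          approximation via the truncated SVD."""
          u, s, vh = np.linalg.svd(x, full_matrices=False)
          s[rank:] = 0.0                   # keep only the leading modes
          return (u * s) @ vh

      # Toy MRF-like series: 1024 voxels, 200 frames, true rank 5.
      rng = np.random.default_rng(6)
      series = rng.standard_normal((1024, 5)) @ rng.standard_normal((5, 200))
      noisy = series + 0.1 * rng.standard_normal((1024, 200))
      denoised = low_rank_project(noisy, rank=5)
      print(np.linalg.norm(denoised - series) < np.linalg.norm(noisy - series))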

  16. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.

  17. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  18. New image compression scheme for digital angiocardiography application

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.

    1993-06-01

    The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two different stages. The first stage obliterates the redundancy within the single-frame domain, while the second stage obliterates the redundancy among the sequential frames. Within these stages the employed data compression ratio can be easily adjusted according to the needs of angiocardiography applications, where still or moving (in slow or full motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing-teleworking processes, where Unified Image Viewing facilities are required.

  19. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  20. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.
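
    A toy sketch of region-prioritized wavelet coding follows (assuming PyWavelets): coefficients whose spatial support lies in the high-priority region are quantized finely and all others coarsely. This illustrates the general idea only; the paper's prioritization scheme and codec details are not reproduced:

      import numpy as np
      import pywt

      def roi_compress(img, roi_mask, wavelet='haar', level=2,
                       fine_q=2.0, coarse_q=32.0):
          """Quantize detail coefficients with a fine step inside the
          region of interest and a coarse step elsewhere."""
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          out = [coeffs[0]]                          # keep approximation
          for details in coeffs[1:]:
              new_details = []
              for band in details:
                  # Down-sample the ROI mask to this subband's resolution.
                  m = roi_mask[::img.shape[0] // band.shape[0],
                               ::img.shape[1] // band.shape[1]]
                  q = np.where(m, fine_q, coarse_q)
                  new_details.append(np.round(band / q) * q)
              out.append(tuple(new_details))
          return pywt.waverec2(out, wavelet)

      img = np.random.default_rng(7).random((64, 64)) * 255
      mask = np.zeros((64, 64), bool)
      mask[16:48, 16:48] = True                      # high-priority region
      recon = roi_compress(img, mask)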

  1. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares compressible flow model describing the displacement of a single voxel, which lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
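
    A simplified sketch of least-median-of-squares style outlier rejection for displacement vectors follows; the actual LFC method fits a localized compressible-flow model and uses the forward search, so this fixed-center variant is only an assumed illustration of the robust-filtering idea:

      import numpy as np

      def lmeds_filter(displacements, scale=2.5):
          """Flag block-match vectors whose residual from a robust center
          greatly exceeds the median-based scale estimate."""
          center = np.median(displacements, axis=0)      # robust location
          r2 = ((displacements - center) ** 2).sum(axis=1)
          sigma = 1.4826 * np.sqrt(np.median(r2)) + 1e-12
          return np.sqrt(r2) <= scale * sigma            # True = inlier

      rng = np.random.default_rng(8)
      good = rng.normal([2.0, -1.0], 0.2, size=(50, 2))  # coherent motion
      bad = rng.uniform(-10, 10, size=(5, 2))            # spurious matches
      vecs = np.vstack([good, bad])
      print(lmeds_filter(vecs).sum(), "of", len(vecs), "matches kept")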

  2. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to effectively improve the subjective and objective quality of degraded images at low sampling rates while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
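
    TwIST accelerates the classical iterative shrinkage/thresholding (IST) scheme by mixing the two previous iterates; the sketch below implements plain IST on a toy compressed-sensing problem to show the structure of each iteration (all names and parameter values are illustrative):

      import numpy as np

      def soft(x, t):
          # Soft-thresholding: the proximal operator of the l1 norm.
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def ist(y, A, At, lam, step=1.0, n_iter=300):
          """Plain IST for min_x ||y - A x||^2 + lam * ||x||_1."""
          x = At(y)
          for _ in range(n_iter):
              x = soft(x + step * At(y - A(x)), step * lam)
          return x

      # Toy problem: 64 random projections of an 8-sparse, 256-sample signal.
      rng = np.random.default_rng(9)
      M = rng.standard_normal((64, 256))
      M /= np.linalg.norm(M, 2)          # unit spectral norm keeps IST stable
      x_true = np.zeros(256)
      x_true[rng.choice(256, 8, replace=False)] = 1.0
      x_hat = ist(M @ x_true, lambda v: M @ v, lambda v: M.T @ v, lam=1e-3)
      print(np.linalg.norm(x_hat - x_true))   # small recovery error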

  3. 12 CFR 1022.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... fingerprint, voice print, retina or iris image, or other unique physical representation; (3) Unique electronic... reporting agency to require additional documentation or information, such as a notarized affidavit. (j...

  4. 12 CFR 1022.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... fingerprint, voice print, retina or iris image, or other unique physical representation; (3) Unique electronic... reporting agency to require additional documentation or information, such as a notarized affidavit. (j...

  5. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which strains transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received much attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then the wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
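
    A minimal sketch of the correlation-based band grouping in the first step: consecutive bands stay in one subspace while their correlation with the group's first band remains above a threshold (the threshold and grouping rule are assumptions; the wavelet and PCA stages are omitted):

      import numpy as np

      def group_bands(cube, corr_threshold=0.95):
          """Greedily split a (bands, h, w) cube into groups of highly
          correlated consecutive bands."""
          flat = cube.reshape(cube.shape[0], -1)
          corr = np.corrcoef(flat)            # band-by-band correlation
          groups, start = [], 0
          for b in range(1, cube.shape[0]):
              if corr[start, b] < corr_threshold:
                  groups.append(list(range(start, b)))
                  start = b
          groups.append(list(range(start, cube.shape[0])))
          return groups

      # Toy cube: 12 bands of 32x32 pixels drifting slowly across bands.
      rng = np.random.default_rng(10)
      base = rng.random((32, 32))
      cube = np.stack([base + 0.05 * b * rng.random((32, 32))
                       for b in range(12)])
      print(group_bands(cube))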

  6. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589

  7. Subjective evaluations of integer cosine transform compressed Galileo solid state imagery

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Gold, Yaron; Grant, Terry; Chuang, Sherry

    1994-01-01

    This paper describes a study conducted for the Jet Propulsion Laboratory, Pasadena, California, using 15 evaluators from 12 institutions involved in the Galileo Solid State Imaging (SSI) experiment. The objective of the study was to determine the impact of integer cosine transform (ICT) compression using specially formulated quantization (q) tables and compression ratios on acceptability of the 800 x 800 x 8 monochromatic astronomical images as evaluated visually by Galileo SSI mission scientists. Fourteen different images in seven image groups were evaluated. Each evaluator viewed two versions of the same image side by side on a high-resolution monitor; each was compressed using a different q level. First the evaluators selected the image with the highest overall quality to support them in their visual evaluations of image content. Next they rated each image using a scale from one to five indicating its judged degree of usefulness. Up to four preselected types of images with and without noise were presented to each evaluator.

  8. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of the whole breast and of the segmented fibroglandular tissues within it was reconstructed with triangular meshes using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at a 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram of clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  9. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP), are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.

  10. Rudiments of curvelet with applications

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.

    2012-07-01

    The curvelet transform is nowadays a favored tool for image processing. Edges are an important part of an image, and usually they are not straight lines; curvelets prove to be very efficient in representing curve-like edges. In this chapter, applications of curvelets are shown with examples such as seismic wave analysis, oil exploration, fingerprint identification, and biomedical images like mammography and MRI.

  11. An ultra-low-power image compressor for capsule endoscope.

    PubMed

    Lin, Meng-Chun; Dung, Lan-Rong; Weng, Ping-Kuo

    2006-02-25

    Gastrointestinal (GI) endoscopy has been widely applied for the diagnosis of diseases of the alimentary canal, including Crohn's disease, celiac disease and other malabsorption disorders, benign and malignant tumors of the small intestine, vascular disorders, and medication-related small bowel injury. The wireless capsule endoscope has been successfully utilized to diagnose diseases of the small intestine and alleviate the discomfort and pain of patients. However, the resolution of the demosaicked image is still low, and some interesting spots may be unintentionally omitted; in particular, the images will be severely distorted when physicians zoom in for detailed diagnosis. Increasing the resolution may cause significant power consumption in the RF transmitter; hence, image compression is necessary to save the power dissipation of the RF transmitter. To overcome this drawback, we have been developing a new capsule endoscope, called GICam, with an ultra-low-power image compression processor for capsule endoscopes or swallowable imaging capsules. In applications of capsule endoscopy, it is imperative to consider battery life/performance trade-offs. Applying state-of-the-art video compression techniques may significantly reduce the image bit rate through their high compression ratios, but they all require intensive computation and consume much battery power. There are many fast compression algorithms for reducing the computation load; however, they may distort the original image, which is unacceptable in medical care. Thus, this paper first simplifies traditional video compression algorithms and proposes a scalable compression architecture. As a result, the developed video compressor costs only 31 K gates at 2 frames per second, consumes 14.92 mW, and reduces the video size by at least 75%.

  12. Photoacoustic and Colorimetric Visualization of Latent Fingerprints.

    PubMed

    Song, Kai; Huang, Peng; Yi, Chenglin; Ning, Bo; Hu, Song; Nie, Liming; Chen, Xiaoyuan; Nie, Zhihong

    2015-12-22

    There is a high demand for a simple, rapid, accurate, user-friendly, cost-effective, and nondestructive universal method for latent fingerprint (LFP) detection. Herein, we describe a combination imaging strategy for LFP visualization with high resolution using poly(styrene-alt-maleic anhydride)-b-polystyrene (PSMA-b-PS) functionalized gold nanoparticles (GNPs). This general approach integrates the merits of both colorimetric imaging and photoacoustic imaging. In comparison with previous methods, our strategy is single-step and does not require signal amplification by silver staining. The PSMA-b-PS functionalized GNPs have good stability, tunable color, and high affinity for universal secretions (proteins/polypeptides/amino acids), which makes our approach general and flexible for visualizing LFPs on different substrates (presumably with different colors) and from different people. Moreover, the unique optical property of GNPs enables photoacoustic imaging of GNP-deposited LFPs with high resolution. This allows observation of level 3 hyperfine features of LFPs, such as pores and ridge contours, by photoacoustic imaging. This technique can potentially be used to identify chemicals within LFP residues. We believe that this dual-modality imaging of LFPs will find widespread use in forensic investigations and medical diagnostics.

  13. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide a more than twofold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ compressed images.
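
    The scaling-factor mechanism can be sketched as follows: the default JPEG luminance quantization table is multiplied by a per-image scale (in the paper the scale is chosen from estimated noise and blur characteristics, which are not modeled in this assumed illustration):

      import numpy as np

      # Default JPEG luminance quantization table (Annex K of the standard).
      Q50 = np.array([
          [16, 11, 10, 16,  24,  40,  51,  61],
          [12, 12, 14, 19,  26,  58,  60,  55],
          [14, 13, 16, 24,  40,  57,  69,  56],
          [14, 17, 22, 29,  51,  87,  80,  62],
          [18, 22, 37, 56,  68, 109, 103,  77],
          [24, 35, 55, 64,  81, 104, 113,  92],
          [49, 64, 78, 87, 103, 121, 120, 101],
          [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

      def scaled_table(scale):
          """Larger scale -> larger quantization steps -> higher compression
          ratio and lower quality."""
          return np.clip(np.round(Q50 * scale), 1, 255)

      # A noisier or blurrier image tolerates a larger scaling factor.
      print(scaled_table(2.0)[0])        # first row of the coarser table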

  14. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to make better use of their allocated transmission bits.

  15. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder with a combination of Golomb-Rice and unary encoding. All of these components have been heavily optimized for low power and low cost, and the scheme is lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion, eliminating the need for large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field-programmable gate array (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using a pig's intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution for wireless capsule endoscopy with lossless and yet acceptable levels of compression. PMID:25375753
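
    A minimal sketch of Golomb-Rice coding of prediction residuals, the kind of variable-length step described above (the zigzag mapping and the fixed parameter k are generic assumptions; the paper's Golomb-Rice/unary switching rule is not reproduced):

      def golomb_rice(value, k):
          """Codeword for a non-negative integer: unary-coded quotient
          (q ones and a terminating zero) plus k remainder bits."""
          q, r = value >> k, value & ((1 << k) - 1)
          return '1' * q + '0' + format(r, '0{}b'.format(k))

      def zigzag(e):
          # Map a signed prediction error to a non-negative integer.
          return 2 * e if e >= 0 else -2 * e - 1

      errors = [0, -1, 2, 5, -3]          # prediction residuals along a row
      print(' '.join(golomb_rice(zigzag(e), k=2) for e in errors))
      # small-magnitude errors receive the shortest codewords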

  16. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging system which combines an image sensor, an on-board computation unit, a communication component, and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. The energy budget in these networks is limited to batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSNs) and the communication from VSNs to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSNs.

  17. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Moving Picture Experts Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:26 to 1:1051). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences were still significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  18. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  19. Robust sliding-window reconstruction for Accelerating the acquisition of MR fingerprinting.

    PubMed

    Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui

    2017-10-01

    To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series, reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, was evaluated in both phantom and in vivo experiments. Error quantification was conducted on results obtained with various settings of the SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T 1 , T 2 , and proton density of comparable quality could be achieved with a two-fold or greater reduction in acquisition time. The effect of the sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high-quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Oxidation management of white wines using cyclic voltammetry and multivariate process monitoring.

    PubMed

    Martins, Rui C; Oliveira, Raquel; Bento, Fatima; Geraldo, Dulce; Lopes, Vitor V; Guedes de Pinho, Paula; Oliveira, Carla M; Silva Ferreira, Antonio C

    2008-12-24

    The development of a fingerprinting strategy capable of evaluating the "oxidation status" of white wines based on cyclic voltammetry is proposed here. It is known that the levels of specific antioxidants and redox mechanisms may be evaluated by cyclic voltammetry. This electrochemical technique was applied to two sets of samples: one group composed of normally aged white wines, and a second group obtained from a white wine forced-aging protocol with different oxygen, SO(2), pH, and temperature regimens. A study of antioxidant additions, namely ascorbic acid, was also made in order to establish a statistical link between voltammogram fingerprints and chemical antioxidant substances. It was observed that the oxidation curve presented typical features, which enables sample discrimination according to age, oxygen consumption, and antioxidant additions. In fact, it was possible to place the results into four significant orthogonal directions, compressing 99.8% of the nonrandom features. Attempts were made to make voltammogram fingerprinting a tool for monitoring oxidation management. For this purpose, a supervised multivariate control chart was developed using a control sample as reference. When white wines are plotted onto the chart, it is possible to monitor the oxidation status and to diagnose the effects of oxygen regimes and antioxidant activity. Finally, quantification of substances implicated in the oxidation process as reagents (antioxidants) and products (off-flavors) was attempted using a supervised algorithm, partial least squares regression analysis. Good correlations (r > 0.93) were observed for ascorbic acid, the Folin-Ciocalteu index, total SO(2), methional, and phenylacetaldehyde. These results show that cyclic voltammetry fingerprinting can be used to monitor and diagnose the effects of wine oxidation.

  1. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented, and by means of it a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion-confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system; the key sequence used for image encryption is related to the plaintext. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed, and compression and encryption are performed jointly in a single process. The security test results indicate that the proposed methods have high security and good compression performance.

  2. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging, or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in the reconstruction of image details and with regard to robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint is reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  3. Influence of image compression on the interpretation of spectral-domain optical coherence tomography in exudative age-related macular degeneration

    PubMed Central

    Kim, J H; Kang, S W; Kim, J-r; Chang, Y S

    2014-01-01

    Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. Horizontal OCT scans centered on the fovea were conducted using spectral-domain OCT. The images were exported to Tagged Image File Format (TIFF) and to Joint Photographic Experts Group (JPEG) format at 100, 75, 50, 25 and 10% quality. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined, and differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In JPEG images at 25 and 10% quality, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% JPEG quality would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012

  4. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T{sub ij}(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T{sub ij}(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T{sub ij}(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.

  5. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T.sub.ij (x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T.sub.ij (x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T.sub.ij (x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.
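
    The two patent records above describe the same scheme: a seamless, tile-sequenced DWT for compression, plus retrieval of region-and-resolution views from the stored coefficients. The tile bookkeeping and two-level memory management are elided in the sketch below, which shows only the multiresolution-retrieval idea using PyWavelets; the wavelet choice and function names are illustrative.

    ```python
    import numpy as np
    import pywt

    def store(img: np.ndarray, levels: int = 4):
        """Decompose the image into a multilevel 2D DWT coefficient list."""
        return pywt.wavedec2(img, "bior4.4", level=levels)

    def view(coeffs, drop_levels: int) -> np.ndarray:
        """Reconstruct at 1/2**drop_levels of full resolution by discarding
        the finest `drop_levels` detail bands (coarsest bands come first)."""
        kept = coeffs[: len(coeffs) - drop_levels]
        return pywt.waverec2(kept, "bior4.4")

    img = np.random.rand(256, 256)
    coeffs = store(img)
    thumb = view(coeffs, drop_levels=2)   # quarter-resolution view
    ```

    Selecting a region, as in the patents, would further subset the stored coefficients spatially before the inverse DWT.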

  6. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    NASA Astrophysics Data System (ADS)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite-precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical imaging field. Thus, the hardware designer must look for the optimum register length that, while meeting the lossless accuracy criterion, also leads to a high-speed implementation with small chip area. In addition, the choice of wavelet is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For these, we obtain the maximum quantization errors produced in the calculation of the WT components and thus deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we analyze the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.
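
    The core of this analysis is a worst-case bound on the error introduced by rounding filter coefficients to a given number of fractional bits: if the accumulated error stays below half a quantization step, rounding the reconstruction recovers the exact integer pixels. A minimal single-filtering-stage sketch follows; the paper's full per-scale, per-filter-bank analysis is elided, and the wavelet chosen here is illustrative.

    ```python
    import numpy as np
    import pywt

    def min_fractional_bits(taps, max_input: float) -> int:
        """Smallest number of fractional bits B such that rounding the filter
        taps to B bits perturbs one filtering stage by less than 0.5, so the
        result still rounds to the exact full-precision integer output."""
        taps = np.asarray(taps, dtype=float)
        for bits in range(1, 33):
            quantized = np.round(taps * 2**bits) / 2**bits
            # Worst-case accumulated error: sum of tap errors times the
            # largest possible input magnitude.
            worst = np.sum(np.abs(taps - quantized)) * max_input
            if worst < 0.5:
                return bits
        raise ValueError("no fractional word length up to 32 bits suffices")

    # Example: analysis low-pass filter of the biorthogonal 9/7 wavelet,
    # 8-bit input pixels.
    print(min_fractional_bits(pywt.Wavelet("bior4.4").dec_lo, max_input=255))
    ```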

  7. Nonlinear Multiscale Transformations: From Synchronization to Error Control

    DTIC Science & Technology

    2001-07-01

    transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close; however, the visual quality of the reconstructed image is significantly better for the EC compression algorithm... used in recent times in the first step of transform-coding algorithms for image compression. Ideally, a multiscale transformation allows for an

  8. The Polygon-Ellipse Method of Data Compression of Weather Maps

    DTIC Science & Technology

    1994-03-28

    Report No. DOT/FAA/RD-9416, Project Report ATC-213, AD-A278 958. The Polygon-Ellipse Method of Data Compression of Weather Maps, J.L. Gertz, 28... a means must be found to compress this image. The Polygon-Ellipse (PE) encoding algorithm developed in this report represents weather regions... severely compress the image. For example, Mode S would require approximately a 10-fold compression. In addition, the algorithms used to perform the

  9. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multiple user systems raises concerns that motivate medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) or telemedicine applications. This paper describes a hybrid data-hiding/compression system adapted to volumetric medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to magnetic resonance (MR) and computed tomography (CT) medical images show that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data-embedding rate while keeping distortion relatively low.

  10. Compressive passive millimeter wave imager

    DOEpatents

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C

    2015-01-27

    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
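
    The patent's measurement model is simple to emulate in software: project the scene against Hadamard patterns and reconstruct from a subset of the projections. The sketch below keeps the strongest coefficients merely to illustrate energy compaction; a real compressive imager fixes its measurement subset in advance and recovers the image with a sparsity-promoting solver, which is elided here.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def compressive_hadamard(scene: np.ndarray, keep: int) -> np.ndarray:
        """Reconstruct a scene from `keep` of its Hadamard projections by
        zero-filling the rest (scene size must be a power of two)."""
        n = scene.size
        H = hadamard(n)                      # +/-1 mask patterns, H @ H.T = n*I
        y = H @ scene.ravel()                # full set of projections
        z = np.zeros(n)
        idx = np.argsort(-np.abs(y))[:keep]  # oracle subset, for illustration only
        z[idx] = y[idx]
        return (H.T @ z / n).reshape(scene.shape)

    scene = np.outer(np.hanning(16), np.hanning(16))   # smooth 16x16 test scene
    approx = compressive_hadamard(scene, keep=64)      # 64 of 256 measurements
    ```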

  11. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining image/video quality. We improve the compression efficiency at the algorithmic level, or derive the optimal parameters for a given combination of machine and compression method, based on the tradeoff between energy consumption and image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2 to 5 without compromising image/video quality. PMID:23202181

  12. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress the data to reduce storage and transfer cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (short for Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap) and then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. CoGI can therefore serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
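
    The transform at the heart of CoGI is easy to illustrate: each base becomes two bits, and the bit stream is packed row-wise into a bitmap. The rectangular partition coder and reference-genome selection are elided; the two-bit mapping below is one plausible choice, not necessarily the paper's.

    ```python
    import numpy as np

    # Illustrative 2-bit encoding of the four bases.
    BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

    def genome_to_bitmap(seq: str, width: int = 64) -> np.ndarray:
        """Map a DNA string to a binary bitmap: two bits per base,
        packed row-wise and zero-padded to a rectangle."""
        bits = [b for base in seq for b in BASE_BITS[base]]
        rows = -(-len(bits) // width)              # ceil division
        padded = bits + [0] * (rows * width - len(bits))
        return np.array(padded, dtype=np.uint8).reshape(rows, width)

    print(genome_to_bitmap("ACGTACGTAC", width=8))
    ```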

  13. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, among the most popular of which are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e., the low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values and decoded AC-coefficients are combined in one matrix, followed by the inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
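
    Step (1) of the pipeline is straightforward to sketch with PyWavelets and SciPy: decompose with a two-level DWT, then DCT-code the approximation band. The Minimize-Matrix-Size, arithmetic-coding, and FMS stages are elided, and the wavelet choice here is illustrative rather than the paper's.

    ```python
    import numpy as np
    import pywt
    from scipy.fft import dctn

    def dwt_dct_front_end(img: np.ndarray):
        """Two-level DWT, then a DCT of the coarse approximation band.
        Returns the DCT-coded low-frequency matrix and the detail bands."""
        coeffs = pywt.wavedec2(img, "db2", level=2)
        cA2, details = coeffs[0], coeffs[1:]       # low- / high-frequency parts
        dc_matrix = dctn(cA2, norm="ortho")        # DCT of the approximation band
        return dc_matrix, details

    img = np.random.rand(64, 64)
    dc_matrix, high_freq = dwt_dct_front_end(img)
    ```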

  14. Finite-element modeling of compression and gravity on a population of breast phantoms for multimodality imaging simulation.

    PubMed

    Sturgeon, Gregory M; Kiarashi, Nooshin; Lo, Joseph Y; Samei, E; Segars, W P

    2016-05-01

    The authors are developing a series of computational breast phantoms based on breast CT data for imaging research. In this work, the authors develop a program that allows a user to alter the phantoms to simulate the effects of gravity and compression of the breast (craniocaudal or mediolateral oblique), making the phantoms applicable to multimodality imaging. This application utilizes a template finite-element (FE) breast model that can be applied to their presegmented voxelized breast phantoms. The FE model is automatically fit to the geometry of a given breast phantom, and the material properties of each element are set based on the segmented voxels contained within the element. The loading and boundary conditions, which include gravity, are then assigned based on a user-defined position and compression. The effect of applying these loads to the breast is computed using a multistage contact analysis in FEBio, a freely available and well-validated FE software package specifically designed for biomedical applications. The resulting deformation of the breast is then applied to a boundary mesh representation of the phantom that can be used for simulating medical images. An efficient script performs the above actions seamlessly; the user only needs to specify which voxelized breast phantom to use, the compressed thickness, and the orientation of the breast. The authors utilized their FE application to simulate compressed states of the breast indicative of mammography and tomosynthesis. Gravity and compression were simulated on example phantoms and used to generate mammograms in the craniocaudal or mediolateral oblique views. The simulated mammograms show a high degree of realism, illustrating the utility of the FE method in simulating imaging data of repositioned and compressed breasts. The breast phantoms and the compression software can become a useful resource to the breast imaging research community. These phantoms can then be used to evaluate and compare imaging modalities that involve different positioning and compression of the breast.

  15. Fingerprint verification on medical image reporting system.

    PubMed

    Chen, Yen-Cheng; Chen, Liang-Kuang; Tsai, Ming-Dar; Chiu, Hou-Chang; Chiu, Jainn-Shiun; Chong, Chee-Fah

    2008-03-01

    The healthcare industry is currently undergoing extensive change through the adoption of robust, interoperable healthcare information technology in the form of electronic medical records (EMR). However, a major concern with EMR is adequate confidentiality of the individual records being managed electronically. Multiple access points over an open network like the Internet increase the possibility of patient data interception. The obligation is on healthcare providers to procure information security solutions that do not hamper patient care while still providing confidentiality of patient information. Medical images are also part of the EMR and need to be protected from unauthorized users. This study integrates the techniques of fingerprint verification, DICOM objects, digital signatures, and digital envelopes in order to ensure that access to the hospital Picture Archiving and Communication System (PACS) or radiology information system (RIS) is granted only to certified parties.
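
    Of the techniques listed, the digital envelope is the most self-contained to sketch: encrypt the image payload with a fresh symmetric key, then seal that key with the recipient's public key. The fingerprint-verification and DICOM-packaging steps are elided; the sketch below uses the Python cryptography package, and all names are illustrative rather than the study's implementation.

    ```python
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(payload: bytes, recipient_public_key):
        """Digital envelope: AES-GCM encrypt the payload, RSA-OAEP wrap the key."""
        key = AESGCM.generate_key(bit_length=256)   # fresh symmetric key
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, payload, None)
        wrapped_key = recipient_public_key.encrypt(
            key,
            padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped_key, nonce, ciphertext

    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped, nonce, ct = seal(b"image bytes go here", priv.public_key())
    ```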

  16. Comparison of fingerprint and facial biometric verification technologies for user access and patient identification in a clinical environment

    NASA Astrophysics Data System (ADS)

    Guo, Bing; Zhang, Yu; Documet, Jorge; Liu, Brent; Lee, Jasper; Shrestha, Rasu; Wang, Kevin; Huang, H. K.

    2007-03-01

    As clinical imaging and informatics systems continue to integrate the healthcare enterprise, the need to prevent patient mis-identification and unauthorized access to clinical data becomes more apparent, especially under the Health Insurance Portability and Accountability Act (HIPAA) mandate. Last year, we presented a system to track and verify patients and staff within a clinical environment. This year, we further address the biometric verification component in order to determine which biometric system is the optimal solution for given applications in the complex clinical environment. We installed two biometric identification systems, fingerprint and facial recognition, at an outpatient imaging facility, the Healthcare Consultation Center II (HCCII). We evaluated each solution and documented the advantages and pitfalls of each biometric technology in this clinical environment.

  17. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce the costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data, is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to represent the information in the image data more efficiently. The format of the preprocessed data enables simple look-up-table decoding and direct use of the extracted features, reducing user computation for either image reconstruction or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data that describe the spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that defines each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented using source-encoding concepts. Various forms of the CCA are defined, and experimental results are presented to show the trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multispectral images from LANDSAT and other sources.
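
    A minimal sketch of the cluster-compression idea follows: cluster the pixel spectra, store each pixel as a cluster index (the feature map), and keep the centroids as a spectral codebook for look-up-table decoding. The CCA's spatially local clustering and the source coding of the feature map are simplified here to a single global k-means using scikit-learn; `cube` is an assumed height x width x bands array.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_compress(cube: np.ndarray, n_clusters: int = 16):
        """Replace each pixel spectrum by a cluster index plus a codebook."""
        h, w, bands = cube.shape
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(cube.reshape(-1, bands))
        feature_map = km.labels_.reshape(h, w).astype(np.uint8)
        return feature_map, km.cluster_centers_    # indices + spectral codebook

    def cluster_decompress(feature_map: np.ndarray, codebook: np.ndarray):
        """Look-up-table decoding: index the codebook with the feature map."""
        return codebook[feature_map]
    ```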

  18. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of past values of the difference between this initial prediction and the actual sample value. In the second stage, these counts are combined with an adaptively updated weight function, intended to capture information about data regularities introduced by the calibration process, to form the final predicted value. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica and was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
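
    The two-stage structure can be sketched in one dimension: a linear predictor supplies an initial estimate, and running statistics of its past errors supply a bias correction before the residual is taken. The adaptive weight function and the arithmetic coder of the actual method are elided; a decoder can invert the process symmetrically because it maintains the same statistics.

    ```python
    import numpy as np

    def two_stage_residuals(samples: np.ndarray) -> np.ndarray:
        """Stage 1: trivial linear predictor (previous sample).
        Stage 2: correct it by the running mean of past stage-1 errors.
        Returns integer residuals suitable for lossless entropy coding."""
        bias_sum, bias_cnt = 0.0, 1
        residuals = np.zeros(samples.shape, dtype=np.int64)
        prev = 0
        for i, x in enumerate(samples.astype(np.int64)):
            stage1 = prev                                  # initial prediction
            stage2 = stage1 + round(bias_sum / bias_cnt)   # bias-corrected estimate
            residuals[i] = x - stage2
            bias_sum += x - stage1                         # update error statistics
            bias_cnt += 1
            prev = x
        return residuals
    ```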

  19. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    NASA Astrophysics Data System (ADS)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it, while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate the shock compression. Physical processes governing the laser-induced dynamic response, such as elastic compression, compaction, pore collapse, fracture, and fragmentation, have been imaged, and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potential of accessing complementary information in scientific studies of laser-driven shock compression.

  20. Fingerprint recognition of alien invasive weeds based on the texture character and machine learning

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao

    2008-11-01

    A multi-spectral imaging technique based on texture analysis and machine learning is proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm +/- 10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues from the co-occurrence matrix. Veronica polita Fries, Veronica persica Poir., longtube ground ivy, and Lamium amplexicaule Linn. were selected for this study; they have different effects in the field and are alien invasive species in China. Images of 307 weed leaves were randomly selected for the calibration set, and the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to correct for noise caused by uneven lighting. A gray-level co-occurrence matrix was applied to extract texture characteristics, whose different algorithms capture the density, randomness, correlation, contrast, and homogeneity of the texture. Three channels (green, 550 nm +/- 10 nm; red, 650 nm +/- 10 nm; and NIR, 800 nm +/- 10 nm) were each processed to obtain the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the weed categories from the co-occurrence-matrix eigenvalues. Finally, a recognition ratio of 83.35% was obtained with the NIR channel, better than the results for the green channel (76.67%) and the red channel (69.46%). The prediction result of 81.35% indicated that the selected eigenvalues reflected the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
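
    A minimal sketch of the texture pipeline follows: gray-level co-occurrence features per channel image, then a supervised classifier. scikit-image's co-occurrence functions and an RBF SVM stand in for the paper's feature code and LS-SVM model (scikit-learn does not ship an LS-SVM); `images` and `labels` are assumed to be supplied by the caller.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    PROPS = ("contrast", "homogeneity", "correlation", "energy")

    def glcm_features(gray: np.ndarray) -> np.ndarray:
        """Co-occurrence texture features for one uint8 channel image."""
        glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

    def train(images, labels):
        """Fit an RBF SVM on GLCM features of the training images."""
        X = np.array([glcm_features(im) for im in images])
        return SVC(kernel="rbf").fit(X, labels)
    ```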
