Sample records for digital compressive detection

  1. First Digit Law and Its Application to Digital Forensics

    NASA Astrophysics Data System (ADS)

    Shi, Yun Q.

    Digital data forensics, which gathers evidence of data composition, origin, and history, is crucial in our digital world. Although this research field is still in its infancy, it has begun to attract increasing attention from the multimedia-security research community. This lecture addresses the first digit law and its applications to digital forensics. First, the Benford and generalized Benford laws, referred to as the first digit law, are introduced. Then, the application of the first digit law to detecting the JPEG compression history of a given BMP image and to detecting double JPEG compression is presented. Finally, applying the first digit law to the detection of double MPEG video compression is discussed. The first digit law is expected to play an active role in other tasks of digital forensics as well. The lesson learned is that statistical models play an important role in digital forensics, and for a specific forensic task different models may provide different performance.
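
    Benford's law predicts P(d) = log10(1 + 1/d) for the first significant digit d = 1..9. A minimal Python sketch of the comparison described above; the lognormal stand-in data is an assumption, standing in for the block-DCT coefficients used in the lecture:

        import numpy as np

        def benford_pmf():
            # P(d) = log10(1 + 1/d) for first digits d = 1..9.
            d = np.arange(1, 10)
            return np.log10(1.0 + 1.0 / d)

        def first_digit_freq(values):
            # Empirical first-significant-digit frequencies of the nonzero values.
            v = np.abs(values[values != 0]).astype(float)
            digits = (v / 10.0 ** np.floor(np.log10(v))).astype(int)
            return np.bincount(digits, minlength=10)[1:] / digits.size

        # Stand-in data; real use would feed DCT or MDCT coefficients here.
        coeffs = np.random.lognormal(mean=0.0, sigma=2.0, size=100_000)
        print(np.round(benford_pmf(), 4))
        print(np.round(first_digit_freq(coeffs), 4))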

  2. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has become increasingly difficult. Currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. At present, the only method for digital audio files commonly approved by forensic experts is the ENF criterion, which is based on analysis of fluctuations of the mains frequency induced in the electronic circuits of recording devices. Its effectiveness is therefore strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detecting double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches evaluate statistical features extracted from the MDCT coefficients, as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used to train selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as the identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on classification performance is discussed based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
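
    A heavily simplified sketch of the general recipe above (statistics of MDCT coefficients as features for a trained classifier); the toy first-digit features, random labels, and scikit-learn SVM are illustrative assumptions, not the authors' pipeline:

        import numpy as np
        from sklearn.svm import SVC

        def mdct_first_digit_features(mdct_coeffs):
            # Toy feature vector: first-digit frequencies of nonzero MDCT coefficients.
            v = np.abs(mdct_coeffs[mdct_coeffs != 0]).astype(float)
            d = (v / 10.0 ** np.floor(np.log10(v))).astype(int)
            return np.bincount(d, minlength=10)[1:] / max(d.size, 1)

        feats = [mdct_first_digit_features(np.random.randn(4096) * 100)
                 for _ in range(200)]             # placeholder "per-file" MDCT data
        X = np.vstack(feats)
        y = np.random.randint(0, 2, size=200)     # placeholder labels for the sketch
        clf = SVC(kernel="rbf").fit(X, y)         # 1 = multiple compression, 0 = single
        print(clf.predict(X[:5]))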

  3. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.

  4. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratio and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this paired-film observer study was designed to test whether observers could detect the induced error. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were printed next to the original images in randomized order with a laser film printer. The use of a film digitizer and a film printer that can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers, and indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.
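
    The parenthetical bit rate follows directly from the bit depth and the compression ratio; a one-line check (assuming 12-bit pixels, typical for digitized film):

        bits_per_pixel_original = 12          # assumed bit depth of the digitized film
        for ratio in (6, 10, 30, 60):
            print(f"{ratio}:1 -> {bits_per_pixel_original / ratio:.2f} bits/pixel")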

  5. Observer performance assessment of JPEG-compressed high-resolution chest images

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Maitz, Glenn S.; King, Jill L.; Gennari, Rose C.; Gur, David

    1999-05-01

    The JPEG compression algorithm was tested on a set of 529 chest radiographs that had been digitized at a spatial resolution of 100 micrometers and a contrast sensitivity of 12 bits. Images were compressed using five fixed 'psychovisual' quantization tables which produced average compression ratios in the range 15:1 to 61:1, and were then printed onto film. Six experienced radiologists read all cases from the laser-printed film, in each of the five compressed modes as well as in the uncompressed mode. For comparison purposes, observers also read the same cases at reduced pixel resolutions of 200 micrometers and 400 micrometers. The specific task involved detecting masses, pneumothoraces, interstitial disease, alveolar infiltrates and rib fractures. Over the range of compression ratios tested, for images digitized at 100 micrometers, we were unable to demonstrate any statistically significant decrease (p greater than 0.05) in observer performance as measured by ROC techniques. However, the observers' subjective assessments of image quality did decrease significantly as image resolution was reduced, and suggested a decreasing, but nonsignificant, trend as the compression ratio was increased. The seeming discrepancy between our failure to detect a reduction in observer performance and other published studies is likely due to: (1) the higher resolution at which we digitized our images; (2) the higher signal-to-noise ratio of our digitized films versus typical CR images; and (3) our particular choice of an optimized quantization scheme.

  6. Detecting double compression of audio signal

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Shi, Yun Q.; Huang, Jiwu

    2010-01-01

    MP3 is nowadays the most popular audio format in daily life; for example, music downloaded from the Internet and files saved by digital recorders are often in MP3 format. However, low-bitrate MP3s are often transcoded to high bitrate, since high-bitrate files are of higher commercial value. Audio recordings made on digital recorders can also be doctored easily with pervasive audio editing software. This paper presents two methods for the detection of double MP3 compression, which are essential for identifying fake-quality MP3s and for audio forensics. The proposed methods use support vector machine classifiers with feature vectors formed by the distributions of the first digits of the quantized MDCT (modified discrete cosine transform) coefficients. Extensive experiments demonstrate the effectiveness of the proposed methods. To the best of our knowledge, this is the first work to detect double compression of audio signals.

  7. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on passive statistical analysis of the image data only, is an alternative to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied in accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000, a relatively new image compression standard, offers higher compression rates and better image quality than JPEG. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression of 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived with the help of a divergence factor, which quantifies the deviation between the observed probabilities and Benford's Law. Over the 1338 test images, the mean divergence for uncompressed DWT coefficients is approximately 0.0016, lower than that of DCT coefficients at 0.0034, whereas the mean divergence for JPEG2000 images compressed at a rate of 0.1 is 0.0108, much higher than for uncompressed DWT coefficients. This clearly indicates the presence of compression in an image. Moreover, we compare the first-digit probabilities and divergences for JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
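
    A divergence factor of the kind described can be computed as a chi-square-style deviation between the observed first-digit probabilities and Benford's law; the paper's exact formula is not given, so the form below is an assumption:

        import numpy as np

        benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

        def divergence(observed):
            # Chi-square-style deviation from Benford's law; the paper's exact
            # divergence factor may differ -- this form is an assumption.
            return float(np.sum((observed - benford) ** 2 / benford))

        print(divergence(benford))              # 0.0 for a perfect fit
        print(divergence(np.full(9, 1.0 / 9)))  # uniform digits deviate noticeably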

  8. With the Advent of Tomosynthesis in the Workup of Mammographic Abnormality, is Spot Compression Mammography Now Obsolete? An Initial Clinical Experience.

    PubMed

    Ni Mhuircheartaigh, Neasa; Coffey, Louise; Fleming, Hannah; O' Doherty, Ann; McNally, Sorcha

    2017-09-01

    To determine whether the routine use of spot compression mammography is now obsolete in the assessment of screen-detected masses, asymmetries and architectural distortion since the availability of digital breast tomosynthesis. We introduced breast tomosynthesis in the workup of screen-detected abnormalities in our screening center in January 2015. During an initial learning period with tomosynthesis, standard spot compression views were also performed. Three consultant breast radiologists retrospectively reviewed all screening mammograms recalled for assessment over the first 6-month period. We assessed retrospectively whether spot compression views provided any additional diagnostic information not already apparent on tomosynthesis. All cases were also reviewed for any additional lesions detected by tomosynthesis but not detected on routine 2-view screening mammography. 548 women screened with standard 2-view digital screening mammography were recalled for assessment in the selected period, and a total of 565 lesions were assessed. 341 lesions were assessed by both tomosynthesis and routine spot compression mammography. The spot compression view was considered more helpful than tomosynthesis in only one patient, because the breast was inadequately positioned for tomosynthesis and the area in question was not adequately imaged. Apart from this technical error, there was no asymmetry, distortion or mass for which spot compression provided more diagnostic information than tomosynthesis alone. We detected three additional cancers on tomosynthesis that were not detected by routine screening mammography. From our initial experience with tomosynthesis we conclude that spot compression mammography is now obsolete in the assessment of screen-detected masses, asymmetries and distortions where tomosynthesis is available. © 2017 Wiley Periodicals, Inc.

  9. An accuracy aware low power wireless EEG unit with information content based adaptive data compression.

    PubMed

    Tolbert, Jeremy R; Kabali, Pratik; Brar, Simeranjit; Mukhopadhyay, Saibal

    2009-01-01

    We present a digital system for adaptive data compression for low-power wireless transmission of electroencephalography (EEG) data. The proposed system acts as a baseband processor between the EEG analog-to-digital front-end and the RF transceiver. It performs a real-time accuracy-energy trade-off for multi-channel EEG signal transmission by controlling the volume of transmitted data. We propose a multi-core digital signal processor for on-chip processing of EEG signals that detects the information content of each channel and performs real-time adaptive compression. Our analysis shows that the proposed approach can provide significant savings in transmitter power with minimal impact on overall signal accuracy.

  10. Detecting double compressed MPEG videos with the same quantization matrix and synchronized group of pictures structure

    NASA Astrophysics Data System (ADS)

    Aghamaleki, Javad Abbasi; Behrad, Alireza

    2018-01-01

    Double compression detection is a crucial stage in digital image and video forensics. However, detecting double compressed videos is challenging when the video forger uses the same quantization matrix and a synchronized group of pictures (GOP) structure during recompression to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used to reduce dimensionality and enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons of the results of the proposed method with those of other methods reveal the efficiency of the proposed algorithm.

  11. A generalized Benford's law for JPEG coefficients and its applications in image forensics

    NASA Astrophysics Data System (ADS)

    Fu, Dongdong; Shi, Yun Q.; Su, Wei

    2007-02-01

    In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed, including the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for JPEG compressed bitmap images, and the detection of double compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
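
    The parametric law is commonly written p(d) = N log10(1 + 1/(s + d^q)), which reduces to classical Benford's law at N = 1, q = 1, s = 0; a sketch fitting it with SciPy (the functional form is quoted from the generalized-Benford literature, and the 'observed' data here is a stand-in):

        import numpy as np
        from scipy.optimize import curve_fit

        def generalized_benford(d, N, q, s):
            # Parametric logarithmic law; classical Benford at N=1, q=1, s=0.
            return N * np.log10(1.0 + 1.0 / (s + d ** q))

        d = np.arange(1, 10, dtype=float)
        observed = np.log10(1.0 + 1.0 / d)   # stand-in: classical Benford frequencies
        params, _ = curve_fit(generalized_benford, d, observed, p0=(1.0, 1.0, 0.0))
        print(params)                        # should recover roughly N=1, q=1, s=0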

  12. Wavelet Compression of Satellite-Transmitted Digital Mammograms

    NASA Technical Reports Server (NTRS)

    Zheng, Yuan F.

    2001-01-01

    Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screen mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, and powerful computer-aided diagnosis. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnosing facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge, so transmitting a single mammogram takes a long time. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit, and the four images of a typical screening exam would take more than 16 minutes. This is too long for convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practical. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by providing advanced compression technology using the wavelet transform. OSU developed a time-efficient software package with various wavelets to compress a series of mammographic images. This document reports the results of the compression activities.
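
    The timing claims can be reproduced with back-of-the-envelope arithmetic; the protocol-efficiency factor below is an assumption used to bridge the raw 2.9-minute payload time and the quoted 'more than 4 minutes':

        bits = 4096 * 4096 * 16          # one 4K x 4K mammogram at 16 bits/pixel
        t1_rate = 1.544e6                # T1 line rate, bits/s
        efficiency = 0.7                 # assumed protocol efficiency (illustrative)

        seconds = bits / (t1_rate * efficiency)
        print(f"one image: {seconds / 60:.1f} min; "
              f"four-image exam: {4 * seconds / 60:.1f} min")
        for ratio in (10, 20, 50):
            print(f"{ratio}:1 compression -> {seconds / ratio / 60:.2f} min per image")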

  13. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; the implemented detection method uses Haar cascades to detect faces, integral images are used to speed up the calculations, and multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
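
    The detection step corresponds to standard Haar-cascade face detection, which internally uses integral images; a minimal OpenCV sketch (the file name and parameters are placeholders, not from the paper):

        import cv2

        # Standard OpenCV frontal-face Haar cascade as the privacy-region detector.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        frame = cv2.imread("frame.png")                  # placeholder input frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # detectMultiScale merges overlapping detections of the same face.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:                       # private areas to protect
            print("face region:", x, y, w, h)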

  14. A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    PubMed Central

    Sriraam, N.

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by the integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with low encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238

  15. A high-performance lossless compression scheme for EEG signals using wavelet transform and neural network predictors.

    PubMed

    Sriraam, N

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by the integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, the Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with low encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications.

  16. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System are described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain, and an adaptive rate buffer transforms the variable source rate into a fixed rate. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector with lower information entropy; two-dimensional data compression techniques are then applied to the Hadamard-transformed components of this vector. Mathematical models and data reliability analyses are also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It is shown that substantial gains can be achieved by combining video source and channel coding.
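
    A minimal sketch of block transformation in the Hadamard domain, the space in which the uplink compressor's motion detection operates; the 8x8 block size, threshold, and frame model are assumptions:

        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8)                 # 8x8 Hadamard matrix (entries +/-1)

        def hadamard_2d(block):
            return H @ block @ H.T      # separable 2-D Hadamard transform (unnormalized)

        prev = np.random.randint(0, 256, (8, 8)).astype(float)  # stand-in frame blocks
        curr = prev + np.random.randint(-2, 3, (8, 8))
        diff = np.abs(hadamard_2d(curr) - hadamard_2d(prev))
        moving = diff.max() > 64.0      # assumed motion threshold, purely illustrative
        print("block changed:", moving)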

  17. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    NASA Technical Reports Server (NTRS)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three preselected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, producing a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios of 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R2 = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for detection of periapical lesions.

  18. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as computed tomography (CT), digital breast tomosynthesis (DBT) or magnetic resonance tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need huge storage, thereby necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome in transmitting them efficiently. In addition, recent studies have raised concerns about the risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that bringing the compression before the analog-to-digital conversion (ADC) stage is more efficient than compression techniques applied after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  19. Wavelet-based reversible watermarking for authentication

    NASA Astrophysics Data System (ADS)

    Tian, Jun

    2002-04-01

    In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors, and digital watermarking has provided a valuable solution. Based on the application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarking, reversible watermarking (also called lossless, invertible, or erasable watermarking) enables the recovery of the original, unwatermarked content after the watermarked content has been verified as authentic. Such reversibility is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
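
    A simplified sketch of the bit-expansion step on an expandable integer wavelet coefficient; the full method also builds the JBIG2-coded location map and arithmetic-coded bit streams, which are omitted here:

        def embed_bit(coeff, bit):
            # Bit expansion into an "expandable" integer wavelet coefficient:
            # shift left and append the payload bit (simplified form of the scheme).
            return 2 * coeff + bit

        def extract_bit(marked):
            bit = marked & 1            # payload bit is the appended LSB
            return (marked - bit) // 2, bit

        c = -37                         # example integer wavelet coefficient
        m = embed_bit(c, 1)
        restored, b = extract_bit(m)
        assert (restored, b) == (c, 1)  # exact recovery -> reversible
        print(m, restored, b)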

  20. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.

  1. Application of Compressive Sensing to Digital Holography

    DTIC Science & Technology

    2015-05-01

  2. Compressed sensing: Radar signal detection and parameter measurement for EW applications

    NASA Astrophysics Data System (ADS)

    Rao, M. Sreenivasa; Naik, K. Krishna; Reddy, K. Maheshwara

    2016-09-01

    State-of-the-art system development is much needed for UAVs (unmanned aerial vehicles) and other airborne applications, where miniature, lightweight and low-power specifications are essential. Currently, airborne electronic warfare (EW) systems are developed with digital receiver technology using Nyquist sampling. The detection of radar signals and measurement of their parameters is a necessary requirement in EW digital receivers. The random modulator pre-integrator (RMPI) can be used for matched detection of signals using a smashed filter. RMPI hardware eliminates the high-sampling-rate analog-to-digital converter and reduces the number of samples using random sampling and detection on a sparse orthonormal basis. The RMPI exploits the structural and geometrical properties of the signal, beyond traditional time and frequency domain analysis, for improved detection. The concept has been proved with the help of MATLAB and LabVIEW simulations.
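
    As a rough illustration of the RMPI idea (not the authors' hardware), the sketch below models the random modulator as chip-wise multiplication by a pseudo-random +/-1 sequence followed by integrate-and-dump at a sub-Nyquist rate, and scores a candidate template with a smashed-filter correlation, i.e. matching in the compressed domain; all parameters are assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 1024, 64                     # Nyquist-rate samples vs. low-rate measurements

        t = np.arange(n)
        x = np.cos(2 * np.pi * 0.11 * t)    # stand-in radar-band signal

        chips = rng.choice([-1.0, 1.0], n)  # pseudo-random modulator sequence
        y = (x * chips).reshape(m, -1).sum(axis=1)   # integrate-and-dump at low rate

        # Smashed filter: push the candidate template through the same operator
        # and correlate in the measurement domain.
        template = np.cos(2 * np.pi * 0.11 * t)
        y_t = (template * chips).reshape(m, -1).sum(axis=1)
        score = y @ y_t / (np.linalg.norm(y) * np.linalg.norm(y_t))
        print(y.shape, f"correlation: {score:.3f}")  # near 1 for a matching template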

  3. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  4. A Wireless Headstage for Combined Optogenetics and Multichannel Electrophysiological Recording.

    PubMed

    Gagnon-Turcotte, Gabriel; LeChasseur, Yoan; Bories, Cyril; Messaddeq, Younes; De Koninck, Yves; Gosselin, Benoit

    2017-02-01

    This paper presents a wireless headstage with real-time spike detection and data compression for combined optogenetics and multichannel electrophysiological recording. The proposed headstage, which is intended to perform both optical stimulation and electrophysiological recording simultaneously in freely moving transgenic rodents, is built entirely with commercial off-the-shelf components and includes 32 recording channels and 32 optical stimulation channels. It can detect, compress and transmit full action potential waveforms over 32 channels in parallel and in real time using an embedded digital signal processor based on a low-power field-programmable gate array and a MicroBlaze microprocessor soft core. This processor implements a complete digital spike detector featuring a novel adaptive threshold based on a sigma-delta control loop, and a wavelet data compression module using a new dynamic coefficient re-quantization technique that achieves large compression ratios with higher signal quality. Simultaneous optical stimulation and recording were performed in vivo using an optrode featuring 8 microelectrodes and 1 implantable fiber coupled to a 465-nm LED, in the somatosensory cortex and the hippocampus of a transgenic mouse expressing channelrhodopsin (Thy1::ChR2-YFP line 4) under anesthetized conditions. Experimental results show that the proposed headstage can trigger neuron activity while collecting, detecting and compressing single-cell microvolt-amplitude activity from multiple channels in parallel, achieving overall compression ratios above 500. This is the first reported high-channel-count wireless optogenetic device providing simultaneous optical stimulation and recording. Measured characteristics show that the proposed headstage can achieve a true positive detection rate of up to 100% for signal-to-noise ratios (SNR) down to 15 dB, and up to 97.28% at an SNR as low as 5 dB. The implemented prototype features a lifespan of up to 105 minutes, and uses a lightweight (2.8 g) and compact [Formula: see text] rigid-flex printed circuit board.
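
    For flavor, a common adaptive spike-detection threshold from the literature, sigma ~ median(|x|)/0.6745, is sketched below; note that the paper's detector instead adapts its threshold with a sigma-delta control loop, which is not reproduced here:

        import numpy as np

        def detect_spikes(x, k=4.5):
            # Robust noise estimate sigma ~ median(|x|)/0.6745 (a standard choice
            # in spike-detection literature, not the paper's sigma-delta loop).
            sigma = np.median(np.abs(x)) / 0.6745
            thresh = k * sigma
            # Indices where the trace crosses the threshold upward.
            crossings = np.flatnonzero((x[1:] > thresh) & (x[:-1] <= thresh))
            return crossings + 1, thresh

        rng = np.random.default_rng(4)
        trace = rng.normal(0, 1.0, 30_000)           # stand-in noise at unit sigma
        trace[[5_000, 12_345, 27_000]] += 12.0       # three injected "spikes"
        idx, thr = detect_spikes(trace)
        print(thr, idx)                              # threshold ~4.5, indices near injections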

  5. Image compression evaluation for digital cinema: the case of Star Wars: Episode II

    NASA Astrophysics Data System (ADS)

    Schnuelle, David L.

    2003-05-01

    A program for evaluating compression algorithms proposed for use in a digital cinema application is described and the results are presented in general form. The work was intended to aid in the selection of a compression system to be used for the digital cinema release of Star Wars: Episode II in May 2002. An additional goal was to provide feedback to the algorithm proponents on what parameters and performance levels the feature film industry is looking for in digital cinema compression. The primary conclusion of the test program is that any of the current digital cinema compression proposals would work for digital cinema distribution to today's theaters.

  6. Overview of machine vision methods in x-ray imaging and microtomography

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor

    2018-04-01

    Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing. This allows modern digital image analysis to be used for automatic information extraction and interpretation. We give a short review of machine vision applications in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomography reconstruction, visualization, and setup adjustment.

  7. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, its compression results on digital astronomical images are compared with those of the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, which is based on the H-transform.

  8. Shear wave pulse compression for dynamic elastography using phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nguyen, Thu-Mai; Song, Shaozhen; Arnal, Bastien; Wong, Emily Y.; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew

    2014-01-01

    Assessing the biomechanical properties of soft tissue provides clinically valuable information to supplement conventional structural imaging. In previous studies, we introduced a dynamic elastography technique based on phase-sensitive optical coherence tomography (PhS-OCT) to characterize submillimetric structures such as skin layers or ocular tissues. Here, we implement a pulse compression technique for shear wave elastography and demonstrate it in tissue-mimicking phantoms. Using a mechanical actuator to generate broadband frequency-modulated vibrations (1 to 5 kHz), induced displacements were detected at an equivalent frame rate of 47 kHz using a PhS-OCT. The recorded signal was digitally compressed to a broadband pulse. Stiffness maps were then reconstructed from spatially localized estimates of the local shear wave speed. We demonstrate that a simple pulse compression scheme can increase the shear wave detection signal-to-noise ratio (>12 dB gain) and reduce artifacts in reconstructing stiffness maps of heterogeneous media.
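
    Pulse compression amounts to matched filtering of the received signal with the transmitted frequency-modulated sweep; a sketch using the abstract's 1-5 kHz band and 47 kHz equivalent sampling (the delay and noise level are assumptions):

        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 47_000                              # equivalent frame rate, Hz
        t = np.arange(0, 0.05, 1 / fs)           # 50 ms frequency-modulated push
        tx = chirp(t, f0=1_000, f1=5_000, t1=t[-1], method="linear")  # 1-5 kHz sweep

        delay = 300                              # assumed propagation delay in samples
        rng = np.random.default_rng(1)
        rx = (np.concatenate([np.zeros(delay), tx])
              + 0.5 * rng.standard_normal(delay + tx.size))

        compressed = correlate(rx, tx, mode="valid")  # matched filter = pulse compression
        print("estimated delay:", int(np.argmax(np.abs(compressed))))  # ~300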

  9. Measurement of Full Field Strains in Filament Wound Composite Tubes Under Axial Compressive Loading by the Digital Image Correlation (DIC) Technique

    DTIC Science & Technology

    2013-05-01

  10. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail is intact for evidential purposes. This paper gives an overview of passive techniques in digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of various existing methods; much more work needs to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  11. Influence of video compression on the measurement error of the television system

    NASA Astrophysics Data System (ADS)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require very large memory capacity, and the optimal ratio of quality to volume in video encoding is a pressing problem given the urgent need to transfer large amounts of video over various networks. Digital TV signal compression reduces the amount of data used to represent the video stream, effectively reducing the stream required for transmission and storage. When television measuring systems are used, it is important to take into account the uncertainties caused by compression of the video signal, and there are many digital compression methods. The aim of the proposed work is to study the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of a television measuring system: accuracy characterizes the difference between the measured value and the actual parameter value. The optical system and the method used to process the received video signal are both sources of error in television measurements. With compression at a constant data stream rate, the presence of errors leads to large distortions; with compression at constant quality, errors increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image, redundancy caused by the strong correlation between image elements. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are not correlated with each other, and entropy coding can be applied to these uncorrelated coefficients to reduce the digital stream. A transformation can be chosen such that, for typical images, most of the matrix coefficients are almost zero, and excluding these zero coefficients further reduces the digital stream. The discrete cosine transformation is the most widely used of the possible orthogonal transformations. The errors of television measuring systems and data compression protocols are analyzed in this paper: the main characteristics of measuring systems and the sources of their errors are identified, the most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is studied. The obtained results will increase the accuracy of measuring systems. In a television image quality measuring system, the distortions include those identical to distortions in analog systems and specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. Distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends randomly on the pre- and post-history, on the preceding and succeeding frames, which can lead to distortion of the sub-picture and of the corresponding measuring signal.
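
    The decorrelation argument can be checked numerically: a 2-D DCT of a smooth (highly correlated) 8x8 block concentrates its energy in a few coefficients, leaving most near zero; a small SciPy sketch with a synthetic block and an illustrative threshold:

        import numpy as np
        from scipy.fft import dctn, idctn

        # 8x8 block with the strong inter-pixel correlation typical of natural images.
        x = np.add.outer(np.arange(8), np.arange(8)) * 4.0 + 100.0

        c = dctn(x, norm="ortho")            # decorrelating orthogonal transform
        print(np.sum(np.abs(c) < 1.0), "of 64 coefficients are near zero")

        c[np.abs(c) < 1.0] = 0.0             # drop near-zero coefficients
        print(np.max(np.abs(idctn(c, norm="ortho") - x)))  # small reconstruction error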

  12. Effect of data compression on diagnostic accuracy in digital hand and chest radiography

    NASA Astrophysics Data System (ADS)

    Sayre, James W.; Aberle, Denise R.; Boechat, Maria I.; Hall, Theodore R.; Huang, H. K.; Ho, Bruce K. T.; Kashfian, Payam; Rahbar, Guita

    1992-05-01

    Image compression is essential for handling a large volume of digital images, including CT, MR, CR, and digitized films, in a digital radiology operation. The full-frame bit-allocation technique using the cosine transform, developed during the last few years, has proven to be an excellent irreversible image compression method. This paper describes the effect of using the hardware compression module on diagnostic accuracy in hand radiographs with subperiosteal resorption and chest radiographs with interstitial disease. Receiver operating characteristic analysis using 71 hand radiographs and 52 chest radiographs, with five observers each, demonstrates that there is no statistically significant difference in diagnostic accuracy between the original films and images compressed at ratios as high as 20:1.

  13. Assessment of low-contrast detectability for compressed digital chest images

    NASA Astrophysics Data System (ADS)

    Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.

    1994-04-01

    The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer. From that data, simulated lesions were introduced into 100 normal CR chest films, and forced-choice observer performance studies were performed. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, where the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for SF and uncompressed CR systems were statistically equivalent. The slope of the CD curve for each was - 1.0 as predicted by the Rose model. There was a significant degradation in detection found for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs. Detectability was reduced for all compressed CR radiographs.

  14. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volumes of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver: the smaller the data size, the better the transmission speed and the greater the time saved. In communication, we always want to transmit data efficiently and free of noise. This paper presents some techniques for lossless compression of text-type data, together with comparative results for multiple and single compression, which will help to identify better compression output and to develop compression algorithms.
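
    The single- versus multiple-compression comparison is easy to reproduce with Python's standard lossless codecs; a self-contained sketch (a second pass over already-compressed bytes typically gains little or even expands):

        import bz2
        import lzma
        import zlib

        # Any repetitive text-type payload works for the illustration.
        data = ("Data compression is very necessary in business data processing. "
                * 400).encode()

        for name, mod in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
            once = mod.compress(data)
            twice = mod.compress(once)           # second pass on compressed bytes
            print(f"{name:5s} single: {len(data) / len(once):6.2f}:1  "
                  f"double: {len(data) / len(twice):6.2f}:1")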

  15. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, their transmission, and their reconstruction. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for compression of off-axis digital holograms is considered. A combined technique is examined, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, along with a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients, from which optimal compression parameters can be estimated. The size of the holographic information was decreased by up to 190 times.

  16. Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Coon, Michael; McLinden, Matthew

    2013-01-01

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars, such as no need for high-power RF circuitry or high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected through range sidelobes may mask the returns from a nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could have a significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design. Traditional linear FM has a limit (-20 log BT -3 dB) for achieving ultra-low range sidelobes in pulse compression. For this study, different combinations of 20- or 40-microsecond chirp pulse width and 2- or 4-MHz chirp bandwidth were used; these are typical operational parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on an FPGA board to generate a real chirp signal, which was sent to the radar transceiver simulator. The final results have shown significant improvement in sidelobe performance compared to that obtained using a traditional linear FM chirp.

  17. Using irreversible compression in digital radiology: a preliminary study of the opinions of radiologists

    NASA Astrophysics Data System (ADS)

    Seeram, Euclid

    2006-03-01

    The large volumes of digital images produced by digital imaging modalities in radiology have provided the motivation for the development of picture archiving and communication systems (PACS) in an effort to provide an organized mechanism for digital image management. The development of more sophisticated methods of digital image acquisition (multislice CT and digital mammography, for example), as well as the implementation and performance of PACS and teleradiology systems in a health care environment, have created challenges in the area of image compression with respect to storing and transmitting digital images. Image compression can be reversible (lossless) or irreversible (lossy). While the former involves no loss of information, the latter raises concerns since information is lost. This loss of information from diagnostic medical images is of primary concern not only to radiologists, but also to patients and their physicians. In 1997, Goldberg pointed out that "there is growing evidence that lossy compression can be applied without significantly affecting the diagnostic content of images... there is growing consensus in the radiologic community that some forms of lossy compression are acceptable". The purpose of this study was to explore the opinions of expert radiologists and related professional organizations on the use of irreversible compression in routine practice. The opinions of notable radiologists in the US and Canada are varied, indicating no consensus on the use of irreversible compression in primary diagnosis; however, they are generally positive about the storage and transmission advantages. Almost all radiologists are concerned with the litigation potential of an incorrect diagnosis based on irreversibly compressed images. The survey of several radiology professional and related organizations reveals that no professional practice standards exist for the use of irreversible compression. Currently, the only standard addressing image compression is stated in the ACR's Technical Standards for Teleradiology and Digital Image Management.

  18. Frequency-Based Precursory Acoustic Emission Failure Sequences In Sedimentary And Igneous Rocks Under Uniaxial Compression

    NASA Astrophysics Data System (ADS)

    Colin, C.; Anderson, R. C.; Chasek, M. D.; Peters, G. H.; Carey, E. M.

    2016-12-01

    Identifiable precursors to rock failure have been a long-pursued and infrequently encountered phenomenon in rock mechanics and acoustic emission studies. Since acoustic emissions in compressed rocks were found to follow the Gutenberg-Richter law, failure-prediction strategies based on temporal changes in b-value have been recurrent. In this study, we extend the results of Ohnaka and Mogi [Journal of Geophysical Research, Vol. 87, No. B5, p. 3873-3884, (1982)], where the bulk frequency characteristics of rocks under incremental uniaxial compression were observed in relation to changes in b-value before and after failure. Based on the proposition that the number of low-frequency acoustic emissions is proportional to the number of high-amplitude acoustic emissions in compressed rocks, Ohnaka and Mogi (1982) demonstrated that b-value changes in granite and andesite cores under incremental uniaxial compression could be expressed in terms of the percent abundance of low-frequency events. In this study, we attempt to demonstrate that the results of Ohnaka and Mogi (1982) hold true for different rock types (basalt, sandstone, and limestone) and different sample geometries (rectangular prisms). To this end, the design of the compression tests was kept similar to that of Ohnaka and Mogi (1982). Two high-frequency piezoelectric transducers, of 1 MHz and 500 kHz, coupled to the sides of the samples detected the higher- and lower-frequency acoustic emission signals. However, rather than gathering parametric data from an analog signal using a counter as per Ohnaka and Mogi (1982), we used an oscilloscope as an analog-to-digital converter interfacing with LabVIEW 2015 to record the complete waveforms. The digitally stored waveforms were then processed, with acoustic emission events detected using a statistical method and filtered using a 2nd-order Butterworth filter. In addition to calculating the percent abundance of low-frequency events over time, the peak frequency of the acoustic emissions over time was observable thanks to the digital method of waveform capture. This allows a more direct comparison between frequency characteristics and b-values of rocks under compression, and investigates the viability of observing frequency behavior over time as a method of rock failure prediction.
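
    The b-value in the Gutenberg-Richter relation can be estimated from event magnitudes by Aki's maximum-likelihood formula b = log10(e) / (mean(M) - Mc); a short sketch with synthetic magnitudes (the estimator is standard seismology practice, not taken from this abstract):

        import numpy as np

        def b_value(magnitudes, m_c):
            # Aki's maximum-likelihood estimator for the Gutenberg-Richter b-value,
            # using only events at or above the completeness magnitude m_c.
            m = np.asarray(magnitudes, dtype=float)
            m = m[m >= m_c]
            return np.log10(np.e) / (m.mean() - m_c)

        rng = np.random.default_rng(2)
        # Synthetic catalog: magnitudes above Mc=1.0 drawn so that b ~= 1.
        mags = 1.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
        print(f"estimated b-value: {b_value(mags, 1.0):.2f}")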

  19. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
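
    As a sketch, the NMSE used here is the energy of the difference image normalized by the energy of the original; the dissertation's exact normalization may differ.

    ```python
    # Minimal sketch of the normalized mean-square-error (NMSE) quality measure.
    import numpy as np

    def nmse(original, reconstructed):
        original = original.astype(np.float64)
        reconstructed = reconstructed.astype(np.float64)
        diff = original - reconstructed          # the "difference image"
        return np.sum(diff ** 2) / np.sum(original ** 2)
    ```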

  20. Cost-effective handling of digital medical images in the telemedicine environment.

    PubMed

    Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel

    2007-09-01

    This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization through less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with no diagnostically important loss of data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging noise to compression noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high quality monitors and conventional computer monitors. The results presented show good potential for implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments. © 2006 Elsevier Ireland Ltd.
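
    The ICR criterion is easy to operationalize: estimate the inherent imaging noise, estimate the compression noise as the residual between the original and decompressed images, and require their ratio to exceed four. Estimating imaging noise from a flat background patch is an assumption here; the paper does not specify its estimator.

    ```python
    # Sketch of the imaging-noise-to-compression-noise ratio (ICR) check.
    import numpy as np

    def icr(original, decompressed, flat_patch):
        """flat_patch: a region of `original` with no anatomy, used to estimate imaging noise."""
        imaging_noise = np.std(flat_patch.astype(np.float64))
        compression_noise = np.std(original.astype(np.float64) - decompressed.astype(np.float64))
        return imaging_noise / compression_noise

    # acceptable = icr(img, jpeg_decoded, img[:64, :64]) > 4
    ```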

  1. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading technique of video coding. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC compress PSDHS efficiently, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15000 kbps.
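
    A rough impression of such an AVC/HEVC comparison can be had with ffmpeg, encoding the same source sequence with libx264 (AVC) and libx265 (HEVC) at the 15000 kbps bitrate mentioned above. The file names are placeholders, and the abstract does not specify the encoder settings actually used.

    ```python
    # Sketch: encode one hologram sequence with AVC and HEVC at the same bitrate.
    import subprocess

    def encode(src, dst, codec, bitrate="15000k"):
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-b:v", bitrate, dst],
            check=True,
        )

    encode("psdhs.avi", "psdhs_avc.mp4", "libx264")   # AVC
    encode("psdhs.avi", "psdhs_hevc.mp4", "libx265")  # HEVC
    ```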

  2. The wavelet/scalar quantization compression standard for digital fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
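
    The core of the wavelet/scalar quantization approach can be sketched as a wavelet decomposition followed by uniform scalar quantization of each subband. The actual standard fixes the filter bank, per-subband bin widths, and a Huffman entropy coder; the biorthogonal filter and single step size below are placeholders, not the standard's parameters.

    ```python
    # Minimal sketch of wavelet decomposition + uniform scalar quantization.
    import numpy as np
    import pywt

    def wsq_like(image, level=4, step=8.0):
        coeffs = pywt.wavedec2(image.astype(np.float64), "bior4.4", level=level)
        # Uniform scalar quantization: snap each coefficient to a multiple of `step`.
        quantized = [np.round(coeffs[0] / step)]
        for (ch, cv, cd) in coeffs[1:]:
            quantized.append(tuple(np.round(c / step) for c in (ch, cv, cd)))
        # Dequantize and reconstruct (entropy coding of `quantized` omitted).
        dq = [quantized[0] * step] + [tuple(c * step for c in t) for t in quantized[1:]]
        return pywt.waverec2(dq, "bior4.4")
    ```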

  3. Low cost voice compression for mobile digital radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1985-01-01

    A new technique for low-cost, robust voice compression at 4800 bits per second was studied. The approach was based on using a cascade of digital biquad adaptive filters with simplified multipulse excitation, followed by simple bit sequence compression.
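
    A cascade of digital biquads is simply a chain of second-order IIR sections, which scipy represents directly as second-order sections. The coefficients below are stable placeholders; the adaptive coefficient update and the multipulse excitation of the actual coder are not modeled.

    ```python
    # Sketch: a fixed cascade of three biquad (second-order) filters.
    import numpy as np
    from scipy.signal import sosfilt

    sos = np.array([
        [0.20, 0.40, 0.20, 1.0, -0.5, 0.10],   # b0 b1 b2 a0 a1 a2 (placeholder)
        [0.30, 0.00, -0.30, 1.0, -0.3, 0.05],
        [0.25, 0.50, 0.25, 1.0, -0.7, 0.20],
    ])

    def shape(speech_frame):
        """Pass one frame of speech samples through the biquad cascade."""
        return sosfilt(sos, speech_frame)
    ```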

  4. Watermarking requirements for Boeing digital cinema

    NASA Astrophysics Data System (ADS)

    Lixvar, John P.

    2003-06-01

    The enormous economic incentives for safeguarding intellectual property in the digital domain have made forensic watermarking a research topic of considerable interest. However, a recent examination of some of the leading product development efforts reveals that at present there is no effective watermarking implementation that addresses both the fidelity and security requirements of high definition digital cinema. If Boeing Digital Cinema (BDC, a business unit of Boeing Integrated Defense Systems) is to succeed in using watermarking as a deterrent to the unauthorized capture and distribution of high value cinematic material, the technology must be robust, transparent, asymmetric in its insertion/detection costs, and compatible with all the other elements of Boeing's multi-layered security system, including its compression, encryption, and key management services.

  5. Compressed ultrasound video image-quality evaluation using a Likert scale and Kappa statistical analysis

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Carter, Stephen J.; Langer, Steven G.; Andrew, Rex K.

    1998-06-01

    Experiments using NASA's Advanced Communications Technology Satellite were conducted to provide an estimate of the compressed video quality required for preservation of clinically relevant features for the detection of trauma. Bandwidth rates of 128, 256 and 384 kbps were used. A five-point Likert scale (1 = no useful information, 5 = good diagnostic quality) was used in a subjective preference questionnaire to evaluate the quality of the compressed ultrasound imagery at the three compression rates for several anatomical regions of interest. At 384 kbps the Likert scores (mean ± SD) were abdomen (4.45 ± 0.71), carotid artery (4.70 ± 0.36), kidney (5.0 ± 0.0), liver (4.67 ± 0.58) and thyroid (4.03 ± 0.74). Due to the volatile nature of the H.320 compressed digital video stream, no statistically significant results can be derived through this methodology. As the MPEG standard has at its roots many of the same intraframe and motion vector compression algorithms as the H.261 (such as that used in the previous ACTS/AMT experiments), we are using the MPEG compressed video sequences to best gauge what minimum bandwidths are necessary for preservation of clinically relevant features for the detection of trauma. We have been using an MPEG codec board to collect losslessly compressed video clips from high quality S-VHS tapes and through direct digitization of S-video. Due to the large number of videoclips and questions to be presented to the radiologists, and for ease of application, we have developed a web browser interface for this video visual perception study. Due to the large numbers of observations required to reach statistical significance in most ROC studies, Kappa statistical analysis is used to analyze the degree of agreement between observers and between viewing assessments. If the degree of agreement among readers is high, then there is a possibility that the ratings (i.e., average Likert score at each bandwidth) do in fact reflect the dimension they are purported to reflect (video quality versus bandwidth). It is then possible to make an intelligent choice of bandwidth for streaming compressed video and compressed videoclips.
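
    Kappa statistics quantify agreement beyond chance. A minimal sketch of Cohen's kappa for two raters over five Likert categories follows; the example scores are invented, and a multi-reader setting like this study's would use an extension such as Fleiss' kappa.

    ```python
    # Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    # agreement and p_e is the chance agreement expected from the marginals.
    import numpy as np

    def cohens_kappa(rater_a, rater_b, categories=(1, 2, 3, 4, 5)):
        a = np.asarray(rater_a)
        b = np.asarray(rater_b)
        p_o = np.mean(a == b)
        p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
        return (p_o - p_e) / (1.0 - p_e)

    # Example: two radiologists' Likert scores for the same ten clips.
    print(cohens_kappa([5, 4, 4, 5, 3, 4, 5, 5, 4, 3],
                       [5, 4, 3, 5, 3, 4, 5, 4, 4, 3]))
    ```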

  6. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.

  7. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  8. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  9. Compressive sensing for single-shot two-dimensional coherent spectroscopy

    NASA Astrophysics Data System (ADS)

    Harel, E.; Spencer, A.; Spokoyny, B.

    2017-02-01

    In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response with signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.

  10. Macro-carriers of plastic deformation of steel surface layers detected by digital image correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopanitsa, D. G., E-mail: kopanitsa@mail.ru; Ustinov, A. M., E-mail: artemustinov@mail.ru; Potekaev, A. I., E-mail: potekaev@spti.tsu.ru

    2016-01-15

    This paper studies the evolution of deformation fields in the surface layers of medium-carbon low-alloy steel specimens under compression. The experiments were performed on the "Universal Testing Machine 4500" using the Vic-3D digital stereoscopic image processing system. A transition between stages is reflected as a redistribution of deformation in the near-surface layers. Electron microscopy shows that the structure of the steel is a mixture of pearlite and ferrite grains, with proportions of 40% pearlite and 60% ferrite.

  11. Compressed 3D and 2D digital images versus standard 3D slide film for the evaluation of glaucomatous optic nerve features.

    PubMed

    Sandhu, Simrenjeet; Rudnisky, Chris; Arora, Sourabh; Kassam, Faazil; Douglas, Gordon; Edwards, Marianne C; Verstraten, Karin; Wong, Beatrice; Damji, Karim F

    2018-03-01

    Clinicians can feel confident that compressed three-dimensional digital (3DD) and two-dimensional digital (2DD) imaging, when evaluating important features of glaucomatous disc damage, is comparable to the previous gold standard of stereoscopic slide film photography, supporting the use of digital imaging for teleglaucoma applications. To compare the sensitivity and specificity of 3DD and 2DD photography with stereo slide film in detecting glaucomatous optic nerve head features. This prospective, multireader validation study imaged and compressed glaucomatous, suspicious or normal optic nerves using a ratio of 16:1 into 3DD and 2DD (1024×1280 pixels) and compared both to stereo slide film. The primary outcome was vertical cup-to-disc ratio (VCDR); secondary outcomes, including disc haemorrhage and notching, were also evaluated. Each format was graded randomly by four glaucoma specialists. A protocol was implemented for harmonising data, including consensus-based interpretation as needed. There were 192 eyes imaged with each format. The mean VCDR for slide, 3DD and 2DD was 0.59±0.20, 0.60±0.18 and 0.62±0.17, respectively. The agreement of VCDR for 3DD versus film was κ=0.781 and for 2DD versus film was κ=0.69. Sensitivity (95.2%), specificity (95.2%) and area under the curve (AUC; 0.953) of 3DD imaging to detect notching were better (p=0.03) than for 2DD (90.5%; 88.6%; AUC=0.895). Similarly, sensitivity (77.8%), specificity (98.9%) and AUC (0.883) of 3DD to detect disc haemorrhage were better (p=0.049) than for 2DD (44.4%; 99.5%; AUC=0.72). There was no difference between 3DD and 2DD imaging in detecting disc tilt (p=0.7), peripapillary atrophy (p=0.16), grey crescent (p=0.1) or pallor (p=0.43), although 3DD detected sloping better (p=0.013). Both 3DD and 2DD imaging demonstrate excellent reproducibility in comparison to stereo slide film when experts evaluate VCDR, notching and disc haemorrhage. 3DD in this study was slightly more accurate than 2DD for evaluating disc haemorrhage, notching and sloping. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. Gradual cut detection using low-level vision for digital video

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Hyun; Choi, Yeun-Sung; Jang, Ok-bae

    1996-09-01

    Digital video computing and organization is one of the important issues in multimedia systems, signal compression, and databases. Video should be segmented into shots to be used for identification and indexing. This approach requires a suitable method to automatically locate cut points in order to separate the shots in a video. Automatic cut detection to isolate shots in a video has received considerable attention due to many practical applications, such as video databases, browsing, authoring systems, retrieval and film production. Previous studies are based on a set of difference mechanisms that measure the content changes between video frames, but they could not detect special effects such as dissolves, wipes, fade-ins, fade-outs, and structured flashing. In this paper, a new cut detection method for gradual transitions based on computer vision techniques is proposed. Experimental results applied to commercial video are then presented and evaluated.
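
    A common baseline for the difference mechanisms mentioned above is histogram differencing between consecutive frames: a single-frame spike indicates a hard cut, while a sustained run of moderate differences suggests a gradual transition. The sketch below illustrates that baseline with placeholder thresholds; it is not the paper's vision-based method.

    ```python
    # Sketch: histogram-difference shot-cut detection on grayscale frames.
    import numpy as np

    def histogram(frame, bins=64):
        h, _ = np.histogram(frame, bins=bins, range=(0, 255))
        return h / h.sum()

    def detect_cuts(frames, hard_t=0.5, soft_t=0.15, run_len=10):
        hists = [histogram(f) for f in frames]
        diffs = [np.abs(hists[i] - hists[i - 1]).sum() for i in range(1, len(hists))]
        # Hard cut: one large jump between consecutive frames.
        hard = [i for i, d in enumerate(diffs, start=1) if d > hard_t]
        # Gradual transition: `run_len` consecutive moderate differences.
        gradual = [i for i in range(1, len(diffs) - run_len + 1)
                   if all(soft_t < d <= hard_t for d in diffs[i - 1:i - 1 + run_len])]
        return hard, gradual
    ```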

  13. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
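
    For reference, the OMP baseline that the two-stage processor is compared against can be sketched in a few lines: greedily select the dictionary atom most correlated with the residual, then re-fit the selected support by least squares. The sensing matrix A and the sparsity level are assumed inputs.

    ```python
    # Sketch of orthogonal matching pursuit (OMP) for y = A x, x sparse.
    import numpy as np

    def omp(A, y, sparsity):
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            correlations = np.abs(A.T @ residual)
            correlations[support] = 0            # do not re-select chosen atoms
            support.append(int(np.argmax(correlations)))
            # Least-squares re-fit on the current support.
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x
    ```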

  14. A multi-channel setup to study fractures in scintillators

    NASA Astrophysics Data System (ADS)

    Tantot, A.; Bouard, C.; Briche, R.; Lefèvre, G.; Manier, B.; Zaïm, N.; Deschanel, S.; Vanel, L.; Di Stefano, P. C. F.

    2016-12-01

    To investigate fractoluminescence in scintillating crystals used for particle detection, we have developed a multi-channel setup built around samples of double-cleavage drilled compression (DCDC) geometry in a controllable atmosphere. The setup allows the continuous digitization over hours of various parameters, including the applied load and the compressive strain of the sample, as well as the acoustic emission. Emitted visible light is recorded with nanosecond resolution, and crack propagation is monitored using infrared lighting and a camera. An example of application to Bi4Ge3O12 (BGO) is provided.

  15. Data compression and information retrieval via symbolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, X.Z.; Tracy, E.R.

    Converting a continuous signal into a multisymbol stream is a simple method of data compression which preserves much of the dynamical information present in the original signal. The retrieval of selected types of information from symbolic data involves binary operations and is therefore optimal for digital computers. For example, correlation time scales can be easily recovered, even at high noise levels, by varying the time delay for symbolization. Also, the presence of periodicity in the signal can be reliably detected even if it is weak and masked by a dominant chaotic/stochastic background. © 1998 American Institute of Physics.
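
    A minimal sketch of the symbolization idea: threshold the signal at its median to obtain a binary symbol stream, then look for periodicity in the stream's autocorrelation. The delay-scan used for correlation time scales is omitted, and the toy signal is invented for illustration.

    ```python
    # Sketch: binary symbolization and periodicity detection via autocorrelation.
    import numpy as np

    def symbolize(signal):
        return (signal > np.median(signal)).astype(np.int8)

    def symbol_autocorrelation(symbols, max_lag=200):
        s = symbols - symbols.mean()
        return np.array([np.mean(s[:-lag] * s[lag:]) for lag in range(1, max_lag)])

    # A weak sinusoid buried in noise should show a peak near its 50-sample period.
    t = np.arange(5000)
    x = 0.2 * np.sin(2 * np.pi * t / 50) + np.random.randn(t.size)
    ac = symbol_autocorrelation(symbolize(x))
    print("strongest lag:", int(np.argmax(ac)) + 1)
    ```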

  16. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  17. Compressed digital holography: from micro towards macro

    NASA Astrophysics Data System (ADS)

    Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter

    2016-09-01

    signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outlook on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale, where much higher resolution holograms must be acquired and processed on the computer.

  18. Wavelet/scalar quantization compression standard for fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  19. A closed-loop compressive-sensing-based neural recording system.

    PubMed

    Zhang, Jie; Mitra, Srinjoy; Suo, Yuanming; Cheng, Andrew; Xiong, Tao; Michon, Frederic; Welkenhuysen, Marleen; Kloosterman, Fabian; Chin, Peter S; Hsiao, Steven; Tran, Trac D; Yazicioglu, Firat; Etienne-Cummings, Ralph

    2015-06-01

    This paper describes a low-power closed-loop compressive sensing (CS) based neural recording system. This system provides an efficient method to reduce the data transmission bandwidth for implantable neural recording devices, and by doing so it reduces the majority of system power consumption, which is dissipated at the data readout interface. The design of the system is scalable and is a viable option for large-scale integration of electrodes or recording sites onto a single device. The entire system consists of an application-specific integrated circuit (ASIC) with 4 recording readout channels with CS circuits, a real-time off-chip CS recovery block and a recovery quality evaluation block that provides closed feedback to adaptively adjust the compression rate. Since CS performance is strongly signal dependent, the ASIC has been tested in vivo and with standard public neural databases. Implemented using efficient digital circuits, this system is able to achieve >10 times data compression on the entire neural spike band (500 Hz-6 kHz) while consuming only 0.83 µW (0.53 V supply) of additional digital power per electrode. When only the spikes are desired, the system is able to further compress the detected spikes by around 16 times. Unlike other similar systems, both the characteristic spikes and the inter-spike data can be recovered, which guarantees a >95% spike classification success rate. The compression circuit occupies 0.11 mm²/electrode in a 180 nm CMOS process. The complete signal processing circuit consumes <16 µW/electrode. The power and area efficiency demonstrated by the system make it an ideal candidate for integration into large recording arrays containing thousands of electrodes. Closed-loop recording and reconstruction performance evaluation further improves the robustness of the compression method, thus making the system more practical for long-term recording.

  20. Convolution neural-network-based detection of lung structures

    NASA Astrophysics Data System (ADS)

    Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    1994-05-01

    Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may result in unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, one should be able to use this information to facilitate CADx tasks on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.

  1. Distributed Compressive Sensing vs. Dynamic Compressive Sensing: Improving the Compressive Line Sensing Imaging System through Their Integration

    DTIC Science & Technology

    2015-01-01

    streak tube imaging Lidar [15]. Nevertheless, instead of a one-dimensional (1D) fan beam, a laser source modulates the digital micromirror device (DMD) and... Trans. Inform. Theory, vol. 52, pp. 1289-1306, 2006. [10] D. Dudley, W. Duncan and J. Slaughter, "Emerging Digital Micromirror Device (DMD) Applications

  2. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values.

  3. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
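
    The quantitative comparison step can be sketched with scikit-image's structural similarity, averaged over slices to give one MSSIM score per reconstruction. The names phantom, recon_art, and recon_art_tv are assumed 3D numpy volumes for illustration, not objects from the simulator's actual API.

    ```python
    # Sketch: score a reconstruction against the phantom with per-slice MSSIM.
    import numpy as np
    from skimage.metrics import structural_similarity

    def mssim_per_volume(phantom, recon):
        data_range = float(phantom.max() - phantom.min())
        scores = [structural_similarity(phantom[z], recon[z], data_range=data_range)
                  for z in range(phantom.shape[0])]
        return float(np.mean(scores))

    # print("ART   :", mssim_per_volume(phantom, recon_art))
    # print("ART+TV:", mssim_per_volume(phantom, recon_art_tv))
    ```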

  4. Digital micromirror devices in Raman trace detection of explosives

    NASA Astrophysics Data System (ADS)

    Glimtoft, Martin; Svanqvist, Mattias; Ågren, Matilda; Nordberg, Markus; Östmark, Henric

    2016-05-01

    Imaging Raman spectroscopy based on tunable filters is an established technique for detecting single explosives particles at stand-off distances. However, large light losses are inherent in the design due to sequential imaging at different wavelengths, leading to effective transmission often well below 1%. The use of digital micromirror devices (DMD) and compressive sensing (CS) in imaging Raman explosives trace detection can improve light throughput and add significant flexibility compared to existing systems. DMDs are based on mature microelectronics technology, and are compact, scalable, and can be customized for specific tasks, including new functions not available with current technologies. This paper focuses on how a DMD can be used when applying CS-based imaging Raman spectroscopy to stand-off explosives trace detection, and evaluates the performance in terms of light throughput, image reconstruction ability and potential detection limits. This type of setup also offers the possibility to combine imaging Raman with non-spatially resolved fluorescence suppression techniques, such as Kerr gating. The system used consists of a second-harmonic Nd:YAG laser for sample excitation, collection optics, a DMD, a CMOS camera and a spectrometer with an ICCD camera for signal gating and detection. Initial results for compressive sensing imaging Raman show a stable reconstruction procedure even at low signals and in the presence of an interfering background signal. It is also shown to give increased effective light transmission without sacrificing molecular specificity or area coverage compared to filter-based imaging Raman. At the same time, it adds flexibility so that the setup can be customized for new functionality.
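
    Before compression is applied, the fully sampled case is instructive: with a complete set of Hadamard masks on the DMD, a single-element detector measures one inner product per pattern and the image inverts exactly. The sketch below shows that measurement model; the mask size and the random test scene are placeholders (a ±1 mask is realized on a DMD as two complementary binary patterns).

    ```python
    # Sketch: Hadamard-multiplexed single-pixel imaging.
    import numpy as np
    from scipy.linalg import hadamard

    N = 32                        # image is N x N; N*N must be a power of two here
    H = hadamard(N * N)           # rows are the ±1 measurement patterns

    scene = np.random.rand(N, N)  # stand-in for the Raman image
    measurements = H @ scene.ravel()            # one detector value per pattern
    recovered = (H.T @ measurements) / (N * N)  # Hadamard matrices satisfy H^T H = n I
    assert np.allclose(recovered.reshape(N, N), scene)
    ```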

  5. Towards Statistically Undetectable Steganography

    DTIC Science & Technology

    2011-06-30

    payload size. Middle, payload proportional to y/N. Right, proportional to N. LSB replacement steganography in never-compressed cover images, detected... Books. (1) J. Fridrich, Steganography in Digital Media: Principles, Algorithms, and Applications, Cambridge University Press, November 2009. Journal... Images for Applications in Steganography," IEEE Trans. on Info. Forensics and Security, vol. 3(2), pp. 247-258, 2008. Conference papers. (1) T. Filler

  6. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.

  7. [Research and realization of signal processing algorithms based on FPGA in digital ophthalmic ultrasonography imaging].

    PubMed

    Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun

    2015-01-01

    To design and improve FPGA-based signal processing algorithms for ophthalmic ultrasonography. Three signal processing modules were implemented in Quartus II using the Verilog HDL hardware description language: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared with the original system, the hardware cost is reduced, the image is clearer and contains more information about the deep eyeball, and the detection depth increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system so that it can effectively improve the image quality of existing equipment.

  8. Digital image forensics for photographic copying

    NASA Astrophysics Data System (ADS)

    Yin, Jing; Fang, Yanmei

    2012-03-01

    Image display technology has greatly developed over the past few decades, making it possible to recapture high-quality images from a display medium, such as a liquid crystal display (LCD) screen or a printed paper. Recaptured images are not regarded as a separate image class in current digital image forensics research, yet the content of a recaptured image may have been tampered with. In this paper, two sets of features based on noise and on the traces of double JPEG compression are proposed to identify these recaptured images. Experimental results showed that our proposed features perform well for detecting photographic copying.

  9. Evaluation of Digital Compressed Sensing for Real-Time Wireless ECG System with Bluetooth low Energy.

    PubMed

    Wang, Yishan; Doleschel, Sammy; Wunderlich, Ralf; Heinen, Stefan

    2016-07-01

    In this paper, a wearable, wireless ECG system is first designed with Bluetooth Low Energy (BLE). It can detect 3-lead ECG signals and is completely wireless. Second, digital compressed sensing (CS) is implemented to increase the energy efficiency of the wireless ECG sensor. Different sparsifying bases, various compression ratios (CR) and several reconstruction algorithms are simulated and discussed. Finally, the reconstruction is done by an Android application (app) on a smartphone to display the signal in real time. The power efficiency is measured and compared with the system without CS. The optimal sparsifying basis, built from 3-level db4 wavelet decomposition coefficients, a 1-bit Bernoulli random matrix, and the most suitable reconstruction algorithm are selected through the simulations and applied on the sensor node and the app. The signal is successfully reconstructed and displayed in the smartphone app. The battery life of the sensor node is extended from 55 h to 67 h. The presented wireless ECG system with CS can thus significantly extend battery life, by 22%. With its compact form and long working time, the system provides a feasible solution for long-term homecare use.
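
    The sensor-node side of such a digital CS scheme reduces to one matrix-vector product per ECG window with a ±1 Bernoulli matrix, which needs only additions and subtractions. The window length, measurement count, and shared seed below are assumptions; reconstruction on the smartphone (e.g., against a db4 wavelet basis) is omitted.

    ```python
    # Sketch: digital CS front end with a 1-bit Bernoulli sensing matrix.
    import numpy as np

    N = 256          # samples per ECG window (assumed)
    M = 64           # measurements per window -> compression ratio 4 (assumed)

    rng = np.random.default_rng(0)          # seed shared with the reconstructor
    Phi = rng.choice([-1, 1], size=(M, N))  # 1-bit Bernoulli sensing matrix

    def compress(ecg_window):
        return Phi @ ecg_window             # y = Phi x, adds/subtracts only

    window = np.sin(np.linspace(0, 8 * np.pi, N))   # stand-in for one ECG window
    y = compress(window)
    print("compression ratio:", N / len(y))
    ```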

  10. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    PubMed Central

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun

    2018-01-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170

  11. Digital image modification detection using color information and its histograms.

    PubMed

    Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na

    2016-09-01

    The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both the copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
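
    The block-matching core of copy-move detection can be sketched as follows: describe each overlapping block by its color moments, sort the descriptors lexicographically, and flag near-identical neighbors whose spatial offset is large enough to rule out trivially adjacent blocks. The clustering stage and the paper's five additional descriptors are omitted, and all thresholds are placeholders.

    ```python
    # Sketch: block-based copy-move detection via color-moment descriptors.
    import numpy as np

    def block_features(img, bs=16, step=4):
        feats, locs = [], []
        for y in range(0, img.shape[0] - bs + 1, step):
            for x in range(0, img.shape[1] - bs + 1, step):
                block = img[y:y + bs, x:x + bs].reshape(-1, img.shape[2])
                # Color moments: per-channel mean and standard deviation.
                feats.append(np.concatenate([block.mean(0), block.std(0)]))
                locs.append((y, x))
        return np.array(feats), np.array(locs)

    def matched_pairs(img, dist_t=1.0, min_offset=24):
        feats, locs = block_features(img.astype(np.float64))
        order = np.lexsort(feats.T[::-1])          # lexicographic sort of descriptors
        pairs = []
        for i, j in zip(order[:-1], order[1:]):    # only neighbors in sorted order
            if (np.linalg.norm(feats[i] - feats[j]) < dist_t
                    and np.linalg.norm(locs[i] - locs[j]) > min_offset):
                pairs.append((tuple(locs[i]), tuple(locs[j])))
        return pairs
    ```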

  12. Non-Destructive Detection of Wire Rope Discontinuities from Residual Magnetic Field Images Using the Hilbert-Huang Transform and Compressed Sensing

    PubMed Central

    Zhang, Juwei; Tan, Xiaojiang; Zheng, Pengbo

    2017-01-01

    Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability, by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and can prolong the service life of the test equipment by accommodating a high lift-off distance, which reduces the friction between the detection device and the wire rope. PMID:28300790

  13. Method and apparatus for signal compression

    DOEpatents

    Carangelo, R.M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original. 8 figures.

  14. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.

  15. A low-cost digital filing system for echocardiography data with MPEG4 compression and its application to remote diagnosis.

    PubMed

    Umeda, Akira; Iwata, Yasushi; Okada, Yasumasa; Shimada, Megumi; Baba, Akiyasu; Minatogawa, Yasuyuki; Yamada, Takayasu; Chino, Masao; Watanabe, Takafumi; Akaishi, Makoto

    2004-12-01

    The high cost of digital echocardiographs and the large size of data files hinder the adoption of remote diagnosis of digitized echocardiography data. We have developed a low-cost digital filing system for echocardiography data. In this system, data from a conventional analog echocardiograph are captured using a personal computer (PC) equipped with an analog-to-digital converter board. Motion picture data are promptly compressed using a Moving Picture Experts Group (MPEG) 4 codec. The digitized data with preliminary reports obtained in a rural hospital are then sent to cardiologists at distant urban general hospitals via the internet. The cardiologists can evaluate the data using widely available movie-viewing software (Windows Media Player). The diagnostic accuracy of this double-check system was confirmed by comparison with ordinary super-VHS videotapes. We have demonstrated that digitization of echocardiography data from a conventional analog echocardiograph and MPEG 4 compression can be performed using an ordinary PC-based system, and that this system enables highly efficient digital storage and remote diagnosis at low cost.

  16. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film-based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offer the prospect of increasing the quality of the theatrical experience for the audience, reducing distribution costs for the distributors, and creating new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  17. A multi-channel low-power system-on-chip for single-unit recording and narrowband wireless transmission of neural signal.

    PubMed

    Bonfanti, A; Ceravolo, M; Zambra, G; Gusmeroli, R; Spinelli, A S; Lacaita, A L; Angotzi, G N; Baranauskas, G; Fadiga, L

    2010-01-01

    This paper reports a multi-channel neural recording system-on-chip (SoC) with digital data compression and wireless telemetry. The circuit consists of 16 amplifiers, an analog time-division multiplexer, an 8-bit SAR AD converter, a digital signal processor (DSP) and a wireless narrowband 400-MHz binary FSK transmitter. Even though only 16 amplifiers are present in our current die version, the whole system is designed to work with 64 channels, demonstrating the feasibility of digital processing and narrowband wireless transmission of 64 neural recording channels. A digital data compression scheme, based on the detection of action potentials and storage of the corresponding waveforms, allows the use of a 1.25-Mbit/s binary FSK wireless transmission. This moderate bit-rate and a low-frequency-deviation, Manchester-coded modulation are crucial for exploiting a narrowband wireless link and an efficient embeddable antenna. The chip is realized in a 0.35-µm CMOS process with a power consumption of 105 µW per channel (269 µW per channel with an extended transmission range of 4 m) and an area of 3.1 × 2.7 mm². The transmitted signal is captured by a digital TV tuner and demodulated by a wideband phase-locked loop (PLL), and then sent to a PC via an FPGA module. The system has been tested for electrical specifications and its functionality verified in in-vivo neural recording experiments.
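
    The data-compression idea, detecting action potentials and keeping only short waveform snippets, can be sketched in software as below. The threshold rule, snippet length, and compression-ratio bookkeeping are illustrative assumptions, not the chip's exact logic.

    ```python
    # Sketch: spike detection plus snippet storage as a data reduction scheme.
    import numpy as np

    SNIPPET = 48        # samples kept per detected spike (~1.5 ms at ~30 kS/s), assumed

    def detect_and_compress(raw, k=4.5):
        # Robust noise estimate via the median absolute deviation.
        sigma = np.median(np.abs(raw)) / 0.6745
        threshold = -k * sigma                      # negative-going spikes
        crossings = 1 + np.flatnonzero((raw[1:] < threshold) & (raw[:-1] >= threshold))
        spikes = [(int(i), raw[i:i + SNIPPET].copy())
                  for i in crossings if i + SNIPPET <= len(raw)]
        # Compressed size: one timestamp plus one snippet per spike.
        compressed = sum(len(w) for _, w in spikes) + len(spikes)
        return spikes, len(raw) / max(1, compressed)
    ```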

  18. Intra-operative cone beam computed tomography can help avoid reinterventions and reduce CT follow up after infrarenal EVAR.

    PubMed

    Törnqvist, P; Dias, N; Sonesson, B; Kristmundsson, T; Resch, T

    2015-04-01

    Re-interventions after endovascular abdominal aortic aneurysm repair (EVAR) are common and therefore a strict imaging follow up protocol is required. The purpose of this study was to evaluate whether cone beam computed tomography (CBCT) can detect intra-operative complications and to compare this with angiography and the 1 month CT follow up (computed tomography angiography [CTA]). Fifty-one patients (44 men) were enrolled in a prospective trial. Patients underwent completion angiography and CBCT during infrarenal EVAR. Contrast was used except when pre-operative renal insufficiency was present or if the maximum contrast dose threshold was reached. CBCT reconstruction included the top of the stent graft to the iliac bifurcation. Endoleaks, kinks, or compressions were recorded. CBCT was technically successful in all patients. Twelve endoleaks were detected on completion digital subtraction angiography (CA). CBCT detected 4/5 type 1 endoleaks, but only one type 2 endoleak. CTA identified eight type 2 endoleaks and one residual type I endoleak. Two cases of stent compression were seen on CA. CBCT revealed five stent compressions and one kink, which resulted in four intra-operative adjunctive manoeuvres. CTA identified all cases of kinks or compressions that were left untreated. Two of them were corrected later. No additional kinks/compressions were found on CTA. Groin closure consisted of 78 fascia sutures, nine cut downs, and 11 percutaneous sutures. Seven femoral artery pseudoaneurysms (<1 cm) were detected on CTA, but no intervention was needed. CA is better than CBCT in detecting and categorizing endoleaks but CBCT (with or without contrast) is better than CA for detection of kinks or stentgraft compression. CTA plus CBCT identified all significant complications noted on the 1 month follow up CTA. The use of intra-operative CA and CBCT could replace early CTA after standard EVAR thus reducing overall radiation and contrast use. Technical development might further improve the resolution and usefulness of CBCT. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  19. Steganalysis based on JPEG compatibility

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding using images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
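
    The compatibility test for a single 8x8 block can be sketched as follows: snap the block's DCT coefficients to the nearest multiples of the quantization matrix and measure how far the block is from the nearest JPEG-compatible block. The sketch ignores the rounding and clipping subtleties that the full method accounts for.

    ```python
    # Sketch: JPEG-compatibility residual of one 8x8 pixel block.
    import numpy as np
    from scipy.fft import dctn, idctn

    def jpeg_incompatibility(block, Q):
        """block: 8x8 pixel array; Q: 8x8 quantization matrix."""
        c = dctn(block - 128.0, norm="ortho")        # forward DCT, level-shifted
        c_snapped = np.round(c / Q) * Q              # nearest JPEG-compatible coefficients
        candidate = idctn(c_snapped, norm="ortho") + 128.0
        return np.max(np.abs(block - candidate))     # near zero for untouched JPEG blocks

    # A single flipped LSB in a decompressed JPEG block pushes this residual
    # above the rounding noise of an untouched block.
    ```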

  20. Learning random networks for compression of still and moving images

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  1. Digital holographic image fusion for a larger size object using compressive sensing

    NASA Astrophysics Data System (ADS)

    Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua

    2017-05-01

    Digital holographic image fusion for a larger-size object using compressive sensing is proposed. In this method, the high-frequency component of the digital hologram under the discrete wavelet transform is represented sparsely using compressive sensing, so that the data redundancy of digital holographic recording can be effectively reduced; the low-frequency component is retained in full to ensure image quality; and multiple reconstructed images, each with a different clear part corresponding to the laser spot size, are fused to obtain a high-quality reconstructed image of a larger-size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from the digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparison experiments were carried out. The fused image was evaluated using Tamura texture features. The experimental results demonstrated that the proposed method can improve the processing efficiency and visual characteristics of the fused image and effectively enlarge the size of the measured object.

  2. Network Monitoring Traffic Compression Using Singular Value Decomposition

    DTIC Science & Technology

    2014-03-27

    Shootouts." Workshop on Intrusion Detection and Network Monitoring. 1999. [12] Goodall , John R. "Visualization is better! a comparative evaluation...34 Visualization for Cyber Security, 2009. VizSec 2009. 6th International Workshop on IEEE, 2009. [13] Goodall , John R., and Mark Sowul. "VIAssist...Viruses and Log Visualization.” In Australian Digital Forensics Conference. Paper 54, 2008. [30] Tesone, Daniel R., and John R. Goodall . "Balancing

  3. Design of a Biorthogonal Wavelet Transform Based R-Peak Detection and Data Compression Scheme for Implantable Cardiac Pacemaker Systems.

    PubMed

    Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama

    2018-04-19

    Bradycardia can be modulated using a cardiac pacemaker, an implantable medical device that monitors and supports the patient's cardiac health and has been widely used to detect and monitor the patient's heart rate. The data collected by the device have high authenticity assurance and are convenient for controlling further electrical stimulation. The ECG detector is one of the most important elements of the pacemaker. The device is now available in digital form, which offers more efficient and accurate performance with the added advantage of economical power consumption. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and compression of the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture, which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear-phase structure, which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with a threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity, 99.75% and 99.98% respectively, on the MIT-BIH arrhythmia database, along with a comparatively low data error rate (DER) of 0.002. The use of RLE for compression of the detected ECG data achieves a higher compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods such as Huffman coding/simple predictor, Huffman coding/adaptive predictor, and slope predictor/fixed-length packaging.
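
    The RLE stage itself is simple; a minimal sketch (illustrative, not the authors' implementation) over a quantized sample stream:

        def rle_encode(samples):
            """Run-length encode a sequence as (value, count) pairs."""
            encoded = []
            prev, count = samples[0], 1
            for s in samples[1:]:
                if s == prev:
                    count += 1
                else:
                    encoded.append((prev, count))
                    prev, count = s, 1
            encoded.append((prev, count))
            return encoded

        def rle_decode(pairs):
            """Invert rle_encode."""
            return [v for v, n in pairs for _ in range(n)]

        # RLE pays off on ECG baselines, where quantized samples repeat:
        signal = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
        assert rle_decode(rle_encode(signal)) == signal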

  4. Data compression/error correction digital test system. Appendix 3: Maintenance. Book 2: Receiver assembly drawings

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The assembly drawings of the receiver unit are presented for the data compression/error correction digital test system. Equipment specifications are given for the various receiver parts, including the TV input buffer register, delta demodulator, TV sync generator, memory devices, and data storage devices.

  5. A database for assessment of effect of lossy compression on digital mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria

    2018-03-01

    With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.

  6. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression required to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels using JPEG and wavelet methods. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood vessel branching could be observed to a greater extent in wavelet-compressed than in JPEG-compressed images of a given size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  7. Effects of Image Compression on Automatic Count of Immunohistochemically Stained Nuclei in Digital Images

    PubMed Central

    López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín

    2008-01-01

    This study investigates the effects of digital image compression on automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts from the TIFF images were compared with those from the other three groups. Overall, differences in the counts increased with the degree of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) showed small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images of low complexity and could be an efficient method for storing these images. PMID:18755997

  8. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
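
    The VDM itself is a proprietary perceptual model, but the baseline distortion metrics it was compared against are easy to reproduce; a sketch using scikit-image (our choice of library, assuming 8-bit grayscale inputs):

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def baseline_metrics(original, compressed):
            """PSNR and SSIM between an original and a compressed image:
            the metrics the study found more variable than the VDM's JND
            predictions at the visually lossless threshold."""
            psnr = peak_signal_noise_ratio(original, compressed, data_range=255)
            ssim = structural_similarity(original, compressed, data_range=255)
            return psnr, ssim

        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
        degraded = np.clip(original + rng.normal(0, 2, original.shape), 0, 255).astype(np.uint8)
        print(baseline_metrics(original, degraded))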

  9. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.

  10. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of lossless JPEG (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods, achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that even at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters remain within the range of reported values.

  11. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
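
    The core observation of the patent is that indices adjacent in value carry one unit of uncertainty, so steering each index within an adjacent pair encodes one bit. A toy illustration of that idea (not the patented method itself):

        import numpy as np

        def embed_bits(indices, bits):
            """Embed one bit per index by steering each index to the member of
            its adjacent pair {2k, 2k+1} whose LSB matches the bit. Each index
            changes by at most one unit, mirroring the patent's 'uncertainty
            of value by one unit' observation."""
            out = indices.copy()
            for i, bit in enumerate(bits):
                if out[i] & 1 != bit:
                    out[i] += 1 if out[i] & 1 == 0 else -1
            return out

        def extract_bits(indices, n):
            """Recover the embedded bits from the index LSBs."""
            return [int(v) & 1 for v in indices[:n]]

        quantized = np.array([12, -5, 7, 0, 3, -2])   # e.g., quantized transform indices
        message = [1, 0, 1, 1, 0, 0]
        stego = embed_bits(quantized, message)
        assert extract_bits(stego, len(message)) == message

    A real embedder would additionally leave special index values (such as zeros in sparse transform data) untouched; the sketch ignores that for brevity.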

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  13. Digital Signal Processing For Low Bit Rate TV Image Codecs

    NASA Astrophysics Data System (ADS)

    Rao, K. R.

    1987-06-01

    In view of the 56 kbps digital switched network services and the ISDN, low bit rate codecs for providing real-time full-motion color video are at various stages of development, and some companies have already brought codecs to market. They are being used by industry and some Federal agencies for video teleconferencing. In general, these codecs have various features such as multiplexing of audio and data, high-resolution graphics, encryption, error detection and correction, self-diagnostics, freeze-frame, split video, text overlay, etc. To transmit the original color video on a 56 kbps network requires a bit rate reduction of the order of 1400:1. Such large-scale bandwidth compression can be realized only by implementing a number of sophisticated digital signal processing techniques. This paper provides an overview of such techniques and outlines the newer concepts that are being investigated. Before resorting to data compression, various preprocessing operations such as noise filtering, composite-to-component transformation, and horizontal and vertical blanking interval removal are to be implemented. Invariably, spatio-temporal subsampling is achieved by appropriate filtering. Transform and/or prediction coupled with motion estimation and strengthened by adaptive features are some of the tools in the arsenal of data reduction methods. Other essential blocks in the system are the quantizer, bit allocation, buffer, multiplexer, channel coding, etc.
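
    For scale, a back-of-the-envelope check of the quoted 1400:1 figure, using assumed sampling parameters (our numbers, not the paper's) of 640 × 480 pixels, 30 frames/s, and 8 bits per pixel for luma alone:

        $$ \frac{640 \times 480 \times 30 \times 8\ \mathrm{bit/s}}{56\,000\ \mathrm{bit/s}} \approx \frac{73.7\ \mathrm{Mbit/s}}{56\ \mathrm{kbit/s}} \approx 1300:1 $$

    Including chroma pushes the required reduction well past 1400:1, which is why the paper leans on aggressive subsampling, prediction, and transform coding in combination.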

  14. Bringing the Digital Camera to the Physics Lab

    ERIC Educational Resources Information Center

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-01-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…

  15. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
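
    The idea becomes concrete with a small code; a toy sketch using the Hamming (7,4) parity-check matrix (our choice of code, for illustration): a sparse 7-bit source block is "compressed" to its 3-bit syndrome, and decompression recovers the minimum-weight pattern with that syndrome, which is exact whenever the block contains at most one 1, as is likely for a strongly biased binary source.

        import numpy as np

        # Parity-check matrix of the Hamming (7,4) code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(block):
            """7 source bits -> 3-bit syndrome."""
            return H @ block % 2

        def decompress(syndrome):
            """Syndrome -> minimum-weight 7-bit pattern (exhaustive search)."""
            best = None
            for v in range(2 ** 7):
                cand = np.array([(v >> i) & 1 for i in range(7)])
                if np.array_equal(H @ cand % 2, syndrome):
                    if best is None or cand.sum() < best.sum():
                        best = cand
            return best

        block = np.array([0, 0, 0, 0, 1, 0, 0])      # sparse source block
        assert np.array_equal(decompress(compress(block)), block)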

  16. Clinical evaluation of JPEG2000 compression for digital mammography

    NASA Astrophysics Data System (ADS)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

    Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times in picture archiving and communication system (PACS) implementations. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt the JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of this study was to evaluate compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t-test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original through pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis, which can be used to compare the diagnostic performance of two or more sets of reconstructed images. The t-test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t-test suggested that compression ratios as high as 15:1 may be possible for digital mammographic images without visual loss and while preserving significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as 80:1 compression can be achieved without affecting clinical diagnostic performance.
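
    PSNR, which recurs throughout these records, is defined from the pixel-wise mean squared error; for B-bit images with M × N pixels:

        $$ \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(x_{ij}-\hat{x}_{ij}\bigr)^{2}, \qquad \mathrm{PSNR} = 10\log_{10}\frac{\bigl(2^{B}-1\bigr)^{2}}{\mathrm{MSE}}\ \mathrm{dB} $$

    So a compressed mammogram that still scores, say, 40 dB differs from the original by an RMS error of only 1% of full scale.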

  17. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    NASA Astrophysics Data System (ADS)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving, and transmitting digital images, and with the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We previously reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. That study indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximum tolerable image size for downloading on the WWW.

  18. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  19. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.

  20. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow the spread of digital and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Motion Pictures Expert Groups (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files served as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:26 to 1:1051). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. In subcategory analysis, these differences remained significant for gray-scale and fundamental imaging but not for color or second harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  1. A 64-channel ultra-low power system-on-chip for local field and action potentials recording

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Alberto; Delgado-Restituto, Manuel; Darie, Angela; Soto-Sánchez, Cristina; Fernández-Jover, Eduardo; Rodríguez-Vázquez, Ángel

    2015-06-01

    This paper reports an integrated 64-channel neural recording sensor. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration mechanism which configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by an embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.

  2. Compressive self-interference Fresnel digital holography with faithful reconstruction

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Man, Tianlong; Han, Ying; Zhou, Hongqiang; Wang, Dayong

    2017-05-01

    We developed a compressive self-interference digital holographic approach that allows retrieval of three-dimensional information about spatially incoherent objects from a single-shot captured hologram. Fresnel incoherent correlation holography is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. The recording scheme is regarded as a compressive forward sensing model, so a compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed sub-holograms. The concept was verified by simulations and by experiments that simulated the use of a polarizer array. The proposed technique has great potential for 3D tracking of spatially incoherent samples.
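
    The record does not spell out its reconstruction algorithm; a generic sparse-recovery sketch in the same spirit, using iterative soft thresholding (ISTA) on a linear forward model y = Ax (all names and parameters here are illustrative):

        import numpy as np

        def ista(A, y, lam=0.01, iters=200):
            """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1,
            the kind of sparse solver used to invert undersampled measurements."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - y)
                z = x - grad / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((64, 256)) / 8.0   # undersampled measurement operator
        x_true = np.zeros(256)
        x_true[[10, 100, 200]] = [1.0, -0.5, 0.8]  # sparse ground truth
        x_hat = ista(A, A @ x_true)
        print(np.round(x_hat[[10, 100, 200]], 2))  # close to the true amplitudes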

  3. Digital-computer normal shock position and restart control of a Mach 2.5 axisymmetric mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Neiner, G. H.; Cole, G. L.; Arpasi, D. J.

    1972-01-01

    Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.

  4. Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zimmermann, Bernhard B.; Deng, Bin; Singh, Bhawana; Martino, Mark; Selb, Juliette; Fang, Qianqian; Sajjadi, Amir Y.; Cormier, Jayne; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.; Saksena, Mansi A.; Carp, Stefan A.

    2017-04-01

    Diffuse optical tomography (DOT) is emerging as a noninvasive functional imaging method for breast cancer diagnosis and neoadjuvant chemotherapy monitoring. In particular, the multimodal approach of combining DOT with x-ray digital breast tomosynthesis (DBT) is especially synergistic as DBT prior information can be used to enhance the DOT reconstruction. DOT, in turn, provides a functional information overlay onto the mammographic images, increasing sensitivity and specificity to cancer pathology. We describe a dynamic DOT apparatus designed for tight integration with commercial DBT scanners and providing a fast (up to 1 Hz) image acquisition rate to enable tracking hemodynamic changes induced by the mammographic breast compression. The system integrates 96 continuous-wave and 24 frequency-domain source locations as well as 32 continuous wave and 20 frequency-domain detection locations into low-profile plastic plates that can easily mate to the DBT compression paddle and x-ray detector cover, respectively. We demonstrate system performance using static and dynamic tissue-like phantoms as well as in vivo images acquired from the pool of patients recalled for breast biopsies at the Massachusetts General Hospital Breast Imaging Division.

  5. A compressed sensing method with analytical results for lidar feature classification

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark

    2011-04-01

    We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods, via Orthogonal Matching Pursuit algorithms and a generalized K-means clustering algorithm, to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show the probabilities of detection and false alarm for building vs. vegetation classification, and histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and the edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits, such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
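
    Orthogonal Matching Pursuit, the sparse-coding step named above, greedily selects dictionary atoms one at a time; a minimal sketch (illustrative, not the authors' code):

        import numpy as np

        def omp(D, y, k):
            """Orthogonal Matching Pursuit: approximate y with k columns of
            dictionary D (columns assumed unit-norm); returns the sparse
            coefficient vector."""
            residual = y.copy()
            support = []
            x = np.zeros(D.shape[1])
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
                support.append(j)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef         # re-orthogonalize
            x[support] = coef
            return x

        rng = np.random.default_rng(0)
        D = rng.standard_normal((32, 128))
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
        y = 2.0 * D[:, 5] - 1.0 * D[:, 40]
        print(np.nonzero(omp(D, y, k=2))[0])        # recovers atoms 5 and 40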

  6. Bringing the Digital Camera to the Physics Lab

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Gratton, L. M.; Oss, S.

    2013-03-01

    We discuss how compressed images created by modern digital cameras can lead to even severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as we examine in this work.

  7. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality, and CAD design. However, most studies on 3D watermarking and 3D compression are conducted independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking, based on a wavelet transformation. The compression method is decomposed into two stages, geometric encoding and topological encoding, and the proposed approach inserts a signature between them. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is watermarked using a robust mesh watermarking scheme; insertion into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh, and allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach delivers efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  8. MP3 compression of Doppler ultrasound signals.

    PubMed

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power, and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology

  9. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  10. 42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...

  11. Compressive hyperspectral sensor for LWIR gas detection

    NASA Astrophysics Data System (ADS)

    Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard

    2012-06-01

    Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of long-wave IR (LWIR) imagers. Hyperspectral LWIR imagers add a significant data volume burden as they collect a high-resolution spectrum at each pixel. We report here on an LWIR hyperspectral sensor that applies compressive sensing (CS) in order to achieve benefits in these areas. The sensor applies the single-pixel detection technology demonstrated by Rice University, in which a digital micro-mirror device (DMD) reflects and multiplexes the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much less than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight, and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
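
    The single-pixel measurement model is compact enough to sketch: each DMD pattern multiplexes a random subset of scene pixels onto the detector, yielding one inner-product measurement per pattern (illustrative only; reconstruction would then use a sparse solver such as the ISTA or OMP sketches above):

        import numpy as np

        rng = np.random.default_rng(7)
        scene = rng.random((64, 64))                 # unknown scene (flattened below)
        n_pix = scene.size
        m = n_pix // 4                               # 4x fewer measurements than pixels

        # Each row is one random DMD mirror pattern (1 = mirror toward detector).
        patterns = rng.integers(0, 2, size=(m, n_pix))

        # One detector reading per pattern: total light from the selected mirrors.
        measurements = patterns @ scene.ravel()

        print(measurements.shape)                    # (1024,) readings for 4096 pixels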

  12. FERMI: a digital Front End and Readout MIcrosystem for high resolution calorimetry

    NASA Astrophysics Data System (ADS)

    Alexanian, H.; Appelquist, G.; Bailly, P.; Benetta, R.; Berglund, S.; Bezamat, J.; Blouzon, F.; Bohm, C.; Breveglieri, L.; Brigati, S.; Cattaneo, P. W.; Dadda, L.; David, J.; Engström, M.; Genat, J. F.; Givoletti, M.; Goggi, V. G.; Gong, S.; Grieco, G. M.; Hansen, M.; Hentzell, H.; Holmberg, T.; Höglund, I.; Inkinen, S. J.; Kerek, A.; Landi, C.; Ledortz, O.; Lippi, M.; Lofstedt, B.; Lund-Jensen, B.; Maloberti, F.; Mutz, S.; Nayman, P.; Piuri, V.; Polesello, G.; Sami, M.; Savoy-Navarro, A.; Schwemling, P.; Stefanelli, R.; Sundblad, R.; Svensson, C.; Torelli, G.; Vanuxem, J. P.; Yamdagni, N.; Yuan, J.; Ödmark, A.; Fermi Collaboration

    1995-02-01

    We present a digital solution for the front-end electronics of high resolution calorimeters at future colliders. It is based on analogue signal compression, high-speed A/D converters, a fully programmable pipeline, and a digital signal processing (DSP) chain with local intelligence and system supervision. This digital solution is aimed at providing maximal front-end processing power by performing waveform analysis using DSP methods. For the system integration of the multichannel device, a silicon-on-silicon multi-chip module (MCM) has been adopted. This solution allows a high level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. This type of multichip integration provides a high degree of reliability and programmability at both the function and the system level, with the additional possibility of customising the microsystem to detector-specific requirements. For enhanced reliability in high radiation environments, fault tolerance strategies, i.e. redundancy, reconfigurability, majority voting and coding for error detection and correction, are integrated into the design.

  13. Digital Imagery Compression Best Practices Guide - A Motion Imagery Standards Profile (MISP) Compliant Architecture

    DTIC Science & Technology

    2012-06-01

    This guide documents best practices for digital motion imagery compression for delivery and archival purposes, based on a Motion Imagery Standards Profile (MISP) compliant architecture, for use at the major test ranges (White Sands Missile Range, Reagan Test Site, Yuma Proving Ground, Dugway Proving Ground, and Aberdeen Test Center).

  14. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10⁻⁴. This bias is small when compared to the many other issues that precision weak-lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
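
    The square-root step is simple to sketch: variance-stabilize the Poisson-dominated counts so that a uniform quantizer discards less than the shot noise already present (parameters illustrative, not Bernstein et al.'s exact scheme):

        import numpy as np

        def sqrt_compress(counts, step=0.5):
            """Quantize sqrt(counts): shot noise on N counts is sqrt(N), which
            is a constant ~1/2 in sqrt units, so a fixed quantizer step below
            that discards less information than the noise."""
            return np.round(np.sqrt(counts) / step).astype(np.uint16)

        def sqrt_decompress(codes, step=0.5):
            """Invert the quantized square root (up to quantization error)."""
            return (codes * step) ** 2

        pixels = np.array([4.0, 100.0, 2500.0, 40000.0])
        codes = sqrt_compress(pixels)
        print(codes)                    # small integers -> fewer bits after entropy coding
        print(sqrt_decompress(codes))   # close to the original counts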

  15. A digital acquisition and elaboration system for nuclear fast pulse detection

    NASA Astrophysics Data System (ADS)

    Esposito, B.; Riva, M.; Marocco, D.; Kaschuck, Y.

    2007-03-01

    A new digital acquisition and elaboration system has been developed and assembled at ENEA-Frascati for the direct sampling of fast pulses from nuclear detectors such as scintillators and diamond detectors. The system is capable of performing digital sampling of the pulses (200 MSamples/s, 14-bit) with simultaneous (compressed) data transfer for further storage and software elaboration. The design (FPGA-based) is oriented to real-time applications and has been developed to allow acquisition with no loss of pulses, and data storage over long time intervals (tens of seconds at MHz pulse count rates) without the need for large on-board memory. A dedicated pulse analysis application, written in LabVIEW, performs the treatment of the acquired pulses, including pulse recognition, pile-up rejection, baseline removal, pulse-shape particle separation, and pulse-height spectrum analysis. The acquisition and pre-elaboration programs have been fully integrated with the analysis software.

  16. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of the proposed statistical model is demonstrated by large-scale experimental results.
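
    The Markov features reduce to counting: form a difference array, clip it to [-T, T], and estimate transition probabilities between adjacent values. A simplified one-direction sketch (the paper uses four directions; T and the array sizes here are illustrative):

        import numpy as np

        def transition_matrix(arr2d, T=3):
            """Horizontal transition probability matrix of a clipped difference
            array: entries are P(next = v | current = u) for u, v in [-T, T]."""
            diff = np.clip(arr2d[:, 1:] - arr2d[:, :-1], -T, T)
            u = diff[:, :-1].ravel() + T
            v = diff[:, 1:].ravel() + T
            counts = np.zeros((2 * T + 1, 2 * T + 1))
            np.add.at(counts, (u, v), 1)
            rows = counts.sum(axis=1, keepdims=True)
            return counts / np.maximum(rows, 1)          # row-normalize safely

        rng = np.random.default_rng(0)
        jpeg_2d_array = rng.integers(-20, 21, size=(64, 64))  # stand-in for a JPEG 2-D array
        features = transition_matrix(jpeg_2d_array).ravel()   # 49 features for the SVM
        print(features.shape)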

  17. Digital storage and analysis of color Doppler echocardiograms

    NASA Technical Reports Server (NTRS)

    Chandra, S.; Thomas, J. D.

    1997-01-01

    Color Doppler flow mapping has played an important role in clinical echocardiography, but most clinical work has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail, with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.

  18. Automated detection of diagnostically relevant regions in H&E stained digital pathology slides

    NASA Astrophysics Data System (ADS)

    Bahlmann, Claus; Patel, Amar; Johnson, Jeffrey; Ni, Jie; Chekkoury, Andrei; Khurd, Parmeshwar; Kamen, Ali; Grady, Leo; Krupinski, Elizabeth; Graham, Anna; Weinstein, Ronald

    2012-03-01

    We present a computationally efficient method for analyzing H&E stained digital pathology slides with the objective of discriminating diagnostically relevant vs. irrelevant regions. Such technology is useful for several applications: (1) It can speed up computer-aided diagnosis (CAD) for histopathology-based cancer detection and grading by an order of magnitude through triage-like preprocessing and pruning. (2) It can improve the response time of an interactive digital pathology workstation (which typically deals with multi-gigabyte digital pathology slides), e.g., through controlling adaptive compression or prioritization algorithms. (3) It can support the detection and grading workflow for expert pathologists in a semi-automated diagnosis, thereby increasing throughput and accuracy. At the core of the presented method is the statistical characterization of tissue components that are indicative of the pathologist's decision about malignancy vs. benignity, such as nuclei, tubules, and cytoplasm. To allow effective yet computationally efficient processing, we propose visual descriptors that capture the distribution of color intensities observed for nuclei and cytoplasm. Discrimination between the statistics of relevant vs. irrelevant regions is learned from annotated data, and inference is performed via linear classification. We validate the proposed method both qualitatively and quantitatively. Experiments show a cross-validation error rate of 1.4%. We further show that the proposed method can prune ~90% of the area of pathological slides while maintaining 100% of the relevant information, which allows a speedup by a factor of 10 for CAD systems.

  19. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may be degraded by the limited accuracy of mechanical scanning and the unavoidable vibration of stepper-motor starts and stops. In this paper, we propose a new framework which replaces the 2D mechanical scanning mirrors with a digital micro-mirror device (DMD) to modulate the scanning light field; we call it OSH based on compressive sensing using a digital micro-mirror device (CS-OSH). CS-OSH reconstructs the hologram of an object through the use of compressive sensing theory, and then restores the image of the object itself. Numerical simulation results confirm that this new type of OSH can produce a reconstructed image with favorable visual quality even at a low sampling rate.

  20. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links and the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder) have worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA similarly recognizes the benefits of, and trend toward, digital video compression techniques for transmission of high quality video from space and has therefore developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, a non-uniform quantizer, and a multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown that the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor, and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development, along with a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, direct-to-the-home transmission via direct broadcast satellite systems, or cable television distribution to system headends and to the home).
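
    The DPCM core — predict each pixel from already-coded neighbors and quantize only the prediction error — fits in a few lines. A sketch with a 1-D previous-pixel predictor and a uniform quantizer (the NASA codec's non-adaptive 2-D predictor, non-uniform quantizer, and Huffman stage are more elaborate):

        import numpy as np

        def dpcm_encode(line, q=8):
            """Encode a scan line as quantized differences from the
            reconstructed previous pixel; predicting from the reconstruction
            keeps encoder and decoder in lockstep, so errors do not accumulate."""
            codes, recon = [], 0
            for pixel in line:
                code = int(round((int(pixel) - recon) / q))
                codes.append(code)
                recon = int(np.clip(recon + code * q, 0, 255))
            return codes

        def dpcm_decode(codes, q=8):
            """Invert dpcm_encode by accumulating dequantized differences."""
            line, recon = [], 0
            for code in codes:
                recon = int(np.clip(recon + code * q, 0, 255))
                line.append(recon)
            return line

        line = [50, 52, 53, 60, 120, 124, 122, 119]
        print(dpcm_decode(dpcm_encode(line)))   # within q/2 of the input

    The small difference codes cluster around zero, which is what the multilevel Huffman coder then exploits to push the rate below straight DPCM.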

  1. Digital radiography and caries diagnosis.

    PubMed

    Wenzel, A

    1998-01-01

    Direct digital acquisition of intra-oral radiographs has become possible only in the last decade. Several studies have shown that, theoretically, there are a number of advantages of direct digital radiography compared with conventional film. Laboratory as well as controlled clinical studies are needed to determine whether new digital imaging systems alter diagnosis, treatment, and prognosis compared with conventional methods; most studies so far have evaluated their diagnostic performance only in laboratory settings. This review concentrates on the evidence we have for the diagnostic efficacy of digital systems for caries detection. Digital systems are compared with film, and studies which have evaluated the effects on diagnostic accuracy of contrast and edge enhancement, image size, variations in radiation dose, and image compression are reviewed, together with the use of automated image analysis for caries diagnosis. Digital intra-oral radiographic systems seem to be as accurate as currently available dental films for the detection of caries. Sensitivities are relatively high (0.6-0.8) for the detection of occlusal lesions into dentine, with false positive fractions of 5-10%. A radiolucency in dentine is recognised as a good predictor of demineralisation. Radiography is of no value for the detection of initial (enamel) occlusal lesions. For the detection of approximal dentinal lesions, sensitivities, specificities, and predictive values are fair, but they are very poor for lesions known to be confined to enamel. Very little documented information exists, however, on the utilization of digital systems in the clinic. It is not known whether dose is actually reduced with the storage phosphor system, or whether collimator size is adjusted to fit sensor size in the CCD-based systems. There is no evidence that the number of retakes has been reduced. It is not known how many images are needed with the various CCD systems compared with a conventional bitewing, how stable these systems are in daily clinical use, or whether proper cross-infection control can be maintained when scanning the storage phosphor plates and handling the sensors and cable. There is only sparse evidence that the enhancement facilities are used when interpreting images, and none that this has changed working practices or treatment decisions. The economic consequences for the patient, dentist, and society require examination.

  2. The Coming of Digital Desktop Media.

    ERIC Educational Resources Information Center

    Galbreath, Jeremy

    1992-01-01

    Discusses the movement toward digital-based platforms including full-motion video for multimedia products. Hardware- and software-based compression techniques for digital data storage are considered, and a chart summarizes features of Digital Video Interactive, Moving Pictures Experts Group, P x 64, Joint Photographic Experts Group, Apple…

  3. Detecting 2LSB steganography using extended pairs of values analysis

    NASA Astrophysics Data System (ADS)

    Khalind, Omed; Aziz, Benjamin

    2014-05-01

    In this paper, we propose an extended pairs-of-values analysis to detect, and estimate the amount of, secret messages embedded with 2LSB replacement in digital images, based on the chi-square attack and the regularity rate in pixel values. The detection process is separated from the estimation of the hidden message length, as detection is the main requirement of any steganalysis method; hence the detection process acts as a discrete classifier which assigns a given set of images to stego and clean classes. The method can accurately detect 2LSB replacement even when the message length is about 10% of the total capacity, and it reaches its best performance, with an accuracy higher than 0.96 and a true positive rate of more than 0.997, when the embedded data amount to 20% to 100% of the total capacity. The method makes no assumptions about either the image or the secret message: it was tested on two sets of 3000 images, compressed and uncompressed, each embedded with a random message. This method of detection could also be used as an automated tool for analysing images in bulk for hidden content, which digital forensics analysts could use in their investigation process.
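
    The underlying chi-square attack tests whether the value pairs that embedding equalizes occur with suspiciously equal frequencies. A minimal sketch for classic single-LSB pairs (the paper extends the idea to the four-value groups touched by 2LSB replacement):

        import numpy as np

        def lsb_pair_chi2(pixels):
            """Pairs-of-values statistic: full-capacity LSB embedding tends to
            equalize the histogram counts of each pair (2k, 2k+1). Clean images
            give a chi-square statistic far above the degrees of freedom; fully
            embedded ones collapse toward it. Returns (statistic, dof)."""
            hist = np.bincount(pixels.ravel(), minlength=256).astype(float)
            even, odd = hist[0::2], hist[1::2]
            expected = (even + odd) / 2.0
            mask = expected > 5.0                      # drop sparse pairs
            stat = 2.0 * np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
            return stat, int(mask.sum()) - 1

        rng = np.random.default_rng(0)
        clean = np.clip(rng.normal(128, 8, (256, 256)), 0, 255).astype(np.uint8)
        stego = ((clean & 0xFE) | rng.integers(0, 2, clean.shape)).astype(np.uint8)
        print(lsb_pair_chi2(clean))   # statistic far above dof
        print(lsb_pair_chi2(stego))   # statistic on the order of dof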

  4. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.

  5. Astronomical image data compression by morphological skeleton transformation

    NASA Astrophysics Data System (ADS)

    Huang, L.; Bijaoui, A.

    A compression method adapted for exact restoration of the detected objects and based on the morphological skeleton transformation is presented. The morphological skeleton provides a complete and compact description of an object and gives an efficient compression rate. The flexibility of choosing a structuring element adapted to different images and the simplicity of the implementation are considered to be advantages of the method. The experiment was carried out on three typical astronomical images. The first two images were obtained by digitizing a Palomar Schmidt photographic plate of a Coma field with the PDS microdensitometer at Nice Observatory. The third image was obtained with a CCD camera at the Pic du Midi Observatory. Each pixel was coded with 16 bits and stored on a computer system (VAX 785) in STII format. Each image is 256 x 256 pixels. The first image represents a stellar field, the second a set of galaxies in the Coma cluster, and the third an elliptical galaxy.

  6. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6) and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
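
    Readers who want to reproduce the flavor of this experiment could use a sketch like the following (Python with the PyWavelets package; the function name, decomposition level, and use of db3 for the 6-tap DAUB6 filter are our assumptions): it nulls the smallest 95 percent of the wavelet coefficients of a DEM and reconstructs it, after which residuals in elevation, slope and aspect can be computed against the original.

        import numpy as np
        import pywt

        def sparsify_dem(dem, keep=0.05, wavelet="db3", level=4):
            # Forward 2-D DWT (db3 is the 6-tap Daubechies filter, i.e. DAUB6),
            # zero all but the largest `keep` fraction of coefficients, invert.
            coeffs = pywt.wavedec2(np.asarray(dem, dtype=float), wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            cutoff = np.quantile(np.abs(arr), 1.0 - keep)
            arr[np.abs(arr) < cutoff] = 0.0            # ~95% of coefficients nulled
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
            return pywt.waverec2(coeffs, wavelet)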

  7. Detection and localization of copy-paste forgeries in digital videos.

    PubMed

    Singh, Raahat Devender; Aggarwal, Naveen

    2017-12-01

    Amidst the continual march of technology, we find ourselves relying on digital videos to proffer visual evidence in several highly sensitive areas such as journalism, politics, civil and criminal litigation, and military and intelligence operations. However, despite being an indispensable source of information with high evidentiary value, digital videos are also extremely vulnerable to conscious manipulations. Therefore, in a situation where dependence on video evidence is unavoidable, it becomes crucial to authenticate the contents of this evidence before accepting them as an accurate depiction of reality. Digital videos can suffer from several kinds of manipulations, but perhaps one of the most consequential forgeries is copy-paste forgery, which involves insertion/removal of objects into/from video frames. Copy-paste forgeries alter the information presented by the video scene, which has a direct effect on our basic understanding of what that scene represents, and so, from a forensic standpoint, the challenge of detecting such forgeries is especially significant. In this paper, we propose a sensor pattern noise based copy-paste detection scheme, which is an improved and forensically stronger version of an existing noise-residue based technique. We also study a demosaicing artifact based image forensic scheme to estimate the extent of its viability in the domain of video forensics. Furthermore, we suggest a simple clustering technique for the detection of copy-paste forgeries, and determine whether it possesses the capabilities desired of a viable and efficacious video forensic scheme. Finally, we validate these schemes on a set of realistically tampered MJPEG, MPEG-2, MPEG-4, and H.264/AVC encoded videos in a diverse experimental set-up by varying the strength of post-production re-compressions and transcodings, bitrates, and sizes of the tampered regions. Such an experimental set-up is representative of a neutral testing platform and simulates a real-world forgery scenario where the forensic investigator has no control over any of the variable parameters of the tampering process. When tested in such an experimental set-up, the four forensic schemes achieved varying levels of detection accuracy and exhibited different scopes of applicability. For videos compressed using QFs in the range 70-100, the existing noise-residue based technique generated average detection accuracy in the range 64.5%-82.0%, while the proposed sensor pattern noise based scheme generated average accuracy in the range 89.9%-98.7%. For the aforementioned range of QFs, average accuracy rates achieved by the suggested clustering technique and the demosaicing artifact based approach were in the range 79.1%-90.1% and 83.2%-93.3%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A crossover trial comparing wide dynamic range compression and frequency compression in hearing aids for tinnitus therapy.

    PubMed

    Hodgson, Shirley-Anne; Herdering, Regina; Singh Shekhawat, Giriraj; Searchfield, Grant D

    2017-01-01

    It has been suggested that frequency lowering may be a superior tinnitus-reducing digital signal processing (DSP) strategy in hearing aids compared with conventional amplification. A crossover trial was undertaken to determine whether frequency compression (FC) was superior to wide dynamic range compression (WDRC) in reducing tinnitus. A 6-8-week crossover trial of two digital signal-processing techniques (WDRC and WDRC with FC) was undertaken in 16 persons with high-frequency sensorineural hearing loss and chronic tinnitus. WDRC resulted in larger improvements in Tinnitus Functional Index and rating scale scores than WDRC with FC. The tinnitus improvements obtained with both processing types appear to be due to reduced hearing handicap and possibly decreased tinnitus audibility. Hearing aids are useful assistive devices in the rehabilitation of tinnitus. FC was very successful in a few individuals but was not superior to WDRC across the sample. It is recommended that WDRC remain the default first-choice hearing aid processing strategy for tinnitus; FC should be considered as one of many other options for selection based on individual hearing needs. Implications for rehabilitation: hearing aids can significantly reduce the effects of tinnitus after 6-8 weeks of use; the addition of frequency compression digital signal processing does not appear superior to standard amplitude compression alone; improvements in tinnitus were correlated with reductions in hearing handicap.

  9. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  10. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  11. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  12. Phase reconstruction using compressive two-step parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith

    2018-04-01

    The linear relationship between the sample complex object wave and its approximated complex Fresnel field obtained using single-shot parallel phase-shifting digital holograms (PPSDH) is used in a compressive sensing framework, and accurate phase reconstruction is demonstrated. It is shown that the accuracy of phase reconstruction of this method is better than that of the compressive-sensing-adapted single exposure in-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, albeit with approximation noise, whereas the measurement model of SEOL retains only the real part of the complex Fresnel field, its imaginary part being unavailable altogether. Numerical simulation is performed for CS-adapted PPSDH and CS-adapted SEOL, and it is demonstrated that phase reconstruction is accurate for CS-adapted PPSDH and can be used for single-shot digital holographic reconstruction.

  13. The International Remote Monitoring Project: Results of the Swedish Nuclear Power Facility field trial

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.S.; af Ekenstam, G.; Sallstrom, M.

    1995-07-01

    The Swedish Nuclear Power Inspectorate (SKI) and the US Department of Energy (DOE) sponsored work on a Remote Monitoring System (RMS) that was installed in August 1994 at the Barseback Works north of Malmo, Sweden. The RMS was designed to test the front-end detection concept that would be used for unattended remote monitoring activities. Front-end detection reduces the number of video images recorded and provides additional sensor verification of facility operations. The function of any safeguards Containment and Surveillance (C/S) system is to collect information, primarily images, that verifies the operations at a nuclear facility. Barseback is ideal for testing the concept of front-end detection since the main activity of safeguards interest is the movement of spent fuel, which occurs once a year. The RMS at Barseback uses a network of nodes to collect data from microwave motion detectors placed to detect the entrance and exit of spent fuel casks through a hatch. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Stockholm, Sweden and Albuquerque, NM, USA. These remote monitoring stations, operated by SKI and SNL respectively, can retrieve data and images from the RMS computer at the Barseback Facility. The data and images are encrypted before transmission. This paper presents details of the RMS and test results of this approach to front-end detection of safeguards activities.

  14. Digital Breast Tomosynthesis in Addition to Conventional 2D Mammography Reduces Recall Rates and is Cost-Effective.

    PubMed

    Pozz, Agostino; Corte, Angelo Della; Lakis, Mustapha A El; Jeong, HeonJae

    2016-01-01

    Digital breast tomosynthesis (DBT) as a breast cancer screening modality, through generation of three-dimensional images during standard mammographic compression, can reduce interference from breast tissue overlap, increasing the conspicuity of invasive cancers while concomitantly reducing false-positive results. We conducted a systematic review of previous studies to synthesize the evidence of DBT efficacy, with 18 articles eventually included in the analysis. The most commonly emerging topics were the advantages of DBT as a screening tool in terms of recall rates, cancer detection rates and cost-effectiveness, preventing unnecessary burdens on women and the healthcare system. Further research is needed to evaluate the potential impact of DBT on longer-term outcomes, such as interval cancer rates and mortality, to better understand the broader clinical and economic implications of its adoption.

  15. A novel shape similarity based elastography system for prostate cancer assessment

    NASA Astrophysics Data System (ADS)

    Wang, Haisu; Mousavi, Seyed Reza; Samani, Abbas

    2012-03-01

    Prostate cancer is the second most common cancer among men worldwide and remains the second leading cancer-related cause of death in mature men. The disease can be cured if it is detected at an early stage, which makes early detection critical for a desirable treatment outcome. Conventional techniques of prostate cancer screening and detection, such as Digital Rectal Examination (DRE), Prostate-Specific Antigen (PSA) testing and Trans-Rectal Ultra-Sonography (TRUS), are known to have low sensitivity and specificity. Elastography is an imaging technique that uses tissue stiffness as its contrast mechanism. As the association between the degree of prostate tissue stiffness alteration and its pathology is well established, elastography can potentially detect prostate cancer with a high degree of sensitivity and specificity. In this paper, we present a novel elastography technique which, unlike other elastography techniques, does not require a displacement data acquisition system. This technique requires the prostate's pre-compression and post-compression transrectal ultrasound images. The conceptual foundation of reconstructing the elastic moduli of the prostate's normal and pathological tissues is to determine these moduli such that the similarity between calculated and observed shape features of the post-compression prostate image is maximized. Results indicate that this technique is highly accurate and robust.

  16. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  17. Utilization of KSC Present Broadband Communications Data System For Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2001-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  18. Compression of ground-motion data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, J.W.

    1981-04-01

    Ground motion data have been recorded for many years at the Nevada Test Site and are now stored on thousands of digital tapes. The recording format is very inefficient in terms of space on tape. This report outlines a method to compress the data onto a few hundred tapes while maintaining the accuracy of the recording and allowing restoration of any file to the original format for future use. For future digitizing, a more efficient format is described and suggested.

  19. Dual domain watermarking for authentication and compression of cultural heritage images.

    PubMed

    Zhao, Yang; Campisi, Patrizio; Kundur, Deepa

    2004-03-01

    This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.

  20. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique in which data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from a variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
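
    A minimal sketch of the codebook construction described here, in Python (the names are ours; the paper's hardware and protocol details are not modeled): build the Huffman tree with a heap and read the codewords off the tree.

        import heapq
        from collections import Counter
        from itertools import count

        def huffman_code(samples):
            # Merge the two least frequent nodes until one tree remains; the
            # counter breaks ties so the heap never compares tree nodes.
            freq = Counter(samples)
            tie = count()
            heap = [(f, next(tie), sym) for sym, f in freq.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, a = heapq.heappop(heap)
                f2, _, b = heapq.heappop(heap)
                heapq.heappush(heap, (f1 + f2, next(tie), (a, b)))
            code = {}
            def walk(node, prefix):
                if isinstance(node, tuple):      # internal node: (left, right)
                    walk(node[0], prefix + "0")
                    walk(node[1], prefix + "1")
                else:                            # leaf: a source symbol
                    code[node] = prefix or "0"
            walk(heap[0][2], "")
            return code

        # Differential video samples cluster near zero, so the frequent small
        # differences receive the shortest codewords:
        table = huffman_code([0, 0, 0, 1, -1, 0, 2, 0, 1, 0])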

  1. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the

  2. Digital Video of Live-Scan Fingerprint Data

    National Institute of Standards and Technology Data Gateway

    NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase)   NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.

  3. CTS digital video college curriculum-sharing experiment. [Communications Technology Satellite

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Sites, M. J.

    1974-01-01

    NASA-Ames Research Center, Stanford University, and Carleton University, Ottawa, Canada, are participating in a joint experiment to evaluate the feasibility and effectiveness of college curriculum sharing using compressed digital television and the Communications Technology Satellite (CTS). Each university will offer televised courses to the other during the 1976-1977 academic year via CTS, a joint program by NASA and the Canadian Department of Communications. The video compression techniques to be demonstrated will enable economical interconnection of educational institutions using existing and planned domestic satellites.

  4. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  5. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  6. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with adaptive JPEG lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is set adaptively. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high-quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
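
    As one concrete example of the kind of blind noise estimation such a framework relies on, the sketch below uses the standard median-absolute-deviation estimator on the finest diagonal wavelet subband (Donoho's rule). This is a common stand-in, not necessarily the authors' estimator, and the mapping from the estimated sigma to a JPEG quantization scaling factor, which is the paper's contribution, is not reproduced here.

        import numpy as np
        import pywt

        def estimate_noise_sigma(image):
            # Robust estimate from the finest diagonal (HH) wavelet subband:
            # sigma ~ median(|HH|) / 0.6745 (the MAD estimator).
            _, (_, _, hh) = pywt.dwt2(np.asarray(image, dtype=float), "db1")
            return float(np.median(np.abs(hh)) / 0.6745)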

  7. Effects of Digitization and JPEG Compression on Land Cover Classification Using Astronaut-Acquired Orbital Photographs

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Webb, Edward L.; Evangelista, Arlene

    2000-01-01

    Studies that utilize astronaut-acquired orbital photographs for visual or digital classification require high-quality data to ensure accuracy. The majority of images available must be digitized from film and electronically transferred to scientific users. This study examined the effect of scanning spatial resolution (1200 and 2400 pixels per inch [21.2 and 10.6 microns/pixel]), scanning density range option (Auto, Full) and compression ratio (non-lossy [TIFF], and lossy JPEG 10:1, 46:1, 83:1) on digital classification results of an orbital photograph from the NASA Johnson Space Center archive. Qualitative results suggested that 1200 ppi was acceptable for visual interpretive uses for major land cover types. Moreover, the Auto scanning density range was superior to the Full density range. Quantitative assessment of the processing steps indicated that, while 2400 ppi scanning spatial resolution resulted in more classified polygons as well as a substantially greater proportion of polygons < 0.2 ha, overall agreement between 1200 ppi and 2400 ppi was quite high. JPEG compression up to approximately 46:1 also did not appear to have a major impact on quantitative classification characteristics. We conclude that both 1200 and 2400 ppi scanning resolutions are acceptable options for this level of land cover classification, as is a compression ratio at or below approximately 46:1. The Auto density range should always be used during scanning because it acquires more of the information from the film. The particular combination of scanning spatial resolution and compression level will require a case-by-case decision and will depend upon memory capabilities, analytical objectives and the spatial properties of the objects in the image.

  8. Space data management at the NSSDC (National Space Sciences Data Center): Applications for data compression

    NASA Technical Reports Server (NTRS)

    Green, James L.

    1989-01-01

    The National Space Science Data Center (NSSDC), established in 1966, is the largest archive for processed data from NASA's space and Earth science missions. The NSSDC manages over 120,000 data tapes with over 4,000 data sets. The size of the digital archive is approximately 6,000 gigabytes, with all of this data in its original uncompressed form. By 1995 the NSSDC digital archive is expected to more than quadruple in size, reaching over 28,000 gigabytes. The NSSDC is beginning several thrusts allowing it to better serve the scientific community and keep up with managing the ever-increasing volumes of data. These thrusts involve managing larger and larger amounts of information and data online, employing mass storage techniques, and using low-rate communications networks to move requested data to remote sites in the United States, Europe and Canada. The success of these thrusts, combined with the tremendous volume of data expected to be archived at the NSSDC, clearly indicates that innovative storage and data management solutions must be sought and implemented. Although not presently used, data compression techniques may be a very important tool for managing a large fraction or all of the NSSDC archive in the future. Future applications would include compressing online data in order to have more data readily available, compressing requested data that must be moved over low-rate ground networks, and compressing all the digital data in the NSSDC archive for a cost-effective backup that would be used only in the event of a disaster.

  9. System-Level Design of a 64-Channel Low Power Neural Spike Recording Sensor.

    PubMed

    Delgado-Restituto, Manuel; Rodriguez-Perez, Alberto; Darie, Angela; Soto-Sanchez, Cristina; Fernandez-Jover, Eduardo; Rodriguez-Vazquez, Angel

    2017-04-01

    This paper reports an integrated 64-channel neural spike recording sensor, together with all the circuitry to process and configure the channels, process the neural data, transmit via a wireless link the information and receive the required instructions. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration algorithm which individually configures the transfer characteristics of the recording site. The system has two transmission modes; in one case the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by the embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.

  10. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD, which refactors the image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to an arbitrary m × n matrix, square or not, invertible or not. Compression ratio and mean square error are used as performance metrics.
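
    A minimal numpy sketch of the method as described (the helper name and returned metrics are ours): keep the k largest singular values, rebuild the image, and report the compression ratio and mean square error used as performance metrics.

        import numpy as np

        def svd_compress(image, k):
            # Keep only the k largest singular values and their vectors.
            U, s, Vt = np.linalg.svd(np.asarray(image, dtype=float), full_matrices=False)
            approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
            m, n = image.shape
            ratio = (m * n) / (k * (m + n + 1))     # original values vs stored values
            mse = float(np.mean((image - approx) ** 2))
            return approx, ratio, mse

    Storing U[:, :k], s[:k] and Vt[:k, :] takes k(m + n + 1) values in place of the original m*n, which is where the compression ratio comes from.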

  11. Image display device in digital TV

    DOEpatents

    Choi, Seung Jong [Seoul, KR

    2006-07-18

    Disclosed is an image display device in a digital TV that is capable of carrying out conversion into various kinds of resolution by using single bit map data in the digital TV. The image display device includes: a data processing part for executing bit map conversion, compression, restoration and format conversion for text data; a memory for storing the bit map data obtained from the bit map conversion and compression in the data processing part, as well as image data inputted from an arbitrary receiving part, the receiving part receiving either digital image data or analog image data; an image outputting part for reading the image data from the memory; and a display processing part for mixing the image data read from the image outputting part with the format-converted bit map data from the data processing part. Therefore, the image display device according to the present invention can convert text data to correspond with various resolutions, carry out compression of bit map data, thereby reducing the memory space required, and support text data in HTML format, thereby providing images with text data of various shapes.

  12. Digital map databases in support of avionic display systems

    NASA Astrophysics Data System (ADS)

    Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.

    1991-08-01

    The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CD-ROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read-Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays; current research efforts include the development of an alternate image compression technique and the generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CD-ROMs.

  13. The NORDA MC&G Map Data Formatting Facility: Development of a Digital Map Data Base

    DTIC Science & Technology

    1989-12-01

    Lempel-Ziv compression was investigated, along with extraction of such features as roads, water, urban areas, and text from the scanned maps; various transform encodings were also investigated. The scanned maps revealed a small number of color classes and large homogeneous regions in the original 24-bit pixels. Various high-performance lossless compression techniques were tried, including run-length encoding and Lempel-Ziv coding; compression ratios are reported for VQ classification followed by Lempel-Ziv compression.

  14. High throughput dual-wavelength temperature distribution imaging via compressive imaging

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie

    2018-03-01

    Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which limits the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampled recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this architecture.
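
    In a single-pixel architecture like this one, each binary DMD mask contributes one row of a measurement matrix Phi and each bucket-detector reading one entry of y = Phi x, with one such measurement vector per wavelength. A common recovery choice, shown below as a hedged numpy sketch (the abstract does not specify which algorithm the authors use), is orthogonal matching pursuit, under the assumption that the scene x is sparse or that a sparsifying basis has been folded into Phi.

        import numpy as np

        def omp(Phi, y, sparsity):
            # Orthogonal matching pursuit: greedily pick the mask pattern most
            # correlated with the residual, then re-fit on the chosen support.
            residual, support, coef = y.astype(float), [], None
            for _ in range(sparsity):
                support.append(int(np.argmax(np.abs(Phi.T @ residual))))
                sub = Phi[:, support]
                coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
                residual = y - sub @ coef
            x = np.zeros(Phi.shape[1])
            x[support] = coef
            return x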

  15. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T{sub ij}(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T{sub ij}(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T{sub ij}(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.

  16. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T.sub.ij (x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T.sub.ij (x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T.sub.ij (x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.

  17. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of JPEG 2000, it has found its way into many different applications such as DICOM (Digital Imaging and Communications in Medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical, high-quality real-time compression possible even in video mode, i.e. Motion JPEG 2000. In this paper, we present a study of the compression impact of using a dynamic code block size instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  18. A study of data coding technology developments in the 1980-1985 time frame, volume 2

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Shahsavari, M. M.

    1978-01-01

    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined, and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.

  19. Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression

    NASA Astrophysics Data System (ADS)

    Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin

    1994-04-01

    The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from that with original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods: conventional ROC analysis and free-response ROC (FROC) analysis. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.
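
    For the conventional ROC arm of such a study, the area under the curve can be computed directly from the readers' ratings via the Mann-Whitney statistic, as in this minimal sketch (the function name is ours; FROC analysis requires different machinery not shown):

        import numpy as np

        def roc_auc(scores_abnormal, scores_normal):
            # Mann-Whitney form of the AUC: the probability that a randomly
            # chosen abnormal case outscores a randomly chosen normal one,
            # counting ties as one half.
            a = np.asarray(scores_abnormal, dtype=float)[:, None]
            n = np.asarray(scores_normal, dtype=float)[None, :]
            return float(((a > n).sum() + 0.5 * (a == n).sum()) / (a.size * n.size))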

  20. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third in a series of papers, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and the quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  1. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  2. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  3. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians' diagnoses, as evidenced by the increasing use of digital medical images in picture archiving and communications systems (PACS). However, with enlarged medical image databases and the rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for Digital Imaging and Communications in Medicine (DICOM). High compression ratios are considered useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured using receiver operating characteristic (ROC) analysis; that is, ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis enables a comparison of the compression ratios achieved with JPEG and JPEG2000 for 3-D US images. Results of this study indicate the possible bit rates when using JPEG and JPEG2000 for 3-D breast US images.

  4. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P < or = .05). When we compared subjective indexes, JPEG 2000 compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  5. DRACULA: Dynamic range control for broadcasting and other applications

    NASA Astrophysics Data System (ADS)

    Gilchrist, N. H. C.

    The BBC has developed a digital processor which is capable of reducing the dynamic range of audio in an unobtrusive manner. It is ideally suited to the task of controlling the level of musical programs. Operating as a self-contained dynamic range controller, the processor is suitable for controlling levels in conventional AM or FM broadcasting, or for applications such as the compression of program material for in-flight entertainment. It can, alternatively, be used to provide a supplementary signal in DAB (digital audio broadcasting) for optional dynamic compression in the receiver.

  6. HDTV versus electronic cinema

    NASA Astrophysics Data System (ADS)

    Tinker, Michael

    1998-12-01

    We are on the brink of transforming the movie theatre with electronic cinema. Technologies are converging to make true electronic cinema, with a 'film look,' possible for the first time. In order to realize the possibilities, we must leverage current technologies in video compression, electronic projection, digital storage, and digital networks. All these technologies have only recently improved sufficiently to make their use in electronic cinema worthwhile. Video compression, such as MPEG-2, is designed to overcome the limitations of video, primarily limited bandwidth. As a result, although HDTV offers a serious challenge to film-based cinema, it falls short in a number of areas, such as color depth. Freed from the constraints of video transmission, and using the recently improved technologies available, electronic cinema can move beyond video. Although movies will have to be compressed for some time, what is needed is a concept of 'cinema compression,' rather than video compression. Electronic cinema will open up vast new possibilities for viewing experiences at the theater, while at the same time offering the potential for new economies in the movie industry.

  7. Challenges of implementing digital technology in motion picture distribution and exhibition: testing and evaluation methodology

    NASA Astrophysics Data System (ADS)

    Swartz, Charles S.

    2003-05-01

    The process of distributing and exhibiting a motion picture has changed little since the Lumière brothers presented the first motion picture to an audience in 1895. While this analog photochemical process is capable of producing screen images of great beauty and expressive power, more often the consumer experience is diminished by third generation prints and by the wear and tear of the mechanical process. Furthermore, the film industry globally spends approximately $1B annually manufacturing and shipping prints. Alternatively, distributing digital files would theoretically yield great benefits in terms of image clarity and quality, lower cost, greater security, and more flexibility in the cinema (e.g., multiple language versions). In order to understand the components of the digital cinema chain and evaluate the proposed technical solutions, the Entertainment Technology Center at USC in 2000 established the Digital Cinema Laboratory as a critical viewing environment, with the highest quality film and digital projection equipment. The presentation describes the infrastructure of the Lab, test materials, and testing methodologies developed for compression evaluation, and lessons learned up to the present. In addition to compression, the Digital Cinema Laboratory plans to evaluate other components of the digital cinema process as well.

  8. Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Dong, Jing; Tan, Tieniu

    With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm which can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region have different responses to JPEG compression: the tampered region shows stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e., low-, medium- and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is applied to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
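
    As a loose illustration of the kind of response difference the authors exploit (a simplified proxy, not their PCA pipeline), the following Python sketch re-compresses a grayscale image with JPEG and maps blockwise high-frequency residual energy; regions pasted in after decompression tend to stand out. The quality setting and block size are assumptions.

        import numpy as np
        from io import BytesIO
        from PIL import Image
        from scipy.ndimage import uniform_filter

        def hf_noise_map(gray, quality=90, block=16):
            """Blockwise energy of the high-frequency JPEG re-compression residual."""
            buf = BytesIO()
            Image.fromarray(gray.astype(np.uint8), mode="L").save(
                buf, format="JPEG", quality=quality)  # hypothetical quality choice
            buf.seek(0)
            recompressed = np.asarray(Image.open(buf), dtype=np.float64)
            residual = gray.astype(np.float64) - recompressed
            hf = residual - uniform_filter(residual, size=3)  # crude high-pass
            h, w = hf.shape
            h -= h % block
            w -= w % block
            blocks = (hf[:h, :w] ** 2).reshape(h // block, block, w // block, block)
            return blocks.mean(axis=(1, 3))  # high values suggest a different history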

  9. Digital Images over the Internet: Rome Reborn at the Library of Congress.

    ERIC Educational Resources Information Center

    Valauskas, Edward J.

    1994-01-01

    Describes digital images of incunabula from the Library of the Vatican that are available over the Internet based on an actual exhibit that was displayed at the Library of Congress. Viewers, i.e., compression routines created to efficiently send color images, are explained; and other digital exhibits are described. (Contains three references.)…

  10. Effects of Compression on Speech Acoustics, Intelligibility, and Sound Quality

    PubMed Central

    Souza, Pamela E.

    2002-01-01

    The topic of compression has been discussed quite extensively in the last 20 years (eg, Braida et al., 1982; Dillon, 1996, 2000; Dreschler, 1992; Hickson, 1994; Kuk, 2000 and 2002; Kuk and Ludvigsen, 1999; Moore, 1990; Van Tasell, 1993; Venema, 2000; Verschuure et al., 1996; Walker and Dillon, 1982). However, the latest comprehensive update by this journal was published in 1996 (Kuk, 1996). Since that time, use of compression hearing aids has increased dramatically, from half of hearing aids dispensed only 5 years ago to four out of five hearing aids dispensed today (Strom, 2002b). Most of today's digital and digitally programmable hearing aids are compression devices (Strom, 2002a). It is probable that within a few years, very few patients will be fit with linear hearing aids. Furthermore, compression has increased in complexity, with greater numbers of parameters under the clinician's control. Ideally, these changes will translate to greater flexibility and precision in fitting and selection. However, they also increase the need for information about the effects of compression amplification on speech perception and speech quality. As evidenced by the large number of sessions at professional conferences on fitting compression hearing aids, clinicians continue to have questions about compression technology and when and how it should be used. How does compression work? Who are the best candidates for this technology? How should adjustable parameters be set to provide optimal speech recognition? What effect will compression have on speech quality? These and other questions continue to drive our interest in this technology. This article reviews the effects of compression on the speech signal and the implications for speech intelligibility, quality, and design of clinical procedures. PMID:25425919

  11. Comparative performance between compressed and uncompressed airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh

    2008-04-01

    The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is assessing the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or loss of performance in the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed & Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection across different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence on different backgrounds and other factors is documented and presented using multi-spectral data.
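
    For reference, a minimal NumPy sketch of the textbook global RX (Reed & Xiaoli) detector used here as the baseline (this is the standard formulation, not NVESD's implementation): each pixel is scored by its squared Mahalanobis distance from scene-wide background statistics. Running it on both the original and the JPEG-2000 decoded cube and comparing detection ROC curves mirrors the comparison the abstract describes.

        import numpy as np

        def rx_scores(cube):
            """cube: (rows, cols, bands) multi-spectral image -> per-pixel RX score."""
            h, w, b = cube.shape
            x = cube.reshape(-1, b).astype(np.float64)
            d = x - x.mean(axis=0)                             # deviation from background mean
            inv_cov = np.linalg.pinv(np.cov(x, rowvar=False))  # pseudo-inverse for stability
            scores = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # squared Mahalanobis distance
            return scores.reshape(h, w)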

  12. 49 CFR 579.29 - Manner of reporting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) REPORTING OF INFORMATION AND COMMUNICATIONS ABOUT... through 579.26 of this part may be submitted in digital form using a graphic compression protocol...) Graphic compression protocol. Not later than 30 days prior to the date of its first quarterly submission...

  13. Font group identification using reconstructed fonts

    NASA Astrophysics Data System (ADS)

    Cutter, Michael P.; van Beusekom, Joost; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Ideally, digital versions of scanned documents should be represented in a format that is searchable, compressed, highly readable, and faithful to the original. These goals can theoretically be achieved through OCR and font recognition, re-typesetting the document text with original fonts. However, OCR and font recognition remain hard problems, and many historical documents use fonts that are not available in digital forms. It is desirable to be able to reconstruct fonts with vector glyphs that approximate the shapes of the letters that form a font. In this work, we address the grouping of tokens in a token-compressed document into candidate fonts. This permits us to incorporate font information into token-compressed images even when the original fonts are unknown or unavailable in digital format. This paper extends previous work in font reconstruction by proposing and evaluating an algorithm to assign a font to every character within a document. This is a necessary step to represent a scanned document image with a reconstructed font. Through our evaluation method, we have measured a 98.4% accuracy for the assignment of letters to candidate fonts in multi-font documents.

  14. Digital Motion Imagery, Interoperability Challenges for Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2012-01-01

    With advances in available bandwidth from spacecraft and between terrestrial control centers, digital motion imagery and video are becoming more practical as data gathering tools for science and engineering, as well as for sharing missions with the public. The digital motion imagery and video industry has done a good job of creating standards for compression, distribution, and physical interfaces. Compressed data streams can easily be transmitted or distributed over radio frequency, internet protocol, and other data networks. All of these standards, however, can make sharing video between spacecraft and terrestrial control centers a frustrating and complicated task when different standards and protocols are used by different agencies. This paper will explore the challenges presented by the abundance of motion imagery and video standards, interfaces and protocols, with suggestions for common formats that could simplify interoperability between spacecraft and ground support systems. Real-world examples from the International Space Station will be examined. The paper will also discuss recent trends in the development of new video compression algorithms, as well as the likely expanded use of Delay (or Disruption) Tolerant Networking nodes.

  15. Development of a neural network for early detection of renal osteodystrophy

    NASA Astrophysics Data System (ADS)

    Cheng, Shirley N.; Chan, Heang-Ping; Adler, Ronald; Niklason, Loren T.; Chang, Chair-Li

    1991-07-01

    Bone erosion presenting as subperiosteal resorption on the phalanges of the hand is an early manifestation of hyperparathyroidism associated with chronic renal failure. At present, the diagnosis is made by trained radiologists through visual inspection of hand radiographs. In this study, a neural network is being developed to assess the feasibility of computer-aided detection of these changes. A two-pass approach is adopted. The digitized image is first compressed by a Laplacian pyramid compact code. The first neural network locates the region of interest using vertical projections along the phalanges and then horizontal projections across the phalanges. A second neural network is used to classify texture variations of trabecular patterns in the region, using a co-occurrence matrix as the input to a two-dimensional sensor layer, to detect the degree of associated osteopenia. Preliminary results demonstrate the feasibility of this approach.

  16. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data was acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. Experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
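
    A hedged sketch of the detection stage only: the 2×2 cross-difference emphasizes block-boundary discontinuities, and a crude periodicity score over the column profile stands in for the paper's morphological cleanup and maximum-likelihood estimator, which are not reproduced here.

        import numpy as np

        def estimate_block_size(gray, max_size=64):
            g = gray.astype(np.float64)
            # 2x2 cross-difference: |a - b - c + d| over each 2x2 neighbourhood
            e = np.abs(g[:-1, :-1] - g[:-1, 1:] - g[1:, :-1] + g[1:, 1:])
            col = e.mean(axis=0)              # column profile peaks at block borders
            best, best_score = None, -np.inf
            for n in range(2, max_size + 1):
                score = col[n - 1::n].mean() - col.mean()  # border strength at period n
                if score > best_score:
                    best, best_score = n, score
            return best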

  17. Contrast-enhanced digital mammography (CEDM): imaging modeling, computer simulations, and phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Jing, Zhenxue; Smith, Andrew

    2005-04-01

    Contrast enhanced digital mammography (CEDM), which is based upon the analysis of a series of x-ray projection images acquired before/after the administration of contrast agents, may provide physicians with critical physiologic and morphologic information about breast lesions to determine their malignancy. This paper proposes to combine the kinetic analysis (KA) of the contrast agent uptake/washout process and dual-energy (DE) contrast enhancement to formulate a hybrid contrast-enhanced breast-imaging framework. The quantitative characteristics of materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filter, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. The contrast-to-noise ratio (CNR) of iodinated lesions and the mean absorbed glandular dose were estimated mathematically. X-ray technique optimization was conducted through a series of computer simulations to find the optimal tube voltage, filter thickness, and exposure levels for various breast thicknesses, breast densities, and detectable contrast agent concentration levels in terms of detection efficiency (CNR^2/dose). A phantom study was performed on a modified Selenia full field digital mammography system to verify the simulated results. The dose level was comparable to the dose in diagnostic mode (less than 4 mGy for an average 4.2 cm compressed breast). The results from the computer simulations and phantom study are being used to optimize an ongoing clinical study.
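
    The figure of merit the optimization maximizes is simple to state in code. A minimal sketch, with the region masks and mean glandular dose supplied by the caller (both are assumptions of this illustration):

        import numpy as np

        def cnr(image, lesion_mask, background_mask):
            lesion, background = image[lesion_mask], image[background_mask]
            return abs(lesion.mean() - background.mean()) / background.std()

        def detection_efficiency(image, lesion_mask, background_mask, dose_mGy):
            """CNR^2 / dose, the quantity swept over tube voltage, filtration and exposure."""
            return cnr(image, lesion_mask, background_mask) ** 2 / dose_mGy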

  18. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm.

    PubMed

    Majumdar, Satya N

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.

  19. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.

  20. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
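
    One level of the integer H-transform at the heart of the method can be sketched in a few lines: it is a Haar-like transform on 2×2 blocks that is exactly invertible in integer arithmetic, which is what makes the lossless mode possible. The full coder recurses on the smooth coefficients, optionally quantizes the details (the lossy mode), and entropy-codes the result; none of that is reproduced in this sketch.

        import numpy as np

        def h_transform_level(a):
            """One level of the 2x2 integer H-transform of an even-sized image."""
            a = a.astype(np.int64)
            a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
            a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
            h0 = a00 + a01 + a10 + a11   # smooth (sum) coefficients
            hx = a00 + a01 - a10 - a11   # horizontal detail
            hy = a00 - a01 + a10 - a11   # vertical detail
            hc = a00 - a01 - a10 + a11   # diagonal detail
            return h0, hx, hy, hc

        def h_inverse_level(h0, hx, hy, hc):
            """Exact integer inverse: each of the four sums is divisible by 4."""
            a = np.empty((2 * h0.shape[0], 2 * h0.shape[1]), dtype=np.int64)
            a[0::2, 0::2] = (h0 + hx + hy + hc) // 4
            a[0::2, 1::2] = (h0 + hx - hy - hc) // 4
            a[1::2, 0::2] = (h0 - hx + hy - hc) // 4
            a[1::2, 1::2] = (h0 - hx - hy + hc) // 4
            return a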

  1. Full-field acoustomammography using an acousto-optic sensor.

    PubMed

    Sandhu, J S; Schmidt, R A; La Rivière, P J

    2009-06-01

    In this Letter the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammographylike ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the AO sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors of diameter of 3 mm using 8.5 MHz ultrasound, with smaller tumors detectable with higher frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large area AO sensor. Of course variations in attenuation of connective, glandular, and fatty tissues will lead to images with more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed.

  2. Full-field acoustomammography using an acousto-optic sensor

    PubMed Central

    Sandhu, J. S.; Schmidt, R. A.; La Rivière, P. J.

    2009-01-01

    In this Letter the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammographylike ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the AO sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors of diameter of 3 mm using 8.5 MHz ultrasound, with smaller tumors detectable with higher frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large area AO sensor. Of course variations in attenuation of connective, glandular, and fatty tissues will lead to images with more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed. PMID:19610321

  3. Lossless Coding Standards for Space Data Systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1996-01-01

    The International Consultative Committee for Space Data Systems (CCSDS) is preparing to issue its first recommendation for a digital data compression standard. Because the space data systems of primary interest are employed to support scientific investigations requiring accurate representation, this initial standard will be restricted to lossless compression.

  4. Parallel phase-shifting self-interference digital holography with faithful reconstruction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Wan, Yuhong; Man, Tianlong; Wu, Fan; Kim, Myung K.; Wang, Dayong

    2016-11-01

    We present a new self-interference digital holographic approach that allows single-shot capture of the three-dimensional intensity distribution of spatially incoherent objects. Fresnel incoherent correlation holographic microscopy is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. A compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed holograms. The scheme is verified with simulations. The validity of the proposed method is demonstrated experimentally in an indirect way, by simulating the use of a specific parallel phase-shifting recording device.

  5. [The compression and storage of enhanced external counterpulsation waveform based on DICOM standard].

    PubMed

    Hu, Ding; Xie, Shuqun; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian

    2010-04-01

    The development of an external counterpulsation (ECP) local area network system and an extensible markup language (XML)-based remote ECP medical information system conforming to the digital imaging and communications in medicine (DICOM) standard has been improving the digital interchangeability and shareability of ECP data. However, ECP therapy involves continuous, long-term supervision, which generates a large volume of waveform data. In order to reduce the storage space and improve the transmission efficiency, the waveform data in the normative format of ECP data files have to be compressed. In this article, we introduce a compression algorithm based on template matching and an improved quick-fitting variant of linear approximation distance thresholding (LADT), in combination with the characteristics of the enhanced external counterpulsation (EECP) waveform signal. The DICOM standard is used as the storage and transmission standard to make our system compatible with hospital information systems. According to the rules of transfer syntaxes, we defined a private transfer syntax for one-dimensional compressed waveform data and stored the EECP data in a DICOM file. Testing indicates that the compressed, normative data can be correctly transmitted and displayed between EECP workstations in our EECP laboratory.
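
    For the LADT part, a minimal sketch of the underlying idea (the paper's improved quick-fitting variant and the template-matching stage are not reproduced): extend a straight-line segment for as long as every intermediate sample stays within a distance threshold of it, and keep only segment endpoints; decompression is linear interpolation between the kept samples.

        import numpy as np

        def ladt_compress(x, epsilon):
            """Keep only endpoints of line segments that fit x within epsilon."""
            kept, start = [0], 0
            for end in range(2, len(x)):
                t = np.arange(start, end + 1)
                line = np.interp(t, [start, end], [x[start], x[end]])
                if np.max(np.abs(x[start:end + 1] - line)) > epsilon:
                    kept.append(end - 1)        # previous sample closes the segment
                    start = end - 1
            kept.append(len(x) - 1)
            idx = np.asarray(kept)
            return idx, x[idx]

        def ladt_decompress(indices, values, n):
            return np.interp(np.arange(n), indices, values)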

  6. 4800 B/S speech compression techniques for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.

    1986-01-01

    This paper will discuss three 4800 bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self excited vocoder, are the most promising among a number of techniques being developed to possibly provide near-toll-quality speech compression while still keeping the bit-rate low enough for a power and bandwidth limited satellite service.

  7. Variable word length encoder reduces TV bandwidth requirements

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.

    1965-01-01

    Adaptive variable resolution encoding technique provides an adaptive compression pseudo-random noise signal processor for reducing television bandwidth requirements. Complementary processors are required in both the transmitting and receiving systems. The pretransmission processor is analog-to-digital, while the postreception processor is digital-to-analog.

  8. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
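
    The core operation, quantizing each 8×8 DCT block by a matrix of visually weighted step sizes, can be sketched as follows. The matrix here is a fixed illustrative ramp; the invention's contribution is adapting it to the image through luminance/contrast masking and error pooling, which this sketch does not attempt.

        import numpy as np
        from scipy.fftpack import dct, idct

        # Illustrative quantization matrix: coarser steps at higher frequencies.
        Q = 16.0 + 4.0 * np.add.outer(np.arange(8), np.arange(8))

        def dct2(block):
            return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

        def quantize_block(block):
            return np.round(dct2(block) / Q)     # integer indices for entropy coding

        def reconstruct_block(indices):
            coeffs = indices * Q
            return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")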

  9. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    signal recovering [?, ?]. The time-varying coded apertures can be implemented using micro-piezo motors [?] or through the use of Digital Micromirror ...feasibility of this testbed by developing a Digital-Micromirror-Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement...Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging

  10. An Intrinsically Digital Amplification Scheme for Hearing Aids

    NASA Astrophysics Data System (ADS)

    Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.

    2005-12-01

    Results for linear and wide-dynamic range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP processor also provided the opportunity for blind trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.

  11. A configurable and low-power mixed signal SoC for portable ECG monitoring applications.

    PubMed

    Kim, Hyejung; Kim, Sunyoung; Van Helleputte, Nick; Artes, Antonio; Konijnenburg, Mario; Huisken, Jos; Van Hoof, Chris; Yazicioglu, Refet Firat

    2014-04-01

    This paper describes a mixed-signal ECG System-on-Chip (SoC) that is capable of implementing configurable functionality with low power consumption for portable ECG monitoring applications. A low-voltage, high-performance analog front-end extracts 3-channel ECG signals and a single-channel electrode-tissue impedance (ETI) measurement with high signal quality. This can be used to evaluate the quality of the ECG measurement and to filter motion artifacts. A custom digital signal processor consisting of a 4-way SIMD processor provides configurability and advanced functionality such as motion artifact removal and R-peak detection. A built-in 12-bit analog-to-digital converter (ADC) is capable of adaptive sampling, achieving a compression ratio of up to 7, and loop buffer integration reduces the power consumption of on-chip memory access. The SoC is implemented in a 0.18 μm CMOS process and consumes 32 μW from a 1.2 V supply while the heartbeat detection application is running, and is integrated in a wireless ECG monitoring system with the Bluetooth protocol. Thanks to the ECG SoC, the overall system power consumption can be reduced significantly.

  12. Chaotic CDMA watermarking algorithm for digital image in FRFT domain

    NASA Astrophysics Data System (ADS)

    Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng

    2007-11-01

    A digital image-watermarking algorithm based on the fractional Fourier transform (FRFT) domain is presented, utilizing a chaotic CDMA technique. As a popular and typical transmission technique, CDMA has many advantages such as privacy, anti-jamming and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-code used in a traditional CDMA system. The watermarking data is divided into many segments that correspond to different chaotic QOCs and are modulated into CDMA watermarking data embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermarking segment by calculating correlation coefficients between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three aspects: imperceptibility, anti-attack robustness and security.
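
    A stripped-down sketch of the spread-spectrum idea, with plain host coefficients instead of FRFT-domain amplitudes and a simple logistic map instead of the paper's super-hybrid chaotic map (all parameters are illustrative): each message bit is spread by a quasi-orthogonal ±1 chaotic code, and non-blind detection correlates the residual against each code.

        import numpy as np

        def chaotic_code(seed, n, r=3.99):
            """Quasi-orthogonal +/-1 code from a logistic-map orbit."""
            x, out = seed, np.empty(n)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = 1.0 if x > 0.5 else -1.0
            return out

        def embed(coeffs, bits, alpha=2.0, seed0=0.37):
            """coeffs: flat array of host transform coefficients."""
            marked = coeffs.astype(np.float64).copy()
            for k, bit in enumerate(bits):
                code = chaotic_code(seed0 + 0.01 * k, coeffs.size)
                marked += (1.0 if bit else -1.0) * alpha * code
            return marked

        def detect(marked, original, nbits, seed0=0.37):
            residual = marked - original
            return [int(chaotic_code(seed0 + 0.01 * k, residual.size) @ residual > 0)
                    for k in range(nbits)]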

  13. Impact of compressed breast thickness and dose on lesion detectability in digital mammography: FROC study with simulated lesions in real mammograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvagnini, Elena, E-mail: elena.salvagnini@gmail.

    Purpose: The aim of this work was twofold: (1) to examine whether, with standard automatic exposure control (AEC) settings that maintain pixel values in the detector constant, lesion detectability in clinical images decreases as a function of breast thickness and (2) to verify whether a new AEC setup can increase lesion detectability at larger breast thicknesses. Methods: Screening patient images, acquired on two identical digital mammography systems, were collected over a period of 2 yr. Mammograms were acquired under standard AEC conditions (part 1) and subsequently with a new AEC setup (part 2), programmed to use the standard AEC settings for compressed breast thicknesses ≤49 mm, while a relative dose increase was applied above this thickness. The images were divided into four thickness groups: T1 ≤ 29 mm, T2 = 30–49 mm, T3 = 50–69 mm, and T4 ≥ 70 mm, with each thickness group containing 130 randomly selected craniocaudal lesion-free images. Two measures of density were obtained for every image: a BI-RADS score and a map of volumetric breast density created with a software application (VolparaDensity, Matakina, NZ). This information was used to select subsets of four images, containing one image from each thickness group, matched to a (global) BI-RADS score and containing a region with the same (local) VOLPARA volumetric density value. One selected lesion (a microcalcification cluster or a mass) was simulated into each of the four images. This process was repeated so that, for a given thickness group, half the images contained a single lesion and half were lesion-free. The lesion templates created and inserted in groups T3 and T4 for the first part of the study were then inserted into the images of thickness groups T3 and T4 acquired with higher dose settings. Finally, all images were visualized using the ViewDEX software and scored by four radiologists performing a free search study. A statistical jackknife-alternative free-response receiver operating characteristic analysis was applied. Results: For part 1, the alternative free-response receiver operating characteristic curves for the four readers were 0.80, 0.65, 0.55 and 0.56 in going from T1 to T4, indicating a decrease in detectability with increasing breast thickness. P-values and the 95% confidence interval showed no significant difference for the T3-T4 comparison (p = 0.78) while all the other differences were significant (p < 0.05). Separate analysis of microcalcification clusters presented the same results while for mass detection, the only significant difference came when comparing T1 to the other thickness groups. Comparing the scores of part 1 and part 2, results for the T3 group acquired with the new AEC setup and T3 group at standard AEC doses were significantly different (p = 0.0004), indicating improved detection. For this group a subanalysis for microcalcification detection gave the same results while no significant difference was found for mass detection. Conclusions: These data using clinical images confirm results found in simple QA tests for many mammography systems that detectability falls as breast thickness increases. Results obtained with the AEC setup for constant detectability above 49 mm showed an increase in lesion detection with compressed breast thickness, bringing detectability of lesions to the same level.

  14. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
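
    The correction amounts to a per-compression-level linear regression from roundness measured on the JPEG image to roundness measured on the TIFF original, applied before counting. A minimal sketch over per-object measurement arrays (variable names are illustrative):

        import numpy as np

        def fit_roundness_correction(roundness_jpeg, roundness_tiff):
            """Fit, for one compression level, roundness_tiff ~ a * roundness_jpeg + b."""
            a, b = np.polyfit(roundness_jpeg, roundness_tiff, deg=1)
            return a, b

        def corrected_roundness(roundness_jpeg, a, b):
            return a * np.asarray(roundness_jpeg) + b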

  15. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

  16. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    PubMed

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transiting in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.

  17. Single Stock Dynamics on High-Frequency Data: From a Compressed Coding Perspective

    PubMed Central

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transiting in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  18. Using Compressed Video To Coach/Mentor Distant Teacher Interns.

    ERIC Educational Resources Information Center

    Hakes, Barbara; And Others

    Wyoming, a rural state with a small population scattered over vast geographic areas, brought a compressed digital video network online to connect the University of Wyoming and the State's seven community colleges. The College of Education at the University received a grant to develop a coaching/mentoring model for teacher interns over distance.…

  19. HVS-based quantization steps for validation of digital cinema extended bitrates

    NASA Astrophysics Data System (ADS)

    Larabi, M.-C.; Pellegrin, P.; Anciaux, G.; Devaux, F.-O.; Tulet, O.; Macq, B.; Fernandez, C.

    2009-02-01

    In Digital Cinema, the video compression must be as transparent as possible to provide the best image quality to the audience. The goal of compression is to simplify the transport, storage, distribution and projection of films. For all these tasks, equipment needs to be developed. It is thus mandatory to reduce the complexity of the equipment by imposing limitations in the specifications. In this sense, the DCI has fixed the maximum bitrate for a compressed stream at 250 Mbps independently of the input format (4K/24fps, 2K/48fps or 2K/24fps). This parameter is discussed in this paper because it is not consistent to double or quadruple the input rate without increasing the output rate. The work presented in this paper is intended to define quantization steps ensuring visually lossless compression. Two steps are followed: first, the effect of each subband is evaluated separately, and then the scaling ratio is found. The obtained results show that it is necessary to increase the bitrate limit for cinema material in order to achieve visually lossless quality.

  20. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    NASA Astrophysics Data System (ADS)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various JPEG compression qualities, and the ability to predict the presence of diabetic retinopathy based on the detected presence of microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes; this may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  1. Compressive Video Acquisition, Fusion and Processing

    DTIC Science & Technology

    2010-12-14

    architecture that comprises an optical computer (comprising a digital micromirror device, two lenses, a single photon detector, and an analog-to-digital (A/D...of the missing video frames with reasonable accuracy. Also, the similar nature of the four curves suggests that the actual values of (Ωx,Ωh,Γ) are not

  2. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  3. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  4. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  5. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  6. 6 CFR 37.31 - Source document retention.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...

  7. Naval Postgraduate School Research. Volume 9, Number 1, February 1999

    DTIC Science & Technology

    1999-02-01

    before the digitization, since these add noise and nonlinear distortion to the signal. After digitization by the digital antenna, the data stream can be...instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information...like pulse compression. (Gener- ally, few experiments have measured the jitter of the lasers.) From the data , we note that the pulse width require

  8. Feasibility Study of Compressive Sensing Underwater Imaging Lidar

    DTIC Science & Technology

    2014-03-28

    Texas Instruments Digital Micromirror Devices development system. In addition, through these studies, the deficiencies and/or areas of lack...device, such as the Digital Micromirror Device (DMD), to spatially modulate the laser source that illuminates the target plane. The same binary patterns...Digital Micromirror Device (DMD) Applications," Proc. of SPIE, 2003, 4985, 14-25. [8] T. E. Giddings and J. J. Shirron, "Numerical Simulation of the

  9. Energy conservation using face detection

    NASA Astrophysics Data System (ADS)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, and image database management; digital cameras use face detection for autofocus and for selecting regions of interest in photo slideshows that use pan-and-scale effects. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based and color-based methods. Basically, the video of the subject is converted into images, which are further selected manually for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions and compression artifacts, make face detection difficult. This paper reports an algorithm for the conservation of energy using face detection on various devices. It suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
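
    A hedged sketch of the idea using OpenCV's stock Haar cascade as a stand-in detector (not the authors' implementation): dim the whole frame and restore full brightness only inside detected face regions, a software analogue of cutting display power outside the region of interest. The dimming factor and detector parameters are assumptions.

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def dim_outside_faces(frame_bgr, dim_factor=0.4):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            out = (frame_bgr * dim_factor).astype(np.uint8)   # dim everything
            for (x, y, w, h) in faces:                        # restore face regions
                out[y:y + h, x:x + w] = frame_bgr[y:y + h, x:x + w]
            return out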

  10. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

    This paper proposes a new stereo sequence transmission technique that uses digital watermarking for compatibility with conventional 2D digital TV. Stereo image sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images. Because the many 3D image compression methods are mutually incompatible, it is difficult for users with conventional digital TV to watch the transmitted 3D image sequence. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of the other stereo image in the three channels of the reference image. The main goal of the presented technique is to let people who have a conventional DTV watch stereo movies at the same time. This goal is reached by considering the response of human eyes to color information and by using digital watermarking. To hide right images in left images effectively, bit changes in the three color channels and disparity estimation according to the value of the estimated disparity are performed. The proposed method assigns the displacement information of the right image to each channel of YCbCr in the DCT domain. Each LSB on the YCbCr channels is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
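
    A minimal sketch of the hiding step, in the spatial domain for brevity (the paper operates on DCT coefficients of the YCbCr channels): disparity bits overwrite the least significant bits of the reference image, and the receiver reads them back to reconstruct the second view.

        import numpy as np

        def embed_lsb(ycbcr, bits):
            """ycbcr: (h, w, 3) uint8 reference image; bits: flat 0/1 uint8 array."""
            flat = ycbcr.reshape(-1).copy()
            flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
            return flat.reshape(ycbcr.shape)

        def extract_lsb(ycbcr, nbits):
            return ycbcr.reshape(-1)[:nbits] & 1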

  11. Generation new MP3 data set after compression

    NASA Astrophysics Data System (ADS)

    Atoum, Mohammed Salem; Almahameed, Mohammad

    2016-02-01

    The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to this is the use of a digital audio file such as MP3, which comes in different compression rates; research studies have shown that performing steganography in the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers have been unable to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.

  12. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    PubMed

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The "real-time EEG analysis for event detection" automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
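
    Not the paper's JPEG2000 or SPIHT coders, but a hedged stand-in (using the PyWavelets package) that exposes the same experimental knob: threshold the wavelet coefficients of an EEG channel so that only a chosen fraction survives, reconstruct, and report the distortion; sweeping keep_fraction and re-running a seizure detector on the reconstructions reproduces the shape of the experiment.

        import numpy as np
        import pywt

        def lossy_wavelet_compress(signal, keep_fraction=0.05, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            magnitudes = np.concatenate([np.abs(c) for c in coeffs])
            thresh = np.quantile(magnitudes, 1.0 - keep_fraction)
            kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
            recon = pywt.waverec(kept, wavelet)[: len(signal)]
            rmse = np.sqrt(np.mean((signal - recon) ** 2))
            return recon, rmse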

  13. Digital image compression for a 2f multiplexing optical setup

    NASA Astrophysics Data System (ADS)

    Vargas, J.; Amaya, D.; Rueda, E.

    2016-07-01

    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  14. Analog-to-digital conversion to accommodate the dynamics of live music in hearing instruments.

    PubMed

    Hockley, Neil S; Bahlmann, Frauke; Fulton, Bernadette

    2012-09-01

    Hearing instrument design focuses on the amplification of speech to reduce the negative effects of hearing loss. Many amateur and professional musicians, along with music enthusiasts, also require their hearing instruments to perform well when listening to the frequent, high amplitude peaks of live music. One limitation, in most current digital hearing instruments with 16-bit analog-to-digital (A/D) converters, is that the compressor before the A/D conversion is limited to 95 dB (SPL) or less at the input. This is more than adequate for the dynamic range of speech; however, this does not accommodate the amplitude peaks present in live music. The hearing instrument input compression system can be adjusted to accommodate for the amplitudes present in music that would otherwise be compressed before the A/D converter in the hearing instrument. The methodology behind this technological approach will be presented along with measurements to demonstrate its effectiveness.

  15. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  16. Digital integrated control of a Mach 2.5 mixed-compression supersonic inlet and an augmented mixed-flow turbofan engine

    NASA Technical Reports Server (NTRS)

    Batterton, P. G.; Arpasi, D. J.; Baumbick, R. J.

    1974-01-01

    A digitally implemented integrated inlet-engine control system was designed and tested on a mixed-compression, axisymmetric, Mach 2.5, supersonic inlet with 45 percent internal supersonic area contraction and a TF30-P-3 augmented turbofan engine. The control matched engine airflow to available inlet airflow. By monitoring inlet terminal shock position and overboard bypass door command, the control adjusted engine speed so that in steady state, the shock would be at the desired location and the overboard bypass doors would be closed. During engine-induced transients, such as augmentor light-off and cutoff, the inlet operating point was momentarily changed to a more supercritical point to minimize unstarts. The digital control also provided automatic inlet restart. A variable inlet throat bleed control, based on throat Mach number, provided additional inlet stability margin.

  17. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  18. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and an uncertainty in value of one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation of indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
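
    A minimal sketch of the core embedding idea described above: each quantized index carries one unit of uncertainty, so nudging an index to an adjacent value in order to set its parity encodes one auxiliary bit without leaving the coder's tolerance. This is an illustrative reading of the patent, not its claimed algorithm; function names are hypothetical.

        import numpy as np

        def embed_bits(indices, bits):
            # embed one auxiliary bit per quantized index: force the index's
            # parity to equal the bit by moving at most one unit (the
            # uncertainty already present in lossy quantization)
            out = indices.copy()
            for k, bit in enumerate(bits):
                if (out[k] & 1) != bit:
                    out[k] += -1 if out[k] > 0 else 1   # step toward zero
            return out

        def extract_bits(indices, n):
            # recover the auxiliary data from the index parities
            return [int(v & 1) for v in indices[:n]]

        # toy example: quantized DCT-like indices from a lossy coder
        idx = np.array([12, -7, 3, 0, 5, -2, 9, 1])
        payload = [1, 0, 1, 1, 0, 0, 1, 0]
        stego = embed_bits(idx, payload)
        assert extract_bits(stego, len(payload)) == payload
        print(idx, "->", stego)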

  19. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted; the survey examines compression methods employing digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems, and the results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

  20. MO-E-217A-01: Contrast-Enhanced Spectral Mammography - Physical Aspects and QA.

    PubMed

    Yaffe, M; Hill, M

    2012-06-01

    To describe the current state of dual-energy contrast-enhanced digital mammography, to discuss those aspects of its operation that require evaluation or monitoring, and to propose elements of a program for quality assurance of such systems. The principles of dual-energy contrast imaging will be discussed, and tools and techniques for assessment of performance will be described. Many of the elements affecting image quality and dose performance in digital mammography (e.g., noise, system linearity, consistency of x-ray output and detector performance, artifacts) remain important. In addition, the ability to register images can influence the resultant image quality. The maintenance of breast compression thickness during the imaging procedure and calibration of the system to allow quantification of iodine in the breast represent new challenges to quality assurance. CESM provides a means of acquiring new information regarding tumor angiogenesis and may reveal some cancers that are not detectable on digital mammography; it may also better demonstrate the extent of disease. The medical physicist must understand the dependence of image quality on physical factors. Implementation of a relevant QA program will be required if the promise of this new modality is to be delivered. © 2012 American Association of Physicists in Medicine.

  1. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  2. Identifying sports videos using replay, text, and camera motion features

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in the video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
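
    The classifier stage described above is straightforward to prototype; the sketch below (assuming scikit-learn, with entirely synthetic stand-ins for the replay, scene-text, and motion features extracted from MPEG) trains a decision tree to separate sports clips from other content:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # hypothetical per-clip features mimicking those named in the paper:
        # [replay_count, scene_text_fraction, mean_motion, motion_var, bitrate_var]
        rng = np.random.default_rng(1)
        sports = np.column_stack([rng.poisson(4, 200), rng.uniform(.2, .6, 200),
                                  rng.normal(8, 2, 200), rng.normal(6, 2, 200),
                                  rng.normal(5, 1, 200)])
        other  = np.column_stack([rng.poisson(0.3, 200), rng.uniform(0, .2, 200),
                                  rng.normal(3, 2, 200), rng.normal(2, 1, 200),
                                  rng.normal(2, 1, 200)])
        X = np.vstack([sports, other])
        y = np.array([1] * 200 + [0] * 200)          # 1 = sports clip

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
        print("accuracy: %.2f" % clf.score(X_te, y_te))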

  3. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  4. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range, and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can readily achieve high single-pulse energy by using a long pulse without decreasing the range resolution, and its coherent detection mechanism eliminates the influence of stray light, which helps to improve the detection sensitivity and detection range.
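
    The pulse-compression principle at the heart of this system can be illustrated with a short numpy sketch: a long linear-FM (chirp) pulse carries high energy, yet correlation with its matched filter collapses the echo to a narrow peak, preserving range resolution. Parameters below are illustrative, not those of the proposed lidar:

        import numpy as np

        fs, T, B = 1e6, 1e-3, 100e3        # sample rate, pulse length, sweep bandwidth
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM pulse, unit amplitude

        # echo from a target delayed by 400 samples, buried in noise
        rng = np.random.default_rng(2)
        rx = np.zeros(4096, complex)
        rx[400:400 + chirp.size] += 0.05 * chirp
        rx += 0.02 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

        # matched filter = correlation with the conjugated, time-reversed pulse
        mf = np.convolve(rx, np.conj(chirp[::-1]), mode="valid")
        print("detected delay:", np.argmax(np.abs(mf)), "samples (true: 400)")

    The peak location gives the range; in a coherent system the phase history across pulses additionally yields the Doppler (velocity) estimate.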

  5. TU-CD-207-09: Analysis of the 3-D Shape of Patients’ Breast for Breast Imaging and Surgery Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agasthya, G; Sechopoulos, I

    2015-06-15

    Purpose: Develop a method to accurately capture the 3-D shape of patients’ external breast surface before and during breast compression for mammography/tomosynthesis. Methods: During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3-D breast surface imaging during breast compression and imaging for the cranio-caudal (CC) view on a digital mammography/breast tomosynthesis system. Digital projectors and cameras mounted on tripods were used to acquire 3-D surface images of the breast in three conditions: (a) positioned on the support paddle before compression, (b) during compression by the compression paddle, and (c) in the anterior-posterior view with the breast in its natural, unsupported position. The breast was compressed to standard full compression with the compression paddle, and a tomosynthesis image was acquired simultaneously with the 3-D surface. The 3-D surface curvature and deformation with respect to the uncompressed surface were analyzed using contours. The 3-D surfaces were voxelized to capture breast shape in a format that can be manipulated for further analysis. Results: A protocol was developed to accurately capture the 3-D shape of patients’ breasts before and during compression for mammography. Using a pair of 3-D scanners, the 50 patient breasts were scanned in three conditions, resulting in accurate representations of the breast surfaces. The surfaces were post-processed, analyzed using contours, and voxelized with 1 mm³ voxels, converting the breast shape into a format that can be easily modified as required. Conclusion: Accurate characterization of the breast curvature and shape for the generation of 3-D models is possible. These models can be used for applications such as improving breast dosimetry, accurate scatter estimation, conducting virtual clinical trials, and validating compression algorithms. Ioannis Sechopoulos is a consultant for Fuji Medical Systems USA.

  6. Intention to Commit Online Music Piracy and Its Antecedents: An Empirical Investigation

    ERIC Educational Resources Information Center

    Morton, Neil A.; Koufteros, Xenophon

    2008-01-01

    Online piracy of copyrighted digital music has become rampant as Internet bandwidth and digital compression technologies have advanced. The music industry has suffered significant financial losses and has responded with lawsuits, although online music piracy remains prevalent. This article developed a research model to study the determinants of…

  7. A 1-channel 3-band wide dynamic range compression chip for vibration transducer of implantable hearing aids.

    PubMed

    Kim, Dongwook; Seong, Kiwoong; Kim, Myoungnam; Cho, Jinho; Lee, Jyunghyun

    2014-01-01

    In this paper, a digital audio processing chip using a wide dynamic range compression (WDRC) algorithm is designed and implemented for an implantable hearing aid system. The designed chip operates at a single voltage of 3.3 V and drives 16-bit parallel input and output at a 32 kHz sample rate. The chip has 1-channel 3-band WDRC composed of a FIR filter bank, a level detector, and a compression part. To verify the performance of the designed chip, we measured the frequency separation of the bands and the compression gain control that reflects the hearing threshold level.
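
    A behavioral sketch of the 1-channel 3-band WDRC path is given below (Python/scipy, with illustrative crossover frequencies, knee, and ratio; the chip's actual fixed-point design is not reproduced): an FIR filter bank splits the signal, a one-pole level detector tracks each band, and a static compression curve sets the band gain:

        import numpy as np
        from scipy.signal import firwin, lfilter

        fs = 32000                                          # chip's sample rate
        bands = [firwin(101, 500, fs=fs),                           # low band
                 firwin(101, [500, 2000], fs=fs, pass_zero=False),  # mid band
                 firwin(101, 2000, fs=fs, pass_zero=False)]         # high band

        def wdrc_gain(level_db, knee_db=-30.0, ratio=3.0):
            # static WDRC curve: unity gain below the knee, 3:1 compression above
            # (knee in dBFS; calibration to dB SPL is omitted in this sketch)
            over = np.maximum(level_db - knee_db, 0.0)
            return 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)

        def process(x):
            y = np.zeros_like(x)
            for h in bands:
                b = lfilter(h, 1.0, x)                            # band-split
                env = lfilter([0.01], [1, -0.99], np.abs(b))      # level detector
                level_db = 20 * np.log10(np.maximum(env, 1e-6))
                y += b * wdrc_gain(level_db)                      # per-band gain
            return y

        t = np.arange(fs) / fs
        x = 0.5 * np.sin(2 * np.pi * 1000 * t)                    # 1 kHz test tone
        print("peak in / out:", x.max(), float(np.abs(process(x)).max()))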

  8. New image compression scheme for digital angiocardiography application

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.

    1993-06-01

    The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two different stages: the first stage removes the redundancy within a single frame, while the second stage removes the redundancy among sequential frames. Within these stages, the data compression ratio can be easily adjusted according to the needs of angiocardiography applications, where still or moving (slow- or full-motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing and teleworking processes, where Unified Image Viewing facilities are required.

  9. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  10. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  11. Correlating locations in ipsilateral breast tomosynthesis views using an analytical hemispherical compression model

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Snoeren, Peter; Samulski, Maurice; Leifland, Karin; Wallis, Matthew G.; Karssemeijer, Nico

    2011-08-01

    To improve cancer detection in mammography, breast examinations usually consist of two views per breast. In order to combine information from both views, corresponding regions in the views need to be matched. In 3D digital breast tomosynthesis (DBT), this may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. For multiview computer-aided detection (CAD) systems, matching corresponding regions is an essential step that needs to be automated. In this study, we developed an automatic method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation. First we match a model of a compressed breast to the tomosynthesis view containing a point of interest. Then we estimate the location of the corresponding point in the ipsilateral view by assuming that this model was decompressed, rotated and compressed again. In this study, we use a relatively simple, elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. We investigate three different methods to match the compression model to the data by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation, we annotated 208 landmarks in both views of a total of 146 imaged breasts of 109 different patients and applied our method to each location. The best results are obtained by using the centre of gravity of the breast to define the central axis of the model, around which the breast is assumed to rotate between views. Results show a median 3D distance between the actual location and the estimated location of 14.6 mm, a good starting point for a registration method or a feature-based local search method to link suspicious regions in a multiview CAD system. Approximately half of the estimated locations are at most one slice away from the actual location, which makes the method useful as a mammographic workstation tool for radiologists to interactively find corresponding locations in ipsilateral tomosynthesis views.

  12. Research on key technologies for data-interoperability-based metadata, data compression and encryption, and their application

    NASA Astrophysics Data System (ADS)

    Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun

    2006-10-01

    With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives of spatial information infrastructure construction; spatial metadata management, data transmission security, and data compression are the key technologies needed to realize spatial data sharing. This paper discusses the key technologies for metadata based on data interoperability, investigates data compression algorithms such as adaptive Huffman, LZ77, and LZ78, and studies the application of digital signature techniques to spatial data, which can not only identify the transmitter of the data but also detect in a timely manner whether the spatial data have been tampered with during network transmission. Based on an analysis of symmetric encryption algorithms, including 3DES and AES, and the asymmetric encryption algorithm RSA, combined with a hash algorithm, it presents an improved hybrid encryption method for spatial data. Digital signature technology and digital watermarking technology are also discussed. A new solution for spatial data network distribution is then put forward, adopting a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and safe, and we demonstrate the feasibility and validity of the proposed solution.
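
    The hybrid pattern the paper argues for (compress the bulk spatial data, encrypt it with a symmetric session key, wrap that key asymmetrically, and sign to detect tampering) can be sketched with the Python cryptography package and zlib, an LZ77-family codec, standing in for the paper's specific algorithm choices:

        import zlib
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        # for brevity one RSA key pair plays both the sender's signing role and
        # the receiver's decryption role; in practice these are separate pairs
        priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        pub = priv.public_key()

        spatial_data = b"<gml:Polygon>...</gml:Polygon>" * 100

        # 1. compress (zlib = LZ77 + Huffman, a stand-in for the paper's codecs)
        packed = zlib.compress(spatial_data)

        # 2. symmetric encryption of the bulk data with a fresh session key
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(packed)

        # 3. asymmetric wrap of the session key (RSA-OAEP)
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped_key = pub.encrypt(session_key, oaep)

        # 4. signature over the ciphertext identifies the sender and
        #    exposes tampering during network transmission
        pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                          salt_length=padding.PSS.MAX_LENGTH)
        signature = priv.sign(ciphertext, pss, hashes.SHA256())

        # receiver side: verify, unwrap, decrypt, decompress
        pub.verify(signature, ciphertext, pss, hashes.SHA256())
        key = priv.decrypt(wrapped_key, oaep)
        assert zlib.decompress(Fernet(key).decrypt(ciphertext)) == spatial_data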

  13. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    PubMed Central

    Cai, Zhipeng; Zou, Fumin; Zhang, Xiangyu

    2018-01-01

    Energy efficiency remains the main obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node consisting of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945

  14. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    PubMed

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency remains the main obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node consisting of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
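
    A toy end-to-end sketch of the digital CS chain follows: a sparse binary sensing matrix (cheap to apply on a microcontroller) compresses an ECG-like window, and the receiver recovers it over a DCT basis. Orthogonal matching pursuit is used here as a lighter stand-in for the BSBL solver reported in the paper; all sizes are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, d = 256, 96, 4                  # window length, measurements, 1s/column

        # sparse binary sensing matrix: d ones in each column
        Phi = np.zeros((m, n))
        for j in range(n):
            Phi[rng.choice(m, d, replace=False), j] = 1.0

        # synthetic ECG-like window, sparse in the DCT domain
        Psi = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n)
        coeffs = np.zeros(n); coeffs[[0, 3, 17, 40]] = [5, 3, 2, 1]
        x = Psi @ coeffs

        y = Phi @ x                           # compressed measurements sent over BLE

        # OMP recovery (stand-in for BSBL): greedily pick DCT atoms
        A = Phi @ Psi
        support, r = [], y.copy()
        for _ in range(8):
            support.append(int(np.argmax(np.abs(A.T @ r))))
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ sol
        x_hat = Psi[:, support] @ sol
        prd = 100 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)
        print("PRD = %.2f%%" % prd)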

  15. Temporary morphological changes in plus disease induced during contact digital imaging

    PubMed Central

    Zepeda-Romero, L C; Martinez-Perez, M E; Ruiz-Velasco, S; Ramirez-Ortiz, M A; Gutierrez-Padilla, J A

    2011-01-01

    Objective To compare and quantify the retinal vascular changes induced by unintentional pressure contact with a digital handheld camera during retinopathy of prematurity (ROP) imaging, by means of a computer-based image analysis system, Retinal Image multiScale Analysis. Methods A set of 10 wide-angle retinal pairs of photographs per patient, who underwent routine ROP examinations, was measured. Vascular trees were matched between 'compression artifact' (absence of the vascular column at the optic nerve) and 'no compression artifact' conditions. Parameters were analyzed using a two-level linear model for each individual parameter, for arterioles and venules separately: integrated curvature (IC), diameter (d), and tortuosity index (TI). Results Images affected by compression artifact showed significant changes in vascular d (P<0.01) in both arteries and veins, as well as in arterial IC (P<0.05). Vascular TI remained unchanged in both groups. Conclusions Inadvertent corneal pressure from the RetCam lens can compress retinal vessels, decreasing intra-arterial diameter or even collapsing them. Careful attention to technique is essential to avoid absence of the arterial blood column at the optic nerve head, which is indicative of increased pressure during imaging. PMID:21760627

  16. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment experimental datasets validate each other. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our post-processing method, image quality is better when the camera MTF lies within a small range.

  17. Lower limb vascular disease in diabetic patients: a study with calf compression contrast-enhanced magnetic resonance angiography at 3.0 Tesla.

    PubMed

    Li, Jie; Zhao, Jun-Gong; Li, Ming-Hua

    2011-06-01

    To retrospectively analyze the significance of 3.0-T contrast-enhanced (CE) magnetic resonance angiography (MRA) with calf compression in the lower limbs of diabetic patients with peripheral vascular disease. Sixty-one type 2 diabetes patients underwent both MRA and digital subtraction angiography (DSA) within 1 week. The patients were divided into two groups: one with (pressure) and one without (conventional) calf compression during MRA. Two radiologists evaluated the quality of MRA images and compared the two groups. Cohen's kappa statistic was used to determine the concordance between MRA and DSA. Image quality in the calf and foot was better in the group with calf pressure than the conventional group without applied pressure (P = .001 [calf], 0.008 [foot]). Significantly more runoff vessels in the calf were detected with MRA than with DSA (P = .0043 [conventional], 0.0031 [pressure]). The kappa values were 0.928 in the conventional group and 0.979 in the pressure group, but in the conventional group, the diagnostic accuracy of CE-MRA was lower than that of DSA (P = .002). Diagnostic accuracy in the pressure group was significantly higher than that in the conventional group (P = .009). The overall sensitivity and specificity for >50% stenosis or occlusion was 93.8% and 98.5%, respectively, in the conventional group and 98.7% and 99.6%, respectively, in the pressure group. With calf compression, venous overlap (P = .0396, .0425) and deep vein overlap (P = .022, .022) were significantly reduced in the leg and foot. Calf compression with 3.0-T CE-MRA was convenient and practical and could improve image quality and diagnostic accuracy in diabetic patients with peripheral vascular disease by reducing venous overlap. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  18. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
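
    The detection loop can be sketched as follows (Python/PIL): recompress the suspect image at a sweep of quality factors and, for each, measure how many 4x4 blocks change at all; a dip in the resulting curve suggests the original quality factor, since JPEG recompression at the original QF is nearly idempotent. A plain block-difference statistic is used here as a crude stand-in for the paper's tetrolet covering indexes:

        import io
        import numpy as np
        from PIL import Image

        def recompress(img, qf):
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=qf)
            buf.seek(0)
            return np.asarray(Image.open(buf), dtype=np.int16)

        def changed_block_fraction(a, b, size=4):
            # fraction of size x size blocks whose pixels differ at all
            # (a crude proxy for comparing tetrolet covering indexes)
            h, w = (a.shape[0] // size) * size, (a.shape[1] // size) * size
            diff = (a[:h, :w] != b[:h, :w]).reshape(h // size, size,
                                                    w // size, size)
            return diff.any(axis=(1, 3)).mean()

        # suspect image: saved as PNG/BMP but previously JPEG-compressed at QF 95
        rng = np.random.default_rng(4)
        orig = Image.fromarray(rng.integers(0, 256, (128, 128),
                                            dtype=np.uint8), "L")
        suspect = Image.fromarray(recompress(orig, 95).astype(np.uint8), "L")

        base = np.asarray(suspect, dtype=np.int16)
        qfs = list(range(80, 101))
        p_curve = [changed_block_fraction(base, recompress(suspect, qf))
                   for qf in qfs]
        print("local minimum at QF", qfs[int(np.argmin(p_curve))])  # dip near 95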

  19. Using Time-Compression to Make Multimedia Learning More Efficient: Current Research and Practice

    ERIC Educational Resources Information Center

    Pastore, Raymond; Ritzhaupt, Albert D.

    2015-01-01

    It is now common practice for instructional designers to incorporate digitally recorded lectures for Podcasts (e.g., iTunes University), voice-over presentations (e.g., PowerPoint), animated screen captures with narration (e.g., Camtasia), and other various learning objects with digital audio in the instructional method. As a result, learners are…

  20. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  1. Is breast compression associated with breast cancer detection and other early performance measures in a population-based breast cancer screening program?

    PubMed

    Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig

    2017-06-01

    We aimed to investigate early performance measures in a population-based breast cancer screening program stratified by compression force and pressure at the time of the mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for the analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. χ² tests, t tests, and tests for trend were used to examine differences in early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased, with increasing compression force (p for trend <0.05 for all). The recall rate increased, while the rate of screen-detected cancer, PPV, sensitivity, and specificity decreased, with increasing compression pressure (p for trend <0.05 for all). High compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (OR 1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.

  2. Communication acoustics in Bell Labs

    NASA Astrophysics Data System (ADS)

    Flanagan, J. L.

    2004-05-01

    Communication acoustics has been a central theme in Bell Labs research since its inception. Telecommunication serves human information exchange, and humans favor spoken language as a principal mode. The atmospheric medium typically provides the link between articulation and hearing. Creation, control, and detection of sound, and the human's facility for generation and perception, are basic ingredients of telecommunication. Electronics technology of the 1920s ushered in great advances in communication at a distance, a strong economic impetus being to overcome the bandwidth limitations of wireline and cable. Early research established criteria for speech transmission with high quality and intelligibility. These insights supported exploration of means for efficient transmission: obtaining the greatest amount of speech information over a given bandwidth. Transoceanic communication was initiated by undersea cables for telegraphy, but these long cables exhibited very limited bandwidth (on the order of a few hundred Hz). The challenge of sending voice across the oceans spawned perhaps the best-known speech compression technique in history, the Vocoder, which parametrized the signal for transmission in about 300 Hz of bandwidth, one-tenth that required for the typical waveform channel. Quality and intelligibility were grave issues (and they still are). At the same time, parametric representation offered possibilities for encryption and privacy inside a traditional voice bandwidth. Confidential conversations between Roosevelt and Churchill during World War II were carried over high-frequency radio by an encrypted vocoder system known as Sigsaly. Major engineering advances in the late 1940s and early 1950s moved telecommunications into a new regime: digital technology. These key advances were at least three: (i) new understanding of time-discrete (sampled) representation of signals, (ii) digital computation (especially binary based), and (iii) evolving capabilities in microelectronics that ultimately provided circuits of enormous complexity with low cost and power. Digital transmission (as exemplified in pulse code modulation, PCM, and its many derivatives) became a telecommunication mainstay, along with switches to control and route information in digital form. Concomitantly, storage means for digital information advanced, providing another impetus for speech compression. More and more, humans saw the need to exchange speech information with machines, as well as with other humans. Human-machine speech communication came to full stride in the early 1990s, and has now expanded to multimodal domains that begin to support enhanced naturalness, using contemporaneous sight, sound, and touch signaling. Packet transmission is supplanting circuit switching, and voice and video are commonly carried by Internet protocol.

  3. Non-contact evaluation of milk-based products using air-coupled ultrasound

    NASA Astrophysics Data System (ADS)

    Meyer, S.; Hindle, S. A.; Sandoz, J.-P.; Gan, T. H.; Hutchins, D. A.

    2006-07-01

    An air-coupled ultrasonic technique has been developed and used to detect physicochemical changes in liquid beverages within a glass container. It makes use of two wide-bandwidth capacitive transducers combined with pulse-compression techniques. The use of a glass container to house samples enabled visual inspection, helping to verify the results of some of the ultrasonic measurements. The non-contact pulse-compression system was used to evaluate agglomeration processes in milk-based products. It is shown that the amplitude of the signal varied with time after the samples had been treated with lactic acid, thus promoting sample destabilization. Non-contact imaging was also performed to follow the destabilization of samples by scanning in various directions across the container, and the ultrasonic images obtained were compared to those from a digital camera. Coagulation of skim milk with glucono-delta-lactone, with the milk poured into this container, could be monitored to a pH precision of 0.15. This rapid, non-contact, and non-destructive technique has shown itself to be a feasible method for investigating the quality of milk-based beverages, and possibly other food products.

  4. Intelligent vehicle control: Opportunities for terrestrial-space system integration

    NASA Technical Reports Server (NTRS)

    Shoemaker, Charles

    1994-01-01

    For 11 years the Department of Defense has cooperated with a diverse array of other Federal agencies, including the National Institute of Standards and Technology, the Jet Propulsion Laboratory, and the Department of Energy, to develop robotics technology for unmanned ground systems. These activities have addressed control system architectures supporting the sharing of tasks between the system operator and various automated subsystems, man-machine interfaces to intelligent vehicle systems, video compression supporting vehicle driving in low-data-rate digital communication environments, multiple simultaneous vehicle control by a single operator, path planning and retrace, and automated obstacle detection and avoidance subsystems. Performance metrics and test facilities for robotic vehicles were developed, permitting objective performance assessment of a variety of operator-automated vehicle control regimes. Progress in these areas is described in the context of robotic vehicle testbeds specifically developed for automated vehicle research. These initiatives, particularly the data compression, task sharing, and automated mobility topics, also have relevance in the space environment. The intersection of technology development interests between these two communities is discussed in this paper.

  5. [Improvement of Digital Capsule Endoscopy System and Image Interpolation].

    PubMed

    Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai

    2016-01-01

    Traditional capsule endoscopes collect and transmit analog images, with weak anti-interference ability, a low frame rate, and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images at a frame rate of up to 30 frames/s and a pixel resolution of 400 x 400. The image is compressed inside the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm is proposed, based on the relationship between the image planes, to obtain higher-quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation.

  6. Digital Video (DV): A Primer for Developing an Enterprise Video Strategy

    NASA Astrophysics Data System (ADS)

    Talovich, Thomas L.

    2002-09-01

    The purpose of this thesis is to provide an overview of digital video production and delivery. The thesis presents independent research demonstrating the educational value of incorporating video and multimedia content in training and education programs. The thesis explains the fundamental concepts associated with the process of planning, preparing, and publishing video content and assists in the development of follow-on strategies for incorporation of video content into distance training and education programs. The thesis provides an overview of the following technologies: Digital Video, Digital Video Editors, Video Compression, Streaming Video, and Optical Storage Media.

  7. Influence of chest compression artefact on capnogram-based ventilation detection during out-of-hospital cardiopulmonary resuscitation.

    PubMed

    Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud

    2018-03-01

    Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression-induced oscillations has previously been reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, the presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. The presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Analog-to-Digital Conversion to Accommodate the Dynamics of Live Music in Hearing Instruments

    PubMed Central

    Bahlmann, Frauke; Fulton, Bernadette

    2012-01-01

    Hearing instrument design focuses on the amplification of speech to reduce the negative effects of hearing loss. Many amateur and professional musicians, along with music enthusiasts, also require their hearing instruments to perform well when listening to the frequent, high-amplitude peaks of live music. One limitation of most current digital hearing instruments with 16-bit analog-to-digital (A/D) converters is that the compressor before the A/D conversion is limited to 95 dB (SPL) or less at the input. This is more than adequate for the dynamic range of speech; however, it does not accommodate the amplitude peaks present in live music. The hearing instrument's input compression system can be adjusted to accommodate the amplitudes present in music that would otherwise be compressed before the A/D converter. The methodology behind this technological approach is presented along with measurements that demonstrate its effectiveness. PMID:23258618

  9. Hardware Implementation of 32-Bit High-Speed Direct Digital Frequency Synthesizer

    PubMed Central

    Ibrahim, Salah Hasan; Ali, Sawal Hamid Md.; Islam, Md. Shabiul

    2014-01-01

    The design and implementation of a high-speed direct digital frequency synthesizer are presented. A modified Brent-Kung parallel adder is combined with a pipelining technique to improve the speed of the system. A gated-clock technique is proposed to reduce the number of registers in the phase accumulator design. The quarter-wave symmetry technique is used to store only one quarter of the sine wave. The ROM lookup table (LUT) is partitioned into three 4-bit sub-ROMs based on an angular decomposition technique and a trigonometric identity. Exploiting sine-cosine symmetry together with XOR logic gates, one sub-ROM block can be removed from the design. These techniques compress the ROM to 368 bits, a ROM compression ratio of 534.2:1, using only two adders, two multipliers, and XOR gates, while achieving a high frequency resolution of 0.029 Hz. These techniques make the direct digital frequency synthesizer an attractive candidate for wireless communication applications. PMID:24991635
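
    A behavioral model of the phase-accumulator/quarter-wave-LUT structure is sketched below (Python; register widths, LUT size, and clock are illustrative, not the paper's): only the first quadrant of the sine is stored, the two most significant phase bits select quadrant mirroring and negation, and the frequency resolution is fclk/2^32:

        import numpy as np

        ACC_BITS, LUT_ADDR, AMP_BITS = 32, 8, 10
        QUARTER = [round((2**(AMP_BITS - 1) - 1) *
                         np.sin(np.pi / 2 * i / 2**LUT_ADDR))
                   for i in range(2**LUT_ADDR)]       # first quadrant only

        def ddfs(fout, fclk, nsamples):
            fcw = round(fout * 2**ACC_BITS / fclk)    # frequency control word
            acc = 0
            for _ in range(nsamples):
                acc = (acc + fcw) & (2**ACC_BITS - 1)       # phase accumulator
                phase = acc >> (ACC_BITS - LUT_ADDR - 2)    # top address bits
                quadrant, addr = phase >> LUT_ADDR, phase & (2**LUT_ADDR - 1)
                if quadrant & 1:                            # 2nd/4th: mirror address
                    addr = 2**LUT_ADDR - 1 - addr
                s = QUARTER[addr]
                yield -s if quadrant & 2 else s             # 3rd/4th: negate

        fclk = 100e6
        print("resolution: %.3f Hz" % (fclk / 2**ACC_BITS))  # ~0.023 Hz at 100 MHz
        print(list(ddfs(1e6, fclk, 16)))                     # 1 MHz output samples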

  10. Discrete Walsh Hadamard transform based visible watermarking technique for digital color images

    NASA Astrophysics Data System (ADS)

    Santhi, V.; Thangavelu, Arunkumar

    2011-10-01

    As the Internet grows enormously, the illegal manipulation of digital multimedia data has become very easy with the advancement of technology tools. Digital watermarking systems are used to protect such multimedia data from unauthorized access. In this paper, a new Discrete Walsh-Hadamard Transform based visible watermarking system is proposed. As the watermark is embedded in the transform domain, the system is robust to many signal processing attacks. Moreover, in the proposed method the watermark is embedded in a tiled manner across the full range of frequencies to make it robust to compression and cropping attacks. The robustness of the algorithm is tested against noise addition, cropping, compression, histogram equalization, and resizing attacks. The experimental results show that the algorithm is robust to common signal processing attacks, and the observed peak signal-to-noise ratio (PSNR) of the watermarked image varies from 20 to 30 dB depending on the size of the watermark.
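
    A blockwise sketch of the embedding idea follows (numpy/scipy; the 8x8 block size, tiling, and blending weight are illustrative, not the paper's exact scheme): each block is taken to the Walsh-Hadamard domain, the watermark's coefficients are blended in across all frequency bands, and the block is inverse-transformed:

        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8).astype(float)          # 8x8 Walsh-Hadamard matrix, H @ H.T = 8I

        def embed_visible(host, mark, alpha=0.15):
            # blend the watermark into every 8x8 WHT block (tiled embedding)
            out = host.astype(float).copy()
            W = H @ mark.astype(float) @ H.T / 8          # watermark in WHT domain
            for i in range(0, host.shape[0] - 7, 8):
                for j in range(0, host.shape[1] - 7, 8):
                    B = H @ out[i:i+8, j:j+8] @ H.T / 8   # forward WHT of the block
                    B = (1 - alpha) * B + alpha * W       # blend all frequency bands
                    out[i:i+8, j:j+8] = H @ B @ H.T / 8   # inverse WHT
            return np.clip(out, 0, 255).astype(np.uint8)

        rng = np.random.default_rng(5)
        host = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        mark = (rng.random((8, 8)) > 0.5) * 255           # hypothetical 8x8 logo tile
        marked = embed_visible(host, mark)
        mse = np.mean((marked.astype(float) - host) ** 2)
        print("PSNR: %.1f dB" % (10 * np.log10(255**2 / mse)))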

  11. Optical encryption of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua

    2018-03-01

    We present an optical encryption method for multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of multiple object beams with a single reference beam is used to simultaneously encrypt multiple objects into one ciphertext. During decryption, each three-dimensional object can be decrypted independently, without having to decrypt the other objects. Because single-pixel digital holography based on compressive sensing theory is introduced, the volume of encrypted data is effectively reduced. In addition, recording less encrypted data greatly reduces the network transmission bandwidth. Moreover, the compressive sensing essentially serves as a secret key that renders an intruder attack invalid, meaning that the system is more secure than conventional encryption methods. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).

  12. A randomized control hands-on defibrillation study-Barrier use evaluation.

    PubMed

    Wampler, David; Kharod, Chetan; Bolleter, Scotty; Burkett, Alison; Gabehart, Caitlin; Manifold, Craig

    2016-06-01

    Chest compressions and defibrillation are the only therapies proven to increase survival in cardiac arrest. Historically, rescuers have had to remove their hands to deliver a shock, thereby interrupting chest compressions. This hands-off time results in a zero-blood-flow state, and pauses have been associated with poorer neurological recovery. This was a blinded randomized controlled cadaver study evaluating the detection of defibrillation during manual chest compressions. An active defibrillator was connected to the cadaver in the sternum-apex configuration; the sham defibrillator was not connected to the cadaver. Subjects performed chest compressions using 6 barrier types: bare hand, single- and double-layer nitrile gloves, firefighter gloves, a neoprene pad, and a manual chest compression/decompression device. Randomized defibrillations (10 per barrier type) were delivered at 30 joules (J) for bare hand and 360 J for all other barriers. After each shock, the subject indicated the degree of sensation on a VAS scale. Ten subjects participated. All subjects detected 30 J shocks during bare-hand compressions, with only 1 undetected real shock. All barriers combined totaled 500 shocks delivered. Five (1%) active shocks were detected: 1 (0.2%) with a single layer of nitrile, 3 (0.6%) with double-layer nitrile, and 1 (0.2%) with the neoprene barrier. One sham shock was reported with the single-layer nitrile glove. No shocks were detected with fire gloves or the compression/decompression device. All detected shocks were barely perceptible (0.25 (±0.05) cm on a 10-cm VAS scale). Nitrile gloves and the neoprene pad prevented responders' detection of 99% of defibrillations delivered to a cadaver; fire gloves and the compression/decompression device prevented detection entirely. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    NASA Astrophysics Data System (ADS)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the large number of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the volume of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
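
    The spectral matched filter used as the detection algorithm can be sketched in a few lines of numpy: with background mean mu and covariance Sigma estimated from the cube, the filter w is proportional to Sigma^-1 (t - mu) for target signature t, and per-pixel scores are thresholded. The data below are synthetic:

        import numpy as np

        rng = np.random.default_rng(6)
        bands, pixels = 60, 5000

        # synthetic background spectra and a known target signature
        mu = rng.random(bands)
        X = mu + 0.05 * rng.standard_normal((pixels, bands))     # background cube
        target = mu + 0.3 * np.sin(np.linspace(0, 3, bands))     # target signature
        X[:25] += 0.4 * (target - mu)                            # implant 25 targets

        # spectral matched filter: w proportional to Sigma^-1 (t - mu_hat)
        mu_hat = X.mean(axis=0)
        Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)   # regularized
        w = np.linalg.solve(Sigma, target - mu_hat)
        scores = (X - mu_hat) @ w / ((target - mu_hat) @ w)      # normalized score

        detected = np.argsort(scores)[-25:]
        print("hits among top 25 scores:", int(np.sum(detected < 25)))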

  14. A survey of the state-of-the-art and focused research in range systems, task 1

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1986-01-01

    This final report presents the latest research activity in voice compression. We have designed a non-real-time simulation system built around the IBM-PC, which is used as a speech workstation for data acquisition and analysis of voice samples. A real-time implementation is also proposed. This real-time Voice Compression Board (VCB) is built around the Texas Instruments TMS-3220. The voice compression algorithm investigated here was described in an earlier report titled Low Cost Voice Compression for Mobile Digital Radios, by the author; we assume the reader is familiar with the voice compression algorithm discussed in that report. The VCB compresses speech waveforms at data rates ranging from 4.8 kbps to 16 kbps. The board interfaces to the IBM-PC 8-bit bus and plugs into a single expansion slot on the motherboard.

  15. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
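
    To make the optimisation idea concrete, the sketch below solves a simplified version of the stated problem exactly: choose k samples (endpoints included) so that piecewise-linear interpolation through them minimizes total squared error, via an O(n^2 k) dynamic program. This illustrates the rigorous-selection principle rather than reproducing the paper's cubic algorithm or its distortion figures:

        import numpy as np

        def interp_cost(x):
            # cost[i, j]: squared error of linearly interpolating x between i and j
            n = len(x)
            cost = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 2, n):
                    t = np.arange(i + 1, j)
                    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
                    cost[i, j] = np.sum((x[t] - line) ** 2)
            return cost

        def best_samples(x, k):
            # select k samples (incl. both endpoints) minimizing total error
            n, cost = len(x), interp_cost(x)
            dp = np.full((n, k), np.inf)   # dp[j, m]: best error, m+1 pts, last = j
            dp[0, 0] = 0.0
            back = np.zeros((n, k), dtype=int)
            for m in range(1, k):
                for j in range(m, n):
                    errs = dp[:j, m - 1] + cost[:j, j]
                    back[j, m] = int(np.argmin(errs))
                    dp[j, m] = errs[back[j, m]]
            picks, j = [n - 1], n - 1
            for m in range(k - 1, 0, -1):        # backtrack the chosen indices
                j = back[j, m]
                picks.append(j)
            return sorted(picks), dp[n - 1, k - 1]

        # toy ECG-like trace: a narrow peak on a slow oscillation
        t = np.linspace(0, 1, 120)
        ecg = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.1 * np.sin(2 * np.pi * 5 * t)
        picks, err = best_samples(ecg, 20)
        print("kept %d of %d samples, squared error %.4f" % (len(picks), len(ecg), err))

    Unlike a heuristic that keeps samples near turning points, the dynamic program is guaranteed to return the minimum-error subset of the given size.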

  16. A multicenter observer performance study of 3D JPEG2000 compression of thin-slice CT.

    PubMed

    Erickson, Bradley J; Krupinski, Elizabeth; Andriole, Katherine P

    2010-10-01

    The goal of this study was to determine the compression level at which 3D JPEG2000 compression of thin-slice CTs of the chest and abdomen-pelvis becomes visually perceptible. A secondary goal was to determine if residents in training and non-physicians differ substantially from experienced radiologists in their perception of compression-related changes. This study used multidetector computed tomography 3D datasets with 0.625-1 mm slice thickness of standard chest, abdomen, or pelvis, clipped to 12 bits. The Kakadu v5.2 JPEG2000 compression algorithm was used to compress and decompress the 80 examinations, creating four sets of images: lossless, 1.5 bpp (8:1), 1 bpp (12:1), and 0.75 bpp (16:1). Two randomly selected slices from each examination were shown to observers using a flicker-mode paradigm in which observers rapidly toggled between two images, the original and a compressed version, with the task of deciding whether differences between them could be detected. Six staff radiologists, four residents, and six PhDs experienced in medical imaging (from three institutions) served as observers. Overall, 77.46% of observers detected differences at 8:1, 94.75% at 12:1, and 98.59% at 16:1 compression levels. Across all compression levels, the staff radiologists noted differences 64.70% of the time, the residents detected differences 71.91% of the time, and the PhDs detected differences 69.95% of the time. Even mild compression is perceptible with current technology. The ability to detect differences does not equate to diagnostic differences, although perception of compression artifacts could affect diagnostic decision making and diagnostic workflow.

  17. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands; separating the video into subbands allows transmission at low data rates. Once the data are separated into subbands, they can be coded, and later decoded, by statistical coders such as a Lempel-Ziv-based coder.

  18. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, so that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals, we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
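
    A highly simplified sketch of such a DCT-domain metric follows (Python/scipy): both sequences are transformed blockwise, coefficient errors are divided by visibility thresholds, and the scaled errors are pooled with a Minkowski sum over blocks and frames. The constant threshold matrix here is a placeholder for the calibrated spatiotemporal sensitivity model described in the text:

        import numpy as np
        from scipy.fft import dctn

        def block_dct(frame, size=8):
            h, w = (frame.shape[0] // size) * size, (frame.shape[1] // size) * size
            out = np.empty((h // size, w // size, size, size))
            for i in range(h // size):
                for j in range(w // size):
                    out[i, j] = dctn(frame[i*size:(i+1)*size, j*size:(j+1)*size],
                                     norm="ortho")
            return out

        def dct_quality(ref, test, beta=4.0):
            # pool threshold-scaled DCT errors across blocks and frames
            T = np.full((8, 8), 8.0)          # placeholder visibility thresholds
            pooled = []
            for r, t in zip(ref, test):       # iterate over frames
                e = (block_dct(r) - block_dct(t)) / T   # errors in JND-like units
                pooled.append(np.abs(e) ** beta)
            return np.sum(pooled) ** (1 / beta)         # lower = less visible

        rng = np.random.default_rng(7)
        ref = [rng.random((64, 64)) * 255 for _ in range(3)]     # 3 frames
        test = [f + rng.normal(0, 2, f.shape) for f in ref]      # noisy copies
        print("perceptual error: %.2f" % dct_quality(ref, test))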

  19. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  20. Preliminary Geologic Map of the Topanga 7.5' Quadrangle, Southern California: A Digital Database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, R.H.

    1995-01-01

    INTRODUCTION: This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations, and sources of geologic mapping, consult Yerkes and Campbell (1994). More specific information about the units may be available in the original sources. The content and character of the database and methods of obtaining it are described herein. The geologic map database itself, consisting of three ARC coverages and one base layer, can be obtained over the Internet or by magnetic tape copy as described below. The processes of extracting the geologic map database from the tar file and importing the ARC export coverages (procedure described herein) will result in the creation of an ARC workspace (directory) called 'topnga.' The database was compiled using ARC/INFO version 7.0.3, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991; Fitzgibbon, 1991; Wentworth and Fitzgibbon, 1991). It is stored in uncompressed ARC export format (ARC/INFO version 7.x) in a compressed UNIX tar (tape archive) file. The tar file was compressed with gzip and may be uncompressed with gzip, which is available free of charge via the Internet from the gzip Home Page (http://w3.teaser.fr/~jlgailly/gzip). A tar utility is required to extract the database from the tar file. This utility is included in most UNIX systems and can be obtained free of charge via the Internet from Internet Literacy's Common Internet File Formats Webpage (http://www.matisse.net/files/formats.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView (version 1.0 for Windows 3.1 to 3.11 is available for free from ESRI's web site: http://www.esri.com). This release differs from the original digital database in three respects: 1. Different base layer - The original digital database included separates clipped out of the Los Angeles 1:100,000 sheet. This release includes a vectorized scan of a scale-stable negative of the Topanga 7.5-minute quadrangle. 2. Map projection - The files in the original release were in polyconic projection. The projection used in this release is state plane, which allows for the tiling of adjacent quadrangles. 3. File compression - The files in the original release were compressed with UNIX compression. The files in this release are compressed with gzip.
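
    As a minimal sketch, assuming a hypothetical archive name, the extraction procedure described above (gzip decompression followed by tar unpacking into the 'topnga' workspace) can be reproduced with Python's standard library:

    ```python
    # Hypothetical archive name; the actual file name is given in the
    # report's download instructions.
    import tarfile

    with tarfile.open("topnga.tar.gz", mode="r:gz") as archive:  # gunzip + untar in one step
        archive.extractall(path=".")  # unpacks into the 'topnga' workspace directory
    ```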

  1. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals

    PubMed Central

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-01-01

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals. Challenges are often encountered as a result of the cumbersome data monitoring; thus, a novel method focused on compressed vibration signals for detecting roller bearing faults is developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are conducted to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which can typically be detected directly from the compressed data well before reconstruction is complete. The process of sampling and detection may then be performed simultaneously without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments. PMID:26473858
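
    The toy sketch below illustrates the core idea under illustrative assumptions (signal length, measurement count, and fault frequency are made up): a harmonic is detected by correlating the compressed measurements directly with compressed Fourier atoms, i.e., one matching-pursuit step, without reconstructing the signal.

    ```python
    # Harmonic detection from compressed measurements y = Phi @ x: random
    # projections approximately preserve inner products, so correlating y
    # with compressed cosine atoms reveals the dominant harmonic.
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, fs = 2048, 256, 2048.0             # signal length, measurements, Hz
    t = np.arange(n) / fs
    fault_hz = 130.0                          # hypothetical fault frequency
    x = np.cos(2 * np.pi * fault_hz * t) + 0.5 * rng.standard_normal(n)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x                                      # compressed acquisition

    freqs = np.arange(10.0, 500.0, 1.0)
    scores = []
    for f in freqs:
        atom = np.cos(2 * np.pi * f * t)
        atom /= np.linalg.norm(atom)
        scores.append(abs(y @ (Phi @ atom)))         # correlate in compressed domain
    print("detected harmonic near", freqs[int(np.argmax(scores))], "Hz")
    ```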

  2. Glaucoma risk index: automated glaucoma detection from color fundus images.

    PubMed

    Bock, Rüdiger; Meier, Jörg; Nyúl, László G; Hornegger, Joachim; Michelson, Georg

    2010-06-01

    Glaucoma, a neurodegeneration of the optic nerve, is one of the most common causes of blindness. Because revitalization of the degenerated nerve fibers of the optic nerve is impossible, early detection of the disease is essential. This can be supported by robust and automated mass screening. We propose a novel automated glaucoma detection system that operates on inexpensive-to-acquire and widely used digital color fundus images. After a glaucoma-specific preprocessing, different generic feature types are compressed by an appearance-based dimension reduction technique. Subsequently, a probabilistic two-stage classification scheme combines these feature types to extract the novel Glaucoma Risk Index (GRI), which shows a reasonable glaucoma detection performance. On a sample set of 575 fundus images, a classification accuracy of 80% has been achieved in a 5-fold cross-validation setup. The GRI gains a competitive area under ROC (AUC) of 88% compared to the established topography-based glaucoma probability score of scanning laser tomography with an AUC of 87%. The proposed color fundus image-based GRI achieves a competitive and reliable detection performance on a low-priced modality by the statistical analysis of entire images of the optic nerve head. Copyright (c) 2010 Elsevier B.V. All rights reserved.
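
    A hedged sketch of such a pipeline is shown below: generic features are compressed by an appearance-based dimension reduction (PCA here) and fed to a probabilistic classifier, evaluated with 5-fold cross-validation as in the study. The data, feature dimension, and single-stage classifier are placeholders for the paper's two-stage scheme.

    ```python
    # Placeholder appearance-based pipeline: PCA compression of generic
    # features followed by a probabilistic classifier, scored with 5-fold CV.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    X = rng.standard_normal((575, 4096))   # stand-in fundus-image features
    y = rng.integers(0, 2, 575)            # stand-in glaucoma labels

    gri_model = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
    print(cross_val_score(gri_model, X, y, cv=5).mean())  # 5-fold CV accuracy
    ```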

  3. Surface structural damage study in cortical bone due to medical drilling.

    PubMed

    Tavera R, Cesar G; De la Torre-I, Manuel H; Flores-M, Jorge M; Hernandez M, Ma Del Socorro; Mendoza-Santoyo, Fernando; Briones-R, Manuel de J; Sanchez-P, Jorge

    2017-05-01

    A bone fracture can be produced by an excessive, repetitive, or sudden load. A regular medical practice to heal it is to fix it in one of two possible ways: external immobilization, using a ferule, or internal fixation, using a prosthetic device commonly attached to the bone by means of surgical screws. The bone's volume loss due to this drilling modifies its structure either in the presence or absence of a fracture. To observe the bone's surface behavior caused by the drilling effects, a digital holographic interferometer is used to analyze the displacement surface's variations in nonfractured post-mortem porcine femoral bones. Several nondrilled post-mortem bones are compressed and compared to a set of post-mortem bones with a different number of cortical drillings. During each compression test, a series of digital interferometric holograms were recorded using a high-speed CMOS camera. The results are presented as pseudo 3D mesh displacement maps for comparisons in the physiological range of load (30 and 50 lbs) and beyond (100, 200, and 400 lbs). The high resolution of the optical phase gives a better understanding of the bone's microstructural modifications. Finally, a relationship between compression load and bone volume loss due to the drilling was observed. The results prove that digital holographic interferometry is a viable technique for studying the conditions that prevent the surgical screw from loosening in medical procedures of this kind.

  4. Longitudinally Jointed Edge-Wise Compression HoneyComb Composite Sandwich Coupon Testing And Fe Analysis: Three Methods of Strain Measurement, And Comparison

    NASA Technical Reports Server (NTRS)

    Farrokh, Babak; Rahim, Nur Aida Abul; Segal, Ken; Fan, Terry; Jones, Justin; Hodges, Ken; Mashni, Noah; Garg, Naman; Sang, Alex

    2013-01-01

    Three distinct strain measurement methods (i.e., foil resistance strain gages, fiber optic strain sensors, and a three-dimensional digital image photogrammetry system that gives full-field strain and displacement measurements) were implemented to measure strains on the back and front surfaces of a longitudinally jointed curved test article subjected to edge-wise compression testing at NASA Goddard Space Flight Center, according to ASTM C364. A pre-test finite element analysis (FEA) was conducted to assess the ultimate failure load and predict the strain distribution pattern throughout the test coupon. The predicted strain pattern contours were then utilized as guidelines for installing the strain measurement instrumentation. The foil resistance strain gages and fiber optic strain sensors were bonded on the specimen at locations with nearly the same analytically predicted strain values, and as close as possible to each other, so that comparisons between the strains measured by strain gages and fiber optic sensors, as well as by the three-dimensional digital image photogrammetric system, are relevant. The test article was loaded to failure (at 167 kN), at a compressive strain of 10,000 microstrain. As a part of this study, the validity of the strains measured by fiber optic sensors is examined against the foil resistance strain gages and the three-dimensional digital image photogrammetric data, and comprehensive comparisons are made with FEA predictions.

  5. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    NASA Astrophysics Data System (ADS)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility that such compression directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  6. Detection of inter-frame forgeries in digital videos.

    PubMed

    K, Sitara; Mehtre, B M

    2018-05-26

    Videos are acceptable as evidence in the court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alterations of visual content by perpetrators locally or remotely. Such malicious alterations of video contents (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from spatio-temporal and compressed domains. Pristine videos containing frames that are recorded during a sudden camera zooming event may get wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection and incorporate it in video tampering detection. Frame shuffling detection, which had not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2346 are pristine and the rest are candidates of inter-frame forged videos. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy on forgery detection in frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.

  7. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
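
    A toy cost model, not the coding circuits described above, showing how run coding of a highly redundant binary source can bring the average cost below 1 bit per input symbol:

    ```python
    # Count the output bits of a fixed-width run-length code applied to a
    # mostly-zero source; the terminating one in each run is implicit.

    def run_length_bits(bits, counter_width=8):
        """Output cost, in bits, of coding zero-runs with fixed-width counters."""
        max_run = (1 << counter_width) - 1
        out_bits, run = 0, 0
        for b in bits:
            if b == 0:
                run += 1
                if run == max_run:          # counter full: emit and restart
                    out_bits += counter_width
                    run = 0
            else:
                out_bits += counter_width   # emit run length; the 1 is implicit
                run = 0
        return out_bits + counter_width     # flush any trailing run

    source = ([0] * 99 + [1]) * 100         # low-entropy source: 1% ones
    print(f"{run_length_bits(source) / len(source):.3f} bits per input symbol")
    ```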

  8. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.

  9. Global Seismicity: Three New Maps Compiled with Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Lowman, Paul D., Jr.; Montgomery, Brian C.

    1996-01-01

    This paper presents three new maps of global seismicity compiled from NOAA digital data, covering the interval 1963-1998, with three different magnitude ranges (mb): greater than 3.5, less than 3.5, and all detectable magnitudes. A commercially available geographic information system (GIS) was used as the database manager. Epicenter locations were acquired from a CD-ROM supplied by the National Geophysical Data Center. A methodology is presented that can be followed by general users. The implications of the maps are discussed, including the limitations of conventional plate models, and the different tectonic behavior of continental vs. oceanic lithosphere. Several little-known areas of intraplate or passive margin seismicity are also discussed, possibly expressing horizontal compression generated by ridge push.

  10. JPEG 2000-based compression of fringe patterns for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-12-01

    With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurements, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties with respect to regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f^2 power spectral density distribution, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range and bit-rate reductions of up to 1.6 bpp for lossless compression.

  11. Detection of shifted double JPEG compression by an adaptive DCT coefficient model

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua

    2014-12-01

    In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.

  12. NASA Tech Briefs, June 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics covered include: iGlobe Interactive Visualization and Analysis of Spatial Data; Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer; Small Aircraft Data Distribution System; Earth Science Datacasting v2.0; Algorithm for Compressing Time-Series Data; Onboard Science and Applications Algorithm for Hyperspectral Data Reduction; Sampling Technique for Robust Odorant Detection Based on MIT RealNose Data; Security Data Warehouse Application; Integrated Laser Characterization, Data Acquisition, and Command and Control Test System; Radiation-Hard SpaceWire/Gigabit Ethernet-Compatible Transponder; Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager; High-Voltage, Low-Power BNC Feedthrough Terminator; SpaceCube Mini; Dichroic Filter for Separating W-Band and Ka-Band; Active Mirror Predictive and Requirement Verification Software (AMP-ReVS); Navigation/Prop Software Suite; Personal Computer Transport Analysis Program; Pressure Ratio to Thermal Environments; Probabilistic Fatigue Damage Program (FATIG); ASCENT Program; JPL Genesis and Rapid Intensification Processes (GRIP) Portal; Data::Downloader; Fault Tolerance Middleware for a Multi-Core System; DspaceOgreTerrain 3D Terrain Visualization Tool; Trick Simulation Environment 07; Geometric Reasoning for Automated Planning; Water Detection Based on Color Variation; Single-Layer, All-Metal Patch Antenna Element with Wide Bandwidth; Scanning Laser Infrared Molecular Spectrometer (SLIMS); Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy; Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole; Enhanced-Adhesion Multiwalled Carbon Nanotubes on Titanium Substrates for Stray Light Control; Three-Dimensional Porous Particles Composed of Curved, Two-Dimensional, Nano-Sized Layers for Li-Ion Batteries; and Ultra-Lightweight Nanocomposite Foams and Sandwich Structures for Space Structure Applications.

  13. Digital coding of Shuttle TV

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B.

    1976-01-01

    Space Shuttle will be using a field-sequential color television system for the first few missions, but the present plans are to switch to an NTSC color TV system for future missions. The field-sequential color TV system uses a modified black and white camera, producing a TV signal with a digital bandwidth of about 60 Mbps. This article discusses the characteristics of the Shuttle TV systems and proposes a bandwidth-compression technique for the field-sequential color TV system that could operate at 13 Mbps to produce a high-fidelity signal. The proposed bandwidth-compression technique is based on a two-dimensional DPCM system that utilizes temporal, spectral, and spatial correlation inherent in the field-sequential color TV imagery. The proposed system requires about 60 watts and less than 200 integrated circuits.
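
    A bare-bones sketch of the DPCM idea is given below: each pixel is predicted from causal neighbours and only the quantized prediction error is kept, so the reconstruction error stays bounded by half the quantizer step. The Shuttle design additionally exploited temporal and spectral correlation, which this single-frame toy omits.

    ```python
    # Intra-frame DPCM: predict each pixel from reconstructed neighbours,
    # quantize the residual, and mirror the decoder's reconstruction.
    import numpy as np

    def dpcm_encode(img, step=8):
        img = img.astype(int)
        recon = np.zeros_like(img)
        residuals = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                left = recon[y, x - 1] if x else 128
                above = recon[y - 1, x] if y else 128
                pred = (left + above) // 2                 # causal predictor
                q = int(round((img[y, x] - pred) / step))  # quantized residual
                residuals[y, x] = q
                recon[y, x] = pred + q * step              # decoder-side value
        return residuals, recon

    frame = np.indices((32, 32)).sum(0) * 4 % 256   # synthetic test frame
    res, rec = dpcm_encode(frame)
    print("max abs error:", np.abs(rec - frame).max())  # bounded by step/2
    ```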

  14. Potential digitization/compression techniques for Shuttle video

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B. H.

    1978-01-01

    The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.

  15. Digital Phase-Locked Loop With Phase And Frequency Feedback

    NASA Technical Reports Server (NTRS)

    Thomas, J. Brooks

    1991-01-01

    Advanced design for digital phase-lock loop (DPLL) allows loop gains higher than those used in other designs. Divided into two major components: counterrotation processor and tracking processor. Notable features include use of both phase and rate-of-change-of-phase feedback instead of frequency feedback alone, normalized sine phase extractor, improved method for extracting measured phase, and improved method for "compressing" output rate.

  16. Nonoperative Management and Novel Imaging for Posterior Circumflex Humeral Artery Injury in Volleyball.

    PubMed

    van de Pol, Daan; Planken, R Nils; Terpstra, Aart; Pannekoek-Hekman, Marja; Kuijer, P Paul F M; Maas, Mario

    We report on a 34-yr-old male elite volleyball player with symptomatic emboli in the spiking hand from a partially thrombosed aneurysm of the posterior circumflex humeral artery (PCHA) in his dominant shoulder. At initial diagnosis and follow-up, a combination of time-resolved and high-resolution steady state contrast-enhanced magnetic resonance angiography (CE-MRA) enabled detailed visualization of: (1) emboli that were not detectable by vascular ultrasound; and (2) the PCHA aneurysm, including compression during abduction and external rotation (ABER provocation). At 15-month follow-up, including forced cessation of volleyball activities over the preceding 9 months, the PCHA aneurysm remained unchanged. Central filling defects in the palmar arch and digital arteries resolved over time and affected arterial vessel segments showed postthrombotic changes. Digital blood pressure values improved substantially and almost normalized during follow-up. In conclusion, this case report is the first to show promising results of nonoperative management for a vascular shoulder overuse injury in a professional volleyball player as an alternative to invasive therapeutic options.

  17. Comparison of reversible methods for data compression

    NASA Astrophysics Data System (ADS)

    Heer, Volker K.; Reinfelder, Hans-Erich

    1990-07-01

    Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we review various methods briefly and discuss their relevant advantages and disadvantages. In detail, we evaluate first-order DPCM, pyramid transformation, and S transformation. As coding algorithms, we compare both fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA, and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time required and the main memory requirement, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.

  18. Some aspects of adaptive transform coding of multispectral data

    NASA Technical Reports Server (NTRS)

    Ahmed, N.; Natarajan, T.

    1977-01-01

    This paper concerns a data compression study pertaining to multi-spectral scanner (MSS) data. The motivation for this undertaking is the need for securing data compression of images obtained in connection with the Landsat Follow-On Mission, where a compression of at least 6:1 is required. The MSS data used in this study consisted of four scenes: (1) Tristate, consisting of 256 pels per row and a total of 512 rows, i.e., (256x512); (2) Sacramento (256x512); (3) Portland (256x512); and (4) Bald Knob (200x256). All these scenes were on digital tape at 6 bits/pel. The corresponding reconstructed scenes at 1 bit/pel (i.e., a 6:1 compression) are included.

  19. Quantitative holographic interferometry applied to combustion and compressible flow research

    NASA Astrophysics Data System (ADS)

    Bryanston-Cross, Peter J.; Towers, D. P.

    1993-03-01

    The application of holographic interferometry to phase object analysis is described. Emphasis has been given to a method of extracting quantitative information automatically from the interferometric fringe data. To achieve this, a carrier frequency has been added to the holographic data. This has made it possible, first, to form a phase map using a fast Fourier transform (FFT) algorithm, and then to 'solve', or unwrap, this image to give a contiguous density map using a minimum-weight spanning tree (MST) noise-immune algorithm known as fringe analysis (FRAN). Applications of this work to a burner flame and a compressible flow are presented. In both cases the spatial frequency of the fringes exceeds the resolvable limit of conventional digital framestores. Therefore, a flatbed scanner with a resolution of 3200 x 2400 pixels has been used to produce very high resolution digital images from photographs. This approach has allowed the processing of data despite the presence of caustics, generated by strong thermal gradients at the edge of the combustion field. A similar example is presented from the analysis of a compressible transonic flow in the shock wave and trailing edge regions.

  20. Spatial-temporal distortion metric for in-service quality monitoring of any digital video system

    NASA Astrophysics Data System (ADS)

    Wolf, Stephen; Pinson, Margaret H.

    1999-11-01

    Many organizations have focused on developing digital video quality metrics which produce results that accurately emulate subjective responses. However, to be widely applicable a metric must also work over a wide range of quality, and be useful for in-service quality monitoring. The Institute for Telecommunication Sciences (ITS) has developed spatial-temporal distortion metrics that meet all of these requirements. These objective metrics are described in detail and have a number of interesting properties, including utilization of (1) spatial activity filters which emphasize long edges on the order of 10 arc min while simultaneously performing large amounts of noise suppression, (2) the angular direction of the spatial gradient, (3) spatial-temporal compression factors of at least 384:1 (spatial compression of at least 64:1 and temporal compression of at least 6:1), and (4) simple perceptibility thresholds and spatial-temporal masking functions. Results are presented that compare the objective metric values with mean opinion scores from a wide range of subjective databases spanning many different scenes, systems, bit-rates, and applications.

  1. Description and test results of a digital supersonic propulsion system integrated control

    NASA Technical Reports Server (NTRS)

    Batterton, P. G.; Arpasi, D. J.; Baumbick, R. J.

    1976-01-01

    A digitally implemented integrated inlet/engine control system was developed and tested on a mixed compression, Mach 2.5, supersonic inlet and augmented turbofan engine. The control matched engine airflow to available inlet airflow so that in steady state, the shock would be at the desired location, and the overboard bypass doors would be closed. During engine induced transients, such as augmentor lights and cutoffs, the inlet operating point was momentarily changed to a more supercritical point to minimize unstarts. The digital control also provided automatic inlet restart.

  2. Speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence, the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other commonly used terms are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.

  3. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    PubMed

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of such data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open-source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the abovementioned conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio even exceeds 99.36%. The capacity of the formed QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a perfect carrier of the authenticity and quality of P. ginseng information. This study provides a theoretical basis for the development of a quality traceability system of traditional Chinese medicine based on a two-dimensional code.
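
    The final stages (combine, Zlib-compress, render as a QR code) might look like the following sketch; the payload strings are dummies, and the `qrcode` package is a third-party library (used here with Pillow) assumed to be installed:

    ```python
    # Combine dummy fingerprint and sequence payloads, deflate with zlib,
    # and render the result as a QR code image.
    import base64
    import zlib

    import qrcode  # third-party: pip install qrcode pillow

    its2 = "ATCGGGTACGCTA..."                # placeholder ITS2 sequence code
    fingerprint = "peak1:3.2;peak2:7.9;..."  # placeholder HPLC feature data
    payload = (fingerprint + "|" + its2).encode("utf-8")

    compressed = zlib.compress(payload, level=9)          # Zlib stage
    text = base64.b64encode(compressed).decode("ascii")   # QR-safe alphabet
    qrcode.make(text).save("ginseng_record.png")          # final QR code

    print(f"{len(payload)} bytes -> {len(compressed)} bytes compressed")
    ```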

  4. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described. The device includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.

  5. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, a medical image must be recorded and transmitted losslessly before it reaches the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
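
    A compact sketch of the hybrid idea follows, with PyWavelets assumed for the transform and zlib standing in for the paper's run-length coder: a lossy wavelet stage, then lossless coding of the residual so the original is exactly recoverable.

    ```python
    # Lossy wavelet stage + lossless residual stage; decoding lossy + residual
    # reproduces the original image exactly.
    import zlib

    import numpy as np
    import pywt  # third-party: pip install PyWavelets

    image = (np.indices((64, 64)).sum(0) % 256).astype(np.int16)

    # Lossy stage: wavelet transform, coarse quantization, reconstruction.
    coeffs = pywt.wavedec2(image.astype(float), "bior4.4", level=3)
    arr, slices = pywt.coeffs_to_array(coeffs)
    lossy = pywt.waverec2(
        pywt.array_to_coeffs(np.round(arr / 16) * 16, slices, output_format="wavedec2"),
        "bior4.4",
    )
    lossy = np.round(lossy).astype(np.int16)[: image.shape[0], : image.shape[1]]

    # Lossless stage: the residual is small and compresses well.
    residual = image - lossy
    packed = zlib.compress(residual.tobytes(), level=9)
    assert np.array_equal(image, lossy + residual)       # exact recovery
    print(f"residual stream: {len(packed)} bytes for {image.size} pixels")
    ```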

  6. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than the conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.

  7. Data compression/error correction digital test system. Appendix 2: Theory of operation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An overall block diagram of the DC/EC digital system test is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.

  8. Comparative Study Of Image Enhancement Algorithms For Digital And Film Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado-Gonzalez, A.; Sanmiguel, R. E.

    2008-08-11

    Here we discuss the application of edge enhancement algorithms to images obtained with a mammography system that has a selenium detector and, on the other hand, to images obtained from digitized film mammography. Comparative analysis of such images includes the study of technical aspects of image acquisition, storage, compression, and display. A protocol for a local database has been created as a result of this study.

  9. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  10. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve a speedup of tens of times in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.

  11. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
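
    The property that makes Gray code useful here is that consecutive values differ in exactly one bit, which raises per-bit-plane agreement between the source and its side information. A minimal demonstration of the mapping (the rate allocation built on top of it is not shown):

    ```python
    # Compare bit-plane disagreement between a source and off-by-one side
    # information under the plain binary and Gray-code mappings.
    import numpy as np

    def to_gray(v):
        return v ^ (v >> 1)                 # binary -> Gray mapping

    source = np.arange(256, dtype=np.uint8)
    side_info = ((source.astype(int) + 1) % 256).astype(np.uint8)  # off-by-one

    for label, xform in (("binary", lambda v: v), ("Gray", to_gray)):
        a, b = xform(source), xform(side_info)
        flips = np.unpackbits(a ^ b).sum() / source.size
        print(f"{label:6s}: {flips:.2f} differing bits per symbol")
    ```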

  12. High performance MPEG-audio decoder IC

    NASA Technical Reports Server (NTRS)

    Thorn, M.; Benbassat, G.; Cyr, K.; Li, S.; Gill, M.; Kam, D.; Walker, K.; Look, P.; Eldridge, C.; Ng, P.

    1993-01-01

    The emerging digital audio and video compression technology brings both an opportunity and a new challenge to IC design. The pervasive application of compression technology to consumer electronics will require high volume, low cost IC's and fast time to market of the prototypes and production units. At the same time, the algorithms used in the compression technology result in complex VLSI IC's. The conflicting challenges of algorithm complexity, low cost, and fast time to market have an impact on device architecture and design methodology. The work presented in this paper is about the design of a dedicated, high precision, Motion Picture Expert Group (MPEG) audio decoder.

  13. A Low-Complexity Circuit for On-Sensor Concurrent A/D Conversion and Compression

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    A low-complexity circuit for on-sensor compression is presented. The proposed circuit achieves complexity savings by combining a single-slope analog-to-digital converter with a Golomb-Rice entropy encoder and by implementing a low-complexity adaptation rule. The adaptation rule monitors the output codewords and minimizes their length by incrementing or decrementing the value of the Golomb-Rice coding parameter k. Its hardware implementation is an order of magnitude less complex than existing adaptive algorithms. The compression circuit has been fabricated using a 0.35-micrometer CMOS technology and occupies an area of 0.0918 mm2. Test measurements confirm the validity of the design.
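
    The following is a behavioural sketch of such an adaptation rule, with illustrative thresholds rather than the circuit's exact ones: the Golomb-Rice parameter k is raised when a codeword comes out long and lowered when it comes out short.

    ```python
    # Adaptive Golomb-Rice cost model: track total codeword length while
    # nudging k after each sample based on the emitted codeword's length.
    import random

    def rice_length(value, k):
        """Bits to code `value` with Rice parameter k: unary quotient + stop bit + k-bit remainder."""
        return (value >> k) + 1 + k

    def adaptive_rice_cost(samples, k=4, k_max=15):
        total = 0
        for v in samples:
            n = rice_length(v, k)
            total += n
            if n > k + 2 and k < k_max:     # long codeword: raise k
                k += 1
            elif n < k + 2 and k > 0:       # short codeword: lower k
                k -= 1
        return total

    random.seed(0)
    data = [int(random.expovariate(1 / 20)) for _ in range(10_000)]
    print(adaptive_rice_cost(data) / len(data), "bits per sample")
    ```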

  14. Pedal angiography in peripheral arterial occlusive disease: first-pass i.v. contrast-enhanced MR angiography with blood pool contrast medium versus intraarterial digital subtraction angiography.

    PubMed

    Kos, Sebastian; Reisinger, Clemens; Aschwanden, Markus; Bongartz, Georg M; Jacob, Augustinus L; Bilecen, Deniz

    2009-03-01

    The purpose of this study was to prospectively evaluate first-pass i.v. gadofosveset-enhanced MR angiography in patients with peripheral arterial occlusive disease for visualization of the pedal arteries and stenosis or occlusion of those arteries with intraarterial digital subtraction angiography as the reference standard. Twenty patients with peripheral arterial occlusive disease (nine women, 11 men; age-range 58-83 years) were prospectively enrolled. Gadofosveset first-pass contrast-enhanced MR angiography was performed with a 1.5-T system, a dedicated foot coil, and cuff compression to the calf. Arterial segments were assessed for degree of arterial stenosis, arterial visibility, diagnostic utility, and venous contamination. Detection of vessel stenosis or occlusion was evaluated in comparison with findings at digital subtraction angiography. The unpaired Student's t test was used to test arterial visibility with the two techniques. First-pass MR angiography with gadofosveset had good diagnostic utility in 83.9% of all segments and no venous contamination in 96.8% of all segments. There was no difference between the performance of intraarterial digital subtraction angiography and that of i.v. contrast-enhanced MR angiography in arterial visibility overall (p = 0.245) or in subgroup analysis of surgical arterial bypass targets (p = 0.202). The overall sensitivity, specificity, and accuracy of i.v. gadofosveset-enhanced MR angiography for characterization of clinically significant stenosis and occlusion were 91.4%, 96.1%, and 93.9%. In the subgroup analysis, the sensitivity, specificity, and accuracy were 85.5%, 96.5%, and 92.1%. Gadofosveset-enhanced MR angiography of the pedal arteries in patients with peripheral arterial occlusive disease has arterial visibility equal to that of digital subtraction angiography and facilitates depiction of clinically significant stenosis and occlusion.

  15. Design of an H.264/SVC resilient watermarking scheme

    NASA Astrophysics Data System (ADS)

    Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter

    2010-01-01

    The rapid dissemination of media technologies has led to an increase of unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.

  16. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, in managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.

  17. Imaging with a small number of photons

    PubMed Central

    Morris, Peter A.; Aspden, Reuben S.; Bell, Jessica E. C.; Boyd, Robert W.; Padgett, Miles J.

    2015-01-01

    Low-light-level imaging techniques have application in many diverse fields, ranging from biological sciences to security. A high-quality digital camera based on a multi-megapixel array will typically record an image by collecting of order 10^5 photons per pixel, but by how much could this photon flux be reduced? In this work we demonstrate a single-photon imaging system based on a time-gated intensified camera from which the image of an object can be inferred from very few detected photons. We show that a ghost-imaging configuration, where the image is obtained from photons that have never interacted with the object, is a useful approach for obtaining images with high signal-to-noise ratios. The use of heralded single photons ensures that the background counts can be virtually eliminated from the recorded images. By applying principles of image compression and associated image reconstruction, we obtain high-quality images of objects from raw data formed from an average of fewer than one detected photon per image pixel. PMID:25557090

  18. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA

    2012-07-03

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.

  19. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA

    2008-10-28

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
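
    A NumPy sketch of the four claimed steps for one image pair might look as follows; the flip stands in for real co-registration, and the test data are synthetic:

    ```python
    # Flip one view, normalize each image by its mean count, and multiply
    # to form the contrast-enhanced map described in the claims.
    import numpy as np

    rng = np.random.default_rng(3)
    top = rng.poisson(100, (64, 64)).astype(float)      # detector 1 counts
    bottom = rng.poisson(100, (64, 64)).astype(float)   # detector 2 counts
    bottom[20:24, 30:34] += 30                          # small uptake (lesion)

    co_registered = np.flipud(bottom)                   # step 2: invert/align
    norm_top = top / top.mean()                         # step 3: per-detector
    norm_bot = co_registered / co_registered.mean()     #         normalization
    contrast_map = norm_top * norm_bot                  # step 4: multiply

    print("peak/background ratio:", contrast_map.max() / np.median(contrast_map))
    ```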

  20. Robustness evaluation of transactional audio watermarking systems

    NASA Astrophysics Data System (ADS)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

    Distribution via Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer, but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept, since it provides a useful mechanism to track illicit copies by persistently attaching property-rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial, since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Besides the concept of bitstream watermarking, former publications presented its complexity, audio quality and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread spectrum watermarking.

  1. Digital Libraries and the Problem of Purpose [and] On DigiPaper and the Dissemination of Electronic Documents [and] DFAS: The Distributed Finding Aid Search System [and] Best Practices for Digital Archiving: An Information Life Cycle Approach [and] Mapping and Converting Essential Federal Geographic Data Committee (FGDC) Metadata into MARC21 and Dublin Core: Towards an Alternative to the FGDC Clearinghouse [and] Evaluating Website Modifications at the National Library of Medicine through Search Log analysis.

    ERIC Educational Resources Information Center

    Levy, David M.; Huttenlocher, Dan; Moll, Angela; Smith, MacKenzie; Hodge, Gail M.; Chandler, Adam; Foley, Dan; Hafez, Alaaeldin M.; Redalen, Aaron; Miller, Naomi

    2000-01-01

    Includes six articles focusing on the purpose of digital public libraries; encoding electronic documents through compression techniques; a distributed finding aid server; digital archiving practices in the framework of information life cycle management; converting metadata into MARC format and Dublin Core formats; and evaluating Web sites through…

  2. Compact storage of medical images with patient information.

    PubMed

    Acharya, R; Anand, D; Bhat, S; Niranjan, U C

    2001-12-01

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security. The graphical signals are compressed and subsequently interleaved with the image. Differential pulse-code-modulation and adaptive-delta-modulation techniques are employed for data compression, and encryption and results are tabulated for a specific example.
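
    A toy sketch of the interleaving idea: encrypted text is hidden in the image's least significant bits. The XOR cipher is a stand-in for real encryption (the paper's DPCM and adaptive-delta-modulation compression of graphical signals is omitted), and practical headers/length fields are left out.

    ```python
    # Interleave encrypted text into image LSBs, then recover and decrypt it.
    import numpy as np

    def interleave(image, text, key=0x5C):
        cipher = bytes(b ^ key for b in text.encode("utf-8"))   # toy encryption
        bits = np.unpackbits(np.frombuffer(cipher, dtype=np.uint8))
        flat = image.ravel().copy()
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs
        return flat.reshape(image.shape)

    def extract(image, n_chars, key=0x5C):
        bits = image.ravel()[: n_chars * 8] & 1
        cipher = np.packbits(bits).tobytes()
        return bytes(b ^ key for b in cipher).decode("utf-8")

    img = np.full((64, 64), 120, dtype=np.uint8)        # placeholder image
    record = "ID:123; Dr. A; 2001-12-01"                # placeholder patient text
    marked = interleave(img, record)
    print(extract(marked, len(record)))
    ```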

  3. Rapid classification of pharmaceutical ingredients with Raman spectroscopy using compressive detection strategy with PLS-DA multivariate filters.

    PubMed

    Cebeci Maltaş, Derya; Kwok, Kaho; Wang, Ping; Taylor, Lynne S; Ben-Amotz, Dor

    2013-06-01

    Identifying pharmaceutical ingredients is a routine procedure required during industrial manufacturing. Here we show that a recently developed Raman compressive detection strategy can be employed to classify various widely used pharmaceutical materials using a hybrid supervised/unsupervised strategy in which only two ingredients are used for training and yet six other ingredients can also be distinguished. More specifically, our liquid crystal spatial light modulator (LC-SLM) based compressive detection instrument is trained using only the active ingredient, tadalafil, and the excipient, lactose, but is tested using these and various other excipients: microcrystalline cellulose, magnesium stearate, titanium(IV) oxide, talc, sodium lauryl sulfate and hydroxypropyl cellulose. Partial least squares discriminant analysis (PLS-DA) is used to generate the compressive detection filters necessary for fast chemical classification. Although the filters used in this study are trained on only lactose and tadalafil, we show that all the pharmaceutical ingredients mentioned above can be differentiated and classified using PLS-DA compressive detection filters with an accumulation time of 10 ms per filter. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.

    PubMed

    Schulz-Wendtland, Rüdiger; Jud, Sebastian M; Fasching, Peter A; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W; Emons, Julius

    2017-06-01

    The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS ™ 6). For the preclinical fusion prototype an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. The quality of digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound - the second important imaging modality in complementary breast diagnostics - without increasing examination time or requiring additional staff.

  5. A Standard Mammography Unit - Standard 3D Ultrasound Probe Fusion Prototype: First Results.

    PubMed Central

    Schulz-Wendtland, Rüdiger; Jud, Sebastian M.; Fasching, Peter A.; Hartmann, Arndt; Radicke, Marcus; Rauh, Claudia; Uder, Michael; Wunderle, Marius; Gass, Paul; Langemann, Hanna; Beckmann, Matthias W.; Emons, Julius

    2017-01-01

    Aim The combination of different imaging modalities through the use of fusion devices promises significant diagnostic improvement for breast pathology. The aim of this study was to evaluate image quality and clinical feasibility of a prototype fusion device (fusion prototype) constructed from a standard tomosynthesis mammography unit and a standard 3D ultrasound probe using a new method of breast compression. Materials and Methods Imaging was performed on 5 mastectomy specimens from patients with confirmed DCIS or invasive carcinoma (BI-RADS ™ 6). For the preclinical fusion prototype an ABVS system ultrasound probe from an Acuson S2000 was integrated into a MAMMOMAT Inspiration (both Siemens Healthcare Ltd) and, with the aid of a newly developed compression plate, digital mammogram and automated 3D ultrasound images were obtained. Results The quality of digital mammogram images produced by the fusion prototype was comparable to those produced using conventional compression. The newly developed compression plate did not influence the applied x-ray dose. The method was not more labour intensive or time-consuming than conventional mammography. From the technical perspective, fusion of the two modalities was achievable. Conclusion In this study, using only a few mastectomy specimens, the fusion of an automated 3D ultrasound machine with a standard mammography unit delivered images of comparable quality to conventional mammography. The device allows simultaneous ultrasound – the second important imaging modality in complementary breast diagnostics – without increasing examination time or requiring additional staff. PMID:28713173

  6. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality.

    PubMed

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K

    2017-01-01

    The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output.
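
    The perturbation sweep itself is straightforward to reproduce. The sketch below uses Pillow to serially adjust brightness, contrast, compression quality, and blur for a region of interest; baseline JPEG stands in for the JPEG2000 codec used in the study, and run_image_analysis is a dummy placeholder for the HER2 scoring algorithm (Visiopharm in the study).

      from io import BytesIO
      import numpy as np
      from PIL import Image, ImageEnhance, ImageFilter

      def run_image_analysis(img):
          # Dummy stand-in for the HER2 scoring algorithm: mean intensity.
          return float(np.asarray(img, dtype=float).mean())

      def perturbations(img):
          # Serially adjust one parameter at a time, as in the study design.
          for b in (0.6, 0.8, 1.0, 1.2, 1.4):
              yield "brightness", b, ImageEnhance.Brightness(img).enhance(b)
          for c in (0.6, 0.8, 1.0, 1.2, 1.4):
              yield "contrast", c, ImageEnhance.Contrast(img).enhance(c)
          for q in (90, 50, 20, 5):                  # heavier JPEG compression
              buf = BytesIO()
              img.save(buf, format="JPEG", quality=q)
              yield "compression", q, Image.open(buf)
          for r in (0.5, 1.0, 2.0, 4.0):             # out-of-focus blurring
              yield "blur", r, img.filter(ImageFilter.GaussianBlur(radius=r))

      roi = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
      for name, level, variant in perturbations(roi):
          print(name, level, run_image_analysis(variant))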

  7. Human movement analysis with image processing in real time

    NASA Astrophysics Data System (ADS)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms, while sport science and biomechanics associate them with the performance of the athlete or patient, so that trainers or doctors who know the motion properties can correct a subject's gesture to obtain better performance. Roberton's studies, for example, trace the evolution of children's motion. Several investigation methods can measure human movement, but most current studies are based on image processing. Systems operating at the TV standard (50 frames per second) permit the study of only very slow gestures, and manual analysis of the digitized film sequence by a human operator is expensive, slow, and imprecise. For these reasons, many human movement analysis systems have been implemented. They consist of markers, which are fixed to anatomically interesting points on the subject in motion, and image compression, i.e., the coding of picture data; generally the compression is limited to calculating the centroid coordinates of each marker. These systems differ from one another in image acquisition and marker detection.

  8. Loss tolerant speech decoder for telecommunications

    NASA Technical Reports Server (NTRS)

    Prieto, Jr., Jaime L. (Inventor)

    1999-01-01

    A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
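
    The extrapolation step can be illustrated with a linear FIR one-step predictor fitted by least squares; this is a simplified stand-in for the patent's back-propagation-trained neural network, and the parameter track below is synthetic.

      import numpy as np

      def fit_fir_predictor(history, order=4):
          # Least-squares linear FIR one-step predictor.
          X = np.array([history[i:i + order]
                        for i in range(len(history) - order)])
          t = history[order:]
          w, *_ = np.linalg.lstsq(X, t, rcond=None)
          return w

      def extrapolate(history, w):
          # Predict the next value from the most recent samples.
          return float(np.dot(history[-len(w):], w))

      # A smoothly varying SCA parameter track (e.g., a gain contour).
      track = np.sin(np.linspace(0, 3, 60)) + 0.01 * np.random.randn(60)
      w = fit_fir_predictor(track[:50])
      replacement = extrapolate(track[:50], w)   # conceals "lost" frame 50
      print(replacement, track[50])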

  9. Clinical validation of different echocardiographic motion pictures expert group-4 algorithms and compression levels for telemedicine.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D

    2004-01-01

    Tele-echocardiography is not widely used because of lengthy transmission times when using standard Motion Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms to allow further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File dimensions decreased from a range of 12-83 MB uncompressed to 0.03-2.3 MB with MPEG-4. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p=.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through use of low bandwidth lines.

  10. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning-by-screening operation to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low quality image, is also described; it de-noises the image and enhances its contours.

  11. Estimating corresponding locations in ipsilateral breast tomosynthesis views

    NASA Astrophysics Data System (ADS)

    van Schie, Guido; Tanner, Christine; Karssemeijer, Nico

    2011-03-01

    To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. In this study we developed a method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a mathematical transformation. First a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate and compress again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation we annotated 181 landmarks in both views and applied our method to each location. Results show a median 3D distance between the actual location and estimated location of 1.5 cm; a good starting point for a feature based local search method to link lesions for a multiview CAD system. Half of the estimated locations were at most 1 slice away from the actual location, making our method useful as a tool in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.
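
    A toy version of the decompress-rotate-compress chain, assuming uniform scaling along the compression axis in place of the paper's elastically deformable sphere model; the point coordinates, thicknesses, and rotation angle below are all hypothetical.

      import numpy as np

      def corresponding_point(p, cbt_a, cbt_b, angle_deg=45.0):
          # p: (x, y, z) in view A; cbt_*: compressed breast thicknesses.
          r = 0.5 * (cbt_a + cbt_b)                # nominal uncompressed extent
          q = p * np.array([1.0, 1.0, r / (0.5 * cbt_a)])   # decompress in z
          a = np.deg2rad(angle_deg)                # rotate between the two views
          R = np.array([[1.0, 0.0, 0.0],
                        [0.0, np.cos(a), -np.sin(a)],
                        [0.0, np.sin(a), np.cos(a)]])
          q = R @ q
          return q * np.array([1.0, 1.0, (0.5 * cbt_b) / r])  # recompress in z

      print(corresponding_point(np.array([30.0, 40.0, 20.0]), 50.0, 60.0))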

  12. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

    This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either irregular ECG signals or QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
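
    A sketch of the preprocessing chain under strong simplifications: scipy's find_peaks acts as a naive QRS detector, beats are sorted by period, and edge padding stands in for the paper's length-equalization step; the resulting 2-D array would then be handed to a JPEG2000 codec.

      import numpy as np
      from scipy.signal import find_peaks

      def ecg_to_image(ecg, fs):
          # 1) crude QRS detection: prominent peaks at least 0.4 s apart
          peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                                prominence=0.3 * np.ptp(ecg))
          beats = [ecg[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
          # 2) period sorting: order beats by length so rows vary smoothly
          beats.sort(key=len)
          # 3) length equalization: pad every beat to the longest period
          width = len(beats[-1])
          rows = [np.pad(b, (0, width - len(b)), mode="edge") for b in beats]
          return np.vstack(rows)   # beats stacked as image rows

      # The array returned here is what an image codec such as JPEG2000
      # (e.g., via glymur or OpenJPEG, availability permitting) would compress.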

  13. Holographic techniques for cellular fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Myung K.

    2017-04-01

    We have constructed a prototype instrument for holographic fluorescence microscopy (HFM) based on self-interference incoherent digital holography (SIDH) and demonstrate novel imaging capabilities such as differential 3D fluorescence microscopy and optical sectioning by compressive sensing.

  14. Through-the-earth radio

    DOEpatents

    Reagor, David [Los Alamos, NM; Vasquez-Dominguez, Jose [Los Alamos, NM

    2006-05-09

    A method and apparatus for effective through-the-earth communication involves a signal input device connected to a transmitter operating at a predetermined frequency sufficiently low to effectively penetrate useful distances through the earth. An analog-to-digital converter receives the signal input and passes it to a data compression circuit that is connected to an encoding processor, whose output is provided to a digital-to-analog converter. An amplifier receives the analog output from the digital-to-analog converter, amplifies it, and outputs it to an antenna. A receiver having an antenna receives the analog output and passes the analog signal to a band pass filter whose output is connected to an analog-to-digital converter that provides a digital signal to a decoding processor, whose output is connected to a data decompressor; the data decompressor provides a decompressed digital signal to a digital-to-analog converter. An audio output device receives the analog output from the digital-to-analog converter to produce audible output.

  15. Advanced Military Pay System Concepts. Evaluation of Opportunities through Information Technology.

    DTIC Science & Technology

    1980-07-01

    transmitter (UART) to interface with a modem. The main processor was then responsible for input and output between main memory and the UART...digital, "run-length" encoding scheme which is very effective in reducing the amount of data to be transmitted. Machines of this type include a modem...Output control as well as data compression will be combined with appropriate modems or interfaces to digital transmission channels and microprocessor

  16. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
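
    A full-reference measurement of the kind described can be as simple as frame-averaged PSNR between the encoder input and the decoded output; PSNR is used here as a concrete example, since the study's exact metrics are not listed in the abstract.

      import numpy as np

      def psnr(reference, reconstructed, peak=255.0):
          # Full-reference quality metric for one frame.
          ref = reference.astype(np.float64)
          rec = reconstructed.astype(np.float64)
          mse = np.mean((ref - rec) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      def clip_psnr(frames_in, frames_out):
          # One data point of the system-level evaluation: one clip captured
          # under a given light level and color-processing setting.
          return float(np.mean([psnr(a, b) for a, b in zip(frames_in, frames_out)]))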

  17. Combining Digital Image Correlation and Acoustic Emission for Monitoring of the Strain Distribution until Yielding During Compression of Bovine Cancellous Bone

    NASA Astrophysics Data System (ADS)

    Tsirigotis, Athanasios; Deligianni, Despoina D.

    2017-12-01

    In this work, the surface heterogeneity in mechanical compressive strain of cancellous bone was investigated with digital image correlation (DIC). Moreover, the onset and progression of failure was studied by acoustic emission (AE). Cubic cancellous bone specimens, with a side of 15 mm, were obtained from bovine femur and kept frozen at -20°C until testing. Specimen strain was analyzed by measuring the change of distance between the platens (crosshead) and via an optical method, by following the strain evolution with a camera. Simultaneously, AE monitoring was performed. The experiments showed that the compressive Young's modulus determined by crosshead strain is underestimated by 23% in comparison to the optically determined strain. However, surface strain fields defined by DIC displayed steep strain gradients, which can be attributed to cancellous bone porosity and inhomogeneity. The cumulative number of events for the total AE activity recorded from the sensors showed that the activity started at a mean load level of 36% of the maximum load and indicated the initiation of micro-cracking phenomena. Further experiments, determining 3D strain with μCT apart from surface strain, are necessary to clarify the issue of strain inhomogeneity in cancellous bone.

  18. VINSON/AUTOVON Interface Applique for the Modem, Digital Data, AN/GSC-38

    DTIC Science & Technology

    1980-11-01

    Measurement Indication Result: Before Step 6 - None - Noise and beeping are heard in handset; After Step 7 - None - Noise and beeping disappear. Condition Measurement...linear range due to the compression used. Lowering the levels below the compression range may give increased linearity, but may cause signal-to-noise...are encountered where the bit error rate at 16 KB/S results in objectionable audio noise or causes the KY-58 to squelch. On these channels the bit

  19. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  20. Digital radiographic imaging transfer: comparison with plain radiographs.

    PubMed

    Averch, T D; O'Sullivan, D; Breitenbach, C; Beser, N; Schulam, P G; Moore, R G; Kavoussi, L R

    1997-04-01

    Advances in digital imaging and computer display technology have allowed development of clinical teleradiographic systems. There are limited data assessing the effectiveness of such systems when applied to urologic pathology. In an effort to appraise the effectiveness of teleradiology in identifying renal calculi, the accuracy of findings on transmitted radiographic images was compared with that of readings of the actual plain film. Plain films (KUB) were obtained from 26 patients who presented to the radiology department to rule out urinary calculous disease. The films were digitized by a radiograph scanner into ACR-NEMA 2 file format, compressed by a NASA algorithm, and transferred via a 28.8-kbps modem over standard telephone lines to a remote site 25 miles away, where they were decompressed and viewed on a 1600 x 1200-pixel monitor. Two attending urologists and two endourologic fellows were randomized to read either the transmitted image or the original radiograph with minimal clinical history provided. Of the 26 plain radiographic films, 24 were correctly interpreted by the fellows and 25 by the attending physicians (92% and 96% accuracy, respectively) for a total accuracy of 94% with no statistical difference (p = 0.16). After compression, all but one of the digital images were transferred successfully. The attending physicians correctly interpreted 24 of the 25 digital images (96%), whereas the fellows were correct on 21 interpretations (84%), resulting in a total 90% accuracy with a significant difference between the groups (p <= 0.04). Overall, no statistical difference between the interpretations of the plain film and the digital image was revealed (p = 0.21). Using available technology, KUB images can be transmitted to a remote site, and the location of a stone can be determined correctly. Higher accuracy is demonstrated by experienced surgeons.

  1. Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.

    1989-01-01

    The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
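
    A toy DPCM loop showing the closed-loop structure described. The previous reconstructed sample stands in for the codec's nonadaptive predictor, the level table is illustrative rather than the codec's actual nonuniform quantizer, and the multilevel Huffman stage is only indicated by the index stream.

      import numpy as np

      # Nonuniform quantizer: fine levels near zero, where DPCM residuals
      # concentrate, coarse levels for large transitions (illustrative).
      LEVELS = np.array([-64, -24, -8, -2, 0, 2, 8, 24, 64])

      def dpcm_encode(samples):
          pred, indices = 0, []
          for s in samples:
              idx = int(np.argmin(np.abs(LEVELS - (int(s) - pred))))
              indices.append(idx)            # index stream -> Huffman coder
              pred = int(np.clip(pred + LEVELS[idx], 0, 255))  # closed loop
          return indices

      def dpcm_decode(indices):
          pred, out = 0, []
          for idx in indices:
              pred = int(np.clip(pred + LEVELS[idx], 0, 255))
              out.append(pred)
          return np.array(out, dtype=np.uint8)

      line = (128 + 100 * np.sin(np.linspace(0, 6, 768))).astype(np.uint8)
      rec = dpcm_decode(dpcm_encode(line))
      print(int(np.abs(line.astype(int) - rec.astype(int)).max()))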

  2. GIS-based identification of active lineaments within the Krasnokamensk Area, Transbaikalia, Russia

    NASA Astrophysics Data System (ADS)

    Petrov, V. A.; Lespinasse, M.; Ustinov, S. A.; Cialec, C.

    2017-07-01

    Lineament analysis was carried out using detailed digital elevation models (DEM) of the Krasnokamensk Area, southeastern Transbaikalia (Russia). The results of this research confirm the presence of already known faults and also identify previously unknown fault zones. The primary focus was identifying small discontinuities and their relationship with extended fault zones. The developed technique allowed construction and identification of active lineaments together with the orientation of their compression and extension axes in the horizontal plane, their direction of shear movement (right- or left-lateral), and their geodynamic setting of formation (compression or extension). The identification of active faults and the definition of their kinematics on digital elevation models were confirmed by measuring the velocities and directions of modern horizontal surface motions using a geodesic GPS, as well as by identifying the principal stress axis directions of the modern stress field using modern-day earthquake data. The obtained results are deemed necessary for proper, rational environmental management decisions.

  3. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  4. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  5. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  6. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
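
    The residual-quantization idea is compact enough to sketch directly. Below, a previous-sample predictor stands in for the technique's multispectral predictor; with step 2*delta+1, the reconstruction error never exceeds delta, and delta = 0 degenerates to the lossless case.

      import numpy as np

      def near_lossless_encode(samples, delta):
          step = 2 * delta + 1
          recon, qres = 0, []
          for s in samples:
              q = int(round((int(s) - recon) / step))  # quantized residual
              qres.append(q)                           # -> entropy coder
              recon = recon + q * step                 # track decoder state
          return qres

      def near_lossless_decode(qres, delta):
          step, recon, out = 2 * delta + 1, 0, []
          for q in qres:
              recon = recon + q * step
              out.append(recon)
          return np.array(out)

      x = np.cumsum(np.random.default_rng(1).integers(-3, 4, 1000)) + 100
      for delta in (0, 1, 3):
          rec = near_lossless_decode(near_lossless_encode(x, delta), delta)
          print(delta, int(np.abs(x - rec).max()))     # always <= delta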

  7. Advanced digital SAR processing study

    NASA Technical Reports Server (NTRS)

    Martinson, L. W.; Gaffney, B. P.; Liu, B.; Perry, R. P.; Ruvin, A.

    1982-01-01

    A highly programmable, land based, real time synthetic aperture radar (SAR) processor requiring a processed pixel rate of 2.75 MHz or more in a four look system was designed. Variations in range and azimuth compression, number of looks, range swath, range migration and SAR mode were specified. Alternative range and azimuth processing algorithms were examined in conjunction with projected integrated circuit, digital architecture, and software technologies. The advanced digital SAR processor (ADSP) employs an FFT convolver algorithm for both range and azimuth processing in a parallel architecture configuration. Algorithm performance comparisons, system design, implementation tradeoffs and the results of a supporting survey of integrated circuit and digital architecture technologies are reported. Cost tradeoffs and projections with alternate implementation plans are presented.

  8. Subjective matters: from image quality to image psychology

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; De Ridder, Huib

    2013-03-01

    From the advent of digital imaging through several decades of studies, the human vision research community systematically focused on perceived image quality and digital artifacts due to resolution, compression, gamma, dynamic range, capture and reproduction noise, blur, etc., to help overcome existing technological challenges and shortcomings. Technological advances have made digital images and digital multimedia nearly flawless in quality and ubiquitous in usage, providing us with the exciting but at the same time demanding possibility of turning to the domain of human experience, including higher psychological functions such as cognition, emotion, awareness, social interaction, consciousness and Self. In this paper we outline the evolution of human-centered multidisciplinary studies related to imaging and propose steps and potential foci of future research.

  9. Level-dependent changes in detection of temporal gaps in noise markers by adults with normal and impaired hearing

    PubMed Central

    Horwitz, Amy R.; Ahlstrom, Jayne B.; Dubno, Judy R.

    2011-01-01

    Compression in the basilar-membrane input–output response flattens the temporal envelope of a fluctuating signal when more gain is applied to lower level than higher level temporal components. As a result, level-dependent changes in gap detection for signals with different depths of envelope fluctuation and for subjects with normal and impaired hearing may reveal effects of compression. To test these assumptions, gap detection with and without a broadband noise was measured with 1000-Hz-wide (flatter) and 50-Hz-wide (fluctuating) noise markers as a function of marker level. As marker level increased, background level also increased, maintaining a fixed acoustic signal-to-noise ratio (SNR) to minimize sensation-level effects on gap detection. Significant level-dependent changes in gap detection were observed, consistent with effects of cochlear compression. For the flatter marker, gap detection that declines with increases in level up to mid levels and improves with further increases in level may be explained by an effective flattening of the temporal envelope at mid levels, where compression effects are expected to be strongest. A flatter effective temporal envelope corresponds to a reduced effective SNR. The effects of a reduction in compression (resulting in larger effective SNRs) may contribute to better-than-normal gap detection observed for some hearing-impaired listeners. PMID:22087921

  10. Near-infrared Compressive Line Sensing Imaging System using Individually Addressable Laser Diode Array

    DTIC Science & Technology

    2015-05-11

    The Digital Micromirror Device (DMD) is a microelectromechanical (MEMS) device. A DMD consists of millions of electrostatically actuated micro-mirrors (or pixels...digital micromirror device) were analyzed. We discussed the effort of developing such a prototype...to Digital Micromirror Device (DMD) Technology", (n.d.) Retrieved May 1, 2011, from http://www.ti.com/lit/an/dlpa008a/dlpa008a.pdf. [16

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimthikul, Y.

    This thesis reviews the essential aspects of speech synthesis and distinguishes between the two prevailing techniques: compressed digital speech and phonemic synthesis. It then presents the hardware details of the five speech modules evaluated. FORTRAN programs were written to facilitate message creation and retrieval with four of the modules driven by a PDP-11 minicomputer. The fifth module was driven directly by a computer terminal. The compressed digital speech modules (T.I. 990/306, T.S.I. Series 3D and N.S. Digitalker) each contain a limited vocabulary produced by the manufacturers while both the phonemic synthesizers made by Votrax permit an almost unlimited set of sounds and words. A text-to-phoneme rules program was adapted for the PDP-11 (running under the RSX-11M operating system) to drive the Votrax Speech Pac module. However, the Votrax Type'N Talk unit has its own built-in translator. Comparison of these modules revealed that the compressed digital speech modules were superior in pronouncing words on an individual basis but lacked the inflection capability that permitted the phonemic synthesizers to generate more coherent phrases. These findings were necessarily highly subjective and dependent on the specific words and phrases studied. In addition, the rapid introduction of new modules by manufacturers will necessitate new comparisons. However, the results of this research verified that all of the modules studied do possess reasonable quality of speech that is suitable for man-machine applications. Furthermore, the development tools are now in place to permit the addition of computer speech output in such applications.

  12. Raynaud’s phenomenon and digital ischemia: a practical approach to risk stratification, diagnosis and management

    PubMed Central

    McMahan, Zsuzsanna H.; Wigley, Fredrick M.

    2015-01-01

    Digital ischemia is a painful and often disfiguring event. Such an ischemic event often leads to tissue loss and can significantly affect the patient’s quality of life. Digital ischemia can be secondary to a vasculopathy, vasculitis, embolic disease, trauma, or extrinsic vascular compression. It is an especially serious complication in patients with scleroderma. Risk stratification of patients with scleroderma at risk for digital ischemia is now possible with clinical assessment and autoantibody profiles. Because there are a variety of conditions that lead to digital ischemia, it is important to understand the pathophysiology underlying each ischemic presentation in order to target therapy appropriately. Significant progress has been made in the last two decades in defining the pathophysiological processes leading to digital ischemia in rheumatic diseases. In this article we review the risk stratification, diagnosis, and management of patients with digital ischemia and provide a practical approach to therapy, particularly in scleroderma. PMID:26523153

  13. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 db cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 db cutoff of 2000 Hz.
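
    The compression amplifier/expansion network pair can be illustrated with standard mu-law companding, used here as a stand-in for whatever characteristic the thesis employed; the signal, sample rate, and level count below follow the companded configuration (8 quantization levels, 5000 samples per second), but the numbers are otherwise illustrative.

      import numpy as np

      def compress_mu(x, mu=255.0):
          # Compression ahead of the ADC boosts low-level amplitudes.
          return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

      def expand_mu(y, mu=255.0):
          # Expansion after the DAC restores the original weighting.
          return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

      def quantize(x, n_levels=8):
          q = np.round((x * 0.5 + 0.5) * (n_levels - 1)) / (n_levels - 1)
          return q * 2.0 - 1.0

      t = np.linspace(0.0, 1.0, 5000)                 # 5000 samples/second
      weak = 0.1 * np.sin(2 * np.pi * 180 * t)        # low-level, consonant-like
      direct = quantize(weak)
      companded = expand_mu(quantize(compress_mu(weak)))
      print(float(np.abs(weak - direct).max()),       # companding preserves the
            float(np.abs(weak - companded).max()))    # weak signal far better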

  14. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurement obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). Force-standardized protocol led to widely variable compression parameters in Asian women. Based on phantom study, it is feasible to reduce compression force up to 32.5% with minimal effects on image quality and MGD.

  15. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement has induced a data-rate multiplied by 8 from SPOT4 to SPOT5 [1] and by 28 to PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene, in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions-of-interest (ROI) and non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most of CNES high resolution images are cloudy [1], significant mass-memory and transmission gains could be reached by simply detecting and suppressing (or heavily compressing) the areas covered by clouds. Since 2007, CNES has worked on a cloud detection module [3] as a simplification for on-board implementation of an already existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high level description language such as C or Matlab/Simulink [6].

  16. Random lattice structures. Modelling, manufacture and FEA of their mechanical response

    NASA Astrophysics Data System (ADS)

    Maliaris, G.; Sarafis, I. T.; Lazaridis, T.; Varoutoglou, A.; Tsakataras, G.

    2016-11-01

    The implementation of lightweight structures in various applications, especially in the aerospace and automotive industries and in orthopaedics, has become a necessity due to their exceptional mechanical properties with respect to reduced weight. In this work we present a Voronoi tessellation based algorithm developed for modelling stochastic lattice structures. With the proposed algorithm, it is possible to generate CAD geometry with controllable structural parameters, such as porosity, cell number and strut thickness. The digital structures were transformed into physical objects through the combination of 3D printing techniques and investment casting, and this process was applied to check the mechanical behaviour of the generated digital models. Until now, materializing such structures into physical objects was feasible only through 3D printing methods such as Selective Laser Sintering/Melting (SLS/SLM). Investment casting possesses numerous advantages over SLS or SLM, the major one being the variety of materials available. On the other hand, several trials are required to calibrate the process parameters for successful castings, which is the major drawback of investment casting. The manufactured specimens were subjected to compression tests, where their mechanical response was registered in the form of compressive load - displacement curves. Also, a finite element model was developed using the specimens' CAD data and compression test parameters. The FE-assisted calculation of specimen plastic deformation is identical to that of the physical object, which validates the conclusions drawn from the simulation results. As observed, strut contact is initiated when specimen deformation is approximately 5 mm. Although the FE-calculated compressive force follows the same trend for the first 3 mm of compression, it then diverges because of the elasto-plastic FE model type definition and the remeshing steps that occur.
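
    The tessellation step can be sketched with scipy: random seeds fill a cube, the Voronoi ridge polygons supply the cell faces, and their edges become strut candidates. The seed count, cube size, and in-bounds filter below are arbitrary choices, and edge de-duplication and CAD export are omitted.

      import numpy as np
      from scipy.spatial import Voronoi

      rng = np.random.default_rng(42)
      side = 15.0                                  # cube side (mm), arbitrary
      seeds = rng.random((60, 3)) * side           # random cell seeds
      vor = Voronoi(seeds)

      struts = []                                  # each strut: (start_xyz, end_xyz)
      for ridge in vor.ridge_vertices:             # polygonal faces between cells
          poly = [v for v in ridge if v != -1]     # drop vertices at infinity
          for a, b in zip(poly, poly[1:] + poly[:1]):
              p, q = vor.vertices[a], vor.vertices[b]
              if np.all((p >= 0) & (p <= side)) and np.all((q >= 0) & (q <= side)):
                  struts.append((p, q))            # keep edges inside the cube

      print(len(struts), "strut candidates")
      # Sweeping each edge with a circular profile of the chosen strut
      # thickness in a CAD kernel yields the printable geometry; porosity
      # is tuned through the seed count and strut radius.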

  17. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximizing its contribution to fighter safety.

  18. Computing mammographic density from a multiple regression model constructed with image-acquisition parameters from a full-field digital mammographic unit

    PubMed Central

    Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.

    2009-01-01

    Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g. breast compression thickness, radiological thickness, radiation dose, compression force, etc) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively, automatically bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343
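
    The modeling step has the shape of an ordinary multiple regression; the sketch below fits one on synthetic data, where the feature list mirrors the predictors named above but every value and coefficient is hypothetical.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 200
      X = np.column_stack([
          rng.uniform(20, 80, n),    # breast compression thickness (mm)
          rng.uniform(0.5, 3.0, n),  # radiation dose (mGy)
          rng.uniform(50, 200, n),   # compression force (N)
          rng.uniform(300, 900, n),  # mean pixel intensity of the breast
          rng.uniform(50, 250, n),   # pixel-intensity standard deviation
      ])
      # Hypothetical ground-truth thresholds from the manual HSM step.
      threshold = X @ np.array([1.2, 15.0, -0.1, 0.8, 0.5]) + rng.normal(0, 20, n)

      model = LinearRegression().fit(X, threshold)
      print(model.score(X, threshold))   # analogous to the reported model R2
      # model.predict on a new image's parameters then yields a segmentation
      # threshold directly, bypassing manual histogram segmentation.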

  19. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in that the binary values that MPT produces as a result, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problem of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
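
    The MPT core is a small closed-form computation (Tsai's bilevel solution, reproduced here from the standard derivation); the sketch recovers the two representative values and forms the root-sum-square that carries the watermark bit, with the quantization-embedding step only indicated in comments.

      import numpy as np

      def mpt(block):
          # Two representative values z0 < z1 and the fraction p0 mapped to
          # z0, chosen to preserve the first three sample moments.
          x = np.asarray(block, dtype=np.float64).ravel()
          m1, m2, m3 = (np.mean(x ** k) for k in (1, 2, 3))
          cd = m2 - m1 * m1
          c0 = (m1 * m3 - m2 * m2) / cd
          c1 = (m1 * m2 - m3) / cd
          disc = np.sqrt(c1 * c1 - 4.0 * c0)
          z0, z1 = (-c1 - disc) / 2.0, (-c1 + disc) / 2.0
          p0 = (z1 - m1) / (z1 - z0)
          return z0, z1, p0

      block = np.random.default_rng(3).normal(0.0, 1.0, 1024)
      z0, z1, p0 = mpt(block)
      rss = np.hypot(z0, z1)
      # Embedding: quantize rss onto a lattice whose cells alternate between
      # bit 0 and bit 1, then rescale the block (within audibility limits)
      # so its representative values land in the cell for the desired bit.
      print(z0, z1, p0, rss)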

  20. Shear sensing in bonded composites with cantilever beam microsensors and dual-plane digital image correlation

    NASA Astrophysics Data System (ADS)

    Baur, Jeffery W.; Slinker, Keith; Kondash, Corey

    2017-04-01

    Understanding the shear strain, viscoelastic response, and onset of damage within bonded composites is critical to their design, processing, and reliability. This presentation will discuss the multidisciplinary research which led to the conception, development, and demonstration of two methods for measuring the shear within a bonded joint - dual-plane digital image correlation (DIC) and a micro-cantilever shear sensor. The dual-plane DIC method was developed to measure the strain field on opposing sides of a transparent single-lap joint in order to spatially quantify the joint shear strain. The sensor consists of a single glass fiber cantilever beam with a radially-grown forest of carbon nanotubes (CNTs) within a capillary pore. When the fiber is deflected, the internal radial CNT array is compressed against an electrode within the pore and the corresponding decrease in electrical resistance is correlated with the external loading. When this small, simple, and low-cost sensor was integrated within a composite bonded joint and cycled in tension, the onset of damage prior to joint failure was observed. In a second sample configuration, both the dual-plane DIC and the hair-like sensor detected viscoplastic changes in the strain of the sample in response to continued loading.

  1. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  2. The effect of hydraulic bed movement on the quality of chest compressions.

    PubMed

    Park, Maeng Real; Lee, Dae Sup; In Kim, Yong; Ryu, Ji Ho; Cho, Young Mo; Kim, Hyung Bin; Yeom, Seok Ran; Min, Mun Ki

    2017-08-01

    The hydraulic height control systems of hospital beds provide convenience and shock absorption. However, movement in a hydraulic bed may reduce the effectiveness of chest compressions. This study investigated the effects of hydraulic bed movement on chest compressions. Twenty-eight participants were recruited for this study. All participants performed chest compressions for 2 min on a manikin on three surfaces: the floor (Day 1), a firm plywood bed (Day 2), and a hydraulic bed (Day 3). The 28 participants' Day 1 performance served as the control, and their Day 2 and Day 3 performances as the study conditions. The compression rates, depths, and good compression ratios (>5-cm compressions/all compressions) were compared between the three surfaces. When we compared the three surfaces, we did not detect a significant difference in the speed of chest compressions (p=0.582). However, significantly lower values were observed on the hydraulic bed in terms of compression depth (p=0.001) and the good compression ratio (p=0.003) compared to floor compressions. When we compared the plywood and hydraulic beds, we did not detect significant differences in compression depth (p=0.351) or the good compression ratio (p=0.391). These results indicate that the movements of our hydraulic bed were associated with a non-statistically significant trend towards lower-quality chest compressions. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    PubMed

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested in a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate) the larger the benefits obtained from compression.

  4. Convergent Technologies in Distance Learning Delivery.

    ERIC Educational Resources Information Center

    Wheeler, Steve

    1999-01-01

    Describes developments in British education in distance learning technologies. Highlights include networking the rural areas; communication, community, and paradigm shifts; digital compression techniques and telematics; Web-based material delivered over the Internet; system flexibility; social support; learning support; videoconferencing; and…

  5. Data compressor designed to improve recognition of magnetic phases

    NASA Astrophysics Data System (ADS)

    Vogel, E. E.; Saravia, G.; Cortez, L. V.

    2012-02-01

    Data compressors available on the web have been used to determine magnetic phases for two-dimensional (2D) systems [E. Vogel, G. Saravia, F. Bachmann, B. Fierro, J. Fischer, Phase transitions in Edwards-Anderson model by means of information theory, Physica A 388 (2009) 4075-4082]. In the present work, we push this line forward along four different directions. First, the compressor itself: we design a new data compressor, named wlzip, optimized for the recognition of data carrying physical (or scientific) information, rather than the generic digital information usually compressed. Second, for the first time we extend the data compression analysis to the 3D Ising ferromagnetic model using wlzip. Third, we discuss the tuning possibilities of wlzip in terms of the number of digits considered in the compression to yield maximum definition; in this way, the transition temperatures of both 2D and 3D Ising ferromagnets can be reported with very good resolution. Fourth, extension of the time window over which the data file is actually compressed is also considered, to obtain optimum accuracy. The paper focuses on the new compressor, its algorithm in general, and the way to apply it. Advantages and disadvantages of wlzip are discussed. Toward the end, we mention other possible applications of this technique to recognize stable and unstable regimes in the evolution of variables in meteorology (such as pollution content or atmospheric pressure), biology (blood pressure), and econophysics (stock market prices).
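
    wlzip itself is not specified in this abstract, so the sketch below uses zlib as a stand-in to illustrate the underlying idea: an ordered (low-temperature) spin configuration compresses far better than a disordered (high-temperature) one, so tracking a compression ratio against temperature can locate the transition.

    ```python
    import zlib
    import numpy as np

    def compressibility(config: np.ndarray) -> float:
        """Compressed-to-raw size ratio of a bit-packed spin configuration;
        ordered phases compress far better than disordered ones."""
        raw = np.packbits(config.ravel() > 0).tobytes()
        return len(zlib.compress(raw, 9)) / len(raw)

    rng = np.random.default_rng(0)
    ordered = np.ones((64, 64), dtype=np.int8)        # T << Tc: aligned spins
    disordered = rng.choice([-1, 1], size=(64, 64))   # T >> Tc: random spins
    print(compressibility(ordered), compressibility(disordered))
    ```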

  6. Impact of Altering Various Image Parameters on Human Epidermal Growth Factor Receptor 2 Image Analysis Data Quality

    PubMed Central

    Pantanowitz, Liron; Liu, Chi; Huang, Yue; Guo, Huazhang; Rohde, Gustavo K.

    2017-01-01

    Introduction: The quality of data obtained from image analysis can be directly affected by several preanalytical (e.g., staining, image acquisition), analytical (e.g., algorithm, region of interest [ROI]), and postanalytical (e.g., computer processing) variables. Whole-slide scanners generate digital images that may vary depending on the type of scanner and device settings. Our goal was to evaluate the impact of altering brightness, contrast, compression, and blurring on image analysis data quality. Methods: Slides from 55 patients with invasive breast carcinoma were digitized to include a spectrum of human epidermal growth factor receptor 2 (HER2) scores analyzed with Visiopharm (30 cases with score 0, 10 with 1+, 5 with 2+, and 10 with 3+). For all images, an ROI was selected and four parameters (brightness, contrast, JPEG2000 compression, out-of-focus blurring) then serially adjusted. HER2 scores were obtained for each altered image. Results: HER2 scores decreased with increased illumination, higher compression ratios, and increased blurring. HER2 scores increased with greater contrast. Cases with HER2 score 0 were least affected by image adjustments. Conclusion: This experiment shows that variations in image brightness, contrast, compression, and blurring can have major influences on image analysis results. Such changes can result in under- or over-scoring with image algorithms. Standardization of image analysis is recommended to minimize the undesirable impact such variations may have on data output. PMID:28966838

  7. Compressive sensing for efficient health monitoring and effective damage detection of structures

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul

    2017-02-01

    Real-world Structural Health Monitoring (SHM) systems consist of hundreds of sensors, each generating extremely large amounts of data, often raising the issue of the cost associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSNs). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS), a very recent development, introduces the means of accurately reproducing a signal from far fewer samples than the Nyquist rate requires. CS achieves this by exploiting the sparsity of the signal. By reducing the number of acquired samples, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results with two damage detection algorithms, showing that CS did not compromise the crucial information on structural damage during the compression process.
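
    The abstract does not name the reconstruction algorithm; a minimal CS pipeline can nevertheless be sketched with a random Gaussian sensing matrix and Orthogonal Matching Pursuit, a standard greedy sparse solver, assuming the structural response is sparse in the chosen basis. All sizes below are illustrative.

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))       # most correlated atom
            support.append(j)
            xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ xs                # re-fit on the support
        x = np.zeros(A.shape[1])
        x[support] = xs
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 5                    # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
    y = A @ x                                       # compressed measurements (CR = 4)
    x_hat = omp(A, y, k)
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```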

  8. Watermarking of ultrasound medical images in teleradiology using compressed watermark

    PubMed Central

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq

    2016-01-01

    The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared on bit reduction and compression ratio. LZW performed best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
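
    A minimal sketch of the embedding step, assuming an 8-bit grayscale image and a rectangular ROI, with zlib standing in for the LZW coder compared in the paper and SHA-256 standing in for its (unspecified) hash:

    ```python
    import hashlib
    import zlib
    import numpy as np

    def embed_watermark(img: np.ndarray, roi, n_lsb: int = 2) -> np.ndarray:
        """Compress (ROI bytes + their hash) losslessly and hide the payload
        in the n_lsb least significant bits of the RONI pixels (img: uint8)."""
        r0, r1, c0, c1 = roi
        roi_bytes = img[r0:r1, c0:c1].tobytes()
        payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest(), 9)
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        out = img.copy()
        mask = np.ones_like(img, dtype=bool)
        mask[r0:r1, c0:c1] = False              # RONI = everything outside the ROI
        roni = out[mask]
        capacity = roni.size * n_lsb
        if bits.size > capacity:
            raise ValueError("RONI too small for the payload")
        bits = np.pad(bits, (0, capacity - bits.size))
        chunks = bits.reshape(-1, n_lsb)        # n_lsb payload bits per pixel
        values = (chunks * (1 << np.arange(n_lsb - 1, -1, -1))).sum(axis=1)
        roni = (roni & ~np.uint8((1 << n_lsb) - 1)) | values.astype(np.uint8)
        out[mask] = roni
        return out
    ```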

  9. Solid state VRX CT detector

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Sambari, Aniket; Jordan, Lawrence M.; Laughter, Joseph S.; Zou, Ping

    2000-04-01

    A technique called Variable-Resolution X-ray (VRX) detection that greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) is presented. The technique is based on a principle called 'projective compression' that allows the resolution element of a CT detector to scale with the subject or field size. For very large (40-50 cm) field sizes, resolution exceeding 2 cy/mm is possible, and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. Preliminary results from a 576-channel solid-state detector are presented. The detector has a dual-arm geometry and comprises CdWO4 scintillator crystals arranged in 24 modules of 24 channels/module. The scintillators are 0.85 mm wide and placed on 1 mm centers. Measurements of signal level, MTF, and SNR, all as functions of detector angle, are presented.

  10. Comparison of VRX CT scanners geometries

    NASA Astrophysics Data System (ADS)

    DiBianca, Frank A.; Melnyk, Roman; Duckworth, Christopher N.; Russ, Stephan; Jordan, Lawrence M.; Laughter, Joseph S.

    2001-06-01

    A technique called Variable-Resolution X-ray (VRX) detection greatly increases the spatial resolution in computed tomography (CT) and digital radiography (DR) as the field size decreases. The technique is based on a principle called 'projective compression' that allows both the resolution element and the sampling distance of a CT detector to scale with the subject or field size. For very large (40-50 cm) field sizes, resolution exceeding 2 cy/mm is possible, and for very small fields, microscopy is attainable with resolution exceeding 100 cy/mm. This paper compares the benefits obtainable with two different VRX detector geometries: the single-arm geometry and the dual-arm geometry. The analysis is based on Monte Carlo simulations and direct calculations. The results of this study indicate that the dual-arm system appears to have more advantages than the single-arm technique.

  11. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
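
    The core bit manipulation is compact enough to sketch. The function below is a simplified reading of the idea, not the NCO code; the digit-to-bit conversion and the handling of rounding and special values are assumptions. The quantized trailing bits are highly redundant, which is what lets the subsequent DEFLATE pass reclaim the space.

    ```python
    import numpy as np

    def bit_groom(a: np.ndarray, nsd: int) -> np.ndarray:
        """Quantize float32 data to about `nsd` significant decimal digits by
        alternately shaving (zeroing) and setting (to one) the trailing
        mantissa bits of consecutive values."""
        keep = int(np.ceil(nsd * np.log2(10))) + 1   # mantissa bits to preserve
        drop = 23 - keep                             # float32 has 23 explicit bits
        if drop <= 0:
            return a.astype(np.float32)
        bits = a.astype(np.float32).ravel().view(np.uint32).copy()
        shave = np.uint32((0xFFFFFFFF >> drop) << drop)   # zeros in the low bits
        setm = np.uint32((1 << drop) - 1)                 # ones in the low bits
        bits[0::2] &= shave               # shave even-indexed values (round down)
        bits[1::2] |= setm                # set odd-indexed values (round up)
        return bits.view(np.float32).reshape(a.shape)

    x = np.full(4, np.pi, dtype=np.float32)
    print(bit_groom(x, nsd=3))            # alternates slightly below/above pi
    ```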

  12. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.

  13. Advanced application flight experiment breadboard pulse compression radar altimeter program

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design, development, and performance of the pulse compression radar altimeter are described. The high-resolution breadboard system is designed to operate from an aircraft at 10 kft above the ocean and to accurately measure altitude, sea wave height, and sea reflectivity. The minicomputer-controlled Ku-band system provides six basic variables and an extensive digital recording capability for experimentation purposes. Signal bandwidths of 360 MHz are obtained using a reflective array compression line. Stretch processing is used to achieve 1000:1 pulse compression. The system range command LSB is 0.62 ns, or 9.25 cm. A second-order altitude tracker, aided by accelerometer inputs, is implemented in the system software. During flight tests the system demonstrated an altitude resolution capability of 2.1 cm and sea wave height estimation accuracy of 10%. The altitude measurement performance exceeds that of the Skylab and GEOS-C predecessors by approximately an order of magnitude.

  14. Method and apparatus for data sampling

    DOEpatents

    Odell, Daniel M. C.

    1994-01-01

    A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
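
    A sketch of the discrimination idea on digitized samples (thresholds and waveforms invented, not the patented apparatus):

    ```python
    import numpy as np

    def detect_events(samples: np.ndarray, baseline: float, threshold: float):
        """Return (start, stop) index pairs of sample runs that exceed
        baseline + threshold, i.e. the samples representing detected events;
        everything else can be discarded as non-event samples."""
        above = np.concatenate(([False], samples > baseline + threshold, [False]))
        edges = np.flatnonzero(np.diff(above.astype(np.int8)))
        return list(zip(edges[0::2], edges[1::2]))    # rising/falling edge pairs

    rng = np.random.default_rng(2)
    trace = rng.normal(0.0, 1.0, 1000)     # noise-only detector output
    trace[400:420] += 25.0                 # one synthetic pulse
    print(detect_events(trace, baseline=0.0, threshold=10.0))   # [(400, 420)]
    ```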

  15. Image processing in forensic pathology.

    PubMed

    Oliver, W R

    1998-03-01

    Image processing applications in forensic pathology are becoming increasingly important. This article introduces basic concepts in image processing as applied to problems in forensic pathology in a non-mathematical context. Discussions of contrast enhancement, digital encoding, compression, deblurring, and other topics are presented.

  16. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy; that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. To exploit correlations within the source signal and thereby increase the signal-to-noise ratio (SNR), a compression method based on differential pulse code modulation (DPCM) is presented.
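
    A one-row sketch of the DPCM idea, previous-pixel prediction with lossless reconstruction by cumulative summation (illustrative, not the paper's exact scheme):

    ```python
    import numpy as np

    def dpcm_encode(row: np.ndarray) -> np.ndarray:
        """Previous-pixel DPCM: transmit differences, whose entropy is much
        lower than that of the raw pixels for correlated medical images."""
        residual = row.astype(np.int16)
        residual[1:] -= row[:-1].astype(np.int16)
        return residual

    def dpcm_decode(residual: np.ndarray) -> np.ndarray:
        return np.cumsum(residual).astype(np.uint8)   # invert the differencing

    row = np.array([100, 101, 103, 103, 102, 104], dtype=np.uint8)
    assert np.array_equal(dpcm_decode(dpcm_encode(row)), row)   # lossless round trip
    ```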

  17. Development and application of absolute quantitative detection by duplex chamber-based digital PCR of genetically modified maize events without pretreatment steps.

    PubMed

    Zhu, Pengyu; Fu, Wei; Wang, Chenguang; Du, Zhixin; Huang, Kunlun; Zhu, Shuifang; Xu, Wentao

    2016-04-15

    The possibility of absolute quantitation of GMO events by digital PCR was recently reported. However, most absolute quantitation methods based on digital PCR require pretreatment steps. Meanwhile, singleplex detection cannot meet the demands of absolute quantitation of GMO events, which is based on the ratio of foreign fragments to reference genes. Thus, to promote the absolute quantitative detection of different GMO events by digital PCR, we developed a quantitative detection method based on duplex digital PCR without pretreatment. We tested 7 GMO events in our study to evaluate the fitness of our method. The optimized combination of foreign and reference primers, the limit of quantitation (LOQ), the limit of detection (LOD), and the specificity were validated. The results showed that the LOQ of our method for different GMO events was 0.5%, while the LOD was 0.1%. Additionally, we found that duplex digital PCR achieved detection results with lower RSD than singleplex digital PCR. In summary, the duplex digital PCR detection system is a simple and stable way to achieve absolute quantitation of different GMO events, and the LOQ and LOD indicate that the method is suitable for routine detection and quantitation of GMO events. Copyright © 2016 Elsevier B.V. All rights reserved.
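
    The abstract does not spell out the quantitation formula; chamber-based digital PCR conventionally converts the fraction of positive chambers to a mean copy number per chamber with a Poisson correction, and a duplex readout then gives the GM ratio directly. A sketch with illustrative numbers:

    ```python
    import math

    def copies_per_partition(n_positive: int, n_total: int) -> float:
        """Poisson correction: mean template copies per chamber implied by
        the observed fraction of positive chambers."""
        return -math.log(1.0 - n_positive / n_total)

    def gm_ratio(pos_event: int, pos_ref: int, n_total: int) -> float:
        """GM content as the ratio of event-specific copies to copies of the
        endogenous reference gene, both read from one duplex reaction."""
        return copies_per_partition(pos_event, n_total) / \
               copies_per_partition(pos_ref, n_total)

    # e.g. 120 of 20000 chambers positive for the event, 11500 for the reference
    print(f"GM content: {100 * gm_ratio(120, 11500, 20000):.2f}%")
    ```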

  18. Sparsity based target detection for compressive spectral imagery

    NASA Astrophysics Data System (ADS)

    Boada, David Alberto; Arguello Fuentes, Henry

    2016-09-01

    Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube, thereby easing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in the compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.

  19. Elastic properties of rigid fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Chen, J.; Thorpe, M. F.; Davis, L. C.

    1995-05-01

    We study the elastic properties of rigid fiber-reinforced composites with perfect bonding between fibers and matrix, and also with sliding boundary conditions. In the dilute region, there exists an exact analytical solution. Around the rigidity threshold we find the elastic moduli and Poisson's ratio by decomposing the deformation into a compression mode and a rotation mode. For perfect bonding, both modes are important, whereas only the compression mode is operative for sliding boundary conditions. We employ the digital-image-based method and a finite element analysis to perform computer simulations which confirm our analytical predictions.

  20. Energy and Quality Evaluation for Compressive Sensing of Fetal Electrocardiogram Signals

    PubMed Central

    Da Poian, Giulia; Brandalise, Denis; Bernardini, Riccardo; Rinaldo, Roberto

    2016-01-01

    This manuscript addresses the problem of non-invasive fetal Electrocardiogram (ECG) signal acquisition with low power/low complexity sensors. A sensor architecture using the Compressive Sensing (CS) paradigm is compared to a standard compression scheme using wavelets in terms of energy consumption vs. reconstruction quality, and, more importantly, vs. performance of fetal heart beat detection in the reconstructed signals. We show in this paper that a CS scheme based on reconstruction with an over-complete dictionary has similar reconstruction quality to one based on wavelet compression. We also consider, as a more important figure of merit, the accuracy of fetal beat detection after reconstruction as a function of the sensor power consumption. Experimental results with an actual implementation in a commercial device show that CS allows significant reduction of energy consumption in the sensor node, and that the detection performance is comparable to that obtained from original signals for compression ratios up to about 75%. PMID:28025510

  1. An evaluation of information-theoretic methods for detecting structural microbial biosignatures.

    PubMed

    Wagstaff, Kiri L; Corsetti, Frank A

    2010-05-01

    The first observations of extraterrestrial environments will most likely be in the form of digital images. Given an image of a rock that contains layered structures, is it possible to determine whether the layers were created by life (biogenic)? While conclusive judgments about biogenicity are unlikely to be made solely on the basis of image features, an initial assessment of the importance of a given sample can inform decisions about follow-up searches for other types of possible biosignatures (e.g., isotopic or chemical analysis). In this study, we evaluated several quantitative measures that capture the degree of complexity in visible structures, in terms of compressibility (to detect order) and the entropy (spread) of their intensity distributions. Computing complexity inside a sliding analysis window yields a map of each of these features that indicates how they vary spatially across the sample. We conducted experiments on both biogenic and abiogenic terrestrial stromatolites and on laminated structures found on Mars. The degree to which each feature separated biogenic from abiogenic samples (separability) was assessed quantitatively. None of the techniques provided a consistent, statistically significant distinction between all biogenic and abiogenic samples. However, the PNG compression ratio provided the strongest distinction (2.80 in standard deviation units) and could inform future techniques. Increasing the analysis window size or the magnification level, or both, improved the separability of the samples. Finally, data from all four Mars samples plotted well outside the biogenic field suggested by the PNG analyses, although we caution against a direct comparison of terrestrial stromatolites and martian non-stromatolites.
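
    The two feature families named above, compressibility (order) and intensity entropy (spread) inside a sliding analysis window, can be sketched directly. zlib's DEFLATE stands in for the PNG coder (PNG pixel streams are DEFLATE-compressed), and the window and step sizes are arbitrary choices:

    ```python
    import zlib
    import numpy as np

    def complexity_maps(img: np.ndarray, win: int = 32, step: int = 16):
        """Slide a window over an 8-bit grayscale image and record, per
        position, the zlib compression ratio and the Shannon entropy of the
        intensity histogram."""
        h, w = img.shape
        ratios, entropies = [], []
        for r in range(0, h - win + 1, step):
            row_r, row_e = [], []
            for c in range(0, w - win + 1, step):
                patch = img[r:r + win, c:c + win]
                raw = patch.tobytes()
                row_r.append(len(zlib.compress(raw, 9)) / len(raw))
                p = np.bincount(patch.ravel(), minlength=256) / patch.size
                p = p[p > 0]
                row_e.append(float(-(p * np.log2(p)).sum()))
            ratios.append(row_r)
            entropies.append(row_e)
        return np.array(ratios), np.array(entropies)
    ```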

  2. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    PubMed

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition, automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that characterize the observed breast shapes. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO-view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
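
    A sketch of the modeling step, assuming the breast edges have already been extracted and resampled to a fixed length (the data below are random stand-ins for real edge vectors):

    ```python
    import numpy as np

    def pca_shape_model(edges: np.ndarray, n_components: int = 2):
        """Fit a PCA shape model to edge vectors (one row per mammogram)."""
        mean = edges.mean(axis=0)
        U, s, Vt = np.linalg.svd(edges - mean, full_matrices=False)
        basis = Vt[:n_components]              # eigenvectors of the edge set
        weights = (edges - mean) @ basis.T     # per-image shape parameters
        return mean, basis, weights

    def synthesize(mean, basis, params):
        """Generate a plausible new breast edge from chosen shape parameters."""
        return mean + params @ basis

    rng = np.random.default_rng(3)
    edges = rng.standard_normal((492, 200))    # stand-in for real edge vectors
    mean, basis, w = pca_shape_model(edges)
    new_edge = synthesize(mean, basis, np.array([1.5, -0.5]))
    ```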

  3. Method and apparatus for data sampling

    DOEpatents

    Odell, D.M.C.

    1994-04-19

    A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples is described. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.

  4. Networking of three dimensional sonography volume data.

    PubMed

    Kratochwil, A; Lee, A; Schoisswohl, A

    2000-09-01

    Three-dimensional (3D) sonography enables the examiner to store, instead of copies of single B-scan planes, a volume consisting of 300 scan planes. The volume is displayed on a monitor in the form of three orthogonal planes: longitudinal, axial, and coronal. Translation and rotation facilitate anatomical orientation and provide any arbitrary plane within the volume, generating organ-optimized scan planes. Different algorithms allow the extraction of different information, such as surfaces or bone structures by maximum mode, or fluid-filled structures such as vessels by minimum mode. The volume may also contain color information of vessels. The digitized information is stored on a magneto-optical disc. This allows virtual scanning in the absence of the patient under the same conditions as when the volume was originally stored. The volume size depends on various examiner-controlled settings; a volume may need a storage capacity of between 2 and 16 MB of 8-bit gray-level information. As such huge data sets are unsuitable for network transfer, data compression is of paramount interest. One hundred stored volumes were submitted to JPEG, MPEG, and biorthogonal wavelet compression. The original and compressed volumes were randomly shown on two monitors. In case of noticeable image degradation, information on the location of the original and compressed volume and the compression ratio was read. Numerical measures of compression fidelity, such as pixel error calculation and computation of the root-mean-square error, proved unsuitable for evaluating image degradation. The best results in recognizing image degradation were achieved by image experts, who disagreed on the ratio at which degradation became visible in only 4% of the volumes. Wavelet compression ratios of 20:1 or 30:1 could be used without discernible information loss. The effect of volume compression is reflected both in reduced transfer time and in reduced storage requirements: transmission of a 6 MB volume over a normal telephone line with a data rate of 56 kbit/s fell from 14 min to 28 s at a compression ratio of 30:1, and storage requirements fell from 6 MB uncompressed to 200 kB. This successful compression opens new possibilities for intra-hospital, extra-hospital, and global exchange of 3D sonography data. The key to this communication is not only volume compression, but also the fact that the 3D examination can be simulated on any PC with the developed 3D software, much as PACS teleradiology transmits digitized radiographs over standard telephone lines. Systems combined with the HIS and RIS management systems are available for archiving, for retrieval of images and reports, and for local and global communication. This form of telemedicine will have an impact on cost reduction in hospitals and on reduced transport costs, and on this foundation worldwide education and multi-center studies become possible.

  5. Tools for a Document Image Utility.

    ERIC Educational Resources Information Center

    Krishnamoorthy, M.; And Others

    1993-01-01

    Describes a project conducted at Rensselaer Polytechnic Institute (New York) that developed methods for automatically subdividing pages from technical journals into smaller semantic units for transmission, display, and further processing in an electronic environment. Topics discussed include optical scanning and image compression, digital image…

  6. Comparison of Digital Rectal Examination and Serum Prostate Specific Antigen in the Early Detection of Prostate Cancer: Results of a Multicenter Clinical Trial of 6,630 Men.

    PubMed

    Catalona, William J; Richie, Jerome P; Ahmann, Frederick R; Hudson, M'Liss A; Scardino, Peter T; Flanigan, Robert C; DeKernion, Jean B; Ratliff, Timothy L; Kavoussi, Louis R; Dalkin, Bruce L; Waters, W Bedford; MacFarlane, Michael T; Southwick, Paula C

    2017-02-01

    To compare the efficacy of digital rectal examination and serum prostate specific antigen (PSA) in the early detection of prostate cancer, we conducted a prospective clinical trial at 6 university centers of 6,630 male volunteers 50 years old or older who underwent PSA determination (Hybritech Tandem-E or Tandem-R assays) and digital rectal examination. Quadrant biopsies were performed if the PSA level was greater than 4 μg./l. or digital rectal examination was suspicious, even if transrectal ultrasonography revealed no areas suspicious for cancer. The results showed that 15% of the men had a PSA level of greater than 4 μg./l., 15% had a suspicious digital rectal examination and 26% had suspicious findings on either or both tests. Of 1,167 biopsies performed cancer was detected in 264. PSA detected significantly more tumors (82%, 216 of 264 cancers) than digital rectal examination (55%, 146 of 264, p = 0.001). The cancer detection rate was 3.2% for digital rectal examination, 4.6% for PSA and 5.8% for the 2 methods combined. Positive predictive value was 32% for PSA and 21% for digital rectal examination. Of 160 patients who underwent radical prostatectomy and pathological staging 114 (71%) had organ confined cancer: PSA detected 85 (75%) and digital rectal examination detected 64 (56%, p = 0.003). Use of the 2 methods in combination increased detection of organ confined disease by 78% (50 of 64 cases) over digital rectal examination alone. If the performance of a biopsy would have required suspicious transrectal ultrasonography findings, nearly 40% of the tumors would have been missed. We conclude that the use of PSA in conjunction with digital rectal examination enhances early prostate cancer detection. Prostatic biopsy should be considered if either the PSA level is greater than 4 μg./l. or digital rectal examination is suspicious for cancer, even in the absence of abnormal transrectal ultrasonography findings. Copyright © 1994 American Urological Association, Inc. Published by Elsevier Inc. All rights reserved.

  7. Effects of exposure equalization on image signal-to-noise ratios in digital mammography: A simulation study with an anthropomorphic breast phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Xinming; Lai Chaojen; Whitman, Gary J.

    Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times of exposure levels to the standard AEC timed technique (125 mAs) using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that with standard digital mammography. Two sets of FFDM images were acquired to allow for two identically, but independently, formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography. Results: We found that the highest achievable PSNR improvement factor was 1.89 for the anthropomorphic breast phantom used in this study. The overall PSNRs were measured to be 79.6 for the FFDM imaging and 107.6 for the simulated SEDM imaging on average in the compressed area of breast phantom, resulting in an average improvement of PSNR by ~35% with exposure equalization. We also found that the PSNRs appeared to be largely uniform with exposure equalization, and the standard deviations of PSNRs were estimated to be 10.3 and 7.9 for the FFDM imaging and the simulated SEDM imaging, respectively. The average glandular dose for SEDM was estimated to be 212.5 mrad, ~34% lower than that of standard AEC-timed FFDM (323.8 mrad) as a result of exposure equalization for the entire breast phantom. Conclusions: Exposure equalization was found to substantially improve image PSNRs in dense tissue regions and result in more uniform image PSNRs. This improvement may lead to better low-contrast performance in detecting and visualizing soft tissue masses and micro-calcifications in dense tissue areas for breast imaging tasks.

  8. Digital pathology: DICOM-conform draft, testbed, and first results.

    PubMed

    Zwönitzer, Ralf; Kalinski, Thomas; Hofmann, Harald; Roessner, Albert; Bernarding, Johannes

    2007-09-01

    Hospital information systems are state of the art nowadays. Therefore, Digital Pathology, also labelled as Virtual Microscopy, has gained increased attention. Triggered by radiology, standardized information models and workflows were world-wide defined based on DICOM. However, DICOM-conform integration of Digital Pathology into existing clinical information systems imposes new problems requiring specific solutions concerning the huge amount of data as well as the special structure of the data to be managed, transferred, and stored. We implemented a testbed to realize and evaluate the workflow of digitized slides from acquisition to archiving. The experiences led to the draft of a DICOM-conform information model that accounted for extensions, definitions, and technical requirements necessary to integrate digital pathology in a hospital-wide DICOM environment. Slides were digitized, compressed, and could be viewed remotely. Real-time transfer of the huge amount of data was optimized using streaming techniques. Compared to a recent discussion in the DICOM Working Group for Digital Pathology (WG26) our experiences led to a preference of a JPEG2000/JPIP-based streaming of the whole slide image. The results showed that digital pathology is feasible but strong efforts by users and vendors are still necessary to integrate Digital Pathology into existing information systems.

  9. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment

    PubMed Central

    Yumba, Wycliffe Kabaywe

    2017-01-01

    Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners’ responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss who had completed a lexical decision speed test (measuring cognitive processing speed) and semantic word-pair span test (SWPST, capturing WMC) participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR (“fast-acting compression”). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the types of signal processing algorithms used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and or allowed to acclimatize to binary masking noise reduction or fast-acting compression. PMID:28861009

  10. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  11. Compression fractures detection on CT

    NASA Astrophysics Data System (ADS)

    Bar, Amir; Wolf, Lior; Bergman Amitai, Orna; Toledano, Eyal; Elnekave, Eldad

    2017-03-01

    The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary-classified using a Convolutional Neural Network (CNN). Finally, a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.
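
    A minimal sketch of the pipeline's two learnable stages in PyTorch, with invented layer sizes (the abstract gives no architectural details):

    ```python
    import torch
    import torch.nn as nn

    class PatchCNN(nn.Module):
        """Per-patch feature extractor for sagittal spine patches."""
        def __init__(self, feat_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))

        def forward(self, x):
            return self.net(x)

    class SeriesRNN(nn.Module):
        """Aggregates per-patch features along the spine and predicts whether
        the series contains a compression fracture."""
        def __init__(self, feat_dim: int = 64):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, 32, batch_first=True)
            self.head = nn.Linear(32, 1)

        def forward(self, feats):              # feats: (batch, n_patches, feat_dim)
            _, h = self.rnn(feats)
            return torch.sigmoid(self.head(h[-1]))

    patches = torch.randn(1, 12, 1, 64, 64)    # 12 sagittal patches per study
    feats = PatchCNN()(patches.flatten(0, 1)).unflatten(0, (1, 12))
    print(SeriesRNN()(feats))                  # P(fracture in the series)
    ```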

  12. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
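
    A sketch of the WVQ idea, a DWT followed by vector quantization of a subband, using PyWavelets and SciPy's k-means; the subband choice, vector length, and codebook size are illustrative, and the paper's rate-constrained parameter assignment is omitted:

    ```python
    import numpy as np
    import pywt
    from scipy.cluster.vq import kmeans2

    def vq_subband(band: np.ndarray, vec_len: int = 4, n_code: int = 64):
        """Vector-quantize one wavelet subband: group coefficients into short
        vectors and map each to the index of its nearest k-means codeword."""
        n = band.size // vec_len * vec_len
        vecs = band.ravel()[:n].reshape(-1, vec_len)
        codebook, labels = kmeans2(vecs, n_code, minit="++")
        return codebook, labels

    field = np.random.default_rng(4).standard_normal((128, 128))  # stand-in data
    coeffs = pywt.wavedec2(field, "haar", level=2)   # [cA2, (cH2, cV2, cD2), ...]
    codebook, labels = vq_subband(coeffs[1][0])      # quantize the cH2 subband
    print(codebook.shape, labels.shape)              # store codebook + indices
    ```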

  13. A new method for digital video documentation in surgical procedures and minimally invasive surgery.

    PubMed

    Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S

    2003-02-01

    Documentation of surgical procedures is limited by the accuracy of description, which depends on the vocabulary and descriptive prowess of the surgeon. Even analog video recording could not solve the documentation problem satisfactorily, owing to the abundance of recorded material. Capturing the video digitally solves most of these problems under the circumstances described in this article. We developed a cheap and useful digital video capturing system that consists of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). Median time for editing a video clip was 12 min for an advanced user (including cutting, titling the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capturing of intraoperative video sequences in high quality. All documentation tasks can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility and surgical elbowroom are necessary. The cost is much lower than that of commercially available systems, and setting changes can be performed easily without trained specialists.

  14. Neural network-based landmark detection for mobile robot

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo

    1996-03-01

    The mobile robot essentially has only relative position data for the real world, yet in many cases it must know where it is located. A useful method in such cases is to detect landmarks in the real world and adjust the robot's position estimate using the detected landmarks. From this point of view, it is essential to develop a mobile robot that can accomplish path planning successfully using natural or artificial landmarks. However, artificial landmarks are often difficult to construct, and natural landmarks are very complicated to detect. In this paper, a method is described for acquiring, from the mobile robot's sensor data, the landmarks necessary for planning its path. The landmarks discussed here are natural ones, formed by compressing sensor data from the robot. The sensor data are compressed and memorized using a five-layer neural network called a sand-glass model: the network is trained so that its output reproduces its input, the robot's sensor data. The intermediate-layer output of the network then provides compressed data that express a landmark. Even if the sensor data are ambiguous or voluminous, landmark detection is easy because the data are compressed and classified by the neural network. Using the backward three layers, the compressed landmark data can be expanded to the original data at some level of fidelity.

  15. Dynamic compressive behavior of Pr-Nd alloy at high strain rates and temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Huanran; Cai Canyuan; Chen Danian

    2012-07-01

    Based on compressive tests of Pr-Nd alloy cylinder specimens at high strain rates and temperatures, quasi-static tests on an 810 material test system and dynamic tests under the first compressive loading in split Hopkinson pressure bar (SHPB) tests, this study determined a J-C type [G. R. Johnson and W. H. Cook, in Proceedings of Seventh International Symposium on Ballistics (The Hague, The Netherlands, 1983), pp. 541-547] compressive constitutive equation for Pr-Nd alloy. It was recorded by a high-speed camera that the Pr-Nd alloy cylinder specimens fractured during the first compressive loading in SHPB tests at high strain rates and temperatures. From the high-speed camera images, the critical strains of dynamic shearing instability for Pr-Nd alloy in SHPB tests were determined; these were consistent with estimates obtained using Batra and Wei's dynamic shearing instability criterion [R. C. Batra and Z. G. Wei, Int. J. Impact Eng. 34, 448 (2007)] together with the determined compressive constitutive equation of Pr-Nd alloy. The transmitted and reflected pulses of SHPB tests for Pr-Nd alloy cylinder specimens, computed with the determined compressive constitutive equation and Batra and Wei's dynamic shearing instability criterion, were consistent with the experimental data. The fractured Pr-Nd alloy cylinder specimens from the compressive tests were investigated using a 3D super-depth digital microscope and a scanning electron microscope.

  16. Analog voicing detector responds to pitch

    NASA Technical Reports Server (NTRS)

    Abel, R. S.; Watkins, H. E.

    1967-01-01

    Modified electronic voice encoder /Vocoder/ includes an independent analog mode of operation in addition to the conventional digital mode. The Vocoder is a bandwidth compression equipment that permits voice transmission over channels, having only a fraction of the bandwidth required for conventional telephone-quality speech transmission.

  17. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  18. New Technologies in Amplification: Applications to the Pediatric Population.

    ERIC Educational Resources Information Center

    Kopun, Judy

    1995-01-01

    Discussion of technological advances in amplification for children with hearing impairments focuses on the advantages and limitations of fitting children with devices that have features such as dynamic-range compression, multiband signal processing, multimemory capability, digital feedback reduction, and frequency transposition. (Author/DB)

  19. Digital Compositing Techniques for Coronal Imaging (Invited review)

    NASA Astrophysics Data System (ADS)

    Espenak, F.

    2000-04-01

    The solar corona exhibits a huge range in brightness which cannot be captured in any single photographic exposure. Short exposures show the bright inner corona and prominences, while long exposures reveal faint details in equatorial streamers and polar brushes. For many years, radial gradient filters and other analog techniques have been used to compress the corona's dynamic range in order to study its morphology. Such techniques demand perfect pointing and tracking during the eclipse, and can be difficult to calibrate. In the past decade, the speed, memory, and hard disk capacity of personal computers have rapidly increased as prices continue to drop. It is now possible to perform sophisticated image processing of eclipse photographs on commercially available computers. Software programs such as Adobe Photoshop permit combining multiple eclipse photographs into a composite image which compresses the corona's dynamic range and can reveal subtle features and structures. Algorithms and digital techniques used for processing 1998 eclipse photographs are discussed; they are equally applicable to the recent eclipse of 1999 August 11.

  20. Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.

    2010-01-01

    This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.

  1. [Vertebroplasty: state of the art].

    PubMed

    Chiras, J; Barragán-Campos, H M; Cormier, E; Jean, B; Rose, M; LeJean, L

    2007-09-01

    Over the last 10 years, there has been much development in the management of metastatic and osteoporotic vertebral compression fractures using vertebroplasty. This percutaneous, image-guided interventional radiology procedure allows stabilization of a vertebral body by injection of an acrylic cement and frequently results in significant symptomatic relief. During cement polymerisation, an exothermic reaction may destroy adjacent tumor cells. Advances have been made to reduce complications from extravasation of cement into veins or surrounding soft tissues. Safety relates to experience but also to technical parameters: optimal cement radio-density, an adequate digital fluoroscopy unit (single- or bi-plane digital angiography unit), and the development of cements other than PMMA to avoid the risk of adjacent vertebral compression fractures. The rate of symptomatic relief from vertebroplasty performed for its principal indications (vertebral hemangioma, metastases, osteoporotic fractures) reaches 90-95%. The rate of complications is about 2% for metastases and less than 0.5% for osteoporotic fractures. Vertebroplasty plays a major role in the management of specific bone-weakening vertebral lesions, obviating the need for kyphoplasty.

  2. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Astrophysics Data System (ADS)

    Postman, Marc

    1996-03-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two-volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  3. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
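
    The coding pipeline described above (clip segmentation, a data-dependent orthogonal basis, quantization, entropy coding of coefficients and bases) can be sketched as follows. This is a minimal illustration, assuming an SVD-derived basis and a uniform quantizer; the paper's exact basis construction, clip length, and entropy coder are not reproduced here:

      import numpy as np

      def compress_clip(clip, rank=8, step=0.01):
          # clip: (frames, channels) array holding one mocap segment.
          mean = clip.mean(axis=0)
          centered = clip - mean
          # Data-dependent orthogonal basis computed from the clip itself
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          basis = vt[:rank]                              # keep the strongest components
          coeffs = centered @ basis.T                    # transform to "frequency" domain
          q = np.round(coeffs / step).astype(np.int32)   # uniform quantization
          return q, basis, mean, step                    # entropy-code q and basis in practice

      def decompress_clip(q, basis, mean, step):
          return (q * step) @ basis + mean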

  4. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    NASA Technical Reports Server (NTRS)

    Postman, Marc

    1996-01-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  5. Selective encryption for H.264/AVC video coding

    NASA Astrophysics Data System (ADS)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
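
    In the spirit of SEH264Algorithm2, the start-code search and selective encryption step might look like the sketch below. The choice of AES in CTR mode, byte (rather than bit) granularity, and N = 16 are illustrative assumptions; a real implementation must also re-apply emulation-prevention so that ciphertext bytes do not emulate new start codes:

      import os
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      START = b"\x00\x00\x01"

      def encrypt_after_start_codes(stream: bytes, key: bytes, nonce: bytes, n: int = 16) -> bytes:
          # Encrypt the n bytes following each start code, leaving the rest in the clear.
          out = bytearray(stream)
          enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
          i = stream.find(START)
          while i != -1:
              j = i + len(START)
              block = bytes(out[j:j + n])
              out[j:j + len(block)] = enc.update(block)
              i = stream.find(START, j + n)      # resume the search after the encrypted span
          return bytes(out)

      # Usage: encrypt_after_start_codes(bitstream, key=os.urandom(16), nonce=os.urandom(16))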

  6. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    NASA Astrophysics Data System (ADS)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function obtained by phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.

  7. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
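
    A minimal sketch of such an optimal linear predictor follows. Solving the least-squares normal equations below is equivalent to the constrained mean-square-error minimization; the causal neighbor set is an illustrative choice, not the paper's 8-point template:

      import numpy as np

      def fit_predictor(dem, offsets=((-1, -1), (-1, 0), (-1, 1), (0, -1))):
          # dem: 2D elevation array; offsets: causal (dy, dx) neighbor positions.
          h, w = dem.shape
          pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
          rows, targets = [], []
          for y in range(pad, h):
              for x in range(pad, w - pad):
                  rows.append([dem[y + dy, x + dx] for dy, dx in offsets])
                  targets.append(dem[y, x])
          A = np.asarray(rows, dtype=float)
          b = np.asarray(targets, dtype=float)
          weights, *_ = np.linalg.lstsq(A, b, rcond=None)     # minimizes mean square error
          corrections = np.rint(b - A @ weights).astype(int)  # Huffman-code these residuals
          return weights, corrections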

  8. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to the well-known approaches to increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to the region of interest, is more efficient than alternative coding, where all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these limitations is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.

  9. Digital Data Compression Algorithm Performance Comparisons. Proposed NATO Standard Algorithm Provides Better Facsimile in a Noisy Communications Environment Than Present Tactical Digital Facsimile Algorithm.

    DTIC Science & Technology

    1981-04-30

  10. The utility of six over-the-counter (home) pregnancy tests.

    PubMed

    Cole, Laurence A

    2011-08-01

    The home pregnancy market is rapidly evolving. It has moved from detection of pregnancy on the day of missed menstrual bleeding to detection claims 4 days prior. It is moving from all-manual tests to digital tests, with a monitor reading the bands and informing women they are pregnant. A thorough study is needed to investigate the validity of claims and the evolving usefulness of devices. Studies were designed to examine the sensitivity and specificity of home tests and their abilities to detect pregnancy. The methods examined the abilities of tests to detect human chorionic gonadotropin (hCG), hyperglycosylated hCG, free β-subunit, and a mixture of these antigens in 40 individual early pregnancy urines. Using a mixture of hCG, hyperglycosylated hCG and free β-subunit typical for early pregnancy, the sensitivity of the First Response manual and digital tests was 5.5 mIU/mL, while the sensitivity of the EPT and ClearBlue brand manual and digital tests was 22 mIU/mL. On further evaluation, the First Response manual and digital tests both detected 97% of 120 pregnancies on the day of missed menstrual bleeding. The EPT manual and digital devices detected 54% and 67% of pregnancies, respectively, and the ClearBlue manual and digital devices detected 64% and 54% of pregnancies, respectively. First Response manual and digital claim >99% detection on the day of missed menses, and the results here suggest similar sensitivity for these two tests. The EPT and ClearBlue manual and digital tests make similar >99% claims; the data presented here dispute those elevated claims.

  11. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    The advances in digital signal processing have been progressing towards efficient use of memory and processing power. Both of these factors can be addressed by feasible image storage techniques that compute a minimal representation of the image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to characterize and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT key point descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the number of key points by matching their orientations and merging them within different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.

  12. Poisson’s Ratio Extrapolation from Digital Image Correlation Experiments

    DTIC Science & Technology

    2013-03-01

    …prior to dewetting. Also, it is often impractical to measure compressibility. Current rocket laboratory methods measure strains in propellants… [remainder of record is distribution-statement boilerplate and figure residue: "Damage Characterization of Propellants", "Dewetting Results"]

  13. Using Internet Audio to Enhance Online Accessibility

    ERIC Educational Resources Information Center

    Schwartz, Linda Matula

    2004-01-01

    Accessibility to online education programs is an important factor that requires continued research, improvement, and regulation. Particularly valuable in the enhancement of online accessibility is the Voice-over Internet Protocol (VOIP) medium. VOIP compresses analog voice data and converts it into digital packets for transmission over the…

  14. Double resonator cantilever accelerometer

    DOEpatents

    Koehler, Dale R.

    1984-01-01

    A digital quartz accelerometer includes a pair of spaced double-ended tuning forks fastened at one end to a base and at the other end through a spacer mass. Transverse movement of the resonator members stretches one and compresses the other, providing a differential frequency output which is indicative of acceleration.

  15. Compression of facsimile graphics for transmission over digital mobile satellite circuits

    NASA Astrophysics Data System (ADS)

    Dimolitsas, Spiros; Corcoran, Frank L.

    A technique for reducing the transmission requirements of facsimile images while maintaining high intelligibility in mobile communications environments is described. The algorithms developed are capable of achieving a compression of approximately 32 to 1. The technique focuses on the implementation of a low-cost interface unit suitable for facsimile communication between low-power mobile stations and fixed stations for both point-to-point and point-to-multipoint transmissions. This interface may be colocated with the transmitting facsimile terminals. The technique was implemented and tested by intercepting facsimile documents in a store-and-forward mode.

  16. Electronic polarization-division demultiplexing based on digital signal processing in intensity-modulation direct-detection optical communication systems.

    PubMed

    Kikuchi, Kazuro

    2014-01-27

    We propose a novel configuration of optical receivers for intensity-modulation direct-detection (IM · DD) systems, which can cope with dual-polarization (DP) optical signals electrically. Using a Stokes analyzer and a newly-developed digital signal-processing (DSP) algorithm, we can achieve polarization tracking and demultiplexing in the digital domain after direct detection. Simulation results show that the power penalty stemming from digital polarization manipulations is negligibly small.

  17. Compression-induced crystallization of amorphous indomethacin in tablets: characterization of spatial heterogeneity by two-dimensional X-ray diffractometry.

    PubMed

    Thakral, Naveen K; Mohapatra, Sarat; Stephenson, Gregory A; Suryanarayanan, Raj

    2015-01-05

    Tablets of amorphous indomethacin were compressed at 10, 25, 50, or 100 MPa using either an unlubricated or a lubricated die and stored individually at 35 °C in sealed Mylar pouches. At selected time points, tablets were analyzed by two-dimensional X-ray diffractometry (2D-XRD), which enabled us to profile the extent of drug crystallization in tablets, in both the radial and axial directions. To evaluate the role of lubricant, magnesium stearate was used as "internal" and/or "external" lubricant. Indomethacin crystallization propensity increased as a function of compression pressure, with 100 MPa pressure causing crystallization immediately after compression (detected using synchrotron radiation). However, the drug crystallization was not uniform throughout the tablets. In unlubricated systems, pronounced crystallization at the radial surface could be attributed to die wall friction. The tablet core remained substantially amorphous, irrespective of the compression pressure. Lubrication of the die wall with magnesium stearate, as external lubricant, dramatically decreased drug crystallization at the radial surface. The spatial heterogeneity in drug crystallization, as a function of formulation composition and compression pressure, was systematically investigated. When formulating amorphous systems as tablets, the potential for compression induced crystallization warrants careful consideration. Very low levels of crystallization on the tablet surface, while profoundly affecting product performance (decrease in dissolution rate), may not be readily detected by conventional analytical techniques. Early detection of crystallization could be pivotal in the successful design of a dosage form where, in order to obtain the desired bioavailability, the drug may be in a high energy state. Specialized X-ray diffractometric techniques (2D; use of high intensity synchrotron radiation) enabled detection of very low levels of drug crystallization and revealed the heterogeneity in crystallization within the tablet.

  18. Image based automatic water meter reader

    NASA Astrophysics Data System (ADS)

    Jawas, N.; Indrianto

    2018-01-01

    Water meters are used to measure water consumption. The device works by utilizing the water flow and shows the result on a mechanical digit counter. In everyday practice, an operator manually checks the digit counter periodically and logs the numbers shown by the water meter to track water consumption. This manual operation is time consuming and prone to human error. Therefore, in this paper we propose an automatic water meter digit reader that works from a digital image. The digit sequence is detected by utilizing contour information of the water meter front panel. Then an OCR method is used to recognize each digit character. The digit sequence detection is an important part of the overall process, as it determines the success of the whole system. The results are promising, especially for sequence detection.
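
    A two-stage pipeline of this kind (contour-based localization of the digit window, then per-digit OCR) might be sketched with OpenCV and Tesseract as below; the Otsu threshold, the wide-box aspect-ratio filter, and the OCR settings are illustrative assumptions rather than the authors' parameters:

      import cv2
      import pytesseract

      def read_meter(image_path):
          img = cv2.imread(image_path)
          gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          # The mechanical counter window is a wide, short rectangle on the front panel
          boxes = [cv2.boundingRect(c) for c in contours]
          wide = [b for b in boxes if b[2] > 4 * b[3]]
          if not wide:
              return None
          x, y, w, h = max(wide, key=lambda b: b[2] * b[3])
          roi = gray[y:y + h, x:x + w]
          # Treat the ROI as a single text line restricted to digits
          config = "--psm 7 -c tessedit_char_whitelist=0123456789"
          return pytesseract.image_to_string(roi, config=config).strip()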

  19. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With users' increasing demand for the extraction of remote sensing image information, there is an urgent need to enhance the whole system's imaging quality and imaging ability through integrated design, achieving a compact structure, light weight, and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight, and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. Based on the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit, drawing on several technologies: high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for the development of the high-mobility remote sensing camera.

  20. Carbon Dioxide Angiography: Scientific Principles and Practice

    PubMed Central

    Cho, Kyung Jae

    2015-01-01

    Carbon dioxide (CO2) is a colorless, odorless gas which occurs naturally in the atmosphere and human body. With the advent of digital subtraction angiography, the gas has been used as a safe and useful alternative contrast agent in both arteriography and venography. Because of its lack of renal toxicity and allergic potential, CO2 is a preferred contrast agent in patients with renal failure or contrast allergy, and particularly in patients who require large volumes of contrast medium for complex endovascular procedures. Understanding of the unique physical properties of CO2 (high solubility, low viscosity, buoyancy, and compressibility) is essential in obtaining a successful CO2 angiogram and in guiding endovascular intervention. Unlike iodinated contrast material, CO2 displaces the blood and produces a negative contrast for digital subtraction imaging. Indications for use of CO2 as a contrast agent include: aortography and runoff, detection of bleeding, renal transplant arteriography, portal vein visualization with wedged hepatic venous injection, venography, arterial and venous interventions, and endovascular aneurysm repair. CO2 should not be used in the thoracic aorta, the coronary artery, and cerebral circulation. Exploitation of CO2 properties, avoidance of air contamination and facile catheterization technique are important to the safe and effective performance of CO2 angiography and CO2-guided endovascular intervention. PMID:26509137

  1. Multifrequency spectrum analysis using fully digital G Mode-Kelvin probe force microscopy.

    PubMed

    Collins, Liam; Belianinov, Alex; Somnath, Suhas; Rodriguez, Brian J; Balke, Nina; Kalinin, Sergei V; Jesse, Stephen

    2016-03-11

    Since its inception over two decades ago, Kelvin probe force microscopy (KPFM) has become the standard technique for characterizing electrostatic, electrochemical and electronic properties at the nanoscale. In this work, we present a purely digital, software-based approach to KPFM utilizing big data acquisition and analysis methods. General mode (G-Mode) KPFM works by capturing the entire photodetector data stream, typically at the sampling rate limit, followed by subsequent de-noising, analysis and compression of the cantilever response. We demonstrate that the G-Mode approach allows simultaneous multi-harmonic detection, combined with on-the-fly transfer function correction, required for quantitative CPD mapping. The KPFM approach outlined in this work significantly simplifies the technique by avoiding cumbersome instrumentation optimization steps (e.g., lock-in parameters, feedback gains, etc.), while also retaining the flexibility to be implemented on any atomic force microscopy platform. We demonstrate the added advantages of G-Mode KPFM by allowing simultaneous mapping of CPD and capacitance gradient (C') channels as well as increased flexibility in data exploration across frequency, time, space, and noise domains. G-Mode KPFM is particularly suitable for characterizing voltage sensitive materials or for operation in conductive electrolytes, and will be useful for probing electrodynamics in photovoltaics, liquids and ionic conductors.
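
    The multi-harmonic detection step can be pictured as a software demodulation of the raw photodetector stream; the sketch below recovers the complex response at harmonics of a known excitation frequency f0 and is only a conceptual stand-in for the full G-Mode de-noising, analysis and compression pipeline:

      import numpy as np

      def harmonic_response(stream, fs, f0, harmonics=(1, 2)):
          # stream: raw photodetector samples; fs: sampling rate; f0: excitation frequency.
          t = np.arange(len(stream)) / fs
          out = {}
          for k in harmonics:
              ref = np.exp(-2j * np.pi * k * f0 * t)   # complex reference at k*f0
              out[k] = 2 * np.mean(stream * ref)       # complex amplitude (magnitude and phase)
          return out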

  2. Multifrequency spectrum analysis using fully digital G Mode-Kelvin probe force microscopy

    DOE PAGES

    Collins, Liam F.; Jesse, Stephen; Belianinov, Alex; ...

    2016-02-11

    Since its inception over two decades ago, Kelvin probe force microscopy (KPFM) has become the standard technique for characterizing electrostatic, electrochemical and electronic properties at the nanoscale. In this work, we present a purely digital, software-based approach to KPFM utilizing big data acquisition and analysis methods. General Mode (G-Mode) KPFM works by capturing the entire photodetector data stream, typically at the sampling rate limit, followed by subsequent de-noising, analysis and compression of the cantilever response. We demonstrate that the G-Mode approach allows simultaneous multi-harmonic detection, combined with on-the-fly transfer function correction required for quantitative CPD mapping. The KPFM approach outlined in this work significantly simplifies the technique by avoiding cumbersome instrumentation optimization steps (e.g., lock-in parameters, feedback gains, etc.), while also retaining the flexibility to be implemented on any atomic force microscopy platform. We demonstrate the added advantages of G-Mode KPFM by allowing simultaneous mapping of CPD and capacitance gradient (C') channels as well as increased flexibility in data exploration across frequency, time, space, and noise domains. As a result, G-Mode KPFM is particularly suitable for characterizing voltage sensitive materials or for operation in conductive electrolytes, and will be useful for probing electrodynamics in photovoltaics, liquids and ionic conductors.

  3. Collaborative Wideband Compressed Signal Detection in Interplanetary Internet

    NASA Astrophysics Data System (ADS)

    Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei

    2014-07-01

    With the development of autonomous radio in deep space networks, it is possible to realize communication between explorers, aircraft, rovers and satellites, e.g. from different countries, adopting different signal modes. The first task in realizing autonomous radio is to detect the signals of an explorer autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of IPN Internet communication signals, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a fusion rule to gain spatial diversity. A couple of novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of our proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, our DCT and WHT based methods reduce computational complexity, decrease processing time, save energy and enhance the probability of detection.
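
    The reconstruction step can be illustrated with a standard orthogonal matching pursuit (OMP) recovery of a spectrum that is sparse in the DCT basis; this is a generic textbook sketch, not the authors' detector or fusion rule:

      import numpy as np
      from scipy.fft import idct

      def omp(A, y, k):
          # Recover a k-sparse s with y ~ A s by greedy atom selection.
          resid, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ resid))))
              sub = A[:, support]
              coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
              resid = y - sub @ coef
          s = np.zeros(A.shape[1])
          s[support] = coef
          return s

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 4                           # ambient dim, measurements, sparsity
      Psi = idct(np.eye(n), norm="ortho", axis=0)    # DCT synthesis basis
      s_true = np.zeros(n)
      s_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      x = Psi @ s_true                               # sparse-in-DCT wideband signal
      Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
      s_hat = omp(Phi @ Psi, Phi @ x, k)
      occupied = np.abs(s_hat) > 0.1                 # per-band occupancy decision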

  4. Synthetic aperture in terahertz in-line digital holography for resolution enhancement.

    PubMed

    Huang, Haochong; Rong, Lu; Wang, Dayong; Li, Weihua; Deng, Qinghua; Li, Bin; Wang, Yunxin; Zhan, Zhiqiang; Wang, Xuemin; Wu, Weidong

    2016-01-20

    Terahertz digital holography is a combination of terahertz technology and digital holography. In digital holography, the imaging resolution is the key parameter determining the level of detail of a reconstructed wavefront. In this paper, the synthetic aperture method is applied to terahertz digital holography and an in-line arrangement is built to perform the detection. The limited resolving power of previous terahertz digital holographic systems has prevented the technique from meeting the requirements of practical detection. In contrast, the experimental resolving power of the present method reaches 125 μm, which is the best resolution in terahertz digital holography to date. Furthermore, a basic examination of a biological specimen is conducted to show the practical application. Overall, the results of the proposed method demonstrate the enhancement of experimental imaging resolution and that the amplitude and phase distributions of the fine structure of samples can be reconstructed using terahertz digital holography.

  5. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
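
    The alternating shave/set quantization is straightforward to express with bit masks on the IEEE-754 representation. The sketch below is a simplified single-precision illustration; the mantissa-bit bookkeeping of the actual NCO implementation differs in detail:

      import numpy as np

      def bit_groom(data, nsd=3):
          # Keep roughly nsd significant decimal digits of a float32 array.
          keep = int(np.ceil(nsd * np.log2(10))) + 2    # mantissa bits to retain (+2 guard bits, an assumption)
          drop = max(23 - keep, 0)                      # float32 has a 23-bit mantissa
          bits = data.astype(np.float32).view(np.uint32).copy()
          shave = np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)   # zeros the low bits
          setm = np.uint32((1 << drop) - 1)                      # ones the low bits
          bits[0::2] &= shave    # shave even-indexed values (quantize toward zero)
          bits[1::2] |= setm     # set odd-indexed values (quantize away from zero)
          return bits.view(np.float32)

    Because the groomed values contain long runs of identical trailing bits, a standard lossless coder such as DEFLATE then achieves the actual storage reduction.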

  6. Stress relaxation of swine growth plate in semi-confined compression: depth dependent tissue deformational behavior versus extracellular matrix composition and collagen fiber organization.

    PubMed

    Amini, Samira; Mortazavi, Farhad; Sun, Jun; Levesque, Martin; Hoemann, Caroline D; Villemure, Isabelle

    2013-01-01

    Mechanical environment is one of the regulating factors involved in the process of longitudinal bone growth. Non-physiological compressive loading can lead to infantile and juvenile musculoskeletal deformities, particularly during growth spurts. We hypothesized that tissue mechanical behavior in sub-regions (reserve, proliferative and hypertrophic zones) of the growth plate is related to its collagen and proteoglycan content as well as its collagen fiber orientation. Our objectives were to characterize the strain distribution through the growth plate thickness and to evaluate the biochemical content and collagen fiber organization of the three histological zones of growth plate tissue. Distal ulnar growth plate samples (N = 29) from 4-week old pigs were analyzed histologically for collagen fiber organization (N = 7) or average zonal thickness (N = 8), or trimmed into the three average zones, based on the estimated thickness of each histological zone, for biochemical analysis of water, collagen and glycosaminoglycan content (N = 7). Other samples (N = 7) were tested in semi-confined compression under 10% compressive strain. Digital images of the fluorescently labeled nuclei were concomitantly acquired by confocal microscopy before loading and after tissue relaxation. Strain fields were subsequently calculated using a custom-designed 2D digital image correlation algorithm. Depth-dependent compressive strain patterns and collagen content were observed. The proliferative and hypertrophic zones developed the highest axial and transverse strains, respectively, under compression compared to the reserve zone, in which the lowest axial and transverse strains arose. The collagen content per wet mass was significantly lower in the proliferative and hypertrophic zones compared to the reserve zone, and all three zones had similar glycosaminoglycan and water content. Polarized light microscopy showed that collagen fibers were mainly organized horizontally in the reserve zone and vertically aligned with the growth direction in the proliferative and hypertrophic zones. Higher strains developed in growth plate areas (proliferative and hypertrophic) composed of lower collagen content and of vertical collagen fiber organization. The stiffer reserve zone, with its higher collagen content and collagen fibers oriented to restrain lateral expansion under compression, could play a greater role in mechanical support compared to the proliferative and hypertrophic zones, which could be more susceptible to involvement in an abnormal growth process.

  7. Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.

    PubMed

    Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U

    2010-06-01

    Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min⁻¹ using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth and compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test. Significance was defined as p ≤ 0.05. Mean % effective breaths were 90 ± 18.6% in two-thumb and 88.9 ± 21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 in two-thumb and 7.0 ± 1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87 ± 11 in two-thumb and 92 ± 12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR had 4 fewer delivered compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  8. The Effectiveness of Low-Cost Tele-Lecturing.

    ERIC Educational Resources Information Center

    Muta, Hiromitsu; Kikuta, Reiko; Hamano, Takashi; Maesako, Takanori

    1997-01-01

    Compares distance education using PictureTel, a compressed-digital-video system via telephone lines (audio and visual interactive communication) in terms of its costs and effectiveness with traditional in-class education. Costing less than half the traditional approach, the study suggested distance education would be economical if used frequently.…

  9. Compressed Sensing (CS) Imaging with Wide FOV and Dynamic Magnification

    DTIC Science & Technology

    2011-03-14

    …a Digital Micromirror Device (DMD) is used to implement the CS measurement patterns. The core component of the DMD is a 768(V) × 1024(H) aluminum micromirror array… each image has different curves and textures, and thus different statistical model parameters… [remainder of record is table/page residue]

  10. 49 CFR 579.29 - Manner of reporting.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... identified on NHTSA's Internet homepage (www.nhtsa.dot.gov). A manufacturer must use templates provided at... templates. (2) Each report required under § 579.27 of this part may be submitted to NHTSA's early warning... through 579.26 of this part may be submitted in digital form using a graphic compression protocol...

  11. 49 CFR 579.29 - Manner of reporting.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... identified on NHTSA's Internet homepage (www.nhtsa.dot.gov). A manufacturer must use templates provided at... templates. (2) Each report required under § 579.27 of this part may be submitted to NHTSA's early warning... through 579.26 of this part may be submitted in digital form using a graphic compression protocol...

  12. 49 CFR 579.29 - Manner of reporting.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... identified on NHTSA's Internet homepage (www.nhtsa.dot.gov). A manufacturer must use templates provided at... templates. (2) Each report required under § 579.27 of this part may be submitted to NHTSA's early warning... through 579.26 of this part may be submitted in digital form using a graphic compression protocol...

  13. 49 CFR 579.29 - Manner of reporting.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... identified on NHTSA's Internet homepage (www.nhtsa.dot.gov). A manufacturer must use templates provided at... templates. (2) Each report required under § 579.27 of this part may be submitted to NHTSA's early warning... through 579.26 of this part may be submitted in digital form using a graphic compression protocol...

  14. Digital stethoscopes compared to standard auscultation for detecting abnormal paediatric breath sounds.

    PubMed

    Kevat, Ajay C; Kalirajah, Anaath; Roseby, Robert

    2017-07-01

    Our study aimed to objectively describe the audiological characteristics of wheeze and crackles in children by using digital stethoscope (DS) auscultation, as well as assess concordance between standard auscultation and two different DS devices in their ability to detect pathological breath sounds. Twenty children were auscultated by a paediatric consultant doctor and digitally recorded using the Littman™ 3200 Digital Electronic Stethoscope and a Clinicloud™ DS with smart device. Using spectrographic analysis, we found those with clinically described wheeze had prominent periodic waveform segments spanning expiration for a period of 0.03-1.2 s at frequencies of 100-1050 Hz, and occasionally spanning shorter inspiratory segments; paediatric crackles were brief discontinuous sounds with a distinguishing waveform. There was moderate concordance with respect to wheeze detection between digital and standard binaural stethoscopes, and 100% concordance for crackle detection. Importantly, DS devices were more sensitive than clinician auscultation in detecting wheeze in our study. Objective definition of audio characteristics of abnormal paediatric breath sounds was achieved using DS technology. We demonstrated superiority of our DS method compared to traditional auscultation for detection of wheeze. What is Known: • The audiological characteristics of abnormal breath sounds have been well-described in adult populations but not in children. • Inter-observer agreement for detection of pathological breath sounds using standard auscultation has been shown to be poor, but the clinical value of now easily available digital stethoscopes has not been sufficiently examined. What is New: • Digital stethoscopes can objectively define the nature of pathological breath sounds such as wheeze and crackles in children. • Paediatric wheeze was better detected by digital stethoscopes than by standard auscultation performed by an expert paediatric clinician.
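
    As a rough illustration of the spectrographic criteria reported above (periodic segments at 100-1050 Hz sustained for 0.03-1.2 s), a frame-level wheeze screen could be sketched as follows; the window length and the peak-concentration threshold are assumptions, not parameters from the study:

      import numpy as np
      from scipy.signal import spectrogram

      def wheeze_frames(audio, fs, band=(100, 1050), rel_thresh=0.5):
          # Flag time frames whose in-band energy is concentrated in narrow peaks.
          f, t, S = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
          sel = (f >= band[0]) & (f <= band[1])
          Sb = S[sel]
          peakiness = Sb.max(axis=0) / (Sb.mean(axis=0) + 1e-12)   # tonal vs. broadband energy
          return t[peakiness > rel_thresh * peakiness.max()]       # candidate wheeze times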

  15. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor); Rahman, Zia-ur (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with

        R_i(x,y) = \sum_{n=1}^{N} W_n \left\{ \log I_i(x,y) - \log\left[ F_n(x,y) * I_i(x,y) \right] \right\}, \quad i = 1, \ldots, S,

    where S is the number of unique spectral bands included in said digital data, N is the number of surround functions, W_n is a weighting factor and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
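
    A minimal numerical sketch of the adjustment formula above, assuming Gaussian surround functions F_n and equal weights W_n (the scales chosen here are illustrative):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(channel, sigmas=(15, 80, 250)):
          # channel: one spectral band as a non-negative float array.
          eps = 1e-6                      # avoids log(0)
          log_i = np.log(channel + eps)
          out = np.zeros_like(log_i)
          for sigma in sigmas:
              surround = gaussian_filter(channel, sigma)      # F_n * I_i via convolution
              out += (log_i - np.log(surround + eps)) / len(sigmas)
          return out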

  16. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  17. Digital media in the home: technical and research challenges

    NASA Astrophysics Data System (ADS)

    Ribas-Corbera, Jordi

    2005-03-01

    This article attempts to identify some of the technology and research challenges facing the digital media industry in the future. We first discuss several trends in the industry, such as the rapid growth of broadband Internet networks and the emergence of networking and media-capable devices in the home. Next, we present technical challenges that result from these trends, such as effective media interoperability in devices, and provide a brief overview of Windows Media, which is one of the technologies in the market attempting to address these challenges. Finally, given these trends and the state of the art, we argue that further research on data compression, encoder optimization, and multi-format transcoding can potentially make a significant technical and business impact in digital media. We also explore the reasons that research on related techniques such as wavelets or scalable video coding is having a relatively minor impact in today's practical digital media systems.

  18. High Density Digital Data Storage System

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth D., II; Gray, David L.; Rowland, Wayne D.

    1991-01-01

    The High Density Digital Data Storage System was designed to provide a cost effective means for storing real-time data from the field-deployable digital acoustic measurement system. However, the high density data storage system is a standalone system that could provide a storage solution for many other real time data acquisition applications. The storage system has inputs for up to 20 channels of 16-bit digital data. The high density tape recorders presently being used in the storage system are capable of storing over 5 gigabytes of data at overall transfer rates of 500 kilobytes per second. However, through the use of data compression techniques the system storage capacity and transfer rate can be doubled. Two tape recorders have been incorporated into the storage system to produce a backup tape of data in real-time. An analog output is provided for each data channel as a means of monitoring the data as it is being recorded.

  19. Detecting Copy Move Forgery In Digital Images

    NASA Astrophysics Data System (ADS)

    Gupta, Ashima; Saxena, Nisheeth; Vasistha, S. K.

    2012-03-01

    In today's world, several image manipulation software packages are available, and manipulation of digital images has become a serious problem. In many areas, such as medical imaging, digital forensics, journalism, and scientific publication, image forgery can be carried out very easily, and determining whether a digital image is original or doctored is a big challenge. Detection methods that find the marks of tampering in a digital image can be very useful in image forensics as proof of the authenticity of a digital image. In this paper we propose a method to detect region duplication forgery by dividing the image into overlapping blocks and then searching the image for duplicated regions.
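
    The overlapping-block search described above can be sketched as follows. Matching raw pixel blocks exactly is the naive variant; practical detectors sort or hash block features (e.g., quantized DCT coefficients) to tolerate recompression noise:

      import numpy as np

      def detect_copy_move(gray, block=8, stride=4):
          # gray: 2D uint8 image array; returns pairs of matching block origins.
          h, w = gray.shape
          seen, matches = {}, []
          for y in range(0, h - block + 1, stride):
              for x in range(0, w - block + 1, stride):
                  key = gray[y:y + block, x:x + block].tobytes()
                  if key in seen:
                      matches.append((seen[key], (y, x)))   # candidate duplicated region
                  else:
                      seen[key] = (y, x)
          return matches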

  20. A novel dentin bond strength measurement technique using a composite disk in diametral compression.

    PubMed

    Huang, Shih-Hao; Lin, Lian-Shan; Rudney, Joel; Jones, Rob; Aparicio, Conrado; Lin, Chun-Pin; Fok, Alex

    2012-04-01

    New methods are needed that can predict the clinical failure of dental restorations that primarily rely on dentin bonding. Existing methods have shortcomings, e.g. severe deviation in the actual stress distribution from theory and a large standard deviation in the measured bond strength. We introduce here a novel test specimen by examining an endodontic model for dentin bonding. Specifically, we evaluated the feasibility of using the modified Brazilian disk test to measure the post-dentin interfacial bond strength. Four groups of resin composite disks which contained a slice of dentin with or without an intracanal post in the center were tested under diametral compression until fracture. Advanced nondestructive examination and imaging techniques in the form of acoustic emission (AE) and digital image correlation (DIC) were used innovatively to capture the fracture process in real time. DIC showed strain concentration first appearing at one of the lateral sides of the post-dentin interface. The appearance of the interfacial strain concentration also coincided with the first AE signal detected. Utilizing both the experimental data and finite-element analysis, the bond/tensile strengths were calculated to be: 11.2 MPa (fiber posts), 12.9 MPa (metal posts), 8.9 MPa (direct resin fillings) and 82.6 MPa for dentin. We have thus established the feasibility of using the composite disk in diametral compression to measure the bond strength between intracanal posts and dentin. The new method has the advantages of simpler specimen preparation, no premature failure, more consistent failure mode and smaller variations in the calculated bond strength. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  1. Genetic programs can be compressed and autonomously decompressed in live cells

    NASA Astrophysics Data System (ADS)

    Lapique, Nicolas; Benenson, Yaakov

    2018-04-01

    Fundamental computer science concepts have inspired novel information-processing molecular systems in test tubes [1-13] and genetically encoded circuits in live cells [14-21]. Recent research has shown that digital information storage in DNA, implemented using deep sequencing and conventional software, can approach the maximum Shannon information capacity [22] of two bits per nucleotide [23]. In nature, DNA is used to store genetic programs, but the information content of the encoding rarely approaches this maximum [24]. We hypothesize that the biological function of a genetic program can be preserved while reducing the length of its DNA encoding and increasing the information content per nucleotide. Here we support this hypothesis by describing an experimental procedure for compressing a genetic program and its subsequent autonomous decompression and execution in human cells. As a test-bed we choose an RNAi cell classifier circuit [25] that comprises redundant DNA sequences and is therefore amenable to compression, as are many other complex gene circuits [15,18,26-28]. In one example, we implement a compressed encoding of a ten-gene four-input AND gate circuit using only four genetic constructs. The compression principles applied to gene circuits can enable fitting complex genetic programs into DNA delivery vehicles with limited cargo capacity, and storing compressed and biologically inert programs in vivo for on-demand activation.

  2. Speech coding and compression using wavelets and lateral inhibitory networks

    NASA Astrophysics Data System (ADS)

    Ricart, Richard

    1990-12-01

    The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.

  3. High-speed real-time image compression based on all-optical discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2017-02-01

    In this paper, we present a high-speed single-pixel imaging (SPI) system based on all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) that is three orders of magnitude higher than the switching rate of a digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which suggests broad applications in industrial quality control and biomedical imaging.
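
    The effect of the roughly 10:1 DCT-domain compression can be mimicked digitally (the paper performs the transform optically) by keeping only the largest coefficients; the 10% retention fraction below mirrors the reported ratio:

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_compress(img, keep=0.1):
          # Zero all but the largest `keep` fraction of DCT coefficients.
          c = dctn(img, norm="ortho")
          thresh = np.quantile(np.abs(c), 1 - keep)
          c[np.abs(c) < thresh] = 0.0
          return idctn(c, norm="ortho")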

  4. Simulation study on compressive laminar optical tomography for cardiac action potential propagation

    PubMed Central

    Harada, Takumi; Tomii, Naoki; Manago, Shota; Kobayashi, Etsuko; Sakuma, Ichiro

    2017-01-01

    To measure the activity of tissue at the microscopic level, laminar optical tomography (LOT), a microscopic form of diffuse optical tomography, has been developed. However, obtaining sufficient recording speed to capture rapidly changing dynamic activity remains a major challenge. To achieve a high frame rate for the reconstructed data, we propose a new LOT method using compressed sensing theory, called compressive laminar optical tomography (CLOT), in which novel digital micromirror device-based illumination and data reduction in a single reconstruction are applied. In the simulation experiments, the reconstructed volumetric images of the action potentials, acquired from 5 measured images with random patterns, resolved a wave border to a depth of at least 2.5 mm. Consequently, it was shown that CLOT has the potential to reach the over 200 fps required for cardiac electrophysiological phenomena. PMID:28736675

  5. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion of the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in shaping the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  6. "Stayin' alive": a novel mental metronome to maintain compression rates in simulated cardiac arrests.

    PubMed

    Hafner, John W; Sturgell, Jeremy L; Matlock, David L; Bockewitz, Elizabeth G; Barker, Lisa T

    2012-11-01

    A novel and yet untested memory aid has anecdotally been proposed for aiding practitioners in complying with American Heart Association (AHA) cardiopulmonary resuscitation (CPR) compression rate guidelines (at least 100 compressions per minute). This study investigates how subjects using this memory aid adhered to current CPR guidelines in the short and long term. A prospective observational study was conducted with medical providers certified in 2005 AHA guideline CPR. Subjects were randomly paired and alternated administering CPR compressions on a mannequin during a standardized cardiac arrest scenario. While performing compressions, subjects listened to a digital recording of the Bee Gees song "Stayin' Alive," and were asked to time compressions to the musical beat. After at least 5 weeks, the participants were retested without directly listening to the recorded music. Attitudinal views were gathered using a post-session questionnaire. Fifteen subjects (mean age 29.3 years, 66.7% resident physicians and 80% male) were enrolled. The mean compression rate during the primary assessment (with music) was 109.1, and during the secondary assessment (without music) the rate was 113.2. Mean CPR compression rates did not vary by training level, CPR experience, or time to secondary assessment. Subjects felt that utilizing the music improved their ability to provide CPR and they felt more confident in performing CPR. Medical providers trained to use a novel musical memory aid effectively maintained AHA guideline CPR compression rates initially and in long-term follow-up. Subjects felt that the aid improved their technical abilities and confidence in providing CPR. Copyright © 2012. Published by Elsevier Inc.

  7. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  8. Efficient audio signal processing for embedded systems

    NASA Astrophysics Data System (ADS)

    Chiu, Leung Kin

    As mobile platforms continue to pack on more computational power, electronics manufacturers have started to differentiate their products by enhancing the audio features. However, consumers also demand smaller devices that can operate for longer, which imposes design constraints. In this research, we investigate two design strategies that allow audio signals to be processed efficiently on embedded systems such as mobile phones and portable electronics. In the first strategy, we exploit properties of the human auditory system to process audio signals. We designed a sound enhancement algorithm to make piezoelectric loudspeakers sound "richer" and "fuller." Piezoelectric speakers have a small form factor but exhibit poor response in the low-frequency region. In the algorithm, we combine psychoacoustic bass extension and dynamic range compression to improve the perceived bass coming from the tiny speakers. We also developed an audio energy reduction algorithm for loudspeaker power management. The perceptually transparent algorithm extends the battery life of mobile devices and prevents thermal damage in speakers. This method is similar to audio compression algorithms, which encode audio signals in such a way that the compression artifacts are not easily perceivable. Instead of reducing the storage space, however, we suppress the audio content that is below the hearing threshold, thereby reducing the signal energy. In the second strategy, we use low-power analog circuits to process the signal before digitizing it. We designed an analog front-end for sound detection and implemented it on a field-programmable analog array (FPAA). The system is an example of an analog-to-information converter. The sound classifier front-end can be used in a wide range of applications because programmable floating-gate transistors are employed to store the classifier weights. Moreover, we incorporated a feature selection algorithm to simplify the analog front-end. The machine learning algorithm AdaBoost is used to select the most relevant features for a particular sound detection application. In this classifier architecture, we combine simple "base" analog classifiers to form a strong one. We also designed the circuits to implement the AdaBoost-based analog classifier.
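
    The dynamic range compression mentioned above can be illustrated with a minimal feed-forward compressor: estimate a smoothed signal level, then reduce gain above a threshold. This is a generic sketch, not the dissertation's algorithm; the threshold, ratio, and time constants are illustrative assumptions.

```python
# Minimal feed-forward dynamic range compressor (a sketch, not the paper's design).
# Threshold, ratio, attack, and release values are illustrative assumptions.
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # fast tracking upward
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))   # slow recovery downward
    env_db = -120.0                       # smoothed level estimate, in dB
    y = np.empty_like(x)
    for n, sample in enumerate(x):
        level_db = 20.0 * np.log10(max(abs(sample), 1e-9))
        a = a_att if level_db > env_db else a_rel
        env_db = a * env_db + (1.0 - a) * level_db
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)   # gain reduction above threshold
        y[n] = sample * 10.0 ** (gain_db / 20.0)
    return y

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 110 * t) * np.linspace(0.05, 1.0, fs)  # swelling tone
y = compress(x, fs)
print("input peak:", abs(x).max(), "output peak:", abs(y).max())
```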

  9. A New Challenge for Compression Algorithms: Genetic Sequences.

    ERIC Educational Resources Information Center

    Grumbach, Stephane; Tahi, Fariza

    1994-01-01

    Analyzes the properties of genetic sequences that cause the failure of classical algorithms used for data compression. A lossless algorithm, which compresses the information contained in DNA and RNA sequences by detecting regularities such as palindromes, is presented. This algorithm combines substitutional and statistical methods and appears to…

  10. Flexible and Compressible PEDOT:PSS@Melamine Conductive Sponge Prepared via One-Step Dip Coating as Piezoresistive Pressure Sensor for Human Motion Detection.

    PubMed

    Ding, Yichun; Yang, Jack; Tolle, Charles R; Zhu, Zhengtao

    2018-05-09

    Flexible and wearable pressure sensors may offer convenient, timely, and portable solutions to human motion detection, yet it is a challenge to develop cost-effective materials for pressure sensors with high compressibility and sensitivity. Herein, a cost-efficient and scalable approach is reported to prepare a highly flexible and compressible conductive sponge for a piezoresistive pressure sensor. The conductive sponge is prepared by one-step dip coating of a commercial melamine sponge (MS) in an aqueous dispersion of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). Due to the interconnected porous structure of the MS, the conductive PEDOT:PSS@MS has high compressibility and a stable piezoresistive response at compressive strains up to 80%, as well as good reproducibility over 1000 cycles. Versatile pressure sensors fabricated from the conductive PEDOT:PSS@MS sponges are then attached to different parts of the human body, and their ability to detect a variety of human motions, including speaking, finger bending, elbow bending, and walking, is evaluated. Furthermore, a prototype tactile sensor array based on these pressure sensors is demonstrated.

  11. Feasibility of spatial frequency-domain imaging for monitoring palpable breast lesions

    NASA Astrophysics Data System (ADS)

    Robbins, Constance M.; Raghavan, Guruprasad; Antaki, James F.; Kainerstorfer, Jana M.

    2017-12-01

    In breast cancer diagnosis and therapy monitoring, there is a need for frequent, noninvasive evaluation of disease progression. Breast tumors differ from healthy tissue in mechanical stiffness as well as optical properties, which allows optical methods to detect and monitor breast lesions noninvasively. Spatial frequency-domain imaging (SFDI) is a reflectance-based diffuse optical method that can yield two-dimensional images of the absolute optical properties of tissue with an inexpensive and portable system, although depth penetration is limited. Since the absorption coefficient of breast tissue is relatively low and the tissue is quite flexible, compression of the tissue offers an opportunity to bring stiff, palpable breast lesions within the detection range of SFDI. Sixteen breast tissue-mimicking phantoms were fabricated containing stiffer, more highly absorbing tumor-mimicking inclusions of varying absorption contrast and depth. These phantoms were imaged with an SFDI system at five levels of compression. An increase in absorption contrast was observed with compression, and reliable detection of each inclusion was achieved when compression was sufficient to bring the inclusion center within ~12 mm of the phantom surface. At the highest compression level, the contrasts achieved with this system were comparable to those measured with single source-detector near-infrared spectroscopy.

  12. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

    In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, the system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video to a storage medium, an IBM Bus Master SCSI adapter with cache is utilized. The efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension for Micro Channel bus masters. Experimental results show that the overall system can sustain compressed data rates of about 1.5 MBytes/second, with sporadic peaks of about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits the creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward, and slow-motion playback. The proposed method can be extended to the design of a video compression subsystem for a variety of personal computing systems.

  13. Application of Compressive Sensing to Gravitational Microlensing Experiments

    NASA Technical Reports Server (NTRS)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2016-01-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. It enables a significant reduction in data bandwidth and transmission power and hence can greatly benefit spaceflight instruments. We apply this process to detect exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness and uncertainty introduced by Compressive Sensing. Finally, we describe implications for spaceflight missions.
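
    The core mechanism, compressive measurement followed by sparse recovery, can be sketched in a few lines: a random matrix compresses a sparse signal into far fewer samples, and orthogonal matching pursuit recovers it. The dimensions and sparsity level below are toy values, not those of the microlensing pipeline.

```python
# Compressive sensing in miniature: y = A @ x with far fewer measurements than
# samples, recovered via orthogonal matching pursuit (OMP). Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                     # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)                  # measurement matrix
y = A @ x                                                     # compressed samples

def omp(A, y, k):
    """Greedy recovery: pick the column most correlated with the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```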

  14. Calibration of a speckle-based compressive sensing receiver

    NASA Astrophysics Data System (ADS)

    Sefler, George A.; Shaw, T. Justin; Stapleton, Andrew D.; Valley, George C.

    2017-02-01

    Optical speckle in a multimode waveguide has been proposed to perform the function of a compressive sensing (CS) measurement matrix (MM) in a receiver for GHz-band radio frequency (RF) signals. Unlike other devices used for the CS MM, e.g. the digital micromirror device (DMD) used in the single pixel camera, the elements of the speckle MM are not known before use and must be measured and calibrated. In our system, the RF signal is modulated on a repetitively pulsed chirped wavelength laser source, generated from mode-locked laser pulses that have been dispersed in time or from an electrically addressed distributed Bragg reflector laser. Next, the optical beam with RF propagates through a multimode fiber or waveguide, which applies different weights in wavelength (or equivalently time) and space and performs the function of the CS MM. The output of the guide is directed to or imaged on a bank of photodiodes with integration time set to the pulse length of the chirp waveform. The output of each photodiode is digitized by an analog-to-digital converter (ADC), and the data from these ADCs are used to form the CS measurement vector. Accurate recovery of the RF signal from CS measurements depends critically on knowledge of the weights in the MM. Here we present results using a stable wavelength laser source to probe the guide.

  15. Improved JPEG anti-forensics with better image visual quality and forensic undetectability.

    PubMed

    Singh, Gurinder; Singh, Kulbir

    2017-08-01

    There is an immediate need to validate the authenticity of digital images due to the availability of powerful image processing tools that can easily manipulate image content without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression. Therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain can be reduced significantly by applying the proposed denoising operation. Two denoising algorithms are proposed: one based on a constrained minimization of the total variation of the energy, and the other on a normalized weighted function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. Then, a decalibration operation is applied to bring the processed image statistics back to their standard position. The experimental results show that the proposed anti-forensic approaches outperform the existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but at a high computational cost. Copyright © 2017 Elsevier B.V. All rights reserved.
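
    A rough flavour of the total-variation smoothing invoked above is given by the following plain (sub)gradient-descent denoiser; the regularization weight, step size, and iteration count are illustrative assumptions, not the parameters of the proposed method.

```python
# A small total-variation denoising loop, the kind of smoothing TV-based
# deblocking builds on. Plain (sub)gradient descent; parameters are assumed.
import numpy as np

def tv_denoise(y, lam=0.15, step=0.2, iters=200):
    x = y.copy()
    for _ in range(iters):
        gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        norm = np.sqrt(gx**2 + gy**2 + 1e-8)
        nx, ny = gx / norm, gy / norm               # normalized gradient field
        div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
        x -= step * ((x - y) - lam * div)           # fidelity + TV subgradient
    return x

rng = np.random.default_rng(5)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
out = tv_denoise(noisy)
print("noise RMSE %.3f -> %.3f" % (np.sqrt(np.mean((noisy - clean) ** 2)),
                                   np.sqrt(np.mean((out - clean) ** 2))))
```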

  16. Effect of filter on average glandular dose and image quality in digital mammography

    NASA Astrophysics Data System (ADS)

    Songsaeng, C.; Krisanachinda, A.; Theerakul, K.

    2016-03-01

    To determine the average glandular dose and entrance surface air kerma in both phantoms and patients, and to assess image quality for different target-filter combinations (W/Rh and W/Ag) in a digital mammography system. The compressed breast thickness (CBT), compression force (CF), average glandular dose (AGD), entrance surface air kerma (ESAK), peak kilovoltage, and tube current-time product were recorded and compared between the W/Rh and W/Ag target filters. The CNR and the figure of merit (FOM) were used to determine the effect of the target filter on image quality. For the W/Rh target filter, the mean AGD was 1.75 mGy, the mean ESAK was 6.67 mGy, the mean CBT was 54.1 mm, and the mean CF was 14 lbs. For the W/Ag target filter, the mean AGD was 2.7 mGy, the mean ESAK was 12.6 mGy, the mean CBT was 75.5 mm, and the mean CF was 15 lbs. In the phantom study, the AGD was 1.2 mGy at 4 cm, 3.3 mGy at 6 cm, and 3.83 mGy at 7 cm thickness. The FOM was 24.6 and the CNR was 9.02 at 6 cm thickness; the FOM was 18.4 and the CNR was 8.6 at 7 cm thickness. The AGD from the digital mammography system with the W/Rh target filter at thinner CBT was lower than the AGD with the W/Ag target filter.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Gasperin, F.; Ogrean, G. A.; van Weeren, R. J.

    We report that extended steep-spectrum radio emission in a galaxy cluster is usually associated with a recent merger. However, given the complex scenario of galaxy cluster mergers, many of the discovered sources hardly fit into the strict boundaries of a precise taxonomy. This is especially true for radio phoenixes, which do not have very well defined observational criteria. Radio phoenixes are aged radio galaxy lobes whose emission is reactivated by compression or other mechanisms. In this paper, we present the detection of a radio phoenix close to the moment of its formation. The source is located in Abell 1033, a peculiar galaxy cluster which underwent a recent merger. To support our claim, we present unpublished Westerbork Synthesis Radio Telescope and Chandra observations together with archival data from the Very Large Array and the Sloan Digital Sky Survey. We discover the presence of two subclusters displaced along the N–S direction. The two subclusters probably underwent a recent merger, which is the cause of a moderately perturbed X-ray brightness distribution. A steep-spectrum extended radio source very close to an active galactic nucleus (AGN) is proposed to be a newly born radio phoenix: the AGN lobes have been displaced/compressed by shocks formed during the merger event. This scenario explains the source location, morphology, spectral index, and brightness. Finally, we show evidence of a density discontinuity close to the radio phoenix and discuss the consequences of its presence.

  18. Time Resolved Digital PIV Measurements of Flow Field Cyclic Variation in an Optical IC Engine

    NASA Astrophysics Data System (ADS)

    Jarvis, S.; Justham, T.; Clarke, A.; Garner, C. P.; Hargrave, G. K.; Halliwell, N. A.

    2006-07-01

    Time-resolved digital particle image velocimetry (DPIV) data are presented for the in-cylinder flow field development of a motored four-stroke spark ignition (SI) optical internal combustion (IC) engine. A high-speed DPIV system was employed to quantify the velocity field development during the intake and compression strokes at an engine speed of 1500 rpm. The results map the spatial and temporal development of the in-cylinder flow field structure, allowing comparison between traditional ensemble-averaged and cycle-averaged flow field structures. Conclusions are drawn with respect to engine flow field cyclic variations.

  19. Two-way digital communications

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Daly, Ed

    1996-03-01

    The communications industry has been rapidly converting from analog to digital communications for audio, video, and data. The initial applications have concentrated on point-to-multipoint transmission. Currently, a new revolution is occurring in which two-way point-to-point transmission is a rapidly growing market. The system designs for video compression developed for point-to-multipoint transmission are unsuitable for this new market, as well as for satellite-based video encoding. A new system developed by the Space Communications Technology Center has been designed to address both of these newer applications. An update on the system performance and design is given.

  20. Impact of the Introduction of Digital Mammography in an Organized Screening Program on the Recall and Detection Rate.

    PubMed

    Campari, Cinzia; Giorgi Rossi, Paolo; Mori, Carlo Alberto; Ravaioli, Sara; Nitrosi, Andrea; Vacondio, Rita; Mancuso, Pamela; Cattani, Antonella; Pattacini, Pierpaolo

    2016-04-01

    In 2012, the Reggio Emilia Breast Cancer Screening Program introduced digital mammography in all its facilities at the same time. The aim of this work is to analyze the impact of digital mammography introduction on the recall rate, detection rate, and positive predictive value. The program actively invites women aged 45-74 years. We included women screened in 2011, all of whom underwent film-screen mammography, and all women screened in 2012, all of whom underwent digital mammography. Double reading was used for all mammograms, with arbitration in the event of disagreement. A total of 42,240 women underwent screen-film mammography and 45,196 underwent digital mammography. The recall rate increased from 3.3 to 4.4% in the first year of digital mammography (relative recall adjusted by age and round 1.46, 95% CI = 1.37-1.56); the positivity rate for each individual reading, before arbitration, rose from 3 to 5.7%. The digital mammography recall rate decreased during 2012: after 12 months, it was similar to the recall rate with screen-film mammography. The detection rate was similar: 5.9/1000 and 5.2/1000 with screen-film and digital mammography, respectively (adjusted relative detection rate 0.95, 95% CI = 0.79-1.13). The relative detection rate for ductal carcinoma in situ remained the same. The introduction of digital mammography to our organized screening program had a negative impact on specificity, thereby increasing the recall rate. The effect was limited to the first 12 months after introduction and was attenuated by the double reading with arbitration. We did not observe any relevant effects on the detection rate.

  1. Iterative dictionary construction for compression of large DNA data sets.

    PubMed

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation of an existing disk-based method, COMRAD, identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD also allows random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes), and even for smaller data sets the results are competitive with alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
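
    The dictionary idea can be illustrated in miniature: count k-mers that repeat within and across a small collection of sequences, and substitute short codes for the frequent ones. This toy sketch conveys only the flavour of the approach; it is not COMRAD's disk-based, multi-pass algorithm, and the sequences, k, and frequency cutoff are assumptions.

```python
# Toy flavour of dictionary-based DNA compression: find k-mers repeated across
# a collection and substitute dictionary references. (A sketch of the idea,
# not COMRAD's disk-based multi-pass algorithm.)
from collections import Counter

sequences = [
    "ACGTACGTGGGACGTACGTTT",
    "TTACGTACGTGGACGTACGTA",
]
k = 8
counts = Counter(
    seq[i:i + k] for seq in sequences for i in range(len(seq) - k + 1)
)
# Keep k-mers that occur often enough to pay for a dictionary entry.
dictionary = {kmer: f"<{idx}>" for idx, (kmer, c) in
              enumerate(counts.most_common()) if c >= 3}

def encode(seq):
    out, i = [], 0
    while i < len(seq):
        code = dictionary.get(seq[i:i + k])
        if code:
            out.append(code)     # repeated content becomes a short reference
            i += k
        else:
            out.append(seq[i])   # literals pass through unchanged
            i += 1
    return "".join(out)

for seq in sequences:
    print(seq, "->", encode(seq))
```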

  2. Effectiveness of digital infrared thermal imaging in detecting lower extremity deep venous thrombosis.

    PubMed

    Deng, Fangge; Tang, Qing; Zeng, Guangqiao; Wu, Hua; Zhang, Nuofu; Zhong, Nanshan

    2015-05-01

    The authors aimed to determine the effectiveness of infrared thermal imaging (IRTI) as a novel, noninvasive technique for adjunctive diagnostic screening for lower limb deep venous thrombosis (DVT). The authors used an infrared thermal imaging sensor to examine the lower limbs of 64 DVT patients and 64 healthy volunteers. The DVT patients had been definitively diagnosed with either Doppler vascular compression ultrasonography or angiography. The mean area temperature (T_area) and mean linear temperature (T_line) in the region of interest were determined with infrared thermal imaging. Images were evaluated with qualitative pseudocolor analysis, to verify specific color-temperature responses, and with quantitative temperature analysis. Differences in T_area and T_line between the DVT limb and the nonaffected limb in each DVT patient, and the temperature differences (TDs) in T_area (TDarea) and T_line (TDline) between DVT patients and non-DVT volunteers, were compared. Qualitative pseudocolor analysis revealed visible asymmetry between the DVT side and the non-DVT side in the presentation and distribution characteristics (PDCs) of the infrared thermal images. The DVT limbs had areas of abnormally high temperature, indicating the presence of DVT. Of the 64 confirmed DVT patients, 62 (96.88%) were positive by IRTI detection. Among these 62 IRTI-positive cases, 53 (82.81%) showed PDCs that agreed with the DVT regions detected by Doppler vascular compression ultrasonography or angiography. In nine patients (14.06%), the IRTI PDCs did not definitively agree with the DVT regions established with other testing methods, but still correctly indicated the DVT-affected limb. There was a highly significant difference between the DVT and non-DVT sides in DVT patients (P < 0.01). The TDarea and TDline in non-DVT volunteers ranged from 0.19 ± 0.15 °C to 0.21 ± 0.17 °C; those in DVT patients ranged from 0.86 ± 0.71 °C to 1.03 ± 0.79 °C (P < 0.01). Infrared thermal imaging can be effectively used in DVT detection and adjunctive diagnostic screening because of its specific infrared PDCs and TD values.

  3. Layer detection and snowpack stratigraphy characterisation from digital penetrometer signals

    NASA Astrophysics Data System (ADS)

    Floyer, James Antony

    Forecasting for slab avalanches benefits from precise measurements of snow stratigraphy. Snow penetrometers offer the possibility of providing detailed information about snowpack structure; however, their use has yet to be adopted by avalanche forecasting operations in Canada. A manually driven, variable-rate force-resistance penetrometer is tested for its ability to measure snowpack information suitable for avalanche forecasting and for spatial variability studies of snowpack properties. Following modifications, weak layers as thin as 5 mm are reliably detected from the penetrometer signals. Rate effects are investigated and found to be insignificant for push velocities between 0.5 and 100 cm s-1 in dry snow. An analysis of snow deformation below the penetrometer tip is presented using particle image velocimetry, and two zones associated with particle deflection are identified. The compacted zone is a region of densified snow that is pushed ahead of the penetrometer tip; the deformation zone is a broader zone surrounding the compacted zone, where deformation is in compression and in shear. Initial formation of the compacted zone is responsible for pronounced force spikes in the penetrometer signal. A layer tracing algorithm for tracing weak layers, crusts, and interfaces across transects or grids of penetrometer profiles is presented. The algorithm uses Wiener spiking deconvolution to trace a portion of the signal, manually identified as a layer in one profile, across to an adjacent profile. Layer tracing is found to be most effective for crusts and prominent weak layers, although weak layers close to crusts were not well traced. A framework for extending this method to detect weak layers with no prior knowledge of their existence is also presented. A study relating the fracture character of layers identified in compression tests to the penetrometer signal is presented, including a multivariate model that distinguishes sudden from other fracture characters 80% of the time. Transects of penetrometer profiles are presented over several alpine terrain features commonly associated with spatial variability of snowpack properties. The physical processes underlying the variability of certain snowpack properties revealed in the transects are discussed, as is the importance of characteristic signatures for training avalanche practitioners to recognise potentially unstable terrain.
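
    A simplified stand-in for the layer tracing step is matched filtering: take the signature of a manually picked layer in one profile and correlate it against a neighbouring profile. The sketch below uses synthetic profiles and plain cross-correlation rather than the Wiener spiking deconvolution the thesis actually uses.

```python
# Tracing a layer signature from one penetrometer profile to its neighbour by
# matched filtering (a simpler stand-in for Wiener spiking deconvolution).
# Profiles and the layer template are synthetic.
import numpy as np

depth = np.arange(0, 1000)                       # depth samples (arbitrary units)

def profile(layer_at):
    rng = np.random.default_rng(layer_at)
    p = 1.0 + 0.01 * depth + 0.05 * rng.standard_normal(depth.size)
    p[layer_at:layer_at + 10] -= 0.8             # weak layer: sharp force drop
    return p

a, b = profile(400), profile(430)                # adjacent profiles, layer shifted
template = a[395:415] - a[395:415].mean()        # layer picked manually in profile a

corr = np.correlate(b - b.mean(), template, mode="valid")
print("layer in profile b near depth:", corr.argmax() + template.size // 2)
```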

  4. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  5. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) which exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  6. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  7. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
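
    The pipeline these two records describe, a multi-subband wavelet decomposition followed by scalar quantization and entropy coding, can be sketched with PyWavelets as follows; the quantizer step size and decomposition level are illustrative assumptions, and a first-order entropy estimate stands in for the Huffman coder.

```python
# Sketch of a WSQ-style pipeline: wavelet decomposition, uniform scalar
# quantization, then (here) an entropy estimate in place of Huffman coding.
# Requires PyWavelets; the step size and level count are illustrative.
import numpy as np
import pywt

image = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)

coeffs = pywt.wavedec2(image, "bior4.4", level=4)   # 9/7-type biorthogonal filters
arr, slices = pywt.coeffs_to_array(coeffs)

step = 8.0                                          # quantizer step (assumed)
q = np.round(arr / step).astype(int)                # uniform scalar quantization

# First-order entropy of the quantized indices approximates the coded bit rate.
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
print(f"estimated rate: {-(p * np.log2(p)).sum():.2f} bits/coefficient")

# Decode: dequantize and invert the transform.
rec = pywt.waverec2(
    pywt.array_to_coeffs(q * step, slices, output_format="wavedec2"), "bior4.4")
rec = rec[:image.shape[0], :image.shape[1]]         # trim any boundary padding
print("RMS error:", np.sqrt(np.mean((rec - image) ** 2)))
```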

  8. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom very large scale integration application-specific integrated circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding is eliminated, resulting in better robustness to channel bit errors than methods that use variable-length codes.
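
    Frequency-sensitive competitive learning can be summarized in a few lines: the winning codeword is chosen by a distortion measure scaled by how often each codeword has already won, which keeps rarely used codewords competitive. The sketch below uses toy data and an assumed learning rate.

```python
# Frequency-Sensitive Competitive Learning (FSCL) for vector quantization:
# the winning distance is scaled by each codeword's win count, so no codeword
# starves. A NumPy sketch; learning rate and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 4))        # stand-in for 2x2 image blocks
K, lr = 16, 0.05

codebook = data[rng.choice(len(data), K, replace=False)].copy()
wins = np.ones(K)

for vec in data:
    d2 = np.sum((codebook - vec) ** 2, axis=1)
    winner = int(np.argmin(wins * d2))       # frequency-sensitive distortion
    codebook[winner] += lr * (vec - codebook[winner])
    wins[winner] += 1

print("codeword usage:", wins.astype(int))   # roughly balanced across codewords
```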

  9. A perioperative echocardiographic reporting and recording system.

    PubMed

    Pybus, David A

    2004-11-01

    Advances in video capture, compression, and streaming technology, coupled with improvements in central processing unit design and the inclusion of a database engine in the Windows operating system, have simplified the task of implementing a digital echocardiographic recording system. I describe an application that uses these technologies and runs on a notebook computer.

  10. Embedded importance watermarking for image verification in radiology

    NASA Astrophysics Data System (ADS)

    Osborne, Domininc; Rogers, D.; Sorell, M.; Abbott, Derek

    2004-03-01

    Digital medical images used in radiology are quite different from everyday continuous-tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be large and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and the incidental degradation occurring during transmission have potential legal accountability implications, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images, in particular detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present a result showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain the essential diagnostic high-spatial-frequency information.

  11. Multiple channel data acquisition system

    DOEpatents

    Crawley, H. Bert; Rosenberg, Eli I.; Meyer, W. Thomas; Gorbics, Mark S.; Thomas, William D.; McKay, Roy L.; Homer, Jr., John F.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler.

  12. Multiple channel data acquisition system

    DOEpatents

    Crawley, H.B.; Rosenberg, E.I.; Meyer, W.T.; Gorbics, M.S.; Thomas, W.D.; McKay, R.L.; Homer, J.F. Jr.

    1990-05-22

    A multiple channel data acquisition system for the transfer of large amounts of data from a multiplicity of data channels has a plurality of modules which operate in parallel to convert analog signals to digital data and transfer that data to a communications host via a FASTBUS. Each module has a plurality of submodules which include a front end buffer (FEB) connected to input circuitry having an analog to digital converter with cache memory for each of a plurality of channels. The submodules are interfaced with the FASTBUS via a FASTBUS coupler which controls a module bus and a module memory. The system is triggered to effect rapid parallel data samplings which are stored to the cache memories. The cache memories are uploaded to the FEBs during which zero suppression occurs. The data in the FEBs is reformatted and compressed by a local processor during transfer to the module memory. The FASTBUS coupler is used by the communications host to upload the compressed and formatted data from the module memory. The local processor executes programs which are downloaded to the module memory through the FASTBUS coupler. 25 figs.

  13. Region-growing approach to detect microcalcifications in digital mammograms

    NASA Astrophysics Data System (ADS)

    Shin, Jin-Wook; Chae, Soo-Ik; Sook, Yoon M.; Park, Dong-Sun

    2001-09-01

    Detecting early symptoms of breast cancer is very important to enhance the possibility of cure. There has been active research on computer-aided diagnosis (CAD) systems that detect early symptoms of breast cancer in digital mammograms. An expert or a CAD system can recognize the early symptoms from microcalcifications appearing in digital mammographic images. Microcalcifications have a higher gray value than their surrounding regions, so they can be detected by expanding a region from a local maximum. However, the resulting image contains unwanted elements such as noise, holes, and valleys. Mathematical morphology is a good solution for deleting regions that are affected by these unwanted elements. In this paper, we present a method that effectively detects microcalcifications in digital mammograms using a combination of a local maximum operation and a region-growing operation.
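
    A minimal version of the detection step reads as follows: seed regions at bright local maxima, grow each region while pixel intensities stay near the seed value, and apply a morphological opening to remove noise. The neighbourhood size and thresholds are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of the detection idea: seed at local maxima, grow each region while
# pixels stay within a fraction of the seed value, then clean up morphologically.
# Thresholds are illustrative, not the paper's tuned values.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.normal(100, 5, (64, 64))
img[20:23, 30:33] += 60          # bright blob standing in for a calcification

# Seeds: pixels equal to the maximum of their 5x5 neighbourhood, and bright.
local_max = ((img == ndimage.maximum_filter(img, size=5)) &
             (img > img.mean() + 4 * img.std()))

grown = np.zeros_like(img, dtype=bool)
for r, c in zip(*np.nonzero(local_max)):
    # Grow: connected pixels within 30% of the seed intensity.
    mask = img >= 0.7 * img[r, c]
    labels, _ = ndimage.label(mask)
    grown |= labels == labels[r, c]

# Morphological opening removes single-pixel noise responses.
cleaned = ndimage.binary_opening(grown, structure=np.ones((2, 2)))
print("detected pixels:", int(cleaned.sum()))
```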

  14. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  15. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In the second part, the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are summarized. The most important advantages are the consistently good quality of the images and the possibilities for image processing.

  16. Sparsity based terahertz reflective off-axis digital holography

    NASA Astrophysics Data System (ADS)

    Wan, Min; Muniraj, Inbarasan; Malallah, Ra'ed; Zhao, Liang; Ryle, James P.; Rong, Lu; Healy, John J.; Wang, Dayong; Sheridan, John T.

    2017-05-01

    Terahertz radiation lies between the microwave and infrared regions of the electromagnetic spectrum, with frequencies ranging from 0.1 to 10 THz and corresponding wavelengths from 3 mm down to 30 μm. In this paper, a continuous-wave terahertz off-axis digital holographic system is described. A Gaussian fitting method and image normalisation techniques were employed on the recorded hologram to improve the image resolution. A synthesised contrast-enhanced hologram is then digitally constructed. Numerical reconstruction is achieved using the angular spectrum method on the filtered off-axis hologram. A sparsity-based compression technique is introduced before numerical reconstruction in order to reduce the dataset required for hologram reconstruction. Results show that a small fraction of the data is sufficient to reconstruct the hologram with good image quality.
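
    The angular spectrum reconstruction named above amounts to an FFT, multiplication by the free-space transfer function, and an inverse FFT. The sketch below shows that step in isolation; the wavelength, pixel pitch, propagation distance, and placeholder hologram are assumptions chosen only to be THz-plausible.

```python
# Angular spectrum propagation, the reconstruction step named in the abstract:
# FFT the hologram, multiply by the free-space transfer function, inverse FFT.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength = 118.8e-6    # 2.52 THz methanol laser line, a common CW THz source
pitch = 80e-6            # detector pixel pitch (assumed)
z = 0.05                 # propagation distance in metres (assumed)

field = np.zeros((256, 256), dtype=complex)
field[96:160, 96:160] = 1.0                  # square aperture as a stand-in hologram
image = angular_spectrum(field, wavelength, pitch, z)
print("intensity range:", np.abs(image).min()**2, np.abs(image).max()**2)
```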

  17. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

    This paper describes a solution for integrated plant supervision utilizing closed-circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with a discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process analysis functionality (a troubleshooting tool). The paper describes the formulation of a practical performance design for determining the various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Screenshots are included for the surveillance functionality. For process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, the merger with process data surveillance (the SCADA system), is also explained.

  18. Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.

    PubMed

    Corcoran, Timothy C

    2018-03-01

    In the chemometric context in which the spectral loadings of the analytes are already known, spectral filter functions may be constructed that allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function is applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed, using mathematics borrowed from genetic algorithm work as a means of optimizing the functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from the analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, demonstrating the linearity of the method.
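
    In outline, the measurement model is simple: each binary filter function yields one detector reading (a projection of the spectrum), and the analyte scores follow by solving a small linear system through the known loadings. The sketch below uses synthetic Gaussian loadings and Hadamard-derived binary filters; it illustrates the principle, not the paper's fourfold filter construction or its genetic-algorithm optimization.

```python
# Compressive detection in outline: with known spectral loadings, a few binary
# filter functions (built here from Walsh/Hadamard patterns) yield scores
# directly, instead of a full spectrum. Loadings are synthetic Gaussians.
import numpy as np
from scipy.linalg import hadamard

n = 64                                            # spectral channels (power of 2)
axis = np.arange(n)

def band(center, width):                          # synthetic analyte loading
    return np.exp(-0.5 * ((axis - center) / width) ** 2)

loadings = np.stack([band(20, 6), band(28, 6), band(36, 6)])   # heavy overlap
true_scores = np.array([1.0, 0.5, 2.0])
spectrum = true_scores @ loadings

H = hadamard(n)                                   # Walsh-Hadamard basis, +/-1
filters = (H[1:9] > 0).astype(float)              # 8 binary (0/1) filter functions

# Measured data: one detector reading per filter (all the instrument records).
measurements = filters @ spectrum

# Estimate scores by least squares through the known loadings.
G = filters @ loadings.T                          # (filters x analytes)
est, *_ = np.linalg.lstsq(G, measurements, rcond=None)
print("true:", true_scores, "estimated:", np.round(est, 3))
```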

  19. Video Data Link Provides Television Pictures In Near Real Time Via Tactical Radio And Satellite Channels

    NASA Astrophysics Data System (ADS)

    Hartman, Richard V.

    1987-02-01

    Advances in sophisticated algorithms and parallel VLSI processing have resulted in the capability for near real-time transmission of television pictures (optical and FLIR) via existing telephone lines, tactical radios, and military satellite channels. Concepts have been field-demonstrated with production-ready engineering development models using transform compression techniques. Preliminary design has been completed for packaging an existing command post version into a 20 pound 1/2 ATR enclosure for use on jeeps, backpacks, RPVs, helicopters, and reconnaissance aircraft. The system will also have a built-in error correction code 2 (ECC) unit, allowing operation via communications media exhibiting a bit error rate of 1 X 10- or better. In the past several years, two nearly simultaneous developments show promise of allowing the breakthrough needed to give the operational commander a practical means for obtaining pictorial information from the battlefield. And he can obtain this information in near real time using available communications channels--his long-sought-after pictorial force multiplier: • high-speed digital integrated circuitry that is affordable, and • an understanding of the practical applications of information theory. High-speed digital integrated circuits allow an analog television picture to be nearly instantaneously converted to a digital serial bit stream so that it can be transmitted as rapidly or slowly as desired, depending on the available transmission channel bandwidth. Perhaps more importantly, digitizing the picture allows it to be stored and processed in a number of ways. Most typically, processing is performed to reduce the amount of data that must be transmitted, while still maintaining maximum picture quality. Reducing the amount of data that must be transmitted is important since it allows a narrower bandwidth in the scarce frequency spectrum to be used for transmission of pictures, or, if only a narrow bandwidth is available, it takes less time for the picture to be transmitted. This process of reducing the amount of data that must be transmitted to represent a picture is called compression, truncation, or most typically, video compression. Keep in mind that the pictures you see on your home TV are nothing more than a series of still pictures displayed at a rate of 30 frames per second. If you grabbed one of those frames, digitized it, stored it in memory, and then transmitted it at the most rapid rate the bandwidth of your communications channel would allow, you would be using the so-called slow-scan techniques.

  20. Computational imaging with a balanced detector.

    PubMed

    Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J

    2016-06-29

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there are still several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables the acquisition of information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. Using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and imaging of objects embedded in turbid media.

  1. Computational imaging with a balanced detector

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-06-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there are still several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables the acquisition of information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. Using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and imaging of objects embedded in turbid media.

  2. Computational imaging with a balanced detector

    PubMed Central

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-01-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there are still several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables the acquisition of information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. Using this strategy, we foresee real-time biological imaging with large-area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and imaging of objects embedded in turbid media. PMID:27353733
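
    The ambient-light cancellation described in these records follows directly from the measurement model: pattern P illuminates one arm, its complement 1-P the other, and the balanced detector records the difference, so any signal common to both arms drops out. A toy simulation, with an assumed scene and Hadamard patterns:

```python
# Balanced single-pixel measurement in simulation: each Hadamard pattern P and
# its complement 1-P illuminate the scene; the balanced detector records the
# difference, so constant ambient light cancels. Scene and sizes are toy values.
import numpy as np
from scipy.linalg import hadamard

n = 16                                    # image is n x n; n*n must be a power of 2
N = n * n
H = hadamard(N)                           # +/-1 Hadamard measurement basis

scene = np.zeros((n, n))
scene[4:10, 6:12] = 1.0                   # simple bright square
x = scene.ravel()
ambient = 5.0                             # strong parasitic signal, same on both arms

P = (H + 1) / 2                           # 0/1 pattern for arm A
Q = 1 - P                                 # complementary pattern for arm B
# Balanced output: (P@x + ambient) - (Q@x + ambient) = H @ x; ambient cancels.
y = (P @ x + ambient) - (Q @ x + ambient)

x_hat = (H.T @ y) / N                     # Hadamard inverse: H^T H = N I
print("max reconstruction error:", np.abs(x_hat - x).max())
```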

  3. Application of Compressive Sensing to Gravitational Microlensing Experiments

    NASA Astrophysics Data System (ADS)

    Korde-Patel, Asmita; Barry, Richard K.; Mohsenin, Tinoosh

    2017-06-01

    Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. It enables a significant reduction in data bandwidth and transmission power and hence can greatly benefit space-flight instruments. We apply this process to detect exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness and uncertainty introduced by Compressive Sensing. Finally, we describe implications for space-flight missions.

  4. Evaluation of digital PCR for detecting low-level EGFR mutations in advanced lung adenocarcinoma patients: a cross-platform comparison study

    PubMed Central

    Liu, Bing; Li, Lei; Huang, Lixia; Li, Shaoli; Rao, Guanhua; Yu, Yang; Zhou, Yanbin

    2017-01-01

    Emerging evidence has indicated that circulating tumor DNA (ctDNA) from plasma could be used to analyze EGFR mutation status for NSCLC patients; however, due to the low level of ctDNA in plasma, highly sensitive approaches are required to detect low frequency mutations. In addition, the cutoff for the mutation abundance that can be detected in tumor tissue but cannot be detected in matched ctDNA is still unknown. To assess a highly sensitive method, we evaluated the use of digital PCR in the detection of EGFR mutations in tumor tissue from 47 advanced lung adenocarcinoma patients through comparison with NGS and ARMS. We determined the degree of concordance between tumor tissue DNA and paired ctDNA and analyzed the mutation abundance relationship between them. Digital PCR and Proton had a high sensitivity (96.00% vs. 100%) compared with that of ARMS in the detection of mutations in tumor tissue. Digital PCR outperformed Proton in identifying more low abundance mutations. The ctDNA detection rate of digital PCR was 87.50% in paired tumor tissue with a mutation abundance above 5% and 7.59% in paired tumor tissue with a mutation abundance below 5%. When the DNA mutation abundance of tumor tissue was above 3.81%, it could identify mutations in paired ctDNA with a high sensitivity. Digital PCR will help identify alternative methods for detecting low abundance mutations in tumor tissue DNA and plasma ctDNA. PMID:28978074

  5. Mammography image quality and evidence based practice: Analysis of the demonstration of the inframammary angle in the digital setting.

    PubMed

    Spuur, Kelly; Webb, Jodi; Poulos, Ann; Nielsen, Sharon; Robinson, Wayne

    2018-03-01

    The aim of this study is to determine the clinical rates of demonstration of the inframammary angle (IMA) on the mediolateral oblique (MLO) view of the breast on digital mammograms and to compare the outcomes with current accreditation standards for compliance. Relationships between the IMA, age, the posterior nipple line (PNL), and compressed breast thickness are identified, and the study outcomes are validated using appropriate analyses of inter-reader and inter-rater reliability and variability. Differences between left and right data were also investigated. A quantitative retrospective study of 2270 randomly selected paired digital mammograms performed by BreastScreen NSW was undertaken. Data were collected by direct measurement and visual analysis. Intra-class correlation analyses were used to evaluate inter- and intra-rater reliability. The IMA was demonstrated on 52.4% of individual and 42.6% of paired mammograms. A linear relationship was found between the PNL and age (p < 0.001), with the PNL predicted to increase by 0.48 mm for every one-year increment in age. The odds of demonstrating the IMA were 2% lower for every one-year increase in age (p = 0.001), 0.4% higher for every 1 mm increase in PNL (p = 0.001), and 1.6% lower for every 1 mm increase in compressed breast thickness (p < 0.001). There was high inter- and intra-rater reliability for the PNL, and 100% agreement for the demonstration of the IMA. Analysis of the demonstration of the IMA indicates clinically achievable rates (42.6%) well below those required for compliance (50%-75%) with known worldwide accreditation standards for screening mammography. These standards should be aligned with the reported evidence base. Visualisation of the IMA is impacted negatively by increasing age and compressed breast thickness, but positively by breast size (PNL). Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Evaluating the effectiveness of SW-only video coding for real-time video transmission over low-rate wireless networks

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Pasquini, Cristina; Piva, Alessandro

    2001-04-01

    The recent development of video compression algorithms has allowed the diffusion of systems for the transmission of video sequences over data networks. However, transmission over error-prone mobile communication channels is still an open issue. In this paper, a system developed for the real-time transmission of H.263 coded video sequences over TETRA mobile networks is presented. TETRA is an open digital trunked radio standard defined by the European Telecommunications Standardization Institute, developed for professional mobile radio users and providing full integration of voice and data services. Experimental tests demonstrate that, in spite of the low frame rate allowed by the SW-only implementation of the decoder and by the low channel rate, a video compression technique such as that complying with the H.263 standard is still preferable to a simpler but less effective frame-based compression system.

  7. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit rate, or minimum bit rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
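
    The quantity DCTune optimizes can be seen in a blockwise JPEG-style codec: the quantization matrix sets how coarsely each DCT frequency is kept, trading rate for distortion. The sketch below uses an arbitrary base matrix and a random image, purely to show the mechanism, with RMSE standing in for the perceptual error metric DCTune actually uses.

```python
# How a DCT quantization matrix trades rate for error (the quantity DCTune
# optimizes perceptually): blockwise 8x8 DCT, quantize with a scaled matrix,
# measure distortion. The base matrix and its scaling are plain assumptions.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in for an x-ray scan

base = np.ones((8, 8)) + np.indices((8, 8)).sum(0) * 2.0   # coarser at high freq.

def code(img, scale):
    out = np.empty_like(img)
    Q = base * scale
    for r in range(0, img.shape[0], 8):
        for c in range(0, img.shape[1], 8):
            coef = dctn(img[r:r+8, c:c+8], norm="ortho")
            qidx = np.round(coef / Q)                # quantization
            out[r:r+8, c:c+8] = idctn(qidx * Q, norm="ortho")
    return out

for scale in (0.5, 1.0, 2.0, 4.0):
    rec = code(img, scale)
    rmse = np.sqrt(np.mean((rec - img) ** 2))
    print(f"scale={scale:3.1f}  RMSE={rmse:6.2f}")
```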

  8. Design, Optimization, and Evaluation of Al-2139 Compression Panel with Integral T-Stiffeners

    NASA Technical Reports Server (NTRS)

    Mulani, Sameer B.; Havens, David; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2012-01-01

    A T-stiffened panel was designed and optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis and design tool named EBF3PanelOpt. The panel was designed for a compression loading configuration, a realistic load case for a typical aircraft skin-stiffened panel. The panel was integrally machined from 2139 aluminum alloy plate and was tested in compression. The panel was loaded beyond buckling and strains and out-of-plane displacements were extracted from 36 strain gages and one linear variable displacement transducer. A digital photogrammetric system was used to obtain full field displacements and strains on the smooth (unstiffened) side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high-fidelity nonlinear finite element analysis.

  9. Trigeminal neuralgia caused by an anomalous posterior inferior cerebellar artery from the primitive trigeminal artery: case report.

    PubMed

    Lee, Seung Hwan; Koh, Jun Seok; Lee, Cheol Young

    2011-06-01

    A 61-year-old woman presented with typical trigeminal neuralgia (TN) caused by an aberrant posterior inferior cerebellar artery (PICA) associated with the primitive trigeminal artery (PTA). Magnetic resonance angiography and digital subtraction angiography clearly showed an anomalous artery directly originating from the PTA and coursing into the PICA territory at the cerebellum. During microvascular decompression (MVD), we confirmed compression of the trigeminal nerve by this anomalous, PICA-variant type of PTA and decompressed it. The PTA itself did not conflict with the trigeminal nerve; the anomalous PICA compressed only the caudolateral part of the trigeminal nerve, without the more common compression at its root entry zone. This case is informative not only because of its very unusual angioanatomical variation but also because it may help surgeons preparing an MVD for TN associated with such a rare vascular anomaly.

  10. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Research on this technique has concentrated on the uncompressed spatial domain, and data hiding is considered more challenging to perform in compressed domains such as vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. According to the optimal solution, each mean value then embeds three secret bits, yielding high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved a bit rate as low as that of the original BTC algorithm.
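
    The dynamic-programming search for the optimal bijective LSB mapping is not reproduced here; the sketch below shows only the two ingredients the scheme builds on, an AMBTC-style block encoder and naive LSB substitution of three secret bits into a quantization level. The block size, payload, and test data are hypothetical.

        import numpy as np

        def btc_encode(block):
            # AMBTC-style block truncation coding: keep a bitmap plus the
            # mean of the low group and the mean of the high group.
            m = block.mean()
            bitmap = block >= m
            q = int(bitmap.sum())
            if q in (0, block.size):          # flat block: a single level
                return bitmap, float(m), float(m)
            return bitmap, float(block[~bitmap].mean()), float(block[bitmap].mean())

        def embed_bits(level, bits):
            # Naive LSB substitution of len(bits) secret bits into a level;
            # the paper optimizes this mapping with dynamic programming.
            n = len(bits)
            payload = int(''.join(map(str, bits)), 2)
            return (int(round(level)) & ~((1 << n) - 1)) | payload

        block = np.random.default_rng(1).integers(0, 256, (4, 4))
        bitmap, lo, hi = btc_encode(block)
        print("low mean before/after embedding 101:", round(lo), embed_bits(lo, [1, 0, 1]))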

  11. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
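
    As a rough illustration of the first stage, ordering a collection into a pseudo video, the sketch below chains images greedily by nearest feature distance. The paper minimizes a global prediction cost, so treat this greedy pass and the random feature vectors as a simplified stand-in, not the authors' method.

        import numpy as np

        def greedy_order(features):
            # Chain images so consecutive ones are close in feature space.
            dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
            order, remaining = [0], set(range(1, len(features)))
            while remaining:
                nxt = min(remaining, key=lambda j: dist[order[-1], j])
                order.append(nxt)
                remaining.remove(nxt)
            return order

        feats = np.random.default_rng(2).normal(size=(6, 32))  # hypothetical per-image features
        print("pseudo-video frame order:", greedy_order(feats))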

  12. Comparison of flat-panel digital to conventional film-screen radiography in detection of experimentally created lesions of the equine third metacarpal bone.

    PubMed

    Moorman, Valerie J; Marshall, John F; Devine, Dustin V; Payton, Mark; Jann, Henry W; Bahr, Robert

    2009-01-01

    Radiographic diagnosis of equine bone disease using digital radiography is prevalent in veterinary practice. However, the diagnostic quality of digital vs. conventional radiography has not been compared systematically. We hypothesized that digital radiography would be superior to film-screen radiography for detection of subtle lesions of the equine third metacarpal bone. Twenty-four third metacarpal bones were collected from horses euthanized for reasons other than orthopedic disease. Bones were dissected free of soft tissue, and computed tomography was performed to ensure that no osseous abnormalities were present. Subtle osseous lesions were produced in the dorsal cortex of the third metacarpal bones, and the bones were radiographed in a soft tissue phantom using indirect digital and conventional radiography at standard exposures. Digital radiographs were printed onto film. Three Diplomates of the American College of Veterinary Radiology evaluated the radiographs for the presence or absence of a lesion. Receiver operating characteristic (ROC) curves were constructed, and the areas under these curves were compared to assess the ability of the digital and film-screen radiographic systems to detect lesions. The areas under the ROC curves for film-screen and digital radiography were 0.87 and 0.90, respectively (P = 0.59). We concluded that the digital radiographic system was comparable to the film-screen system for detection of subtle lesions of the equine third metacarpal bone.
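
    The comparison hinges on areas under reader-score ROC curves; a minimal sketch of that computation (entirely hypothetical scores and ground truth, using scikit-learn) might read:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Hypothetical reader confidence scores for lesion presence against
        # the known created-lesion ground truth.
        truth = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
        digital = np.array([0.9, 0.7, 0.3, 0.8, 0.4, 0.2, 0.6, 0.5, 0.7, 0.1])
        film = np.array([0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.5, 0.4, 0.8, 0.2])
        print("AUC digital:", roc_auc_score(truth, digital))
        print("AUC film-screen:", roc_auc_score(truth, film))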

  13. Are minidisc recorders adequate for the study of respiratory sounds?

    PubMed

    Kraman, Steve S; Wodicka, George R; Kiyokawa, Hiroshi; Pasterkamp, Hans

    2002-01-01

    Digital audio tape (DAT) recorders have become the de facto gold-standard recording devices for lung sounds. Sound recorded on DAT is compact-disk (CD) quality, with adequate sensitivity from below 20 Hz to above 20 kHz. However, DAT recorders have drawbacks. Although small, they are relatively heavy, the recording mechanism is complex and delicate, and finding one desired track out of many is inconvenient. A more recent development in portable recording devices is the minidisc (MD) recorder. These recorders are widely available, inexpensive, small and light, rugged, and mechanically simple, and they record digital data in tracks that may be named and accessed directly. Minidiscs hold as much recorded sound as a compact disk but in about 1/5 of the recordable area. The data compression is achieved by a technique known as adaptive transform acoustic coding for minidisc (ATRAC). This coding technique makes decisions about what components of the sound would not be heard by a human listener and discards the digital information that represents those sounds. Most of this compression takes place on sounds above 5.5 kHz. As the intended use of these recorders is the storage and reproduction of music, it was unknown whether ATRAC would discard or distort significant portions of typical lung sound signals. We determined the suitability of MD recorders for respiratory sound research by comparing a variety of normal and pathologic lung sounds that were digitized directly into a computer and also after recording by a DAT recorder and two different MD recorders (Sharp and Sony). We found that the frequency spectra and waveforms of respiratory sounds were not distorted in any important way by recording on the two MD recorders tested.
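
    A spectral comparison of this kind can be sketched as follows: estimate the power spectral density of the directly digitized signal and of the recorder's output, then inspect the in-band deviation. The sampling rate, the synthetic "lung sound", and the stand-in for the MD signal path are assumptions for illustration only.

        import numpy as np
        from scipy.signal import welch

        fs = 10_000                               # Hz, assumed sampling rate
        rng = np.random.default_rng(3)
        t = np.arange(fs) / fs
        direct = np.sin(2 * np.pi * 200 * t) + 0.3 * rng.normal(size=fs)
        recorded = direct + 0.01 * rng.normal(size=fs)   # stand-in for the MD path

        f, p_direct = welch(direct, fs=fs, nperseg=1024)
        _, p_rec = welch(recorded, fs=fs, nperseg=1024)
        band = f < 2500                           # most lung-sound energy lies well below 2.5 kHz
        dev_db = 10 * np.log10(p_rec[band] / p_direct[band])
        print(f"max in-band spectral deviation: {dev_db.max():.2f} dB")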

  14. Compression and neutron and ion beams emission mechanisms within a plasma focus device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yousefi, H. R.; Mohanty, S. R.; Nakada, Y.

    This paper reports some results of investigations of the neutron emission from a medium-energy Mather-type plasma focus. Multiple compressions were observed, and it appears that multiple-compression regimes can occur at low pressure, while a single compression appears at higher pressure, which is favorable for neutron production. The multiple-compression mechanism can be attributed to the m=0 type instability. The m=0 type instability is a necessary condition for fusion activity and x-ray production, but it is not sufficient by itself. Accompanying the multiple compressions, multiple deuteron and neutron pulses were detected, which implies that different kinds of acceleration mechanisms are present.

  15. Hardness and compression resistance of natural rubber and synthetic rubber mixtures

    NASA Astrophysics Data System (ADS)

    Arguello, J. M.; Santos, A.

    2016-02-01

    This project aims to characterize mechanically, through compression resistance and Shore hardness tests, mixtures of Hevea brasiliensis natural rubber with butadiene synthetic rubber (BR), styrene-butadiene rubber (SBR), and ethylene-propylene-diene monomer rubber (EPDM). For each of the studied mixtures, 10 tests were performed, each increasing the synthetic rubber content of the mixture by 10%; each test consisted of five compression resistance tests and five Shore hardness tests. The specimens were vulcanized at a temperature of 160°C for approximately 15 minutes, and the equipment used for the mechanical tests was a Shimadzu universal testing machine and a digital durometer. The results show that the Shore A hardness increases in direct proportion, with a linear trend, to the content of synthetic BR, SBR, or EPDM rubber present in the mixture, with EPDM being the most influential. With respect to compression resistance, it is observed that increasing BR or SBR content increases this property in direct proportion with a linear trend, while increasing EPDM content also increases it, but with a polynomial trend.

  16. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
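
    To illustrate the search loop behind such an approach, stripped of the covariance matrix adaptation machinery of CMA-ES, a bare-bones evolution strategy over a coefficient vector can be sketched as follows; the toy objective stands in for the paper's image-reconstruction MSE under quantization:

        import numpy as np

        def evolve(loss, x0, sigma=0.1, pop=20, iters=200, seed=5):
            # Plain (1, lambda) evolution strategy: sample around the current
            # point, keep the best offspring. CMA-ES would also adapt the
            # sampling covariance; that is omitted here.
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                cands = x + sigma * rng.normal(size=(pop, x.size))
                x = cands[int(np.argmin([loss(c) for c in cands]))]
            return x

        target = np.array([0.6, 0.4, -0.1, 0.05])   # toy "ideal" filter coefficients
        best = evolve(lambda c: np.mean((c - target) ** 2), x0=np.zeros(4))
        print("evolved coefficients:", np.round(best, 3))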

  17. Does endoscopic ultrasound improve detection of locally recurrent anal squamous-cell cancer?

    PubMed

    Peterson, Carrie Y; Weiser, Martin R; Paty, Philip B; Guillem, Jose G; Nash, Garrett M; Garcia-Aguilar, Julio; Patil, Sujata; Temple, Larissa K

    2015-02-01

    Evaluating patients for recurrent anal cancer after primary treatment can be difficult owing to distorted anatomy and scarring. Many institutions incorporate endoscopic ultrasound to improve detection, but its effectiveness is unknown. The aim of this study is to compare the effectiveness of digital rectal examination and endoscopic ultrasound in detecting locally recurrent disease during routine follow-up of patients with anal cancer. This study is a retrospective, single-institution review conducted at an oncologic tertiary referral center. Included were 175 patients with nonmetastatic anal squamous-cell cancer, without persistent disease after primary chemoradiotherapy, who had at least 1 posttreatment ultrasound and examination by a colorectal surgeon. The primary outcomes measured were the first modality to detect local recurrence, concordance, crude cancer detection rate, sensitivity, specificity, and predictive value. Eight hundred fifty-five endoscopic ultrasounds and 873 digital rectal examinations were performed during a median follow-up of 35 months. Overall, ultrasound detected 7 (0.8%) mesorectal and 32 (3.7%) anal canal abnormalities; digital examination detected 69 (7.9%) anal canal abnormalities. Locally recurrent disease was found on biopsy in 8 patients, all detected first or only with digital examination. Four patients did not have an ultrasound at the time of diagnosis of recurrence. The concordance of ultrasound and digital examination in detecting recurrent disease was fair at 0.37 (SE, 0.08; 95% CI, 0.21-0.54), and there was no difference in crude cancer detection rate, sensitivity, specificity, or negative or positive predictive values. Follow-up timing and examinations were not standardized in this study; this heterogeneity is a limitation but is reflective of general practice. Endoscopic ultrasound did not provide any advantage over digital rectal examination in identifying locally recurrent anal cancer and should not be recommended for routine surveillance.

  18. Universal penetration test apparatus with fluid penetration sensor

    DOEpatents

    Johnson, Phillip W.; Stampfer, Joseph F.; Bradley, Orvil D.

    1999-01-01

    A universal penetration test apparatus for measuring resistance of a material to a challenge fluid. The apparatus includes a pad saturated with the challenge fluid. The apparatus includes a compression assembly for compressing the material between the pad and a compression member. The apparatus also includes a sensor mechanism for automatically detecting when the challenge fluid penetrates the material.

  19. Indexing and retrieval of MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.

    1998-04-01

    To keep pace with the increased popularity of digital video as an archival medium, the development of techniques for fast and efficient analysis of video streams is essential. In particular, solutions to the problems of storing, indexing, browsing, and retrieving video data from large multimedia databases are necessary to allow access to these collections. Given that video is often stored efficiently in a compressed format, the costly overhead of decompression can be reduced by analyzing the compressed representation directly. In earlier work, we presented compressed-domain parsing techniques which identified shots, subshots, and scenes. In this article, we present efficient key frame selection, feature extraction, indexing, and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation which normalizes spatial and temporal features including frame type, frame size, macroblock encoding, and motion compensation vectors. Features for indexing are derived directly from this representation and mapped to a low-dimensional space where they can be accessed using standard database techniques. Spatial information is used as the primary index into the database, and temporal information is used to rank retrieved clips and enhance the robustness of the system. The techniques presented enable efficient indexing, querying, and retrieval of compressed video, as demonstrated by our system, which typically takes a fraction of a second to retrieve similar video scenes from a database, with over 95 percent recall.
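
    One compressed-domain temporal feature of the kind described, a histogram of motion-vector magnitudes used as a clip signature for nearest-neighbor retrieval, can be sketched as follows; the array shapes and random "motion vectors" are placeholders rather than the paper's actual representation:

        import numpy as np

        def clip_signature(motion_vectors, bins=8, vmax=16.0):
            # Fixed-length signature from per-macroblock motion-vector magnitudes.
            mags = np.linalg.norm(motion_vectors, axis=-1).ravel()
            hist, _ = np.histogram(mags, bins=bins, range=(0.0, vmax), density=True)
            return hist

        rng = np.random.default_rng(6)
        db = {f"clip{i}": clip_signature(rng.normal(scale=3, size=(30, 99, 2)))
              for i in range(5)}                  # 30 frames x 99 macroblocks per clip
        query = clip_signature(rng.normal(scale=3, size=(30, 99, 2)))
        print("closest stored clip:", min(db, key=lambda k: np.linalg.norm(db[k] - query)))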

  20. Watermarking and copyright labeling of printed images

    NASA Astrophysics Data System (ADS)

    Hel-Or, Hagit Z.

    2001-07-01

    Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression and geometric transformations. However, one must also consider the readily available attack of printing and scanning, under which the available watermarking techniques are not reliable. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes and bank checks, and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of the date and type of transaction or the date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and is robust under reconstruction errors. The decoding algorithm is automatic given the watermarked image.

  1. Effects of water during cure on the properties of a carbon/phenolic system

    NASA Technical Reports Server (NTRS)

    Penn, B. G.; Clemons, J. M.; Ledbetter, F. E., III; Daniels, J. G.; Thompson, L. M.

    1984-01-01

    The effects of prepreg water contamination on interlaminar shear strength, transverse compressive strength, and longitudinal compressive strength were determined. Decreases in these properties due to water contamination were substantial: 28 percent for the interlaminar shear strength, 21 percent for the transverse compressive strength, and 31 percent for the longitudinal compressive strength. Since voids were not detected by X-ray analysis, the most likely cause of these results is fiber-matrix debonding in the laminate.

  2. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  3. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.

  4. Comparison between different cost devices for digital capture of X-ray films: an image characteristics detection approach.

    PubMed

    Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés

    2012-02-01

    A common teleradiology practice is digitizing films. The costs of specialized digitizers are very high, which is why there is a trend toward using conventional scanners and digital cameras. Statistical clinical studies, which are very difficult to carry out, are required to determine the accuracy of these devices. The purpose of this study was to compare three capture devices in terms of their capacity to detect several image characteristics. Spatial resolution, contrast, gray levels, and geometric deformation were compared for a specialized digitizer, the ICR (US$ 15,000); a conventional scanner, the UMAX (US$ 1,800); and a digital camera, the LUMIX (US$ 450, but it requires an additional support system and a light box costing about US$ 400). Test patterns printed on films were used. The results showed gray levels lower than the real values for all three devices, and acceptable contrast and low geometric deformation for all three devices. All three devices are appropriate solutions, but a digital camera requires more operator training and more settings.

  5. Breast cancer screening using tomosynthesis in combination with digital mammography.

    PubMed

    Friedewald, Sarah M; Rafferty, Elizabeth A; Rose, Stephen L; Durand, Melissa A; Plecha, Donna M; Greenberg, Julianne S; Hayes, Mary K; Copit, Debra S; Carlson, Kara L; Cink, Thomas M; Barke, Lora D; Greer, Linda N; Miller, Dave P; Conant, Emily F

    2014-06-25

    Mammography plays a key role in early breast cancer detection. Single-institution studies have shown that adding tomosynthesis to mammography increases cancer detection and reduces false-positive results. To determine if mammography combined with tomosynthesis is associated with better performance of breast screening programs in the United States. Retrospective analysis of screening performance metrics from 13 academic and nonacademic breast centers using mixed models adjusting for site as a random effect. Period 1: digital mammography screening examinations 1 year before tomosynthesis implementation (start dates ranged from March 2010 to October 2011 through the date of tomosynthesis implementation); period 2: digital mammography plus tomosynthesis examinations from initiation of tomosynthesis screening (March 2011 to October 2012) through December 31, 2012. Recall rate for additional imaging, cancer detection rate, and positive predictive values for recall and for biopsy. A total of 454,850 examinations (n=281,187 digital mammography; n=173,663 digital mammography + tomosynthesis) were evaluated. With digital mammography, 29,726 patients were recalled and 5056 biopsies resulted in cancer diagnosis in 1207 patients (n=815 invasive; n=392 in situ). With digital mammography + tomosynthesis, 15,541 patients were recalled and 3285 biopsies resulted in cancer diagnosis in 950 patients (n=707 invasive; n=243 in situ). Model-adjusted rates per 1000 screens were as follows: for recall rate, 107 (95% CI, 89-124) with digital mammography vs 91 (95% CI, 73-108) with digital mammography + tomosynthesis; difference, -16 (95% CI, -18 to -14; P < .001); for biopsies, 18.1 (95% CI, 15.4-20.8) with digital mammography vs 19.3 (95% CI, 16.6-22.1) with digital mammography + tomosynthesis; difference, 1.3 (95% CI, 0.4-2.1; P = .004); for cancer detection, 4.2 (95% CI, 3.8-4.7) with digital mammography vs 5.4 (95% CI, 4.9-6.0) with digital mammography + tomosynthesis; difference, 1.2 (95% CI, 0.8-1.6; P < .001); and for invasive cancer detection, 2.9 (95% CI, 2.5-3.2) with digital mammography vs 4.1 (95% CI, 3.7-4.5) with digital mammography + tomosynthesis; difference, 1.2 (95% CI, 0.8-1.6; P < .001). The in situ cancer detection rate was 1.4 (95% CI, 1.2-1.6) per 1000 screens with both methods. Adding tomosynthesis was associated with an increase in the positive predictive value for recall from 4.3% to 6.4% (difference, 2.1%; 95% CI, 1.7%-2.5%; P < .001) and for biopsy from 24.2% to 29.2% (difference, 5.0%; 95% CI, 3.0%-7.0%; P < .001). Addition of tomosynthesis to digital mammography was associated with a decrease in recall rate and an increase in cancer detection rate. Further studies are needed to assess the relationship to clinical outcomes.

  6. Video copy protection and detection framework (VPD) for e-learning systems

    NASA Astrophysics Data System (ADS)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares copyright-protection approaches for digital video files, which can be categorized as content-based copy detection and digital-watermarking copy detection. We then describe how to protect a digital video by using a dedicated video data hiding method and algorithm, and discuss how to detect the copyright of the file. Based on an exposition of the directions of video copy detection technology, combined with our own research results, we bring forward a new video protection and copy detection approach for plagiarism detection and e-learning systems using video data hiding. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  7. Digital signal processor and processing method for GPS receivers

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1989-01-01

    A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P.sub.1, and P.sub.2 channels on the L.sub.1 C/A carrier phase thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
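
    A minimal sketch of the two front-end stages named above, an NCO-based digital carrier down-converter followed by a prompt code correlator, is given below. The sampling rate, intermediate frequency, and stand-in PRN code are assumptions for illustration, and the feedback tracking loops the patent describes are omitted.

        import numpy as np

        fs = 5_000_000                 # Hz, assumed sampling rate
        f_if = 1_250_000               # Hz, assumed intermediate carrier frequency
        n = 5000
        rng = np.random.default_rng(7)
        code = rng.choice([-1.0, 1.0], size=n)        # stand-in PRN chip stream
        t = np.arange(n) / fs
        received = code * np.cos(2 * np.pi * f_if * t) + 0.5 * rng.normal(size=n)

        # Carrier down-conversion: multiply by an NCO replica to strip the
        # carrier, yielding complex (I/Q) baseband samples.
        baseband = received * np.exp(-2j * np.pi * f_if * t)

        # Code correlation: accumulate against the local code replica; the
        # residual carrier phase appears as the angle of the accumulation.
        prompt = np.sum(baseband * code)
        print(f"correlation magnitude {abs(prompt):.0f}, carrier phase {np.angle(prompt):.3f} rad")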

  8. [Real-time feedback systems for improvement of resuscitation quality].

    PubMed

    Lukas, R P; Van Aken, H; Engel, P; Bohn, A

    2011-07-01

    The quality of chest compression is a determinant of survival after cardiac arrest. Therefore, the European Resuscitation Council (ERC) 2010 guidelines on resuscitation strongly focus on compression quality. Despite its impact on survival, observational studies have shown that adequate chest compression quality is often not achieved even by professional rescue teams. Real-time feedback devices for resuscitation are able to measure chest compression during an ongoing resuscitation attempt through a sternal sensor equipped with a motion and pressure detection system. In addition to the electrocardiogram (ECG), ventilation can be detected by transthoracic impedance monitoring. In cases of quality deviation, such as shallow chest compression depth or hyperventilation, feedback systems produce visual or acoustic alarms. Rescuers can thereby be supported and guided to the required quality of chest compression and ventilation. Feedback technology is currently available both as a so-called stand-alone device and as an integrated feature in a monitor/defibrillator unit. Multiple studies have demonstrated sustainable enhancement of resuscitation training due to the use of real-time feedback technology. There is evidence that real-time feedback for resuscitation, combined with training and debriefing strategies, can improve both resuscitation quality and patient survival. Chest compression quality is an independent predictor of survival in resuscitation and should therefore be measured and documented in further clinical multicenter trials.

  9. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.
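
    To see where the third-order derivatives come from, assume (as a standard simplification, not a claim taken from the paper) a locally homogeneous medium in time-harmonic motion at angular frequency \omega. Taking the divergence of the elastic wave equation then yields, in LaTeX notation,

        -\rho\,\omega^{2}\,(\nabla\cdot\mathbf{u}) = M\,\nabla^{2}(\nabla\cdot\mathbf{u})
        \quad\Longrightarrow\quad
        M = -\frac{\rho\,\omega^{2}\,(\nabla\cdot\mathbf{u})}{\nabla^{2}(\nabla\cdot\mathbf{u})}

    Since the volumetric strain \nabla\cdot\mathbf{u} already contains first derivatives of the measured displacement field \mathbf{u}, the Laplacian in the denominator involves third-order spatial derivatives, which is why noise propagation dominates the error budget.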

  10. Ion detection device and method with compressing ion-beam shutter

    DOEpatents

    Sperline, Roger P [Tucson, AZ

    2009-05-26

    An ion detection device, method and computer readable medium storing instructions for applying voltages to shutter elements of the detection device to compress ions in a volume defined by the shutter elements and to output the compressed ions to a collector. The ion detection device has a chamber having an inlet and receives ions through the inlet, a shutter provided in the chamber opposite the inlet and configured to allow or prevent the ions to pass the shutter, the shutter having first and second shutter elements, a collector provided in the chamber opposite the shutter and configured to collect ions passed through the shutter, and a processing unit electrically connected to the first and second shutter elements. The processing unit applies, during a first predetermined time interval, a first voltage to the first shutter element and a second voltage to the second shutter element, the second voltage being lower than the first voltage such that ions from the inlet enter a volume defined by the first and second shutter elements, and during a second predetermined time interval, a third voltage to the first shutter element, higher than the first voltage, and a fourth voltage to the second shutter element, the third voltage being higher than the fourth voltage such that ions that entered the volume are compressed as the ions exit the volume and new ions coming from the inlet are prevented from entering the volume. The processing unit is electrically connected to the collector and configured to detect the compressed ions based at least on a current received from the collector and produced by the ions collected by the collector.

  11. 75 FR 70011 - Guidance for Industry, Mammography Quality Standards Act Inspectors, and Food and Drug...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... label to assist that office in processing your request, or fax your request to 301-847-8149. See the.... Clarifying that original or lossless compressed digital image files may be acceptable for record transfer; 3... be acceptable to FDA; 4. Deleting the question and answer dealing with image labeling; 5. Modifying...

  12. Bandwidth compression of color video signals. Ph.D. Thesis Final Report, 1 Oct. 1979 - 30 Sep. 1980

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1980-01-01

    The different encoder/decoder strategies for digitally encoding video using adaptive delta modulation are described. The techniques employed are: (1) separately encoding the R, G, and B components; (2) separately encoding the Y, I, and Q components; and (3) encoding the picture in a line-sequential manner.
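
    For concreteness, a generic adaptive delta modulation loop, one bit per sample with a step size that grows on consecutive equal bits and shrinks otherwise, is sketched below; this is textbook ADM, not the report's specific encoder/decoder strategies, and the test signal is a toy stand-in for one video component.

        import numpy as np

        def adm_encode(signal, step0=0.05, k=1.5):
            # One bit per sample; the step size adapts to fight slope overload
            # (grow on repeated bits) and granular noise (shrink on alternation).
            bits, recon = [], []
            est, step, prev = 0.0, step0, 1
            for s in signal:
                bit = 1 if s >= est else -1
                step = step * k if bit == prev else step / k
                est += bit * step
                bits.append(bit)
                recon.append(est)
                prev = bit
            return np.array(bits), np.array(recon)

        t = np.linspace(0, 1, 500)
        luma = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)
        bits, recon = adm_encode(luma)
        print(f"1 bit/sample, reconstruction MSE: {np.mean((luma - recon) ** 2):.5f}")

    A matching decoder reproduces recon from the bit stream alone by running the same adaptation rule, which is what makes the scheme a bandwidth-compression method.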

  13. A Novel Pretreatment-Free Duplex Chamber Digital PCR Detection System for the Absolute Quantitation of GMO Samples.

    PubMed

    Zhu, Pengyu; Wang, Chenguang; Huang, Kunlun; Luo, Yunbo; Xu, Wentao

    2016-03-18

    Digital polymerase chain reaction (PCR) has developed rapidly since it was first reported in the 1990s. However, pretreatments are often required during preparation for digital PCR, which can increase operation error. The single-plex amplification of both the target and reference genes may cause uncertainties due to the different reaction volumes and the matrix effect. In the current study, a quantitative detection system based on the pretreatment-free duplex chamber digital PCR was developed. The dynamic range, limit of quantitation (LOQ), sensitivity and specificity were evaluated taking the GA21 event as the experimental object. Moreover, to determine the factors that may influence the stability of the duplex system, we evaluated whether the pretreatments, the primary and secondary structures of the probes and the SNP effect influence the detection. The results showed that the LOQ was 0.5% and the sensitivity was 0.1%. We also found that genome digestion and single nucleotide polymorphism (SNP) sites affect the detection results, whereas the unspecific hybridization within different probes had little side effect. This indicated that the detection system was suited for both chamber-based and droplet-based digital PCR. In conclusion, we have provided a simple and flexible way of achieving absolute quantitation for genetically modified organism (GMO) genome samples using commercial digital PCR detection systems.
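
    The absolute-quantitation step in digital PCR rests on a standard Poisson correction: if a fraction p of partitions reads positive, the mean copy number per partition is lambda = -ln(1 - p). A minimal sketch, with a hypothetical partition volume and chip readout:

        import numpy as np

        def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl=0.85):
            # Poisson correction for multiple copies landing in one partition.
            p = n_positive / n_total
            lam = -np.log(1.0 - p)                     # mean copies per partition
            return lam / (partition_volume_nl * 1e-3)  # partitions are nanoliter-scale

        # Hypothetical readout: 4300 of 20000 partitions positive for the target.
        print(f"{dpcr_copies_per_ul(4300, 20000):.0f} copies per microliter")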

  14. A Novel Pretreatment-Free Duplex Chamber Digital PCR Detection System for the Absolute Quantitation of GMO Samples

    PubMed Central

    Zhu, Pengyu; Wang, Chenguang; Huang, Kunlun; Luo, Yunbo; Xu, Wentao

    2016-01-01

    Digital polymerase chain reaction (PCR) has developed rapidly since it was first reported in the 1990s. However, pretreatments are often required during preparation for digital PCR, which can increase operation error. The single-plex amplification of both the target and reference genes may cause uncertainties due to the different reaction volumes and the matrix effect. In the current study, a quantitative detection system based on the pretreatment-free duplex chamber digital PCR was developed. The dynamic range, limit of quantitation (LOQ), sensitivity and specificity were evaluated taking the GA21 event as the experimental object. Moreover, to determine the factors that may influence the stability of the duplex system, we evaluated whether the pretreatments, the primary and secondary structures of the probes and the SNP effect influence the detection. The results showed that the LOQ was 0.5% and the sensitivity was 0.1%. We also found that genome digestion and single nucleotide polymorphism (SNP) sites affect the detection results, whereas the unspecific hybridization within different probes had little side effect. This indicated that the detection system was suited for both chamber-based and droplet-based digital PCR. In conclusion, we have provided a simple and flexible way of achieving absolute quantitation for genetically modified organism (GMO) genome samples using commercial digital PCR detection systems. PMID:26999129

  15. Enhancement of breast periphery region in digital mammography

    NASA Astrophysics Data System (ADS)

    Menegatti Pavan, Ana Luiza; Vacavant, Antoine; Petean Trindade, Andre; Quini, Caio Cesar; Rodrigues de Pina, Diana

    2018-03-01

    Volumetric breast density has been shown to be one of the strongest risk factors for breast cancer. This metric can be estimated using digital mammograms. During mammography acquisition, the breast is compressed and part of it loses contact with the paddle, resulting in an uncompressed peripheral region with thickness variation. Reliable density estimation in the breast periphery is therefore a problem, which affects the accuracy of volumetric breast density measurement. The aim of this study was to enhance the breast periphery to solve the problem of thickness variation. Herein, we present an automatic algorithm that corrects breast periphery thickness without changing pixel values in the internal breast region. The corrected pixel values in the periphery were based on mean values over iso-distance lines from the breast skin line, using only adipose tissue information. The algorithm automatically detects the periphery region where thickness should be corrected, and a correction factor is applied to enhance the region. We also compare our contribution with two other algorithms from the state of the art and show its accuracy by means of different quality measures. Experienced radiologists subjectively evaluated the resulting images from the three methods relative to the original mammogram. The mean pixel value, skewness, and kurtosis of the histograms produced by the three methods were used as comparison metrics. As a result, the methodology presented herein proved to be a good approach to perform before calculating volumetric breast density.
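
    A simplified version of such a periphery correction, a band-wise gain that pulls each iso-distance band from the skin line toward the mean of the fully compressed interior, is sketched below. The binary breast mask, pixel pitch, and band width are assumptions, and unlike the published method the reference here is not restricted to adipose-only pixels.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def correct_periphery(image, breast_mask, periphery_mm=15, px_per_mm=1.0):
            # Distance (mm) of every breast pixel from the skin line.
            dist = distance_transform_edt(breast_mask) / px_per_mm
            target = image[dist > periphery_mm].mean()    # fully compressed interior
            out = image.astype(float).copy()
            for d in range(periphery_mm):                 # 1 mm iso-distance bands
                band = breast_mask & (dist >= d) & (dist < d + 1)
                if band.any():
                    out[band] *= target / image[band].mean()
            return out

        # Toy semicircular "breast" whose outer rim is artificially darker.
        yy, xx = np.mgrid[0:128, 0:128]
        r2 = xx ** 2 + (yy - 64) ** 2
        mask = r2 < 60 ** 2
        img = np.where(mask, 400.0, 0.0)
        img[mask & (r2 > 45 ** 2)] = 250.0
        corrected = correct_periphery(img, mask)
        print("intensity spread before/after:",
              round(float(img[mask].std()), 1), round(float(corrected[mask].std()), 1))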

  16. Low-Dose Contrast-Enhanced Breast CT Using Spectral Shaping Filters: An Experimental Study.

    PubMed

    Makeev, Andrey; Glick, Stephen J

    2017-12-01

    Iodinated contrast-enhanced X-ray imaging of the breast has been studied with various modalities, including full-field digital mammography (FFDM), digital breast tomosynthesis (DBT), and dedicated breast CT. Contrast imaging with breast CT has a number of advantages over FFDM and DBT, including the lack of breast compression and the generation of fully isotropic 3-D reconstructions. Nonetheless, for breast CT to be considered a viable tool for routine clinical use, it would be desirable to reduce radiation dose. One approach to dose reduction in breast CT is spectral shaping using X-ray filters. In this paper, two high-atomic-number filter materials are studied, namely gadolinium (Gd) and erbium (Er), and compared with the Al and Cu filters currently used in breast CT systems. Task-based performance is assessed by imaging a cylindrical poly(methyl methacrylate) phantom with iodine inserts on a benchtop breast CT system that emulates clinical breast CT. To evaluate detectability, a channelized Hotelling observer (CHO) is used with sums of Laguerre-Gauss channels. It was observed that spectral shaping using Er and Gd filters substantially increased the dose efficiency (defined as the signal-to-noise ratio of the CHO divided by the mean glandular dose) as compared with the kilovolt peak and filter settings used in commercial and prototype breast CT systems. These experimental phantom study results are encouraging for reducing the dose of breast CT; however, further evaluation involving patients is needed.
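
    The CHO figure of merit can be sketched as follows: channelize each region of interest and compute the detectability index from the difference of mean channel outputs and the average within-class channel covariance. The Gaussian channels below are an ad hoc stand-in for Laguerre-Gauss channels, and all data are synthetic.

        import numpy as np

        def cho_snr(signal_imgs, noise_imgs, channels):
            # d'^2 = dv^T S^{-1} dv in channel space.
            vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
            vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
            dv = vs.mean(axis=0) - vn.mean(axis=0)
            S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))   # mean within-class covariance
            return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

        rng = np.random.default_rng(8)
        y, x = np.mgrid[-16:16, -16:16]
        r2 = x ** 2 + y ** 2
        disc = (r2 < 25).astype(float)                # faint iodine-insert stand-in
        noise = rng.normal(size=(200, 32, 32))
        signal = rng.normal(size=(200, 32, 32)) + 0.3 * disc
        channels = np.stack([np.exp(-r2 / s).ravel() for s in (20, 80, 320)], axis=1)
        print(f"CHO SNR: {cho_snr(signal, noise, channels):.2f}")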

  17. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    PubMed

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

    The recent innovations in information and communication technologies have appreciably changed the panorama of the health information system (HIS). These advances provide new means to process, handle, and share medical images, and also augment medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new era that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting the embedding sites in an image, which further leads to substantial improvement in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by the peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to those of state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US), and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.
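
    Two of the cited quality metrics are straightforward to reproduce; a minimal imperceptibility check with PSNR and SSIM (scikit-image implementations; the toy cover image and 1-LSB perturbation are hypothetical) could read:

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        rng = np.random.default_rng(9)
        cover = rng.integers(0, 256, (128, 128)).astype(np.uint8)
        watermarked = cover ^ rng.integers(0, 2, cover.shape).astype(np.uint8)  # flip some LSBs

        print(f"PSNR: {peak_signal_noise_ratio(cover, watermarked):.2f} dB")
        print(f"SSIM: {structural_similarity(cover, watermarked):.4f}")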

  18. Spatial-frequency composite watermarking for digital image copyright protection

    NASA Astrophysics Data System (ADS)

    Su, Po-Chyi; Kuo, C.-C. Jay

    2000-05-01

    Digital watermarks can be classified into two categories according to the embedding and retrieval domain, i.e., spatial- and frequency-domain watermarks. Because the two watermarks have different characteristics and limitations, combining them can yield various interesting properties in different applications. In this research, we examine two spatial-frequency composite watermarking schemes. In both cases, a frequency-domain watermarking technique is applied as the baseline structure of the system. The embedded frequency-domain watermark is robust against filtering and compression. A spatial-domain watermarking scheme is then built to compensate for some deficiencies of the frequency-domain scheme. The first composite scheme embeds a robust watermark in images to convey copyright or author information. The frequency-domain watermark contains the owner's identification number, while the spatial-domain watermark is embedded for image registration to resist cropping attacks. The second composite scheme embeds a fragile watermark for image authentication. The spatial-domain watermark helps in locating the tampered part of the image, while the frequency-domain watermark indicates the source of the image and prevents double-watermarking attacks. Experimental results show that the two watermarks do not interfere with each other and that different functionalities can be achieved. Watermarks in both domains are detected without resorting to the original image. Furthermore, the resulting watermarked image can still preserve high fidelity without serious visual degradation.

  19. Method for automatic detection of wheezing in lung sounds.

    PubMed

    Riella, R J; Nohama, P; Maia, J M

    2009-07-01

    The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. This method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed in order to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold in order to increase the contrast and isolate its highest-amplitude components, respectively. Then, in order to generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which produces an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy when detection is carried out using groups of respiratory cycles obtained from the same person. The system also presents the original recorded sound and the post-processed spectrogram image for the user to draw his own conclusions from the data.
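
    A loose sketch of that pipeline, spectrogram, half-threshold, spectral projection, and peak features destined for an MLP, is given below; the window size, threshold level, and synthetic wheeze are assumptions rather than the paper's parameters.

        import numpy as np
        from scipy.signal import spectrogram

        def wheeze_features(audio, fs, n_peaks=4):
            f, t, sxx = spectrogram(audio, fs=fs, nperseg=512)
            sxx[sxx < 0.5 * sxx.max()] = 0.0       # "half-threshold": keep strong components
            projection = sxx.sum(axis=1)           # collapse the time axis
            idx = np.argsort(projection)[-n_peaks:]
            return np.concatenate([projection[idx], f[idx]])  # would feed the MLP

        fs = 8000
        t = np.arange(2 * fs) / fs
        breath = np.random.default_rng(10).normal(scale=0.2, size=t.size)
        wheezy = breath + 0.5 * np.sin(2 * np.pi * 400 * t)   # tonal component near 400 Hz
        print(np.round(wheeze_features(wheezy, fs), 1))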

  20. Initial Image Quality and Clinical Experience with New CR Digital Mammography System: A Phantom and Clinical Study

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique; Alfonso, Beatriz Y. Álvarez; Castellanos, Gustavo Casian; Enríquez, Jesús Gabriel Franco

    2008-08-01

    The goal of the study was to evaluate the first CR digital mammography system (Konica-Minolta) in Mexico in routine clinical use for cancer detection in a screening population and to determine whether high-resolution CR digital imaging is equivalent to state-of-the-art screen-film imaging. The mammograms were evaluated by two observers, with cytological or histological confirmation for BI-RADS 3, 4 and 5. Contrast, exposure, and artifacts of the images were evaluated. Different details, such as the skin, the retromamillary space, and parenchymal structures, were judged. The detectability of microcalcifications and lesions was compared and correlated with histology. The difference in sensitivity between CR mammography (CRM) and screen-film mammography (SFM) was not statistically significant. However, CRM had a significantly lower recall rate, and its lesion detection was equal or superior to that of conventional images. There was no significant difference in the number of microcalcifications detected, and highly suspicious calcifications were equally well detected on both film-screen and digital images. Different anatomical regions were better detectable in digital than in conventional mammography.
